<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN" "JATS-journalpublishing1-4.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article" dtd-version="1.4" xml:lang="zh">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">aam</journal-id>
      <journal-title-group>
        <journal-title>Advances in Applied Mathematics</journal-title>
      </journal-title-group>
      <issn pub-type="epub">2324-8009</issn>
      <issn pub-type="ppub">2324-7991</issn>
      <publisher>
        <publisher-name>汉斯出版社</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.12677/aam.2026.154148</article-id>
      <article-id pub-id-type="publisher-id">aam-139230</article-id>
      <article-categories>
        <subj-group>
          <subject>Article</subject>
        </subj-group>
        <subj-group>
          <subject>数学与物理</subject>
        </subj-group>
      </article-categories>
      <title-group>
<article-title>基于临床–影像多模态融合的乳腺MRI复发风险预测</article-title>
        <trans-title-group xml:lang="en">
          <trans-title>Recurrence Risk Prediction on Breast MRI via Clinical-Image Multimodal Fusion</trans-title>
        </trans-title-group>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name name-style="eastern">
            <surname>郝</surname>
            <given-names>凯亮</given-names>
          </name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
      </contrib-group>
      <aff id="aff1"><label>1</label> 青岛大学数学与统计学院，山东 青岛 </aff>
      <pub-date pub-type="epub">
        <day>31</day>
        <month>03</month>
        <year>2026</year>
      </pub-date>
      <pub-date pub-type="collection">
        <month>03</month>
        <year>2026</year>
      </pub-date>
      <volume>15</volume>
      <issue>04</issue>
      <fpage>182</fpage>
      <lpage>191</lpage>
      <history>
        <date date-type="received">
          <day>03</day>
          <month>03</month>
          <year>2026</year>
        </date>
        <date date-type="accepted">
          <day>27</day>
          <month>03</month>
          <year>2026</year>
        </date>
        <date date-type="published">
          <day>08</day>
          <month>04</month>
          <year>2026</year>
        </date>
      </history>
      <permissions>
        <copyright-statement>© 2026 Hans Publishers Inc. All rights reserved.</copyright-statement>
        <copyright-year>2026</copyright-year>
        <license license-type="open-access">
          <license-p> This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link> ). </license-p>
        </license>
      </permissions>
      <self-uri content-type="doi" xlink:href="https://doi.org/10.12677/aam.2026.154148">https://doi.org/10.12677/aam.2026.154148</self-uri>
      <abstract>
        <p>乳腺癌复发风险评估对于术后随访管理、辅助治疗调整及高危患者的早期干预具有重要意义。为比较临床模型、影像模型及临床–影像多模态融合模型在乳腺MRI复发风险预测中的表现，并验证融合策略在低阳性率场景下的应用价值，本文基于Duke-Breast-Cancer-MRI公开队列，完成临床变量清洗、编码与筛选以及病灶ROI三通道输入(Pre/Post/Sub)构建，并采用固定随机种子进行分层划分，得到训练集、验证集和测试集，样本数分别为588、148和184。临床变量筛选在训练集内依次进行低方差剔除、高相关特征过滤及单变量相关性筛选，其中低方差阈值设为0.01，特征间绝对相关系数阈值设为0.90，最终保留67个临床变量进入建模。针对阳性样本比例约9.46%的类别不平衡问题，在训练阶段结合加权采样、Focal Loss与类别权重进行优化，在推理阶段采用“召回优先 + 特异度下限0.75”的阈值选择策略。结果显示，在测试集上，Fusion-DualMeta模型的AUC、AUPRC、Sensitivity、Specificity、F1值和MCC分别为0.8633、0.2801、0.9412、0.7844、0.4638和0.4667；与Clinical-XGBoost模型相比，其AUC、F1值、Sensitivity和MCC分别提高0.0673、0.1480、0.4118和0.2253；与Image-EmbROI模型相比，上述指标分别提高0.0902、0.1951、0.4118和0.2818，且假阴性病例数由8例降至1例。结合样本规模、类别不平衡程度及模型可解释性需求，本文采用基于元学习的后融合策略，并对其与特征拼接、注意力机制融合及张量积融合等方法的适用性进行了讨论。研究表明，在保持特异度约束的前提下，临床–影像多模态融合模型能够显著增强乳腺癌复发高风险患者的识别能力，可为乳腺癌随访筛查与辅助决策提供更具实用价值的技术支持。</p>
      </abstract>
      <trans-abstract xml:lang="en">
        <p>Assessment of breast cancer recurrence risk is of great importance for postoperative follow-up management, adjuvant treatment adjustment, and early intervention in high-risk patients. To compare the performance of clinical, imaging, and clinical-image multimodal fusion models for recurrence prediction on breast MRI, this study was conducted on the Duke-Breast-Cancer-MRI public cohort. Clinical variables were cleaned, encoded, and filtered, and tumor-centered ROI three-channel inputs (Pre/Post/Sub) were constructed. The data were split into training, validation, and test sets with sizes of 588, 148, and 184, respectively. Clinical variable screening was performed on the training set only, including low-variance removal (threshold = 0.01), high-correlation filtering (|r| &gt; 0.90), and univariate relevance screening, resulting in 67 retained variables. To address the strong class imbalance with a recurrence-positive rate of about 9.46%, weighted sampling, Focal Loss, and class weighting were adopted during training, while a recall-prioritized thresholding strategy with a minimum specificity constraint of 0.75 was applied during inference. On the test set, the Fusion-DualMeta model achieved an AUC of 0.8633, an AUPRC of 0.2801, a sensitivity of 0.9412, a specificity of 0.7844, an F1 score of 0.4638, and an MCC of 0.4667. Compared with Clinical-XGBoost, the improvements in AUC, F1 score, sensitivity, and MCC were 0.0673, 0.1480, 0.4118, and 0.2253, respectively. Compared with Image-EmbROI, the corresponding gains were 0.0902, 0.1951, 0.4118, and 0.2818, with false negatives reduced from 8 to 1. Considering sample size, class imbalance, and interpretability, a meta-learning-based late-fusion strategy was adopted and discussed against other multimodal fusion approaches. The results indicate that under a specificity-constrained setting, clinical-image multimodal fusion can substantially improve the identification of high-risk recurrence patients and may provide useful support for follow-up screening and decision-making in breast cancer care.</p>
      </trans-abstract>
      <kwd-group kwd-group-type="author-generated" xml:lang="zh">
        <kwd>乳腺MRI</kwd>
        <kwd>复发预测</kwd>
        <kwd>多模态融合</kwd>
        <kwd>类别不平衡</kwd>
        <kwd>XGBoost</kwd>
      </kwd-group>
      <kwd-group kwd-group-type="author-generated" xml:lang="en">
        <kwd>Breast MRI</kwd>
        <kwd>Recurrence Prediction</kwd>
        <kwd>Multimodal Fusion</kwd>
        <kwd>Class Imbalance</kwd>
        <kwd>XGBoost</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec1">
      <title>1. 引言</title>
      <p>乳腺癌复发风险评估直接关系到术后随访频率、辅助治疗方案调整及高危患者的早期干预，因此建立准确、稳定的复发预测模型具有重要临床意义。已有研究表明，基于乳腺MRI的影像特征分析能够从病灶形态、强化模式及肿瘤异质性中提取与预后相关的信息，为复发风险预测提供重要依据[<xref ref-type="bibr" rid="B1">1</xref>]。同时，公开影像数据库的建设也为可重复、可验证的模型研究提供了数据基础，其中Duke-Breast-Cancer-MRI队列因同时提供MRI影像与临床信息而具有较高研究价值[<xref ref-type="bibr" rid="B2">2</xref>]。在建模方法方面，XGBoost等集成学习模型在结构化临床数据处理中表现稳定，适用于处理中小样本、缺失值较多及特征维度较复杂的二分类问题[<xref ref-type="bibr" rid="B3">3</xref>]；而EfficientNet等卷积网络则能够较好地提取医学图像中的深层视觉表征[<xref ref-type="bibr" rid="B4">4</xref>]。此外，针对不平衡分类任务，Focal Loss能够降低易分样本主导效应，从而提升模型对少数类的关注[<xref ref-type="bibr" rid="B5">5</xref>]。在评价层面，MCC、AUPRC等指标被认为比单纯准确率更适合用于阳性率较低的医学预测问题[<xref ref-type="bibr" rid="B6">6</xref>]-[<xref ref-type="bibr" rid="B8">8</xref>]。</p>
      <p>基于上述研究基础，乳腺癌复发预测逐步从单一临床变量分析扩展到影像特征建模，再进一步发展到临床–影像多模态联合分析。已有研究提示，临床病理因素可提供较稳定的群体层面风险信息，而MRI视觉表征更有助于刻画肿瘤内部异质性和强化行为，两者可能具有互补性[<xref ref-type="bibr" rid="B9">9</xref>]-[<xref ref-type="bibr" rid="B11">11</xref>]。近年来，面向乳腺癌复发风险评估的多模态深度学习研究也开始增多，提示融合模型有望进一步提升预测表现[<xref ref-type="bibr" rid="B12">12</xref>][<xref ref-type="bibr" rid="B13">13</xref>]。</p>
      <p>现有复发预测研究仍存在两方面不足：一是部分研究依赖私有数据或缺少统一的数据处理流程，尤其在临床变量筛选步骤中，未明确说明低方差剔除、高相关特征过滤及单变量筛选所采用的具体阈值，影响结果的可复现性；二是在多模态建模中，虽然融合模型通常优于单模态模型，但对不同融合策略的适用场景讨论仍不充分，尤其缺少对特征拼接、注意力机制融合、张量积融合及后融合等方法优缺点的说明。与此同时，在复发样本占比较低的条件下，若仅关注总体准确率，也容易掩盖对高风险病例的漏检问题，而临床筛查场景通常更强调对阳性病例的优先识别[<xref ref-type="bibr" rid="B6">6</xref>]-[<xref ref-type="bibr" rid="B8">8</xref>]。</p>
      <p>基于此，本文围绕Duke-Breast-Cancer-MRI公开队列构建临床分支、影像分支与融合分支三类模型，在统一数据划分、统一评价指标和统一阈值决策原则下开展对比研究。本文旨在建立从数据清洗到模型评估的可复现实验流程，明确临床变量筛选标准，比较不同模态模型在AUC、AUPRC、F1值、MCC及混淆矩阵等方面的差异，并进一步讨论为何在本研究中选用基于元学习的后融合策略，以及该策略在低阳性率乳腺癌复发筛查场景中的应用价值。</p>
    </sec>
    <sec id="sec2">
      <title>2. 资料与方法</title>
      <sec id="sec2dot1">
        <title>2.1. 数据来源与研究终点</title>
        <p>本研究采用Duke-Breast-Cancer-MRI公开队列作为研究数据来源[<xref ref-type="bibr" rid="B2">2</xref>]。该队列包含乳腺癌患者的MRI影像资料及相应临床信息，能够为乳腺癌复发风险预测提供影像表型和结构化变量两类数据支持。结合本研究任务，对原始数据进行样本筛选、变量整理及标签对齐后，构建用于复发风险分类的研究队列。</p>
        <p>根据既定纳入标准，最终获得可用于建模分析的样本共920例，其中训练集、验证集和测试集分别为588例、148例和184例。数据集划分采用固定随机种子下的分层抽样方法，以保证不同子集中阳性与阴性样本分布基本一致。总体样本中复发阳性比例约为9.46%，呈现明显类别不平衡特征，因此在模型训练与评价过程中需要重点关注少数类样本的识别能力。</p>
      </sec>
      <sec id="sec2dot2">
        <title>2.2. 临床数据预处理</title>
        <p>临床分支以公开临床特征表为输入，首先统一列名、异常值和缺失值标记；随后保留可被数值化编码的结构化变量，并在训练集内部完成缺失填补、特征筛选与数值转换，以避免信息泄露。</p>
        <p>临床变量筛选共分三步进行。第一步为低方差剔除：对经编码后的候选变量计算训练集内方差，删除方差小于0.01的特征，以去除几乎不提供区分信息的变量。第二步为高相关特征过滤：对保留变量计算两两相关系数，当绝对相关系数大于0.90时，删除其中与结局单变量相关性较弱的变量，以减少多重共线性影响。第三步为单变量相关性筛选：计算各变量与复发标签之间的单变量相关性，按照绝对相关性从高到低排序，最终保留前67个变量进入建模。所有筛选阈值均在训练集内确定后固定应用于验证集和测试集。</p>
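上述三步筛选可用如下NumPy示意代码表达(函数名与实现细节为示例假设，阈值取自正文：方差下限0.01、相关系数上限0.90、按单变量相关性保留前67个变量)：

```python
import numpy as np

def screen_features(X, y, var_thresh=0.01, corr_thresh=0.90, top_k=67):
    """Three-step screening sketch (illustrative implementation; thresholds from the text)."""
    # Step 1: drop near-constant features by training-set variance
    keep = [j for j in range(X.shape[1]) if X[:, j].var() > var_thresh]
    # Univariate relevance: |corr(feature, label)|
    rel = {j: abs(np.corrcoef(X[:, j], y)[0, 1]) for j in keep}
    # Step 2: among highly correlated pairs (|r| > corr_thresh),
    # keep the member more relevant to the outcome
    keep_sorted = sorted(keep, key=lambda j: -rel[j])
    selected = []
    for j in keep_sorted:
        if all(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) <= corr_thresh for s in selected):
            selected.append(j)
    # Step 3: retain the top_k features by univariate relevance
    return sorted(selected, key=lambda j: -rel[j])[:top_k]
```

筛选仅在训练集上拟合，得到的列索引随后固定应用于验证集与测试集，以避免信息泄露。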
        <p>在变量表示方面，连续变量采用训练集内中位数进行缺失填补，并进行标准化处理；分类变量统一编码，并对缺失或未定义类别设置显式的“Unknown”映射，以保持表示完整性。最终得到的67个临床变量覆盖人口学信息、生物标志物、肿瘤分期分级、影像元数据、治疗信息及肿瘤部位等主要风险相关维度(见表1)。</p>
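连续变量的训练集中位数填补与分类变量的显式“Unknown”编码可示意如下(整数编码方案为示例假设，正文仅说明统一编码并为缺失类别设置显式映射)：

```python
import numpy as np

def fit_imputer(x_train_col):
    """Median fitted on the training split only, ignoring missing values."""
    return float(np.nanmedian(x_train_col))

def encode_categorical(values, categories):
    """Map known categories to integer codes; missing or unseen values get an
    explicit 'Unknown' code (code scheme itself is an assumption)."""
    table = {c: i for i, c in enumerate(categories)}
    unknown = len(categories)  # dedicated code for missing/unseen entries
    return [table.get(v, unknown) for v in values]
```

填补与编码参数均在训练集内确定，再应用于验证集和测试集。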
        <p><bold>Table 1.</bold> Grouping statistics of clinical input variables</p>
        <p><bold>表</bold><bold>1.</bold> 临床输入变量分组统计</p>
        <table-wrap id="tbl1">
          <label>Table 1</label>
          <table>
            <tbody>
              <tr>
                <td>
                  <bold>Group</bold>
                </td>
                <td>
                  <bold>FeatureCount</bold>
                </td>
                <td>
                  <bold>AvgMissingRate</bold>
                </td>
              </tr>
              <tr>
                <td>Biomarker</td>
                <td>28</td>
                <td>15.6%</td>
              </tr>
              <tr>
                <td>ImagingMeta</td>
                <td>12</td>
                <td>5.9%</td>
              </tr>
              <tr>
                <td>Demographics</td>
                <td>7</td>
                <td>34.6%</td>
              </tr>
              <tr>
                <td>Other</td>
                <td>7</td>
                <td>0.4%</td>
              </tr>
              <tr>
                <td>Tumor</td>
                <td>7</td>
                <td>18.8%</td>
              </tr>
              <tr>
                <td>Outcome/Time</td>
                <td>6</td>
                <td>11.9%</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
      <sec id="sec2dot3">
        <title>2.3. 影像输入构建</title>
        <p>影像分支以病灶区域为核心输入单元。根据已有病灶定位信息提取感兴趣区域(region of interest, ROI)，并构建三通道输入，包括增强前图像(Pre)、增强后图像(Post)及两者相减得到的差值图像(Sub)。其中，差值图像用于突出强化变化信息，以增强模型对病灶动态增强特征的表征能力[<xref ref-type="bibr" rid="B1">1</xref>][<xref ref-type="bibr" rid="B9">9</xref>]。为保证模型输入一致性，所有ROI图像在送入网络前均进行尺寸统一及数值归一化处理(见<xref ref-type="fig" rid="fig1">图1</xref>)。</p>
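三通道输入的构建可示意如下(目标尺寸224与最小–最大归一化为示例假设，正文仅说明统一尺寸与数值归一化；重采样以最近邻插值示意，实际可替换为任意图像库的重采样函数)：

```python
import numpy as np

def make_roi_input(pre, post, size=224):
    """Stack Pre/Post/Sub ROI crops into one 3-channel array (sketch)."""
    # Subtraction image highlights contrast-enhancement changes
    sub = post.astype(np.float32) - pre.astype(np.float32)
    chans = []
    for img in (pre.astype(np.float32), post.astype(np.float32), sub):
        lo, hi = img.min(), img.max()
        img = (img - lo) / (hi - lo + 1e-8)  # per-channel min-max normalization
        # Nearest-neighbour resize to a fixed grid (stand-in for a library resampler)
        r = (np.arange(size) * img.shape[0] / size).astype(int)
        c = (np.arange(size) * img.shape[1] / size).astype(int)
        chans.append(img[np.ix_(r, c)])
    return np.stack(chans, axis=-1)
```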
        <fig id="fig1">
          <label>Figure 1</label>
          <graphic xlink:href="https://html.hanspub.org/file/2625021-rId12.jpeg?20260408033248" />
        </fig>
        <p><bold>Figure 1.</bold> Example of Pre/Post/Sub channels and 3-channel fusion input for lesion ROI</p>
        <p><bold>图</bold><bold>1.</bold> 病灶ROI的Pre/Post/Sub及三通道融合输入示例</p>
      </sec>
      <sec id="sec2dot4">
        <title>2.4. 数据划分与类别不平衡处理</title>
        <p>本文采用固定随机种子(seed = 42)进行分层划分，训练集、验证集和测试集规模分别为588、148和184例，各子集阳性比例基本一致，见表2。针对阳性样本稀缺的问题，图像训练阶段采用WeightedRandomSampler以提升少数类采样概率，损失函数采用Binary Focal Loss以降低易分类负样本的主导效应[<xref ref-type="bibr" rid="B5">5</xref>]；临床树模型则通过scale_pos_weight引入类别权重[<xref ref-type="bibr" rid="B3">3</xref>][<xref ref-type="bibr" rid="B7">7</xref>]。在模型选择阶段，本文采用“召回优先 + 特异度下限0.75”的阈值决策原则，以更贴近复发高危筛查场景。</p>
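“召回优先 + 特异度下限0.75”的阈值决策可示意如下(在验证集上扫描候选阈值，敏感度并列时的取舍规则为示例假设)：

```python
import numpy as np

def pick_threshold(y_true, p, spec_floor=0.75):
    """Among candidate thresholds, maximize sensitivity subject to
    specificity >= spec_floor (sketch of the paper's decision rule)."""
    best_t, best_sens = None, -1.0
    for t in np.unique(p):
        pred = (p >= t).astype(int)
        tp = np.sum((pred == 1) & (y_true == 1))
        fn = np.sum((pred == 0) & (y_true == 1))
        tn = np.sum((pred == 0) & (y_true == 0))
        fp = np.sum((pred == 1) & (y_true == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp) if (tn + fp) else 1.0
        if spec >= spec_floor and sens > best_sens:  # recall-first under the floor
            best_t, best_sens = t, sens
    return best_t
```

该阈值在验证集上确定后固定用于测试集评估。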
        <p><bold>Table 2.</bold> Dataset split and positive rate</p>
        <p><bold>表</bold><bold>2.</bold> 数据集划分与阳性比例</p>
        <table-wrap id="tbl2">
          <label>Table 2</label>
          <table>
            <tbody>
              <tr>
                <td>
                  <bold>Split</bold>
                </td>
                <td>
                  <bold>N</bold>
                </td>
                <td>
                  <bold>Positive</bold>
                </td>
                <td>
                  <bold>Negative</bold>
                </td>
                <td>
                  <bold>PosRate</bold>
                </td>
              </tr>
              <tr>
                <td>Train</td>
                <td>588</td>
                <td>56</td>
                <td>532</td>
                <td>9.52%</td>
              </tr>
              <tr>
                <td>Val</td>
                <td>148</td>
                <td>14</td>
                <td>134</td>
                <td>9.46%</td>
              </tr>
              <tr>
                <td>Test</td>
                <td>184</td>
                <td>17</td>
                <td>167</td>
                <td>9.24%</td>
              </tr>
              <tr>
                <td>Total</td>
                <td>920</td>
                <td>87</td>
                <td>833</td>
                <td>9.46%</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
      <sec id="sec2dot5">
        <title>2.5. 模型构建</title>
        <p>2.5.1. Clinical-XGBoost</p>
        <p>临床最终模型采用XGBoost二分类框架[<xref ref-type="bibr" rid="B3">3</xref>]，核心超参数设置为n_estimators = 300、max_depth = 5、learning_rate = 0.01、colsample_bytree = 0.75，并根据训练集阳性比例动态计算scale_pos_weight。模型输出为复发概率，并通过验证集阈值搜索确定测试阶段的最终决策点。</p>
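正文超参数与按训练集阳性比例计算scale_pos_weight的方式可示意如下(xgboost接口调用仅在注释中给出，此处只演示类别权重的计算)：

```python
# Reported hyperparameters; passing them to xgboost.XGBClassifier is assumed:
#   XGBClassifier(n_estimators=300, max_depth=5, learning_rate=0.01,
#                 colsample_bytree=0.75, scale_pos_weight=pos_weight(y_train))
def pos_weight(y):
    """scale_pos_weight = negative count / positive count on the training split."""
    pos = sum(y)
    return (len(y) - pos) / pos
```

按表2的训练集构成(56例阳性、532例阴性)，该权重约为9.5。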
        <p>2.5.2. Image-EmbROI</p>
        <p>图像最终模型采用“双分支融合”策略：其一为迁移学习深度嵌入分支，使用EfficientNet-B0提取视觉表征，再经标准化、PCA与Logistic回归得到概率输出[<xref ref-type="bibr" rid="B4">4</xref>]；其二为ROI统计分支，在病灶与背景区域上构建强度均值、标准差、分位数及差值/比值等统计特征，并使用XGBoost建模。两分支概率通过验证集搜索得到的权重进行线性融合，形成Image-EmbROI输出。</p>
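两分支概率的线性融合权重搜索可示意如下(以验证集AUC作为选择准则属示例假设，正文仅说明权重由验证集搜索得到)：

```python
import numpy as np

def blend_weight(p_emb, p_roi, y_val, grid=np.linspace(0, 1, 101)):
    """Grid-search the weight w for p = w*p_emb + (1-w)*p_roi on the
    validation set (AUC-based selection is an assumption)."""
    def auc(y, s):
        # Pairwise AUC: fraction of positive-negative pairs ranked correctly
        pos, neg = s[y == 1], s[y == 0]
        wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
        return wins / (len(pos) * len(neg))
    return max(grid, key=lambda w: auc(y_val, w * p_emb + (1 - w) * p_roi))
```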
        <p>2.5.3. Fusion-DualMeta</p>
        <p>融合模型采用轻量级元学习策略，不直接拼接高维原始特征，而是以前序模型输出概率为核心输入，包括临床概率p_clin、图像旧分支概率p_img_old以及图像新分支概率p_img_new，并进一步构造平方项、乘积项和两两绝对差等高阶交互特征[<xref ref-type="bibr" rid="B12">12</xref>][<xref ref-type="bibr" rid="B13">13</xref>]。最终以Logistic回归完成融合判别，使模型在保持解释性的同时增强泛化稳定性。</p>
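元学习融合层的低维输入构造可示意如下(特征排列与具体组合方式为示例假设，对应正文所述的原始概率、平方项、乘积项与两两绝对差)：

```python
import numpy as np
from itertools import combinations

def meta_features(p_clin, p_img_old, p_img_new):
    """Build the fusion model's low-dimensional input from the three
    upstream probabilities (illustrative feature layout)."""
    p = np.array([p_clin, p_img_old, p_img_new])
    feats = list(p) + list(p ** 2)                        # raw + squared terms
    feats += [a * b for a, b in combinations(p, 2)]       # pairwise products
    feats += [abs(a - b) for a, b in combinations(p, 2)]  # pairwise |differences|
    return np.array(feats)
```

在本示意下特征共12维，随后可送入Logistic回归完成融合判别。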
        <p>本文采用该后融合策略，主要基于以下考虑。首先，特征拼接属于早融合方法，虽然实现简单，但在本研究样本规模有限、影像与临床特征维度差异较大的情况下，易引入维度膨胀和冗余噪声，增加过拟合风险。其次，注意力机制融合能够自适应学习不同模态的重要性，但通常需要更大样本量和更复杂的端到端训练过程，对当前920例样本且阳性率不足10%的场景未必最优。再次，张量积融合能够显式建模跨模态高阶交互，但参数量增长较快，对数据规模和训练稳定性要求更高。相比之下，本文采用的元学习后融合方法仅以各单模态模型输出概率及其有限高阶交互作为输入，一方面能够保留临床分支与影像分支各自已经学到的有效判别信息，另一方面能够在低维空间中完成跨模态整合，更适合中小样本、不平衡分类及强调可解释性的应用场景。</p>
      </sec>
      <sec id="sec2dot6">
        <title>2.6. 评价指标</title>
        <p>为全面评价不同模型在类别不平衡场景下的预测性能，本文采用受试者工作特征曲线下面积(area under the receiver operating characteristic curve, AUC)、精确率–召回率曲线下面积(area under the precision-recall curve, AUPRC)、敏感度(Sensitivity)、特异度(Specificity)、F1值及Matthews相关系数(MCC)作为主要评价指标[<xref ref-type="bibr" rid="B6">6</xref>]-[<xref ref-type="bibr" rid="B8">8</xref>]。</p>
        <p>其中，AUC用于衡量模型整体区分阳性与阴性样本的能力；AUPRC更适用于阳性样本比例较低的任务，可更直观地反映模型对少数类样本的识别表现；Sensitivity和Specificity分别反映模型对阳性样本的检出能力和对阴性样本的排除能力；F1值综合考虑精确率与召回率的平衡；MCC则能够在类别不平衡条件下较全面地评价分类性能[<xref ref-type="bibr" rid="B6">6</xref>]。此外，本文还结合混淆矩阵分析假阳性与假阴性样本分布，以进一步评估模型在临床筛查场景中的应用价值。</p>
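阈值化指标的计算可示意如下(混淆矩阵计数依据正文融合模型的测试集结果：17例阳性中TP=16、FN=1，特异度0.7844对应TN=131、FP=36；AUC与AUPRC需完整概率分布，此处从略)：

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Threshold-based metrics used in the paper, from a confusion matrix."""
    sens = tp / (tp + fn)                      # sensitivity / recall
    spec = tn / (tn + fp)                      # specificity
    prec = tp / (tp + fp)                      # precision
    f1 = 2 * prec * sens / (prec + sens)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(sensitivity=sens, specificity=spec, f1=f1, mcc=mcc)
```

将上述计数代入即可复算出表3中Fusion-DualMeta的Sensitivity、Specificity、F1值与MCC。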
      </sec>
    </sec>
    <sec id="sec3">
      <title>3. 结果</title>
      <sec id="sec3dot1">
        <title>3.1. 三模型主结果对比</title>
        <p>测试集结果见表3。整体来看，Clinical-XGBoost和Image-EmbROI均表现出较为稳定的分类能力，可作为乳腺癌复发风险预测任务中的单模态基线模型。其中，Clinical-XGBoost反映了结构化临床变量在风险分层中的基础价值，说明年龄、分期及相关临床特征对复发风险判断具有一定贡献；Image-EmbROI则体现了MRI病灶区域视觉表征在捕捉肿瘤内部异质性方面的优势，表明影像信息同样能够为复发预测提供有效支持[<xref ref-type="bibr" rid="B9">9</xref>]-[<xref ref-type="bibr" rid="B11">11</xref>]。在此基础上，Fusion-DualMeta在AUC、AUPRC、Sensitivity、F1值和MCC等多个关键指标上均取得最佳结果，显示出临床信息与影像特征具有明显的互补性[<xref ref-type="bibr" rid="B12">12</xref>][<xref ref-type="bibr" rid="B13">13</xref>]。相较于单一模态模型，多模态融合不仅提高了整体判别能力，也增强了模型在类别不平衡条件下对阳性样本的识别效果，说明融合策略能够更全面地表征影响乳腺癌复发的多维信息。</p>
        <p>尤其值得注意的是，Fusion-DualMeta在召回率方面达到0.9412，明显高于Clinical-XGBoost和Image-EmbROI的0.5294。这表明融合模型能够识别出绝大多数复发病例，在高危筛查任务中具有更高的实际应用价值。对于乳腺癌复发风险评估而言，漏检往往意味着高风险患者未能被及时纳入强化随访或进一步干预，因此模型对阳性样本的检出能力具有优先意义。从这一角度看，融合模型在Sensitivity上的显著提升，不仅是统计指标上的改善，也意味着其在临床应用场景中更有利于实现对复发高危人群的前置识别。与此同时，Fusion-DualMeta在F1值和MCC上也取得最高水平，说明该模型并非单纯通过牺牲整体平衡性换取高召回，而是在正负样本综合判别上实现了更优结果(见<xref ref-type="fig" rid="fig2">图2</xref>)。</p>
        <p><bold>Table 3.</bold> Main comparison of the three final models</p>
        <p><bold>表</bold><bold>3.</bold> 三个最终模型的主结果对比</p>
        <table-wrap id="tbl3">
          <label>Table 3</label>
          <table>
            <tbody>
              <tr>
                <td>
                  <bold>Method</bold>
                </td>
                <td>
                  <bold>AUC</bold>
                </td>
                <td>
                  <bold>AUPRC</bold>
                </td>
                <td>
                  <bold>Sensitivity</bold>
                </td>
                <td>
                  <bold>Specificity</bold>
                </td>
                <td>
                  <bold>F1</bold>
                </td>
                <td>
                  <bold>MCC</bold>
                </td>
              </tr>
              <tr>
                <td>Clinical-XGBoost</td>
                <td>0.7961</td>
                <td>0.2612</td>
                <td>0.5294</td>
                <td>0.8144</td>
                <td>0.3158</td>
                <td>0.2414</td>
              </tr>
              <tr>
                <td>Fusion-DualMeta</td>
                <td>0.8633</td>
                <td>0.2801</td>
                <td>0.9412</td>
                <td>0.7844</td>
                <td>0.4638</td>
                <td>0.4667</td>
              </tr>
              <tr>
                <td>Image-EmbROI</td>
                <td>0.7732</td>
                <td>0.2288</td>
                <td>0.5294</td>
                <td>0.7545</td>
                <td>0.2687</td>
                <td>0.1848</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <fig id="fig2">
          <label>Figure 2</label>
          <graphic xlink:href="https://html.hanspub.org/file/2625021-rId13.jpeg?20260408033250" />
        </fig>
        <p><bold>Figure 2.</bold> Comparison of core metrics among the three models</p>
        <p><bold>图</bold><bold>2.</bold> 三模型核心评价指标对比</p>
      </sec>
      <sec id="sec3dot2">
        <title>3.2. ROC与PR曲线分析</title>
        <p>ROC曲线如<xref ref-type="fig" rid="fig3">图3</xref>所示。可以看出，Fusion-DualMeta的ROC曲线整体位于两种单模态模型之上，对应AUC达到0.8633，优于Clinical-XGBoost和Image-EmbROI。这表明在不同分类阈值下，融合模型均表现出更强的区分能力，即其对复发与非复发样本的排序能力更优。从判别分析角度看，AUC的提升说明融合模型在更广泛的阈值范围内保持了较好的稳定性，不依赖单一阈值点即可展现优势。对于临床应用而言，这种特性意味着模型在面对不同风险控制要求时具有更好的适应性，既可用于偏重高召回的筛查场景，也可在一定程度上支持更审慎的风险分层判断。</p>
        <fig id="fig3">
          <label>Figure 3</label>
          <graphic xlink:href="https://html.hanspub.org/file/2625021-rId14.jpeg?20260408033250" />
        </fig>
        <p><bold>Figure 3.</bold> ROC curves of the three models</p>
        <p><bold>图</bold><bold>3.</bold> 三模型ROC曲线对比</p>
        <fig id="fig4">
          <label>Figure 4</label>
          <graphic xlink:href="https://html.hanspub.org/file/2625021-rId15.jpeg?20260408033250" />
        </fig>
        <p><bold>Figure 4.</bold> PR curves of the three models</p>
        <p><bold>图</bold><bold>4.</bold> 三模型PR曲线对比</p>
        <p>进一步观察PR曲线(<xref ref-type="fig" rid="fig4">图4</xref>)可以发现，Fusion-DualMeta在低召回和中高召回区间总体保持更优的精确率水平，对应AUPRC达到0.2801，也高于两种单模态模型。由于本研究中复发阳性样本占比仅约9.46%，类别不平衡较为明显，在这种情况下，PR曲线较ROC曲线更能反映模型对阳性类别的真实识别能力[<xref ref-type="bibr" rid="B6">6</xref>][<xref ref-type="bibr" rid="B7">7</xref>]。融合模型在PR曲线上的优势说明，其不仅扩大了真阳性样本的覆盖范围，而且在提升召回率的同时较好控制了无效阳性预测的增长。这意味着模型在识别更多复发病例的同时，并未出现过度激进的正类判断，从而在一定程度上抑制了正类识别过程中的噪声累积。换言之，Fusion-DualMeta在不平衡数据环境下实现了更合理的精确率–召回率平衡，这也是其AUPRC和F1值得以同步提升的重要原因。</p>
      </sec>
      <sec id="sec3dot3">
        <title>3.3. 漏检控制与融合增益</title>
        <p><xref ref-type="fig" rid="fig5">图5</xref>给出了三种模型在测试集上的混淆矩阵。结果显示，与Clinical-XGBoost和Image-EmbROI相比，Fusion-DualMeta将假阴性数量由8例降至1例，同时真阳性数量由9例提升至16例，说明融合模型对于复发样本的识别能力获得了实质性增强。从临床视角看，假阴性减少具有更为重要的现实意义，因为假阴性病例意味着真实高风险患者被误判为低风险，从而可能错失后续强化监测、辅助治疗调整或早期干预机会。融合模型仅保留1例假阴性，表明其在“尽量少漏检”的目标上取得了明显改进。虽然与此相伴，假阳性数量较单模态模型略有增加，但其特异度仍保持在0.7844，说明模型并未因追求高召回而导致阴性样本识别能力明显失衡。对于复发风险筛查任务而言，这种以小幅增加误检为代价，换取显著降低漏检的结果，更符合临床实际需求，也更具应用可接受性(见表4)。</p>
        <p><bold>Table 4.</bold> Performance gain of the fusion model over single-modality models</p>
        <p><bold>表</bold><bold>4.</bold> 融合模型相对单模态模型的性能增益</p>
        <table-wrap id="tbl4">
          <label>Table 4</label>
          <table>
            <tbody>
              <tr>
                <td>
                  <bold>Compare</bold>
                </td>
                <td>
                  <bold>AUC Gain</bold>
                </td>
                <td>
                  <bold>F1 Gain</bold>
                </td>
                <td>
                  <bold>Sensitivity Gain</bold>
                </td>
                <td>
                  <bold>MCC Gain</bold>
                </td>
              </tr>
              <tr>
                <td>Fusion-DualMeta vs Clinical-XGBoost</td>
                <td>+0.0673</td>
                <td>+0.1480</td>
                <td>+0.4118</td>
                <td>+0.2253</td>
              </tr>
              <tr>
                <td>Fusion-DualMeta vs Image-EmbROI</td>
                <td>+0.0902</td>
                <td>+0.1951</td>
                <td>+0.4118</td>
                <td>+0.2818</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <fig id="fig5">
          <label>Figure 5</label>
          <graphic xlink:href="https://html.hanspub.org/file/2625021-rId16.jpeg?20260408033250" />
        </fig>
        <p><bold>Figure 5.</bold> Confusion matrices of the three models on the test set</p>
        <p><bold>图</bold><bold>5.</bold> 三模型在测试集上的混淆矩阵对比</p>
      </sec>
    </sec>
    <sec id="sec4">
      <title>4. 讨论</title>
      <p>本文结果表明，临床信息与MRI视觉证据之间具有较好的互补性。临床分支主要反映年龄、分子标志物、分期分级及治疗信息等结构化风险特征，能够提供相对稳定的群体统计信息；影像分支则侧重病灶强化模式、边界及减影差异等局部表征，更有助于刻画肿瘤异质性[<xref ref-type="bibr" rid="B9">9</xref>]-[<xref ref-type="bibr" rid="B11">11</xref>]。Fusion-DualMeta通过对两类信息进行协同建模，在AUC、F1值、Sensitivity及MCC等指标上均优于单模态模型，说明多模态融合能够更全面地提取与乳腺癌复发相关的风险信息[<xref ref-type="bibr" rid="B12">12</xref>][<xref ref-type="bibr" rid="B13">13</xref>]。</p>
      <p>需要指出的是，本文采用元学习进行后融合，并非默认其在所有多模态任务中均优于其他策略，而是综合本研究数据特征后作出的方法学选择。对于特征拼接方法，其优势在于实现直接、流程简洁，但在本研究中，临床变量与深度影像表征维度差异较大，直接拼接容易带来维度膨胀、特征尺度不一致及过拟合风险。对于注意力机制融合，其优势在于能够动态建模不同模态的重要性分配，但这类方法往往依赖更充足的训练样本和更复杂的端到端优化流程；在阳性样本较少的条件下，训练稳定性和泛化能力可能受到影响。对于张量积融合，其可以更充分地表达跨模态高阶交互，但通常伴随参数规模快速增长，对样本量、算力和正则化设计要求更高。相比之下，本文所采用的后融合元学习方法以单模态输出概率及少量交互项作为输入，一方面降低了融合层参数复杂度，另一方面保留了各单模态模型的独立判别能力，并使最终模型具有更好的可解释性和部署便利性。</p>
      <p>从应用场景看，复发筛查任务中漏检代价通常高于误检代价，因此本文采用“召回优先 + 特异度下限”的阈值策略，而非单纯追求Accuracy最大化[<xref ref-type="bibr" rid="B6">6</xref>]-[<xref ref-type="bibr" rid="B8">8</xref>]。结果显示，在Specificity仍保持0.78以上的情况下，融合模型将假阴性由8例降至1例，表明该模型更有利于优先识别潜在高危患者。这种以较小误检代价换取明显漏检下降的结果，更符合乳腺癌术后高风险筛查的实际需求。</p>
      <p>本文仍存在一定局限。首先，研究仅基于单一公开队列完成验证，缺乏多中心外部测试；其次，影像预处理仍依赖病灶框注释，自动ROI提取的稳定性仍有提升空间；再次，本文将复发问题处理为二分类任务，尚未纳入时间到事件信息[<xref ref-type="bibr" rid="B10">10</xref>]-[<xref ref-type="bibr" rid="B13">13</xref>]。后续可结合外部验证、自动病灶定位以及生存分析建模等方向进一步完善。</p>
    </sec>
    <sec id="sec5">
      <title>5. 结论</title>
      <p>本文基于Duke-Breast-Cancer-MRI公开队列，建立了从临床变量预处理、ROI三通道输入构建到临床模型、影像模型及融合模型对比评估的完整流程。结果表明，在类别不平衡且强调漏检控制的条件下，Fusion-DualMeta相较于Clinical-XGBoost和Image-EmbROI取得了更优的AUC、F1值、Sensitivity和MCC，尤其在假阴性控制方面优势明显。</p>
      <p>综合方法复杂度、样本规模、不平衡特征及临床可解释性要求，本文认为基于元学习的后融合策略更适合当前研究场景。研究结果说明，临床结构化变量与MRI视觉特征具有明显互补性，多模态融合能够更有效地识别乳腺癌复发高风险患者，可为术后随访筛查和辅助决策提供具有应用潜力的技术支持。</p>
    </sec>
    <sec id="sec6">
      <title>致 谢</title>
      <p>感谢TCIA与Duke队列提供公开可用的数据资源，感谢相关开源工具和研究工作为本研究流程复现提供支持。</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <title>References</title>
      <ref id="B1">
        <label>1.</label>
        <citation-alternatives>
          <mixed-citation publication-type="journal">Saha, A., Harowicz, M.R., Grimm, L.J., Kim, C.E., Ghate, S.V., Walsh, R., <italic>et al</italic>. (2018) A Machine Learning Approach to Radiogenomics of Breast Cancer: A Study of 922 Subjects and 529 DCE-MRI Features. <italic>British Journal of Cancer</italic>, 119, 508-516. <pub-id pub-id-type="doi">10.1038/s41416-018-0185-8</pub-id><pub-id pub-id-type="pmid">30033447</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41416-018-0185-8">https://doi.org/10.1038/s41416-018-0185-8</ext-link></mixed-citation>
          <element-citation publication-type="journal">
            <person-group person-group-type="author">
              <string-name>Saha, A.</string-name>
              <string-name>Harowicz, M.R.</string-name>
              <string-name>Grimm, L.J.</string-name>
              <string-name>Kim, C.E.</string-name>
              <string-name>Ghate, S.V.</string-name>
              <string-name>Walsh, R.</string-name>
            </person-group>
            <year>2018</year>
            <article-title>A Machine Learning Approach to Radiogenomics of Breast Cancer: A Study of 922 Subjects and 529 DCE-MRI Features</article-title>
            <source>British Journal of Cancer</source>
            <volume>119</volume>
            <pub-id pub-id-type="doi">10.1038/s41416-018-0185-8</pub-id>
            <pub-id pub-id-type="pmid">30033447</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B2">
        <label>2.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">The Cancer Imaging Archive (TCIA) (2022) Duke-Breast-Cancer-MRI: Dynamic Contrast-Enhanced Magnetic Resonance Images of Breast Cancer Patients with Tumor Locations.</mixed-citation>
          <element-citation publication-type="other">
            <year>2022</year>
            <article-title>Duke-Breast-Cancer-MRI: Dynamic Contrast-Enhanced Magnetic Resonance Images of Breast Cancer Patients with Tumor Locations</article-title>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B3">
        <label>3.</label>
        <citation-alternatives>
          <mixed-citation publication-type="confproc">Chen, T. and Guestrin, C. (2016) XGBoost: A Scalable Tree Boosting System. <italic>Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</italic>, San Francisco, 13-17 August 2016, 785-794. <pub-id pub-id-type="doi">10.1145/2939672.2939785</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/2939672.2939785">https://doi.org/10.1145/2939672.2939785</ext-link></mixed-citation>
          <element-citation publication-type="confproc">
            <person-group person-group-type="author">
              <string-name>Chen, T.</string-name>
              <string-name>Guestrin, C.</string-name>
            </person-group>
            <year>2016</year>
            <article-title>XGBoost: A Scalable Tree Boosting System</article-title>
            <source>Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
            <fpage>785</fpage>
            <lpage>794</lpage>
            <pub-id pub-id-type="doi">10.1145/2939672.2939785</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B4">
        <label>4.</label>
        <citation-alternatives>
          <mixed-citation publication-type="confproc">Tan, M. and Le, Q.V. (2019) EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. <italic>Proceedings of the 36th International Conference on Machine Learning</italic>, Long Beach, 9-15 June 2019, 6105-6114.</mixed-citation>
          <element-citation publication-type="confproc">
            <person-group person-group-type="author">
              <string-name>Tan, M.</string-name>
              <string-name>Le, Q.V.</string-name>
            </person-group>
            <year>2019</year>
            <article-title>EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</article-title>
            <source>Proceedings of the 36th International Conference on Machine Learning</source>
            <fpage>6105</fpage>
            <lpage>6114</lpage>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B5">
        <label>5.</label>
        <citation-alternatives>
          <mixed-citation publication-type="confproc">Lin, T., Goyal, P., Girshick, R., He, K. and Dollar, P. (2017) Focal Loss for Dense Object Detection. 2017 <italic>IEEE International Conference on Computer Vision (ICCV)</italic>, Venice, 22-29 October 2017, 2980-2988. <pub-id pub-id-type="doi">10.1109/iccv.2017.324</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/iccv.2017.324">https://doi.org/10.1109/iccv.2017.324</ext-link></mixed-citation>
          <element-citation publication-type="confproc">
            <person-group person-group-type="author">
              <string-name>Lin, T.</string-name>
              <string-name>Goyal, P.</string-name>
              <string-name>Girshick, R.</string-name>
              <string-name>He, K.</string-name>
              <string-name>Dollar, P.</string-name>
            </person-group>
            <year>2017</year>
            <article-title>Focal Loss for Dense Object Detection</article-title>
            <source>2017 IEEE International Conference on Computer Vision (ICCV)</source>
            <fpage>2980</fpage>
            <lpage>2988</lpage>
            <pub-id pub-id-type="doi">10.1109/iccv.2017.324</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B6">
        <label>6.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Chicco, D. and Jurman, G. (2020) The Advantages of the Matthews Correlation Coefficient (MCC) over F1 Score and Accuracy in Binary Classification Evaluation. <italic>BMC Genomics</italic>, 21, Article No. 6. <pub-id pub-id-type="doi">10.1186/s12864-019-6413-7</pub-id><pub-id pub-id-type="pmid">31898477</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s12864-019-6413-7">https://doi.org/10.1186/s12864-019-6413-7</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Chicco, D.</string-name>
              <string-name>Jurman, G.</string-name>
            </person-group>
            <year>2020</year>
            <article-title>The Advantages of the Matthews Correlation Coefficient (MCC) over F1 Score and Accuracy in Binary Classification Evaluation</article-title>
            <source>BMC Genomics</source>
            <volume>21</volume>
            <elocation-id>6</elocation-id>
            <pub-id pub-id-type="doi">10.1186/s12864-019-6413-7</pub-id>
            <pub-id pub-id-type="pmid">31898477</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B7">
        <label>7.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">He, H. and Garcia, E.A. (2009) Learning from Imbalanced Data. <italic>IEEE Transactions on Knowledge and Data Engineering</italic>, 21, 1263-1284. <pub-id pub-id-type="doi">10.1109/tkde.2008.239</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/tkde.2008.239">https://doi.org/10.1109/tkde.2008.239</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Garcia, E.A.</string-name>
            </person-group>
            <year>2009</year>
            <article-title>Learning from Imbalanced Data</article-title>
            <source>IEEE Transactions on Knowledge and Data Engineering</source>
            <volume>21</volume>
            <fpage>1263</fpage>
            <lpage>1284</lpage>
            <pub-id pub-id-type="doi">10.1109/tkde.2008.239</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B8">
        <label>8.</label>
        <citation-alternatives>
          <mixed-citation publication-type="journal">Thakur, D., Gera, T., Bhardwaj, V., Mazen, R., Lasisi, A. and Engida, T. (2025) A Comparative Study on Advanced Predictive Modeling of Thyroid Cancer Recurrence Using Multi Algorithmic Machine Learning Frameworks. <italic>Scientific Reports</italic>, 16, Article No. 3385. https://doi.org/10.1038/s41598-025-33396-7 <pub-id pub-id-type="doi">10.1038/s41598-025-33396-7</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-025-33396-7">https://doi.org/10.1038/s41598-025-33396-7</ext-link></mixed-citation>
          <element-citation publication-type="journal">
            <person-group person-group-type="author">
              <string-name>Thakur, D.</string-name>
              <string-name>Gera, T.</string-name>
              <string-name>Bhardwaj, V.</string-name>
              <string-name>Mazen, R.</string-name>
              <string-name>Lasisi, A.</string-name>
              <string-name>Engida, T.</string-name>
            </person-group>
            <year>2025</year>
            <article-title>A Comparative Study on Advanced Predictive Modeling of Thyroid Cancer Recurrence Using Multi Algorithmic Machine Learning Frameworks</article-title>
            <source>Scientific Reports</source>
            <volume>16</volume>
            <elocation-id>3385</elocation-id>
            <pub-id pub-id-type="doi">10.1038/s41598-025-33396-7</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B9">
        <label>9.</label>
        <citation-alternatives>
          <mixed-citation publication-type="journal">Li, H., Zhu, Y., Burnside, E.S., Drukker, K., Hoadley, K.A., Fan, C., <italic>et al</italic>. (2016) MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of Mammaprint, Oncotype DX, and PAM50 Gene Assays. <italic>Radiology</italic>, 281, 382-391. https://doi.org/10.1148/radiol.2016152110 <pub-id pub-id-type="doi">10.1148/radiol.2016152110</pub-id><pub-id pub-id-type="pmid">27144536</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1148/radiol.2016152110">https://doi.org/10.1148/radiol.2016152110</ext-link></mixed-citation>
          <element-citation publication-type="journal">
            <person-group person-group-type="author">
              <string-name>Li, H.</string-name>
              <string-name>Zhu, Y.</string-name>
              <string-name>Burnside, E.S.</string-name>
              <string-name>Drukker, K.</string-name>
              <string-name>Hoadley, K.A.</string-name>
              <string-name>Fan, C.</string-name>
              <etal/>
            </person-group>
            <year>2016</year>
            <article-title>MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of Mammaprint, Oncotype DX, and PAM50 Gene Assays</article-title>
            <source>Radiology</source>
            <volume>281</volume>
            <fpage>382</fpage>
            <lpage>391</lpage>
            <pub-id pub-id-type="doi">10.1148/radiol.2016152110</pub-id>
            <pub-id pub-id-type="pmid">27144536</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B10">
        <label>10.</label>
        <citation-alternatives>
          <mixed-citation publication-type="journal">Koh, J., Lee, E., Han, K., Kim, S., Kim, D., Kwak, J.Y., <italic>et al</italic>. (2020) Three-Dimensional Radiomics of Triple-Negative Breast Cancer: Prediction of Systemic Recurrence. <italic>Scientific Reports</italic>, 10, Article No. 2976. https://doi.org/10.1038/s41598-020-59923-2 <pub-id pub-id-type="doi">10.1038/s41598-020-59923-2</pub-id><pub-id pub-id-type="pmid">32076078</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-020-59923-2">https://doi.org/10.1038/s41598-020-59923-2</ext-link></mixed-citation>
          <element-citation publication-type="journal">
            <person-group person-group-type="author">
              <string-name>Koh, J.</string-name>
              <string-name>Lee, E.</string-name>
              <string-name>Han, K.</string-name>
              <string-name>Kim, S.</string-name>
              <string-name>Kim, D.</string-name>
              <string-name>Kwak, J.Y.</string-name>
            </person-group>
            <year>2020</year>
            <article-title>Three-Dimensional Radiomics of Triple-Negative Breast Cancer: Prediction of Systemic Recurrence</article-title>
            <source>Scientific Reports</source>
            <volume>10</volume>
            <elocation-id>2976</elocation-id>
            <pub-id pub-id-type="doi">10.1038/s41598-020-59923-2</pub-id>
            <pub-id pub-id-type="pmid">32076078</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B11">
        <label>11.</label>
        <citation-alternatives>
          <mixed-citation publication-type="journal">Xu, K., Hua, M., Mai, T., Ren, X., Fang, X., Wang, C., <italic>et al</italic>. (2024) A Multiparametric MRI-Based Radiomics Model for Stratifying Postoperative Recurrence in Luminal B Breast Cancer. <italic>Journal of Imaging Informatics in Medicine</italic>, 37, 1475-1487. https://doi.org/10.1007/s10278-023-00923-9 <pub-id pub-id-type="doi">10.1007/s10278-023-00923-9</pub-id><pub-id pub-id-type="pmid">38424277</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10278-023-00923-9">https://doi.org/10.1007/s10278-023-00923-9</ext-link></mixed-citation>
          <element-citation publication-type="journal">
            <person-group person-group-type="author">
              <string-name>Xu, K.</string-name>
              <string-name>Hua, M.</string-name>
              <string-name>Mai, T.</string-name>
              <string-name>Ren, X.</string-name>
              <string-name>Fang, X.</string-name>
              <string-name>Wang, C.</string-name>
            </person-group>
            <year>2024</year>
            <article-title>A Multiparametric MRI-Based Radiomics Model for Stratifying Postoperative Recurrence in Luminal B Breast Cancer</article-title>
            <source>Journal of Imaging Informatics in Medicine</source>
            <volume>37</volume>
            <fpage>1475</fpage>
            <lpage>1487</lpage>
            <pub-id pub-id-type="doi">10.1007/s10278-023-00923-9</pub-id>
            <pub-id pub-id-type="pmid">38424277</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B12">
        <label>12.</label>
        <citation-alternatives>
          <mixed-citation publication-type="journal">Zhang, R., Wang, K., Wang, S., Wang, C., Cao, T., Ci, C., <italic>et al</italic>. (2025) Multimodal Deep Learning Model for Prediction of Breast Cancer Recurrence Risk and Correlation with Oncotype DX. <italic>Breast Cancer Research</italic>, 27, Article No. 178. https://doi.org/10.1186/s13058-025-02129-z <pub-id pub-id-type="doi">10.1186/s13058-025-02129-z</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s13058-025-02129-z">https://doi.org/10.1186/s13058-025-02129-z</ext-link></mixed-citation>
          <element-citation publication-type="journal">
            <person-group person-group-type="author">
              <string-name>Zhang, R.</string-name>
              <string-name>Wang, K.</string-name>
              <string-name>Wang, S.</string-name>
              <string-name>Wang, C.</string-name>
              <string-name>Cao, T.</string-name>
              <string-name>Ci, C.</string-name>
            </person-group>
            <year>2025</year>
            <article-title>Multimodal Deep Learning Model for Prediction of Breast Cancer Recurrence Risk and Correlation with Oncotype DX</article-title>
            <source>Breast Cancer Research</source>
            <volume>27</volume>
            <elocation-id>178</elocation-id>
            <pub-id pub-id-type="doi">10.1186/s13058-025-02129-z</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B13">
        <label>13.</label>
        <citation-alternatives>
          <mixed-citation publication-type="journal">Yu, Y., Ren, W., Mao, L., Ouyang, W., Hu, Q., Yao, Q., <italic>et al</italic>. (2025) MRI-Based Multimodal AI Model Enables Prediction of Recurrence Risk and Adjuvant Therapy in Breast Cancer. <italic>Pharmacological Research</italic>, 216, Article ID: 107765. https://doi.org/10.1016/j.phrs.2025.107765 <pub-id pub-id-type="doi">10.1016/j.phrs.2025.107765</pub-id><pub-id pub-id-type="pmid">40345352</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.phrs.2025.107765">https://doi.org/10.1016/j.phrs.2025.107765</ext-link></mixed-citation>
          <element-citation publication-type="journal">
            <person-group person-group-type="author">
              <string-name>Yu, Y.</string-name>
              <string-name>Ren, W.</string-name>
              <string-name>Mao, L.</string-name>
              <string-name>Ouyang, W.</string-name>
              <string-name>Hu, Q.</string-name>
              <string-name>Yao, Q.</string-name>
            </person-group>
            <year>2025</year>
            <article-title>MRI-Based Multimodal AI Model Enables Prediction of Recurrence Risk and Adjuvant Therapy in Breast Cancer</article-title>
            <source>Pharmacological Research</source>
            <volume>216</volume>
            <elocation-id>107765</elocation-id>
            <pub-id pub-id-type="doi">10.1016/j.phrs.2025.107765</pub-id>
            <pub-id pub-id-type="pmid">40345352</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
    </ref-list>
  </back>
</article>