Article

Deep learning radiopathomics based on preoperative US images and biopsy whole slide images can distinguish between luminal and non-luminal tumors in early-stage breast cancers

Journal

EBioMedicine
Volume 94, Issue -, Pages -

Publisher

Elsevier
DOI: 10.1016/j.ebiom.2023.104706

Keywords

Breast cancer; Ultrasound; Whole slide imaging; Deep learning


A deep learning radiopathomics model was developed to predict the molecular subtypes of early-stage breast cancers using preoperative ultrasound images and biopsy slides. The model showed excellent diagnostic performance in both internal validation and external testing, outperforming other deep learning models based on ultrasound images or biopsy slides alone.
Background: For patients with early-stage breast cancer, neoadjuvant treatment is recommended for non-luminal tumors rather than luminal tumors. Preoperatively distinguishing luminal from non-luminal cancers at early stages would therefore facilitate treatment decision making. However, the molecular immunohistochemical subtypes determined from biopsy specimens are not always consistent with the final results from surgical specimens, owing to high intra-tumoral heterogeneity. We therefore aimed to develop and validate a deep learning radiopathomics (DLRP) model to preoperatively distinguish luminal from non-luminal breast cancers at early stages based on preoperative ultrasound (US) images and hematoxylin and eosin (H&E)-stained biopsy slides.

Methods: This multicentre study included three cohorts from a prospective study conducted by our team and registered on the Chinese Clinical Trial Registry (ChiCTR1900027497). Between January 2019 and August 2021, 1809 US images and 603 H&E-stained whole slide images (WSIs) were obtained from 603 patients with early-stage breast cancer. A ResNet18 model pre-trained on ImageNet and a multi-instance learning (MIL) based attention model were used to extract features from the US images and the WSIs, respectively. A US-guided Co-Attention (UCA) module was designed to fuse the US and WSI features. The DLRP model was constructed from three feature sets (deep learning US features, deep learning WSI features, and UCA-fused features) using a training cohort (1467 US images and 489 WSIs from 489 patients). Its diagnostic performance was validated in an internal validation cohort (342 US images and 114 WSIs from 114 patients) and an external test cohort (270 US images and 90 WSIs from 90 patients). We also compared the diagnostic efficacy of the DLRP model with that of a deep learning radiomics model and a deep learning pathomics model in the external test cohort.

Findings: The DLRP model yielded high performance, with area under the curve (AUC) values of 0.929 (95% CI 0.865-0.968) in the internal validation cohort and 0.900 (95% CI 0.819-0.953) in the external test cohort. In the external test cohort, the DLRP model also outperformed the deep learning radiomics model based on US images alone (AUC 0.815 [0.719-0.889], p = 0.027) and the deep learning pathomics model based on WSIs alone (AUC 0.802 [0.704-0.878], p = 0.013).

Interpretation: The DLRP model can effectively distinguish luminal from non-luminal breast cancers at early stages before surgery, based on pretherapeutic US images and biopsy H&E-stained WSIs, providing a tool to facilitate treatment decision making in early-stage breast cancer.
