Journal
BIOMEDICAL SIGNAL PROCESSING AND CONTROL
Volume 84, Issue -, Pages -
Publisher
ELSEVIER SCI LTD
DOI: 10.1016/j.bspc.2023.104979
Keywords
Explainable AI; CNV; Biomarker; Deep learning; Breast cancer subtypes; Gradient
Breast cancer is a leading cause of cancer-related deaths among women. Multi-omic data have revolutionized the methods used to unravel molecular heterogeneity in breast cancer. Because the genetic variations captured in Copy Number Variation (CNV) data are considered the most stable among multi-omic data types, they yield robust biomarkers. This paper therefore targets the discovery of a set of CNV biomarkers for dissecting this heterogeneity. Existing algorithms yield biomarker sets too large to interpret clinically. To address this, we propose XAI-CNVMarker, an explainable-AI-based post-hoc biomarker discovery framework that identifies a small set of interpretable biomarkers. We exploit the power of deep learning to build DLmodel, a deep learning model for breast cancer classification. The trained model is then analyzed using different explainable AI methods to arrive at a set of 44 CNV biomarkers. Using 5-fold cross-validation, we obtained a classification accuracy of 0.712 (+/- 0.048) at a 95% confidence interval. Gene set analysis revealed 37 subtype-specific enriched Reactome and KEGG pathways, 21 druggable genes, and 13 biomarkers linked with prognostic outcome. Finally, we validated the efficacy of the identified biomarkers on METABRIC. The proposed framework thus demonstrates the role of explainable AI in discovering clinically reliable biomarkers.
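The pipeline the abstract describes (train a classifier on CNV features, apply post-hoc attribution to rank genes, report 5-fold cross-validated accuracy) can be sketched in miniature. This is not the authors' DLmodel or their XAI methods: it is a minimal illustration using a logistic-regression stand-in, permutation importance as a generic post-hoc attribution method, and synthetic data in place of TCGA/METABRIC CNV matrices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a CNV matrix: rows are samples, columns are genes;
# the binary label stands in for a breast-cancer subtype (hypothetical data).
X = rng.normal(size=(200, 50))
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # signal planted in first 3 "genes"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Post-hoc attribution: permutation importance ranks input features by how
# much shuffling each one degrades held-out accuracy; top-ranked features
# play the role of candidate biomarkers.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("candidate biomarker indices:", top)

# Report accuracy with 5-fold cross-validation, as in the paper.
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

The paper's framework replaces each stand-in with a heavier component (a deep network, multiple XAI attribution methods, real CNV data), but the train/attribute/validate loop has the same shape.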