Article

Uncovering the Black Box of Coronary Artery Disease Diagnosis: The Significance of Explainability in Predictive Models

Journal

APPLIED SCIENCES-BASEL
Volume 13, Issue 14

Publisher

MDPI
DOI: 10.3390/app13148120

Keywords

explainability; coronary artery disease; machine learning; computer-aided diagnosis


This article introduces an explainable computer-aided diagnosis system that can help medical experts accurately diagnose cardiovascular diseases, relieving the burden on the National Healthcare Service. The study utilizes a dataset of biometric and clinical information from 571 patients to analyze the prediction process and the significance of each input datum. The findings are compared with the medical literature to evaluate the validity of the prediction process.
Featured Application

An explainable computer-aided diagnosis system for cardiovascular diseases can be a valuable tool for primary health care. Medical experts can utilize such tools to pinpoint unhealthy patients accurately and early, hence decongesting the National Healthcare Service (NHS).

Abstract

In recent times, coronary artery disease (CAD) prediction and diagnosis have been the subject of many medical decision support systems (MDSS) that make use of machine learning (ML) and deep learning (DL) algorithms. The common ground of most of these applications is that they function as black boxes: they reach a conclusion/diagnosis using multiple features as input, yet the user is oftentimes oblivious to the prediction process and the feature weights that lead to the eventual prediction. The primary objective of this study is to enhance the transparency and comprehensibility of a black-box prediction model designed for CAD. The dataset employed in this research comprises biometric and clinical information obtained from 571 patients, encompassing 21 different features; CAD was confirmed in 43% of these cases through invasive coronary angiography (ICA). A prediction model built on this dataset with the CatBoost algorithm is analyzed to highlight its prediction-making process and the significance of each input datum. State-of-the-art explainability mechanisms are employed to quantify the significance of each feature, and common patterns and differences with respect to the medical literature are then discussed. Moreover, the findings are compared with established risk factors for CAD to evaluate the prediction process from the medical expert's point of view. By depicting how the algorithm weights the information contained in the features, we shed light on the black-box mechanics of ML prediction models, and by analyzing the findings we explore their validity in accordance with the medical literature on the matter.
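For readers who want a concrete picture of the kind of pipeline the abstract describes, the sketch below trains a CatBoost classifier on synthetic tabular data of the same shape as the study's dataset (571 patients, 21 features) and ranks features by mean absolute SHAP attribution. The abstract does not name the explainability method, so using SHAP values (as computed by CatBoost itself) is an assumption made here for illustration, and the feature names and data are placeholders rather than the study's clinical variables.

```python
# Minimal illustrative sketch, not the authors' actual pipeline: a CatBoost
# classifier on tabular data followed by per-prediction feature attributions.
# SHAP values are an assumed stand-in for the unnamed explainability method;
# the data and feature names below are synthetic placeholders.
import numpy as np
from catboost import CatBoostClassifier, Pool

rng = np.random.default_rng(0)

# Synthetic stand-in for the 571-patient, 21-feature dataset.
n_patients, n_features = 571, 21
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)  # 1 = CAD confirmed by ICA, 0 = no CAD
feature_names = [f"feature_{i}" for i in range(n_features)]  # hypothetical names

train_pool = Pool(X, y, feature_names=feature_names)

# Gradient-boosted decision trees on the tabular data.
model = CatBoostClassifier(iterations=300, depth=4, verbose=False)
model.fit(train_pool)

# Per-sample SHAP attributions; CatBoost returns the expected (baseline)
# prediction as the last column, which is dropped before ranking features.
shap_values = model.get_feature_importance(train_pool, type="ShapValues")[:, :-1]

# Rank features by mean absolute attribution, i.e. their overall influence.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

A ranking of this sort is what the study then compares against established clinical risk factors for CAD to judge whether the model's reasoning aligns with the medical literature.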
