Journal
COMPUTERS IN BIOLOGY AND MEDICINE
Volume 149, Issue -, Pages -
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compbiomed.2022.106043
Keywords
Explainable machine learning; Interpretable machine learning; Trustworthiness; Healthcare
Funding
- Qatar National Research Fund (a member of Qatar Foundation) [13S-0206-200273]
With the advent of machine learning (ML) and deep learning (DL) applications in critical domains such as healthcare, questions about the liability, trust, and interpretability of their outputs are rising. The black-box nature of many DL models is a roadblock to clinical utilization; to gain the trust of clinicians and patients, the decisions of these models must be explained. With the promise of enhancing the trust in and transparency of black-box models, researchers are maturing the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for various healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we discuss the ethical issues arising from the use of ML/DL in healthcare, and we describe how explainable and trustworthy ML can help resolve them. Finally, we elaborate on the limitations of existing approaches and highlight open research problems that require further development.
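To make concrete what a post-hoc, model-agnostic explanation of a black-box model looks like, here is a minimal illustrative sketch (not taken from the paper) using permutation feature importance from scikit-learn on synthetic tabular data; the clinical feature names are purely hypothetical.

```python
# Illustrative sketch (assumption: not the paper's method): post-hoc,
# model-agnostic explanation via permutation feature importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular clinical data (feature names are hypothetical).
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
feature_names = ["age", "bmi", "glucose", "bp", "hr"]  # hypothetical labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: the drop in held-out accuracy when each feature is
# shuffled, indicating which inputs the black-box model relies on.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Such feature-attribution output is one of the simplest ways to give clinicians a human-readable account of which inputs drive a model's decision.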