Article

Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond

Journal

INFORMATION FUSION
Volume 77, Pages 29-52

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2021.07.016

Keywords

Explainable AI; Information fusion; Multi-domain information fusion; Weakly supervised learning; Medical image analysis

Funding

  1. Hangzhou Economic and Technological Development Area Strategical Grant [Imperial Institute of Advanced Technology]
  2. Project of Shenzhen International Cooperation Foundation [GJHZ20180926165402083]
  3. Clinical Research Project of Shenzhen Health and Family Planning Commission [SZLY2018018]
  4. European Research Council Innovative Medicines Initiative on Development of Therapeutics and Diagnostics Combatting Coronavirus Infections Award `DRAGON: rapiD [H2020-JTI-IMI2 101005122]
  5. AI for Health Imaging Award `CHAIMELEON: Accelerating the Lab to Market Transition of AI Tools for Cancer Management' [H2020-SC1-FA-DTS-2019-1 952172]
  6. British Heart Foundation [TG/18/5/34111, PG/16/78/32402]
  7. UK Research and Innovation [MR/V023799/1]

Abstract

XAI is an emerging research field in machine learning that aims to explain the decision-making process of AI systems. In healthcare, XAI is becoming increasingly important for improving the transparency and explainability of deep learning applications, since the lack of explainability in most AI systems remains a major barrier to the successful implementation of AI tools in clinical practice.
Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning that aims to unbox how the black-box decisions of AI systems are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been made, and this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these black-box models. XAI is becoming more and more crucial for deep learning powered applications, especially in medical and healthcare studies, even though such deep neural networks can in general deliver impressive performance. The insufficient explainability and transparency of most existing AI systems can be one of the major reasons why successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first surveyed the current progress of XAI and, in particular, its advances in healthcare applications. We then introduced our XAI solutions, which leverage multi-modal and multi-centre data fusion, and subsequently validated them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of our proposed XAI solutions, from which we envisage successful applications in a broader range of clinical questions.
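To make the idea of unboxing a black-box model concrete, the sketch below implements Grad-CAM, one widely used post-hoc saliency technique for convolutional networks. It is a generic illustration of the kind of visual explanation discussed above, not the authors' specific fusion-based method; the ResNet-18 backbone, the hooked layer, and the random stand-in input are all placeholder assumptions.

```python
# Minimal Grad-CAM sketch (PyTorch). A generic post-hoc XAI illustration,
# NOT the method proposed in the paper. The backbone (ResNet-18), the
# hooked layer (layer4) and the random input are placeholder assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # untrained stand-in model

store = {}

def save_activation(module, inputs, output):
    store["act"] = output.detach()           # feature maps from the hooked layer

def save_gradient(module, grad_input, grad_output):
    store["grad"] = grad_output[0].detach()  # gradients w.r.t. those feature maps

model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)              # stand-in for a single input image
logits = model(x)
target = int(logits.argmax(dim=1))           # explain the top-scoring class

model.zero_grad()
logits[0, target].backward()                 # backprop the class score only

# Weight each feature map by its spatially averaged gradient, sum, and ReLU.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)            # (1, C, 1, 1)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))   # (1, 1, 7, 7)

# Upsample to input resolution and normalize to [0, 1] for overlaying.
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # torch.Size([1, 1, 224, 224]) saliency heatmap
```

In a clinical setting, a heatmap of this kind would be overlaid on the input scan so that a clinician can check whether the model attended to clinically plausible regions, which is the kind of transparency the abstract argues is needed for routine clinical adoption.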
