Article

A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TNNLS.2020.3027314

Keywords

Artificial neural networks; Heating systems; Biomedical imaging; Learning systems; Reliability; Prediction algorithms; Visualization; Explainable artificial intelligence (XAI); interpretability; machine learning (ML); medical information system; survey

Funding

  1. Health-AI Division
  2. DAMO Academy
  3. Alibaba Group Holding Ltd., through the Alibaba-NTU Talent Program

Abstract

Artificial intelligence and machine learning have shown remarkable performance in many fields, but the challenge of interpretability remains. The medical sector in particular requires a higher level of interpretability to ensure the reliability of machine decisions, and a deeper understanding of the mechanisms behind machine algorithms is needed to advance medical practice.
Recently, artificial intelligence and machine learning have demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with research progress, they have spread into many different fields and disciplines, some of which, such as the medical sector, require a high level of accountability and thus transparency. Explanations for machine decisions and predictions are therefore needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanisms underlying the algorithms. Unfortunately, the black-box nature of DL remains unresolved, and many machine decisions are still poorly understood. We review the notions of interpretability proposed by different research works and categorize them. The categories reflect different dimensions of interpretability research, from approaches that provide directly interpretable information to studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that 1) clinicians and practitioners can approach these methods with caution, 2) insight into interpretability will grow out of more consideration for medical practice, and 3) initiatives for data-based, mathematically grounded, and technically grounded medical education will be encouraged.
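
As a concrete illustration of the kind of explanation method the survey categorizes, the sketch below computes a gradient-based saliency map, a common visualization-style technique. It is a minimal, hypothetical example, not code from the paper: the untrained torchvision resnet18 and the random input tensor are placeholders for a trained clinical model and a real image.

```python
import torch
import torchvision.models as models

# Placeholder model and input; a trained clinical classifier and an actual
# image would be used in practice.
model = models.resnet18(weights=None)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a medical image

# Forward pass; take the top predicted class.
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate the top-class score to the input pixels.
logits[0, top_class].backward()

# The saliency map is the per-pixel gradient magnitude: large values mark
# pixels whose change would most affect the prediction, which is the visual
# "explanation" this family of methods provides.
saliency = image.grad.abs().max(dim=1)[0]
print(saliency.shape)  # torch.Size([1, 224, 224])
```

Saliency maps of this kind belong to the approaches that provide directly interpretable (visual) information, as opposed to the studies of complex internal patterns mentioned in the abstract.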

Authors

Erico Tjoa; Cuntai Guan

Main rating: 4.7 (insufficient ratings)
