Article

A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TNNLS.2020.3027314

Keywords

Artificial neural networks; Heating systems; Biomedical imaging; Learning systems; Reliability; Prediction algorithms; Visualization; Explainable artificial intelligence (XAI); interpretability; machine learning (ML); medical information system; survey

Funding

  1. Health-AI Division
  2. DAMO Academy
  3. Alibaba Group Holding Ltd., through the Alibaba-NTU Talent Program


Artificial intelligence and machine learning have shown remarkable performance in many fields, but interpretability remains a challenge. The medical sector in particular requires a high level of interpretability to ensure the reliability of machine decisions, which in turn demands a deeper understanding of the mechanisms behind machine learning algorithms.
Recently, artificial intelligence and machine learning in general have demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with this research progress, they have encroached upon many different fields and disciplines, some of which, such as the medical sector, require a high level of accountability and thus transparency. Explanations for machine decisions and predictions are therefore needed to justify their reliability. This requires greater interpretability, which often means we must understand the mechanisms underlying the algorithms. Unfortunately, the black-box nature of DL remains unresolved, and many machine decisions are still poorly understood. We provide a review of the interpretability methods suggested by different research works and categorize them. The categories reflect different dimensions of interpretability research, from approaches that provide obviously interpretable information to studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that: 1) clinicians and practitioners can approach these methods with caution; 2) insights into interpretability will arise with greater consideration for medical practice; and 3) initiatives to push forward data-based, mathematically grounded, and technically grounded medical education will be encouraged.

Authors

Erico Tjoa; Cuntai Guan
