Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 32, Issue 11, Pages 4793-4813
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2020.3027314
Keywords
Artificial neural networks; Heating systems; Biomedical imaging; Learning systems; Reliability; Prediction algorithms; Visualization; Explainable artificial intelligence (XAI); interpretability; machine learning (ML); medical information system; survey
Funding
- Health-AI Division
- DAMO Academy
- Alibaba Group Holding Ltd., through the Alibaba-NTU Talent Program
Artificial intelligence and machine learning have shown remarkable performance in various fields, but interpretability remains a challenge. The medical sector requires a higher level of interpretability to ensure the reliability of machine decisions, and a deeper understanding of the mechanisms behind machine algorithms is needed to advance medical practice.
Recently, artificial intelligence and machine learning in general have demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with research progress, they have encroached upon many different fields and disciplines. Some of these, such as the medical sector, require a high level of accountability and thus transparency. Explanations for machine decisions and predictions are therefore needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanisms underlying the algorithms. Unfortunately, the black-box nature of DL is still unresolved, and many machine decisions are still poorly understood. We provide a review of interpretability methods suggested by different research works and categorize them. The different categories show different dimensions in interpretability research, from approaches that provide obviously interpretable information to studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that: 1) clinicians and practitioners can subsequently approach these methods with caution; 2) insight into interpretability will develop with more consideration for medical practice; and 3) initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are encouraged.
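One family of methods the abstract alludes to, approaches that "provide obviously interpretable information," includes gradient-based saliency, which attributes a model's prediction to its input features. The sketch below is a minimal, hypothetical illustration (the tiny linear "model" and feature values are assumptions for demonstration, not anything from the survey itself): for a linear score f(x) = w·x, the input gradient is simply w, so the magnitude of each weight measures that feature's local influence on the output.

```python
import numpy as np

def saliency(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Gradient-based saliency for a linear score f(x) = w . x.

    For a linear model the gradient of f with respect to x is the
    weight vector itself, so |w_i| is the saliency of feature i.
    (For a deep network one would backpropagate to the input instead.)
    """
    grad = weights  # d(w . x)/dx = w for a linear model
    return np.abs(grad)

# Hypothetical toy model with three input features.
weights = np.array([0.5, -2.0, 0.1])
x = np.array([1.0, 1.0, 1.0])

s = saliency(weights, x)
most_salient = int(np.argmax(s))  # index of the most influential feature
```

For deep models, the same idea generalizes to computing the gradient of the output with respect to the input (as in saliency maps for image classifiers), which is what makes such explanations visually inspectable by clinicians.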