Review

Transparency of deep neural networks for medical image analysis: A review of interpretability methods

Journal

Computers in Biology and Medicine
Volume 140

Publisher

Pergamon-Elsevier Science Ltd
DOI: 10.1016/j.compbiomed.2021.105111

Keywords

Explainable artificial intelligence; Medical imaging; Explainability; Interpretability; Deep neural networks

Funding

  1. ERC Advanced Grant (ERC-ADG-2015) [694812]
  2. European Union's Horizon 2020 research and innovation programme [766276, 952172, 952103, UM 2017-8295, 10103434]


AI is increasingly used in clinical applications for diagnosis and treatment decisions, with deep neural networks matching or exceeding clinician performance on many tasks. However, their lack of interpretability calls for methods that ensure their trustworthiness. This review identifies nine types of interpretability methods for understanding deep learning models in medical image analysis and surveys ongoing research on improving the interpretability of deep neural networks and on evaluating the explanations they produce.

Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown the same or better performance than clinicians in many tasks, owing to the rapid increase in available data and computational power. To conform to the principles of trustworthy AI, an AI system must be transparent, robust, and fair, and must ensure accountability. Current deep neural solutions are referred to as black boxes because of the limited understanding of their decision-making process. Therefore, the interpretability of deep neural networks must be ensured before they can be incorporated into the routine clinical workflow. In this narrative review, we used systematic keyword searches and domain expertise to identify nine different types of interpretability methods that have been applied to deep learning models for medical image analysis, categorized by the type of explanation generated and by technical similarity. Furthermore, we report the progress made towards evaluating the explanations produced by the various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions concerning the interpretability of deep neural networks for medical image analysis.
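
For illustration only (not drawn from the paper itself): the sketch below shows one widely used post-hoc interpretability technique, vanilla gradient saliency, which attributes a network's prediction to individual input pixels. It assumes PyTorch and torchvision (the weights=None constructor requires torchvision >= 0.13); the ResNet-18 model and the random tensor are placeholders standing in for a trained diagnostic network and a preprocessed medical image.

    import torch
    import torchvision.models as models

    # Placeholder network; a real use case would load the trained diagnostic model.
    model = models.resnet18(weights=None)
    model.eval()

    # Dummy tensor standing in for a preprocessed medical image (1 x 3 x 224 x 224).
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    # Forward pass; take the logit of the top predicted class.
    logits = model(image)
    score = logits[0, logits.argmax(dim=1)]

    # Backward pass: gradient of the class score with respect to the input pixels.
    score.backward()

    # Saliency map: maximum absolute gradient across colour channels, one value per pixel.
    saliency = image.grad.abs().max(dim=1).values.squeeze()
    print(saliency.shape)  # torch.Size([224, 224])

Saliency maps of this kind are typically overlaid on the input image so that clinicians can judge whether the regions driving a prediction are clinically plausible.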

