Article

On Interpretability of Artificial Neural Networks: A Survey

Journal

IEEE Transactions on Radiation and Plasma Medical Sciences

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TRPMS.2021.3066428

Keywords

Deep learning; interpretability; neural networks; survey

Funding

  1. Rensselaer-IBM AI Research Collaboration Program
  2. IBM AI Horizons Network
  3. NIH [R01 EB026646, R01 CA233888, R01 CA237267, R01 HL151561]

Abstract

Deep learning as performed by artificial deep neural networks (DNNs) has recently achieved great success in many important areas that deal with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles to their wide adoption in mission-critical applications such as medical diagnosis and therapy. Because of the huge potential of deep learning, the interpretability of DNNs has recently attracted much research attention. In this article, we propose a simple but comprehensive taxonomy for interpretability, systematically review recent studies on the interpretability of neural networks, describe applications of interpretability in medicine, and discuss future research directions, such as those relating to fuzzy logic and brain science.
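
One widely used family of post-hoc interpretability methods attributes a network's prediction to its input features via gradients. As a minimal illustrative sketch only (the toy model, dummy input, and function name below are hypothetical placeholders, not code from the paper), a vanilla gradient saliency map can be computed in a few lines of PyTorch:

```python
import torch
import torch.nn as nn

# Toy stand-in classifier; any differentiable model can be probed the same way.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

def gradient_saliency(model, x, target_class):
    """Return |d(logit of target_class) / d(input)|, a per-pixel importance map."""
    x = x.clone().requires_grad_(True)  # track gradients w.r.t. the input
    score = model(x)[0, target_class]   # scalar logit for the class of interest
    score.backward()                    # autograd fills in x.grad
    return x.grad.abs().squeeze(0)      # saliency map with the input's spatial shape

x = torch.rand(1, 1, 28, 28)            # dummy single-channel "image"
saliency = gradient_saliency(model, x, target_class=3)
print(saliency.shape)                   # torch.Size([1, 28, 28])
```

Pixels with large gradient magnitude are those to which the chosen class score is most sensitive; this sensitivity view is the basic intuition behind many of the attribution methods that such surveys categorize as post-hoc interpretation.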

Authors

Feng-Lei Fan, Jinjun Xiong, Mengzhou Li, and Ge Wang
