Article

Methods for interpreting and understanding deep neural networks

Journal

DIGITAL SIGNAL PROCESSING
Volume 73, Pages 1-15

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.dsp.2017.10.011

Keywords

Deep neural networks; Activation maximization; Sensitivity analysis; Taylor decomposition; Layer-wise relevance propagation

Funding

  1. National Research Foundation of Korea
  2. Institute for Information & Communications Technology Promotion (IITP) - Korea Government [2017-0-00451]
  3. Deutsche Forschungsgemeinschaft (DFG) [MU 987/17-1]
  4. German Ministry for Education and Research as Berlin Big Data Center (BBDC) [01IS14013A]
  5. Institute for Information & Communication Technology Planning & Evaluation (IITP), Republic of Korea [2017-0-00451-002] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)


This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but it is sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks for making the most efficient use of it on real data. (C) 2017 The Authors. Published by Elsevier Inc.
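The LRP technique highlighted in the abstract redistributes a network's output score backward through the layers so that the total relevance is (approximately) conserved at each step. As an illustration only (not code from the paper), here is a minimal numpy sketch of the commonly used LRP-ε rule for a single linear layer; the function name and toy data are assumptions:

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Illustrative LRP-epsilon rule for one linear layer z = a @ W + b.

    Relevance R_out at the layer's outputs is redistributed to its inputs:
        R_j = a_j * sum_k W[j, k] * R_out[k] / (z_k + eps * sign(z_k))
    The eps term stabilizes the division when z_k is near zero.
    """
    z = a @ W + b                              # pre-activations, shape (k,)
    stab = eps * np.where(z >= 0, 1.0, -1.0)   # signed stabilizer, never zero
    s = R_out / (z + stab)                     # per-output relevance "messages"
    return a * (W @ s)                         # input relevance, shape (j,)

# Toy example: 3 inputs, 2 outputs. With b = 0 and a small eps, the total
# relevance is approximately conserved: sum(R_in) is close to sum(R_out).
rng = np.random.default_rng(0)
a = rng.random(3)
W = rng.standard_normal((3, 2))
R_out = np.array([1.0, 0.5])
R_in = lrp_epsilon(a, W, np.zeros(2), R_out)
```

Applying such a rule layer by layer, from the output back to the input, yields a relevance map over the input features; the paper discusses variants of this rule and practical recommendations for choosing among them.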

Authors

Grégoire Montavon; Wojciech Samek; Klaus-Robert Müller

Reviews

Primary Rating

4.6 (not enough ratings)

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -
