Proceedings Paper

Network Dissection: Quantifying Interpretability of Deep Visual Representations

Publisher

IEEE
DOI: 10.1109/CVPR.2017.354

Funding

  1. National Science Foundation [1524817]
  2. Vannevar Bush Faculty Fellowship program - Basic Research Office of the Assistant Secretary of Defense for Research and Engineering
  3. Office of Naval Research [N00014-16-1-3116]
  4. MIT Big Data Initiative at CSAIL
  5. Toyota Research Institute / MIT CSAIL Joint Research Center
  6. Google Award
  7. Amazon Award
  8. Facebook Fellowship
  9. Division of Electrical, Communications & Cyber Systems, Directorate for Engineering [1532591] (National Science Foundation)
  10. Division of Information & Intelligent Systems, Directorate for Computer & Information Science & Engineering [1524817] (National Science Foundation)


We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are given labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability of units is equivalent to random linear combinations of units, then we apply our method to compare the latent representations of various networks when trained to solve different supervised and self-supervised training tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.
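The core scoring step described above, aligning a hidden unit with a semantic concept, can be sketched as follows. This is a minimal illustration, not the authors' released code: the paper thresholds each unit's activations at a high quantile (the top 0.5% of values across the dataset) and scores alignment against a concept's segmentation mask with intersection-over-union (IoU), labeling a unit as a detector for the concept when the IoU exceeds a small cutoff (0.04 in the paper). The function name, the synthetic inputs, and the NumPy formulation are assumptions for this sketch.

```python
import numpy as np

def dissect_unit(activations, concept_mask, quantile=0.995, iou_threshold=0.04):
    """Score one unit against one concept, Network-Dissection style.

    activations:  (N, H, W) float activation maps of the unit over N images
                  (assumed already upsampled to the mask resolution).
    concept_mask: (N, H, W) boolean ground-truth masks for the concept.
    Returns (iou, is_detector).
    """
    # Per-unit threshold chosen over the whole dataset so that only the
    # top (1 - quantile) fraction of activation values fires (paper: 0.5%).
    t = np.quantile(activations, quantile)
    unit_mask = activations > t

    # IoU between the unit's firing region and the concept's segmentation.
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    iou = inter / union if union else 0.0

    return iou, iou > iou_threshold
```

In the full method this score is computed for every (unit, concept) pair over the Broden concept dataset, and each unit is labeled with its best-matching concept; counting the distinct detected concepts then gives the interpretability measure compared across networks and training regimes.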

Authors

David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba
