Journal
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017)
Pages 3319-3327
Publisher
IEEE
DOI: 10.1109/CVPR.2017.354
Funding
- National Science Foundation [1524817] (Div. of Information & Intelligent Systems, Directorate for Computer & Information Science & Engineering)
- National Science Foundation [1532591] (Div. of Electrical, Communications & Cyber Systems, Directorate for Engineering)
- Vannevar Bush Faculty Fellowship program, Basic Research Office of the Assistant Secretary of Defense for Research and Engineering
- Office of Naval Research [N00014-16-1-3116]
- MIT Big Data Initiative at CSAIL
- Toyota Research Institute / MIT CSAIL Joint Research Center
- Google Award
- Amazon Award
- Facebook Fellowship
Abstract
We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are given labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability of units is equivalent to random linear combinations of units, then we apply our method to compare the latent representations of various networks when trained to solve different supervised and self-supervised training tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.
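The core scoring step described in the abstract — aligning each hidden unit with the visual concept whose segmentation masks best overlap the unit's thresholded activation maps — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' released implementation: the function names, data layout, and the specific activation quantile and IoU cutoff used below are assumptions (the paper thresholds each unit at a high activation quantile and accepts a label when the intersection-over-union score is large enough).

```python
import numpy as np

def unit_concept_iou(activations, concept_masks, quantile=0.995):
    """Score one unit against each concept by the IoU between the unit's
    thresholded (upsampled) activation maps and the concept's masks.

    activations: (N, H, W) activation maps for one unit over N images
    concept_masks: dict mapping concept name -> (N, H, W) boolean masks
    """
    # Threshold at a high activation quantile computed over the whole
    # dataset, so the unit's "on" region is a small top fraction of
    # all spatial locations (an assumption mirroring the paper's setup).
    t = np.quantile(activations, quantile)
    unit_mask = activations > t
    scores = {}
    for name, mask in concept_masks.items():
        inter = np.logical_and(unit_mask, mask).sum()
        union = np.logical_or(unit_mask, mask).sum()
        scores[name] = inter / union if union else 0.0
    return scores

def label_unit(activations, concept_masks, min_iou=0.04, quantile=0.995):
    """Assign the best-matching concept label, or None if no concept
    clears the minimum IoU cutoff (0.04 here is illustrative)."""
    scores = unit_concept_iou(activations, concept_masks, quantile)
    name, iou = max(scores.items(), key=lambda kv: kv[1])
    return (name, iou) if iou >= min_iou else (None, iou)
```

In this sketch, a unit is called "interpretable" when some concept's masks consistently cover its highest-activation regions; repeating the procedure over every unit in every convolutional layer yields the per-network interpretability profiles the abstract refers to.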
Authors
David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba