Article

Extraction of an Explanatory Graph to Interpret a CNN

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2020.2992207

Keywords

Feature extraction; Visualization; Neural networks; Semantics; Annotations; Task analysis; Training; Convolutional neural networks; graphical model; interpretable deep learning

Funding

  1. National Natural Science Foundation of China [U19B2043, 61906120]
  2. DARPA XAI Award [N66001-17-2-4029]
  3. NSF [IIS 1423305]
  4. ARO Project [W911NF1810296]


Abstract

This paper introduces an explanatory graph representation to reveal object parts encoded inside convolutional layers of a CNN. Given a pre-trained CNN, each filter in a conv-layer usually represents a mixture of object parts. We develop a simple yet effective method to learn an explanatory graph, which automatically disentangles object parts from each filter without any part annotations. Specifically, given the feature map of a filter, we mine neural activations from the feature map that correspond to different object parts. The explanatory graph is constructed to organize each mined part as a graph node. Each edge connects two nodes whose corresponding object parts usually co-activate and keep a stable spatial relationship. Experiments show that each graph node consistently represented the same object part across different images, which boosted the transferability of CNN features. The explanatory graph transferred features of object parts to the task of part localization, and our method significantly outperformed other approaches.
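The two steps sketched in the abstract — mining part-specific activation peaks from a filter's feature map, then adding graph edges between parts with a stable spatial relationship — can be illustrated with a toy NumPy sketch. This is a hypothetical simplification, not the authors' actual learning procedure: the function names, the peak-finding rule, and the displacement-variance threshold are all invented for illustration.

```python
import numpy as np

def mine_part_peaks(feature_map, rel_threshold=0.5):
    """Find local maxima in one filter's 2D feature map that exceed
    rel_threshold * global max -- a crude stand-in for mining the
    neural activations that correspond to different object parts."""
    h, w = feature_map.shape
    cutoff = rel_threshold * feature_map.max()
    peaks = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = feature_map[i - 1:i + 2, j - 1:j + 2]
            if feature_map[i, j] >= cutoff and feature_map[i, j] == window.max():
                peaks.append((i, j))
    return peaks

def stable_edges(node_positions, max_disp_var=2.0):
    """Connect two part nodes when the variance of their relative
    displacement across images is small (a proxy for the paper's
    'stable spatial relationship'). node_positions maps a node id to
    one (row, col) position per image; co-activation in every image
    is assumed here for simplicity."""
    ids = sorted(node_positions)
    edges = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            disp = np.asarray(node_positions[a]) - np.asarray(node_positions[b])
            if disp.var(axis=0).sum() <= max_disp_var:
                edges.append((a, b))
    return edges
```

In the actual method, node identities and spatial relations are learned jointly across many images; the sketch above fixes node identities in advance to keep the co-activation and stability criteria visible.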


