Article

Extraction of an Explanatory Graph to Interpret a CNN

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2020.2992207

Keywords

Feature extraction; Visualization; Neural networks; Semantics; Annotations; Task analysis; Training; Convolutional neural networks; graphical model; interpretable deep learning

Funding

  1. National Natural Science Foundation of China [U19B2043, 61906120]
  2. DARPA XAI Award [N66001-17-2-4029]
  3. NSF [IIS 1423305]
  4. ARO Project [W911NF1810296]

Abstract

This paper introduces an explanatory graph representation to reveal object parts encoded inside convolutional layers of a CNN. Given a pre-trained CNN, each filter in a conv-layer usually represents a mixture of object parts. We develop a simple yet effective method to learn an explanatory graph, which automatically disentangles object parts from each filter without any part annotations. Specifically, given the feature map of a filter, we mine neural activations from the feature map that correspond to different object parts. The explanatory graph is constructed to organize each mined part as a graph node. Each edge connects two nodes whose corresponding object parts usually co-activate and keep a stable spatial relationship. Experiments showed that each graph node consistently represented the same object part across different images, which boosted the transferability of CNN features. The explanatory graph transferred features of object parts to the task of part localization, and our method significantly outperformed other approaches.
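To make the construction concrete, below is a minimal, hypothetical Python sketch of the two steps the abstract describes: mining peak activations from a filter's feature map as part candidates, and linking candidates whose co-activations keep a stable spatial offset across images. The function names, thresholds, and peak-picking heuristic are all invented for illustration; the paper learns the graph from data rather than applying fixed rules like these.

# Illustrative sketch only: a toy approximation of building an "explanatory
# graph" from conv-layer activations. The paper mines parts and learns edge
# parameters from data; here, simple peak picking and offset statistics
# serve as stand-ins. All names and thresholds are hypothetical.
import itertools
import numpy as np

def mine_peaks(feature_map, threshold=0.5):
    """Return (filter_index, y, x) triples for local activation peaks.

    feature_map: array of shape (num_filters, H, W) from one image.
    A unit is kept if it exceeds `threshold` times the filter's max and is
    the maximum within its 3x3 neighborhood (a crude part-candidate miner).
    """
    peaks = []
    num_filters, H, W = feature_map.shape
    for f in range(num_filters):
        fmap = feature_map[f]
        cutoff = threshold * fmap.max()
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                patch = fmap[y - 1:y + 2, x - 1:x + 2]
                if fmap[y, x] >= cutoff and fmap[y, x] == patch.max():
                    peaks.append((f, y, x))
    return peaks

def build_graph(all_peaks, offset_std_max=1.5, min_cooccur=0.5):
    """Connect part candidates whose spatial offsets are stable across images.

    all_peaks: one dict per image mapping filter index -> strongest peak
    (y, x). Two filters get an edge when they co-activate in at least
    `min_cooccur` of the images and the standard deviation of their
    relative offset stays below `offset_std_max` feature-map units.
    """
    filters = set(itertools.chain.from_iterable(p.keys() for p in all_peaks))
    edges = []
    for f1, f2 in itertools.combinations(sorted(filters), 2):
        offsets = [np.subtract(p[f1], p[f2]) for p in all_peaks
                   if f1 in p and f2 in p]
        if len(offsets) / len(all_peaks) < min_cooccur:
            continue  # the two parts rarely co-activate
        if np.linalg.norm(np.std(offsets, axis=0)) < offset_std_max:
            edges.append((f1, f2))  # stable spatial relationship -> edge
    return sorted(filters), edges

# Toy usage with random arrays standing in for real CNN feature maps.
rng = np.random.default_rng(0)
images = [rng.random((8, 14, 14)) for _ in range(20)]
all_peaks = []
for fm in images:
    best = {}
    for f, y, x in mine_peaks(fm):
        if f not in best or fm[f, y, x] > fm[f, best[f][0], best[f][1]]:
            best[f] = (y, x)
    all_peaks.append(best)
nodes, edges = build_graph(all_peaks)
print(f"{len(nodes)} part nodes, {len(edges)} stable edges")

In practice, feature maps from a pre-trained CNN would replace the random arrays, and the resulting nodes would be organized layer by layer so that each node can be matched to the same object part across different images.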
