Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 43, Issue 11, Pages 3863-3877
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2020.2992207
Keywords
Feature extraction; Visualization; Neural networks; Semantics; Annotations; Task analysis; Training; Convolutional neural networks; graphical model; interpretable deep learning
Funding
- National Natural Science Foundation of China [U19B2043, 61906120]
- DARPA XAI Award [N66001-17-2-4029]
- NSF [IIS 1423305]
- ARO Project [W911NF1810296]
Abstract
This paper introduces an explanatory graph representation to reveal object parts encoded inside convolutional layers of a CNN. Given a pre-trained CNN, each filter in a conv-layer usually represents a mixture of object parts. We develop a simple yet effective method to learn an explanatory graph, which automatically disentangles object parts from each filter without any part annotations. Specifically, given the feature map of a filter, we mine neural activations from the feature map that correspond to different object parts. The explanatory graph is constructed to organize each mined part as a graph node. Each edge connects two nodes whose corresponding object parts usually co-activate and keep a stable spatial relationship. Experiments show that each graph node consistently represented the same object part across different images, which boosted the transferability of CNN features. The explanatory graph transferred features of object parts to the task of part localization, and our method significantly outperformed other approaches.
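The abstract's core idea, that two mined parts are linked when they co-activate and keep a stable spatial relationship across images, can be illustrated with a toy sketch. This is not the paper's actual algorithm (which learns a probabilistic graphical model from activation patterns); the peak-picking and the standard-deviation stability test below are simplifying assumptions for illustration only.

```python
import numpy as np

def peak_location(feature_map):
    """Return the (row, col) of the strongest activation in a 2-D feature map."""
    return np.unravel_index(np.argmax(feature_map), feature_map.shape)

def build_explanatory_graph(feature_maps, offset_std_thresh=1.0):
    """Toy graph construction (illustrative, not the paper's method).

    feature_maps: array of shape (n_images, n_filters, H, W).
    Nodes: one candidate part per filter (its activation peak).
    Edges: filter pairs whose peak-to-peak offset stays nearly
    constant across images, i.e. a stable spatial relationship.
    """
    n_images, n_filters = feature_maps.shape[:2]
    # Peak position of each filter's candidate part in each image.
    peaks = np.array([[peak_location(feature_maps[i, f])
                       for f in range(n_filters)]
                      for i in range(n_images)], dtype=float)
    edges = []
    for a in range(n_filters):
        for b in range(a + 1, n_filters):
            offsets = peaks[:, a] - peaks[:, b]  # per-image displacement
            # A small spread in the displacement means the two parts
            # co-occur at a stable relative position -> add an edge.
            if offsets.std(axis=0).max() <= offset_std_thresh:
                edges.append((a, b))
    return list(range(n_filters)), edges
```

In this sketch, two filters whose peaks always sit at the same relative offset (e.g. an eye and a nose detector) become connected nodes, while a filter whose peak jumps around the image stays isolated, mirroring the disentangling described above at a much cruder level.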