Journal
IEEE TRANSACTIONS ON MULTIMEDIA
Volume 22, Issue 7, Pages 1847-1861
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2020.2976985
Keywords
Visualization; Computational modeling; Perturbation methods; Convolutional neural networks; Medical services; Birds; Model interpretability; feature-flow; sparse representation
Funding
- Canadian Natural Sciences and Engineering Research Council (NSERC)
- International Doctoral Fellowship at the University of British Columbia
Abstract
Despite the great success of deep convolutional neural networks (DCNNs) in computer vision tasks, their black-box nature remains a critical concern, and the interpretability of DCNN models has been attracting increasing attention. In this work, we propose a novel model, the Feature-fLOW INterpretation (FLOWIN) model, which interprets a DCNN through its feature-flow. FLOWIN expresses deep-layer features as a sparse representation of shallow-layer features. Based on this representation, it distills the optimal feature-flow for the prediction of a given instance, tracing from deep layers back to shallow layers. FLOWIN can therefore provide an instance-specific interpretation, presenting the feature-flow units and their interpretable meanings behind a network decision. It can also give a quantitative interpretation, in which the contribution of each flow unit in different layers is used to explain the network's decision. From the class-level view, networks can be further understood by studying feature-flows within and between classes. FLOWIN not only visualizes the feature-flow but also studies it quantitatively through density and similarity metrics. In our experiments, FLOWIN is evaluated on different datasets and networks, both quantitatively and qualitatively, to demonstrate its interpretability.
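As a rough illustration of the sparse-representation step described in the abstract, the Python sketch below approximates a deep-layer feature vector as a sparse linear combination of shallow-layer unit activations, then reads off the nonzero coefficients as the "flow units" for that instance. The function name solve_sparse_flow, the toy dimensions, and the use of a LASSO solver are assumptions made for illustration; this is not the paper's actual formulation or code.

# Minimal sketch (assumptions, not the authors' implementation):
# express a deep-layer feature as a sparse combination of shallow-layer
# features, then rank shallow units by their contribution.
import numpy as np
from sklearn.linear_model import Lasso

def solve_sparse_flow(deep_feat, shallow_feats, alpha=0.1):
    """Approximate deep_feat (d,) as a sparse combination of the rows of
    shallow_feats (n_units, d); returns one coefficient per shallow unit."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000)
    model.fit(shallow_feats.T, deep_feat)   # columns = shallow units
    return model.coef_                      # sparse "flow" weights

rng = np.random.default_rng(0)
shallow = rng.standard_normal((64, 256))    # 64 shallow units, feature dim 256
true_w = np.zeros(64)
true_w[[3, 17, 42]] = [1.5, -0.8, 2.0]
deep = true_w @ shallow                     # deep feature built from 3 units

w = solve_sparse_flow(deep, shallow)
flow_units = np.nonzero(np.abs(w) > 1e-3)[0]  # units on the feature-flow
print("selected shallow units:", flow_units)  # expect roughly [3, 17, 42]

In this toy setting the solver recovers the few shallow units that actually generated the deep feature; repeating the step layer by layer would trace a feature-flow from deep layers back to shallow ones, in the spirit of the instance-specific interpretation the abstract describes.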