Article

ADVISE: ADaptive feature relevance and VISual Explanations for convolutional neural networks

Journal

The Visual Computer
Volume -, Issue -, Pages -

Publisher

Springer
DOI: 10.1007/s00371-023-03112-5

Keywords

Convolutional neural network; Deep learning; Explainable AI


This paper introduces ADVISE, a new explainability method that leverages the relevance of each unit in the feature map to provide better visual explanations. The authors also propose an evaluation protocol to quantify the visual explainability of CNN models, and they validate the effectiveness of their method through extensive image classification experiments.
To equip convolutional neural networks (CNNs) with explainability, it is essential to interpret how opaque models make specific decisions, understand what causes errors, improve the architecture design, and identify unethical biases in the classifiers. This paper introduces ADVISE, a new explainability method that quantifies and leverages the relevance of each unit of the feature map to provide better visual explanations. To this end, we propose using adaptive-bandwidth kernel density estimation to assign a relevance score to each unit of the feature map with respect to the predicted class. We also propose an evaluation protocol to quantitatively assess the visual explainability of CNN models. Our extensive evaluation of ADVISE in image classification tasks using pretrained AlexNet, VGG16, ResNet50, and Xception models on ImageNet shows that our method outperforms other visual explainability methods in quantifying feature relevance and visual explainability while maintaining competitive time complexity. Our experiments further show that ADVISE fulfils the sensitivity and implementation independence axioms while passing the sanity checks. The implementation is accessible for reproducibility purposes at https://github.com/dehshibi/ADVISE.
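The adaptive-bandwidth KDE step can be illustrated with a minimal sketch. This is not the authors' released implementation (see the repository above for that); the Abramson-style per-sample bandwidths, the channel-mean summary statistic, and all function names here are illustrative assumptions:

```python
import numpy as np

SQRT_2PI = np.sqrt(2.0 * np.pi)

def adaptive_kde(samples, eval_points, alpha=0.5):
    """Adaptive-bandwidth KDE: a fixed-bandwidth pilot estimate, then
    per-sample bandwidths shrunk where the pilot density is high
    (Abramson-style rule; illustrative, not the paper's exact scheme)."""
    n = len(samples)
    # Pilot bandwidth via Silverman's rule of thumb
    h0 = 1.06 * np.std(samples) * n ** (-1.0 / 5.0)
    # Pilot density evaluated at each sample
    pilot = np.array([
        np.mean(np.exp(-0.5 * ((s - samples) / h0) ** 2)) / (h0 * SQRT_2PI)
        for s in samples
    ])
    g = np.exp(np.mean(np.log(pilot)))        # geometric mean of pilot values
    h = h0 * (pilot / g) ** (-alpha)          # one bandwidth per sample
    # Final density: average of per-sample Gaussian kernels
    return np.array([
        np.mean(np.exp(-0.5 * ((x - samples) / h) ** 2) / (h * SQRT_2PI))
        for x in eval_points
    ])

def unit_relevance(feature_map):
    """Score each channel of a (C, H, W) feature map by the estimated
    density of its mean activation among all channels, normalised to
    sum to 1 (a hypothetical relevance proxy for illustration)."""
    means = feature_map.reshape(feature_map.shape[0], -1).mean(axis=1)
    dens = adaptive_kde(means, means)
    return dens / dens.sum()
```

In this sketch, units whose summary activation falls in a dense region of the activation distribution receive higher scores; the actual ADVISE method defines relevance with respect to the predicted class, which this toy summary does not capture.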

