Article

Preemptively pruning Clever-Hans strategies in deep neural networks

Journal

INFORMATION FUSION
Volume 103, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2023.102094

Keywords

Clever Hans effect; Model refinement; Pruning; Explainable AI; Deep neural networks


This paper investigates mismatches between an explained model's decision strategy and the user's domain knowledge, and proposes a new method, Explanation-Guided Exposure Minimization (EGEM), to mitigate hidden flaws in the model. Experimental results show that the approach substantially reduces reliance on Clever Hans strategies and improves the model's accuracy on new data.
Robustness has become an important consideration in deep learning. With the help of explainable AI, mismatches between an explained model's decision strategy and the user's domain knowledge (e.g., Clever Hans effects) have been identified as a starting point for improving faulty models. However, it is less clear what to do when the user and the explanation agree. In this paper, we demonstrate that a user's acceptance of explanations does not guarantee that a machine learning model is robust against Clever Hans effects, which may remain undetected. Such hidden flaws of the model can nevertheless be mitigated, and we demonstrate this by contributing a new method, Explanation-Guided Exposure Minimization (EGEM), that preemptively prunes variations in the ML model that have not been the subject of positive explanation feedback. Experiments demonstrate that our approach leads to models that strongly reduce their reliance on hidden Clever Hans strategies and consequently achieve higher accuracy on new data.
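To fix intuition for the pruning idea described above, the following is a minimal, hypothetical Python (PyTorch) sketch: weights whose contribution on user-approved examples is low are zeroed out, removing model capacity that positive explanation feedback never vouched for. All names here (exposure_scores, prune_by_exposure), the |w|·E|x| contribution proxy, and the quantile threshold are illustrative assumptions for a single linear layer, not the authors' actual EGEM formulation.

    import torch
    import torch.nn as nn

    def exposure_scores(layer: nn.Linear, approved_inputs: torch.Tensor) -> torch.Tensor:
        # Per-weight "exposure": how strongly each weight contributes on
        # inputs whose explanations the user accepted. Here we use the
        # crude proxy |w_ij| * mean|x_j| as the contribution of weight ij.
        with torch.no_grad():
            mean_abs_input = approved_inputs.abs().mean(dim=0)   # shape: (in_features,)
            return layer.weight.abs() * mean_abs_input           # shape: (out_features, in_features)

    def prune_by_exposure(layer: nn.Linear, approved_inputs: torch.Tensor,
                          quantile: float = 0.5) -> None:
        # Zero out weights whose exposure on approved data falls below a
        # quantile threshold, i.e. prune variation not covered by feedback.
        scores = exposure_scores(layer, approved_inputs)
        threshold = torch.quantile(scores.flatten(), quantile)
        with torch.no_grad():
            layer.weight.mul_((scores >= threshold).float())

    # Usage: prune a layer using a batch of inputs whose explanations were accepted.
    layer = nn.Linear(8, 4)
    approved = torch.randn(32, 8)  # stand-in for user-approved examples
    prune_by_exposure(layer, approved, quantile=0.5)

The published method is framed in terms of explanation feedback (e.g., relevance scores from an explainable-AI attribution method) rather than the simple magnitude proxy and hard zeroing used in this sketch.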
