Article

Finding and removing Clever Hans: Using explanation methods to debug and improve deep models

Journal

INFORMATION FUSION
Volume 77, Pages 261-295

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2021.07.015

Keywords

Deep Neural Networks; Explainable Artificial Intelligence; Clever Hans predictors; Feature unlearning; Spectral Relevance Analysis; Class Artifact Compensation

Funding

  1. German Ministry for Education and Research (BMBF) [01IS14013A-E, 01GQ1115, 01GQ0850, 01IS18056A, 01IS18025A, 01IS18037A]
  2. European Union [965221]
  3. Institute of Information & Communications Technology Planning & Evaluation (IITP) - Korea government [2017-0-001779]
  4. Research Training Group Differential Equation- and Data-driven Models in Life Sciences and Fluid Dynamics (DAEDALUS) - German Research Foundation (DFG) [GRK 2433]
  5. MATH+ (Berlin Mathematics Research Center) - German Research Foundation (DFG) [390685689]


Contemporary computer-vision models trained on large datasets may exploit biases, artifacts, or errors in the data, leading to "Clever Hans" behavior. The Class Artifact Compensation methods introduced here significantly reduce this behavior and improve model performance across several datasets.
Contemporary learning models for computer vision are typically trained on very large (benchmark) datasets with millions of samples. These may, however, contain biases, artifacts, or errors that have gone unnoticed and are exploitable by the model. In the worst case, the trained model does not learn a valid and generalizable strategy to solve the problem it was trained for, and becomes a "Clever Hans" predictor that bases its decisions on spurious correlations in the training data, potentially yielding an unrepresentative or unfair, and possibly even hazardous, predictor. In this paper, we contribute a comprehensive analysis framework based on a scalable statistical analysis of attributions from explanation methods for large data corpora. Building on a recent technique - Spectral Relevance Analysis - we propose the following technical contributions and resulting findings: (a) a scalable quantification of artifactual and poisoned classes for which the machine learning models under study exhibit Clever Hans behavior, and (b) several approaches, collectively denoted Class Artifact Compensation, that effectively and significantly reduce a model's Clever Hans behavior, i.e., that can "un-Hans" models trained on (poisoned) datasets such as the popular ImageNet data corpus. We demonstrate that Class Artifact Compensation, defined in a simple theoretical framework, may be implemented as part of a neural network's training or fine-tuning process, or in a post-hoc manner by injecting into the network architecture additional layers that prevent any further propagation of undesired Clever Hans features. Using the proposed methods, we provide qualitative and quantitative analyses of the biases and artifacts in, e.g., the ImageNet dataset, the Adience benchmark dataset of unfiltered faces, and the ISIC 2019 skin lesion analysis dataset. We demonstrate that these insights can give rise to improved, more representative, and fairer models operating on implicitly cleaned data corpora.
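The post-hoc variant described in the abstract (injecting layers that block further propagation of a Clever Hans feature) can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, assuming an artifact direction v and a reference activation mu have already been estimated in some feature space (e.g., from samples flagged by Spectral Relevance Analysis). ArtifactSuppressionLayer, v, and mu are illustrative names; this is a sketch of the projection idea, not the authors' exact Class Artifact Compensation implementation.

    import torch
    import torch.nn as nn

    class ArtifactSuppressionLayer(nn.Module):
        """Removes the activation component along an estimated artifact direction."""
        def __init__(self, v: torch.Tensor, mu: torch.Tensor):
            super().__init__()
            # Unit-norm artifact direction and a reference point (e.g., the mean
            # activation of clean samples) in the chosen feature space.
            self.register_buffer("v", v / v.norm())
            self.register_buffer("mu", mu)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Project (x - mu) onto v and subtract that component; directions
            # orthogonal to the artifact pass through unchanged.
            coeff = (x - self.mu) @ self.v            # shape: (batch,)
            return x - coeff.unsqueeze(-1) * self.v

    # Usage: inject the layer between a feature extractor and a classifier head.
    feature_dim = 512
    v = torch.randn(feature_dim)    # placeholder: estimated artifact direction
    mu = torch.zeros(feature_dim)   # placeholder: reference (clean) activation
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, feature_dim),
        nn.ReLU(),
        ArtifactSuppressionLayer(v, mu),  # blocks the artifact component
        nn.Linear(feature_dim, 10),
    )
    out = model(torch.randn(4, 1, 28, 28))  # drop-in layer: output shape (4, 10)

Because the layer is a fixed linear projection, it can in principle be inserted into an already-trained network without retraining; the training/fine-tuning variant mentioned in the abstract would instead adjust the model's weights so that the artifact feature is unlearned.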
