4.7 Article

Preemptively pruning Clever-Hans strategies in deep neural networks

Related references

Note: only a partial list of references is shown here; download the original article for the full reference information.
Proceedings Paper Computer Science, Artificial Intelligence

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

Alexander Binder et al.

Summary: The evaluation of explanations is crucial but needs to be done carefully, considering the limitations of model-randomization-based sanity checks.

2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) (2023)

Article Engineering, Civil

Survey of Deep Reinforcement Learning for Motion Planning of Autonomous Vehicles

Szilard Aradi

Summary: Academic research in the field of autonomous vehicles has gained popularity in recent years, covering various topics such as sensor technologies, communication, safety, decision making, and control. Artificial Intelligence and Machine Learning methods have become integral parts of this research. Motion planning, with a focus on strategic decision-making, trajectory planning, and control, has also been studied. This article specifically explores Deep Reinforcement Learning (DRL) as a field within Machine Learning. The paper provides insights into hierarchical motion planning and the basics of DRL, including environment modeling, state representation, perception models, reward mechanisms, and neural network implementation. It also discusses vehicle models, simulation possibilities, and computational requirements. The paper surveys state-of-the-art solutions, categorized by different tasks and levels of autonomous driving, such as car-following, lane-keeping, trajectory following, merging, and driving in dense traffic. Lastly, it raises open questions and future challenges.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Article Computer Science, Artificial Intelligence

Finding and removing Clever Hans: Using explanation methods to debug and improve deep models

Christopher J. Anders et al.

Summary: Contemporary learning models for computer vision trained on large datasets may exhibit biases, artifacts, or errors leading to a "Clever Hans" behavior. By introducing Class Artifact Compensation methods, researchers are able to significantly reduce the model's Clever Hans behavior and improve its performance on different datasets.

INFORMATION FUSION (2022)

Article Pediatrics

The augmented radiologist: artificial intelligence in the practice of radiology

Erich Sorantin et al.

Summary: Artificial intelligence in medicine, and in radiology in particular, shows great promise for more accurate results. While AI can handle large datasets and discover variants within them, the key advantage of human intelligence lies in content knowledge and problem-solving ability.

PEDIATRIC RADIOLOGY (2022)

Article Computer Science, Artificial Intelligence

Building and Interpreting Deep Similarity Models

Oliver Eberle et al.

Summary: This paper proposes a method to make similarities interpretable by decomposing deep similarity models and provides insights into complex similarity models. The method is applied to assess similarity between historical documents in digital humanities.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2022)

Article Multidisciplinary Sciences

DNA methylation-based classification of sinonasal tumors

Philipp Jurmeister et al.

Summary: The authors used machine learning to classify sinonasal undifferentiated carcinomas into molecular classes and showed that the current terminology of SNUCs may not accurately reflect their differentiation state. Their findings provide insights into improving the diagnostic classification of sinonasal tumors.

NATURE COMMUNICATIONS (2022)

Article Engineering, Electrical & Electronic

Toward Explainable Artificial Intelligence for Regression Models: A methodological perspective

Simon Letzgus et al.

IEEE SIGNAL PROCESSING MAGAZINE (2022)

Article Computer Science, Artificial Intelligence

Higher-Order Explanations of Graph Neural Networks via Relevant Walks

Thomas Schnake et al.

Summary: This paper introduces a new method for explaining graph neural networks, which extracts the relevant input graph traversals that contribute to a prediction using higher-order expansions and a nested attribution scheme. It has practical applications in areas such as sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2022)

Article Computer Science, Theory & Methods

A Survey of Deep Active Learning

Pengzhen Ren et al.

Summary: Active learning has received relatively little attention compared to deep learning, but the growing demand for large-scale, high-quality annotated datasets is renewing interest in it. This article provides a comprehensive survey of deep active learning, including a formal classification method, an overview of existing work, and an analysis of developments from an application perspective.

ACM COMPUTING SURVEYS (2022)

Article Computer Science, Artificial Intelligence

From Clustering to Cluster Explanations via Neural Networks

Jacob Kauffmann et al.

Summary: In recent years, there has been a trend in machine learning to enhance learned models with the ability to explain their predictions. This field, known as explainable AI (XAI), has mainly focused on supervised learning, particularly deep neural network classifiers. However, in many practical problems where label information is not given, the goal is to discover the underlying structure of the data, such as its clusters. This study proposes a novel framework that can explain cluster assignments in terms of input features efficiently and reliably, by rewriting clustering models as neural networks.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2022)
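The "neuralization" idea summarized above can be illustrated for k-means: the evidence that a point x belongs to cluster c can be rewritten as f_c(x) = min over k != c of 0.5 * (||x - mu_k||^2 - ||x - mu_c||^2), a min-pooling over linear functions of x, to which standard attribution methods then apply. The snippet below is a minimal sketch of that formulation, not the paper's code; the example centroids and point are invented for illustration.

```python
import numpy as np

def cluster_logits(x, centroids):
    """Neuralized k-means assignment score:
    f_c(x) = min_{k != c} 0.5 * (||x - mu_k||^2 - ||x - mu_c||^2),
    positive iff x is assigned to cluster c. The squared-distance
    differences are linear in x, so explanation methods built for
    neural networks can be applied to them."""
    d2 = ((x[None, :] - centroids) ** 2).sum(axis=1)  # squared distance to each centroid
    K = len(centroids)
    scores = np.empty(K)
    for c in range(K):
        others = np.delete(d2, c)            # distances to the competing clusters
        scores[c] = 0.5 * (others.min() - d2[c])
    return scores

# toy example: three centroids in the plane, one query point
centroids = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
x = np.array([3.5, 0.5])
scores = cluster_logits(x, centroids)
print(scores.argmax())  # assigned to cluster 1 (the nearest centroid)
```

The min-pooling over pairwise distance differences is what makes the assignment differentiable almost everywhere and hence amenable to relevance propagation.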

Article Computer Science, Interdisciplinary Applications

Interpretability-Driven Sample Selection Using Self Supervised Learning for Disease Classification and Segmentation

Dwarikanath Mahapatra et al.

Summary: In this article, we propose a novel sample selection methodology called IDEAL based on deep features for medical image analysis, which can improve system performance and reduce expert interactions. By leveraging information from interpretability saliency maps, a self-supervised learning approach is used to train a classifier to identify the most informative samples in a given batch of images. Experimental results demonstrate that the proposed approach outperforms other methods in lung disease classification and histopathology image segmentation tasks, showing the potential of using interpretability information for sample selection in active learning systems.

IEEE TRANSACTIONS ON MEDICAL IMAGING (2021)

Article Computer Science, Artificial Intelligence

Extraction of an Explanatory Graph to Interpret a CNN

Quanshi Zhang et al.

Summary: This paper introduces an explanatory graph representation to reveal object parts encoded in convolutional layers of a CNN. By learning the explanatory graph, different object parts are automatically disentangled from each filter, boosting the transferability of CNN features.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2021)

Review Engineering, Electrical & Electronic

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

Wojciech Samek et al.

Summary: With the growing demand for explainable artificial intelligence (XAI) driven by the success of machine learning, particularly deep neural networks, this work provides an overview of the field, tests interpretability algorithms, and demonstrates their successful use in application scenarios.

PROCEEDINGS OF THE IEEE (2021)

Article Computer Science, Artificial Intelligence

Pruning by explaining: A novel criterion for deep neural network pruning

Seul-Ki Yeom et al.

Summary: This paper proposes a novel criterion for CNN pruning, inspired by neural network interpretability, which automatically finds the most relevant weights or filters using relevance scores obtained from explainable AI (XAI) methods. The method efficiently prunes CNN models in transfer-learning setups and outperforms existing criteria in resource-constrained scenarios. It allows iterative model compression while maintaining or improving accuracy, at a computational cost similar to gradient computation, and is simple to apply without hyperparameter tuning for pruning.

PATTERN RECOGNITION (2021)
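The relevance-based pruning criterion summarized above can be sketched in a few lines. The NumPy illustration below is a hypothetical stand-in, not the authors' implementation: it scores each convolutional filter by activation times gradient (a crude proxy for LRP relevance scores) and keeps only the highest-scoring filters.

```python
import numpy as np

def filter_relevance(activations, grads):
    """Proxy relevance per filter: activation x gradient, summed over
    the batch and spatial positions (a simple stand-in for the LRP
    relevance scores used as the pruning criterion)."""
    rel = (activations * grads).sum(axis=(0, 2, 3))  # one score per filter
    return np.abs(rel)

def prune_mask(relevance, keep_ratio=0.75):
    """Keep the most relevant filters; mask out the rest for pruning."""
    k = max(1, int(round(keep_ratio * relevance.size)))
    keep = np.argsort(relevance)[::-1][:k]           # indices of top-k filters
    mask = np.zeros_like(relevance, dtype=bool)
    mask[keep] = True
    return mask

# toy layer output: batch=4, filters=8, 5x5 feature maps
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 8, 5, 5))
grads = rng.standard_normal((4, 8, 5, 5))
rel = filter_relevance(acts, grads)
mask = prune_mask(rel, keep_ratio=0.5)
print(mask.sum())  # 4 of 8 filters kept
```

In practice one would accumulate relevance over a reference dataset and prune iteratively, re-evaluating accuracy after each round, as the summary describes.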

Article Computer Science, Artificial Intelligence

A Survey on Neural Network Interpretability

Yu Zhang et al.

Summary: This study provides a comprehensive review of the interpretability of neural networks, clarifies its definition, and proposes a new taxonomy. Trust in deep learning systems is affected by the interpretability issue, which also relates to ethical concerns, and interpretability is a desired property if deep networks are to become powerful tools in other research fields.

IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE (2021)

Article Computer Science, Artificial Intelligence

Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans

Michael Roberts et al.

Summary: Many machine learning-based approaches have been developed for the prognosis and diagnosis of COVID-19 from medical images. However, a systematic review found that current studies suffer from methodological flaws that prevent their clinical use, and recommendations are provided to address these issues and support higher-quality model development.

NATURE MACHINE INTELLIGENCE (2021)

Article Computer Science, Information Systems

A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence

Ilia Stepin et al.

Summary: This study presents a systematic literature review on contrastive and counterfactual explanations in artificial intelligence algorithms, examining theoretical foundations and computational frameworks. The research reveals shortcomings in existing approaches and proposes a taxonomy for theoretical and practical methods in contrastive and counterfactual explanation.

IEEE ACCESS (2021)

Article Computer Science, Artificial Intelligence

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization

Ramprasaath R. Selvaraju et al.

INTERNATIONAL JOURNAL OF COMPUTER VISION (2020)

Review Medicine, General & Internal

Prediction models for diagnosis and prognosis of covid-19 infection: systematic review and critical appraisal

Laure Wynants et al.

BMJ-BRITISH MEDICAL JOURNAL (2020)

Article Computer Science, Artificial Intelligence

Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

Alejandro Barredo Arrieta et al.

INFORMATION FUSION (2020)

Article Computer Science, Artificial Intelligence

Shortcut learning in deep neural networks

Robert Geirhos et al.

NATURE MACHINE INTELLIGENCE (2020)

Article Computer Science, Artificial Intelligence

Making deep neural networks right for the right scientific reasons by interacting with their explanations

Patrick Schramowski et al.

NATURE MACHINE INTELLIGENCE (2020)

Article Multidisciplinary Sciences

Unmasking Clever Hans predictors and assessing what machines really learn

Sebastian Lapuschkin et al.

NATURE COMMUNICATIONS (2019)

Article Computer Science, Artificial Intelligence

Retraining-free methods for fast on-the-fly pruning of convolutional neural networks

Amir H. Ashouri et al.

NEUROCOMPUTING (2019)

Article Multidisciplinary Sciences

Definitions, methods, and applications in interpretable machine learning

W. James Murdoch et al.

PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA (2019)

Editorial Material Robotics

XAI-Explainable artificial intelligence

David Gunning et al.

SCIENCE ROBOTICS (2019)

Proceedings Paper Computer Science, Hardware & Architecture

Deep Validation: Toward Detecting Real-world Corner Cases for Deep Neural Networks

Weibin Wu et al.

2019 49TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS (DSN 2019) (2019)

Article Computer Science, Artificial Intelligence

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Cynthia Rudin

NATURE MACHINE INTELLIGENCE (2019)

Article Multidisciplinary Sciences

DNA methylation-based classification of central nervous system tumours

David Capper et al.

NATURE (2018)

Article History & Philosophy Of Science

The Deluge of Spurious Correlations in Big Data

Cristian S. Calude et al.

FOUNDATIONS OF SCIENCE (2017)

Article Multidisciplinary Sciences

Mastering the game of Go with deep neural networks and tree search

David Silver et al.

NATURE (2016)

Article Computer Science, Artificial Intelligence

ImageNet Large Scale Visual Recognition Challenge

Olga Russakovsky et al.

INTERNATIONAL JOURNAL OF COMPUTER VISION (2015)

Article Multidisciplinary Sciences

Human-level control through deep reinforcement learning

Volodymyr Mnih et al.

NATURE (2015)

Article Computer Science, Information Systems

Channel-Level Acceleration of Deep Face Representations

Adam Polyak et al.

IEEE ACCESS (2015)

Article Chemistry, Medicinal

Visual Interpretation of Kernel-Based Prediction Models

Katja Hansen et al.

MOLECULAR INFORMATICS (2011)