3.8 Proceedings Paper

Explainable AI Methods - A Brief Overview

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Artificial Intelligence

GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks

Qiang Huang et al.

Summary: Graph neural networks (GNNs) have recently been shown to be effective at representing graph-structured data, but their complex nonlinear transformations make them difficult to explain. The authors propose GraphLIME, a local, interpretable model explanation method for graphs based on the Hilbert-Schmidt Independence Criterion (HSIC) Lasso. GraphLIME is a generic framework that learns a nonlinear interpretable model locally in the subgraph of the node being explained (a minimal sketch follows this entry). Experimental results show that GraphLIME provides more descriptive and informative explanations than existing methods.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2023)
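
To make the idea concrete, here is a minimal sketch of an HSIC-Lasso-style local explanation in the spirit of GraphLIME. It is not the authors' implementation; the kernel widths, the sparsity weight `rho`, and the projected-gradient solver are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(v, sigma=1.0):
    # Gram matrix of a single (scalar) variable under a Gaussian kernel
    d = (v[:, None] - v[None, :]) ** 2
    return np.exp(-d / (2 * sigma ** 2 + 1e-12))

def center_normalize(K):
    # Center the kernel matrix and normalize it to unit Frobenius norm
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    return Kc / (np.linalg.norm(Kc) + 1e-12)

def hsic_lasso_explain(X_nbr, y_nbr, rho=0.1, steps=500, lr=0.01):
    """X_nbr: (n_neighbors, n_features) features in the explained node's subgraph.
    y_nbr: (n_neighbors,) GNN outputs (e.g. predicted probabilities) for those nodes.
    Returns one non-negative importance weight per feature."""
    n, d = X_nbr.shape
    L = center_normalize(gaussian_kernel(y_nbr))
    Ks = np.stack([center_normalize(gaussian_kernel(X_nbr[:, k])) for k in range(d)])
    beta = np.zeros(d)
    for _ in range(steps):  # projected gradient on the non-negative Lasso objective
        resid = L - np.tensordot(beta, Ks, axes=1)
        grad = -2 * np.tensordot(Ks, resid, axes=([1, 2], [0, 1])) + rho
        beta = np.maximum(beta - lr * grad, 0.0)
    return beta

# Example: 20 neighborhood nodes with 5 features and synthetic GNN outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = np.tanh(2 * X[:, 0] - X[:, 3])        # outputs depend on features 0 and 3
print(hsic_lasso_explain(X, y).round(3))  # features 0 and 3 should get larger weights
```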

Article Computer Science, Artificial Intelligence

Finding and removing Clever Hans: Using explanation methods to debug and improve deep models

Christopher J. Anders et al.

Summary: Contemporary computer-vision models trained on large datasets may exhibit biases, artifacts, or errors that lead to "Clever Hans" behavior. By introducing Class Artifact Compensation methods, the authors significantly reduce a model's Clever Hans behavior and improve its performance on different datasets.

INFORMATION FUSION (2022)

Article Computer Science, Artificial Intelligence

Explain and improve: LRP-inference fine-tuning for image captioning models

Jiamei Sun et al.

Summary: This paper compares the interpretability of attention heatmaps with that of explanation methods, showing that the latter provide more evidence for decision-making, relate more accurately to object locations, and help "debug" the model (a sketch of the underlying LRP rule follows this entry). The authors also propose an LRP-inference fine-tuning strategy that addresses object hallucination in image captioning models while maintaining sentence fluency.

INFORMATION FUSION (2022)
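
As background for the explanation heatmaps discussed above, here is a minimal sketch of the LRP-epsilon rule for a single linear layer. The toy layer and the choice of output scores as the starting relevance are assumptions; the paper applies LRP to full captioning models.

```python
import numpy as np

def lrp_epsilon_linear(a, W, b, R_out, eps=1e-6):
    """a: (n_in,) input activations; W: (n_in, n_out); b: (n_out,);
    R_out: (n_out,) relevance assigned to the layer's outputs.
    Relevance is redistributed in proportion to each input's contribution z_ij = a_i * w_ij."""
    z = a[:, None] * W                       # per-connection contributions z_ij
    zsum = z.sum(axis=0) + b                 # pre-activations of the layer
    denom = zsum + eps * np.where(zsum >= 0, 1.0, -1.0)
    return (z * (R_out / denom)[None, :]).sum(axis=1)

rng = np.random.default_rng(0)
a = rng.normal(size=4)
W, b = rng.normal(size=(4, 3)), np.zeros(3)
out = a @ W + b                              # toy linear layer output
R_in = lrp_epsilon_linear(a, W, b, R_out=out)
print(R_in.round(3))                         # per-input relevance; with zero bias it sums to ~out.sum()
```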

Article Computer Science, Artificial Intelligence

CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations

Leila Arras et al.

Summary: The rise of deep learning has increased the need to explain model decisions beyond prediction performance, leading to the development of XAI methods. The lack of objective quality measures for explanations has raised doubts about the trustworthiness of XAI methods. This study proposes a new framework based on the CLEVR visual question answering task and uses it to evaluate ten different explanation methods, providing new insights into their quality and properties (a sketch of a typical ground-truth metric follows this entry).

INFORMATION FUSION (2022)
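
The following is a minimal sketch of one ground-truth evaluation metric in the spirit of such benchmarks, relevance mass accuracy: the fraction of a heatmap's positive relevance that falls inside the ground-truth object mask. The toy mask and heatmap are illustrative assumptions.

```python
import numpy as np

def relevance_mass_accuracy(heatmap, gt_mask):
    """heatmap: (H, W) explanation scores; gt_mask: (H, W) boolean ground truth."""
    pos = np.clip(heatmap, 0, None)          # consider only positive relevance
    total = pos.sum()
    return float(pos[gt_mask].sum() / total) if total > 0 else 0.0

# Example: a heatmap concentrated inside a 3x3 ground-truth region scores close to 1.
gt = np.zeros((8, 8), dtype=bool); gt[2:5, 2:5] = True
hm = np.where(gt, 1.0, 0.05)
print(relevance_mass_accuracy(hm, gt))
```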

Article Computer Science, Artificial Intelligence

EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case

Natalia Diaz-Rodriguez et al.

Summary: The latest Deep Learning models achieve unprecedented performance in detection and classification but lack explainability; Symbolic AI systems, in contrast, are easier to explain but have lower generalization capabilities. The key challenge addressed here lies in fusing Deep Learning representations with expert knowledge graphs.

INFORMATION FUSION (2022)

Proceedings Paper Computer Science, Artificial Intelligence

ECQ(x): Explainability-Driven Quantization for Low-Bit and Sparse DNNs

Daniel Becking et al.

Summary: This chapter presents a novel method for quantizing deep neural networks using concepts from explainable AI and information theory. Experimental results show that the method can generate ultra-low-precision and sparse neural networks while maintaining or even improving model performance (a relevance-aware assignment sketch follows this entry). The resulting networks are highly compressible in terms of file size, up to 103x compared to the full-precision, unquantized DNN model.

XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (2022)
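
To illustrate the core idea (not the ECQ(x) implementation itself), the sketch below assigns weights to a small codebook while a per-weight relevance score, e.g. obtained with LRP, makes important weights resist being collapsed to the zero cluster. The codebook, relevance values, and the cost weighting `alpha` are illustrative assumptions.

```python
import numpy as np

def relevance_aware_quantize(w, relevance, centers, alpha=0.5):
    """w, relevance: (n,) weights and their non-negative relevance scores.
    centers: (k,) quantization codebook, typically containing 0 for sparsity."""
    r = relevance / (relevance.max() + 1e-12)
    dist = (w[:, None] - centers[None, :]) ** 2             # plain distortion cost
    penalty = alpha * r[:, None] * (centers[None, :] == 0)  # relevant weights resist the 0 cluster
    assign = np.argmin(dist + penalty, axis=1)
    return centers[assign]

w = np.array([0.02, -0.03, 0.5, -0.6, 0.04])
rel = np.array([0.9, 0.0, 1.0, 1.0, 0.1])    # first weight is small in magnitude but relevant
centers = np.array([-0.5, 0.0, 0.5])
print(relevance_aware_quantize(w, rel, centers))  # the relevant small weight avoids the zero cluster
```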

Proceedings Paper Computer Science, Artificial Intelligence

A Rate-Distortion Framework for Explaining Black-Box Model Decisions

Stefan Kolek et al.

Summary: This paper presents a framework called Rate-Distortion Explanation (RDE) for explaining black-box model decisions. The framework is based on perturbations of the target input signal and can be applied to any differentiable pre-trained model, such as a neural network (a minimal mask-optimization sketch follows this entry). Experiments demonstrate the adaptability of the framework to diverse data modalities, including images, audio, and physical simulations of urban environments.

XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (2022)
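
Below is a minimal, illustrative sketch of the perturbation idea: learn a sparse mask s in [0,1]^d such that keeping only the masked components of x (and filling the rest with noise) barely changes the model output, i.e. roughly minimize E[(f(s*x + (1-s)*noise) - f(x))^2] + lam*||s||_1. The toy linear-tanh model, the noise distribution, and the optimizer settings are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=6); W[3:] = 0.0        # toy model only uses features 0-2
f = lambda v: np.tanh(v @ W)

x = rng.normal(size=6)
s = np.full(6, 0.5)                        # mask initialized halfway on
lam, lr = 0.05, 0.1
for _ in range(300):
    grad = np.zeros(6)
    for _ in range(16):                    # Monte-Carlo estimate of the distortion gradient
        n = rng.normal(size=6)
        z = s * x + (1 - s) * n
        d = f(z) - f(x)
        grad += 2 * d * (1 - np.tanh(z @ W) ** 2) * W * (x - n)
    grad = grad / 16 + lam                 # add the sparsity (L1) term
    s = np.clip(s - lr * grad, 0.0, 1.0)
print(s.round(2))  # mask concentrates on the features the model actually uses
```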

Proceedings Paper Computer Science, Artificial Intelligence

Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science

Antonios Mamalakis et al.

Summary: In recent years, artificial intelligence and specifically artificial neural networks (NNs) have achieved great success in solving complex problems in earth sciences. However, the decision-making strategies of NNs are difficult to understand, which hinders scientists from interpreting and trusting the NN predictions. The introduction of explainable artificial intelligence (XAI) methods aims to attribute NN predictions to specific features and explain their strategies. This article provides an overview of recent work applying XAI to meteorology and climate science, including satellite applications, climate prediction, and detection of climatic changes. The article also introduces a synthetic benchmark dataset for evaluating XAI methods.

XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (2022)

Proceedings Paper Computer Science, Artificial Intelligence

Explaining the Predictions of Unsupervised Learning Models

Gregoire Montavon et al.

Summary: Unsupervised learning is a subfield of machine learning that focuses on learning the structure of data without labels. This chapter reviews a new approach, NEON, which brings Explainable AI (XAI) to unsupervised learning and showcases its effectiveness through two application examples.

XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (2022)

Article Law

Legal aspects of data cleansing in medical AI

Karl Stoeger et al.

Summary: Data quality is crucial for data-driven AI applications, especially in medicine. Data cleansing plays a key role in improving the usability of medical AI systems but must be managed carefully to avoid negative consequences; technical and legal aspects should be considered together in this sensitive context.

COMPUTER LAW & SECURITY REVIEW (2021)

Article Computer Science, Hardware & Architecture

Toward Human-AI Interfaces to Support Explainability and Causability in Medical AI

Andreas Holzinger et al.

Summary: Causability measures the extent to which humans can understand machine explanations and is particularly important in medical artificial intelligence (AI); the concept is used to develop and evaluate future human-AI interfaces.

COMPUTER (2021)

Review Engineering, Electrical & Electronic

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

Wojciech Samek et al.

Summary: Motivated by the growing demand for explainable artificial intelligence (XAI) that has accompanied the success of machine learning, particularly deep neural networks, this work provides an overview of the field, tests interpretability algorithms, and demonstrates their use in application scenarios.

PROCEEDINGS OF THE IEEE (2021)

Article Computer Science, Hardware & Architecture

Deep Learning for AI

Yoshua Bengio et al.

Summary: Research on artificial neural networks is motivated by the observation that human intelligence emerges from parallel networks of simple non-linear neurons, leading to the question of how these networks can learn complicated internal representations.

COMMUNICATIONS OF THE ACM (2021)

Article Computer Science, Artificial Intelligence

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Andreas Holzinger et al.

Summary: AI excels in certain tasks but humans excel at multi-modal thinking and building self-explanatory systems. The medical domain highlights the importance of various modalities contributing to one result. Using conceptual knowledge to guide model training can lead to more explainable, robust, and less biased machine learning models.

INFORMATION FUSION (2021)

Article Computer Science, Artificial Intelligence

Pruning by explaining: A novel criterion for deep neural network pruning

Seul-Ki Yeom et al.

Summary: This paper proposes a novel criterion for CNN pruning, inspired by neural network interpretability: the most relevant weights or filters are identified automatically using relevance scores obtained from concepts of explainable AI (XAI) (a minimal selection sketch follows this entry). The method efficiently prunes CNN models in transfer-learning setups and outperforms existing criteria in resource-constrained scenarios. It allows iterative model compression while maintaining or improving accuracy, at a computational cost similar to gradient computation, and is simple to apply without requiring hyperparameter tuning for pruning.

PATTERN RECOGNITION (2021)
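
As a minimal sketch of such a criterion (an assumed setup, not the authors' code): accumulate a relevance score per filter over a few reference batches, for example with LRP, then remove the least relevant fraction. The 30% prune fraction and the relevance values below are arbitrary illustrations.

```python
import numpy as np

def select_filters_to_prune(unit_relevance, prune_fraction=0.3):
    """unit_relevance: (n_units,) accumulated relevance per filter.
    Returns indices of the least relevant filters to remove."""
    k = int(prune_fraction * len(unit_relevance))
    return np.argsort(unit_relevance)[:k]

rel = np.array([4.1, 0.2, 3.3, 0.1, 2.8, 0.05, 1.9, 0.4, 2.2, 0.3])
print(select_filters_to_prune(rel))   # picks the three lowest-relevance filters
```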

Review Computer Science, Information Systems

Graph Neural Network: A Comprehensive Review on Non-Euclidean Space

Nurul A. Asif et al.

Summary: This review provides a comprehensive overview of state-of-the-art graph-based network methods from a deep learning perspective, highlighting the success of Graph Neural Networks (GNNs) in processing data in non-Euclidean space. Recent developments in computational hardware and optimization have enabled graph networks to learn complex graph relationships and solve various problems effectively.

IEEE ACCESS (2021)

Article Computer Science, Artificial Intelligence

Towards explaining anomalies: A deep Taylor decomposition of one-class models

Jacob Kauffmann et al.

PATTERN RECOGNITION (2020)

Article Computer Science, Artificial Intelligence

Measuring the Quality of Explanations: The System Causability Scale (SCS) Comparing Human and Machine Explanations

Andreas Holzinger et al.

KÜNSTLICHE INTELLIGENZ (2020)

Article Computer Science, Artificial Intelligence

From local explanations to global understanding with explainable AI for trees

Scott M. Lundberg et al.

NATURE MACHINE INTELLIGENCE (2020)

Article Computer Science, Hardware & Architecture

The Seven Tools of Causal Inference, with Reflections on Machine Learning

Judea Pearl

COMMUNICATIONS OF THE ACM (2019)

Article Multidisciplinary Sciences

Unmasking Clever Hans predictors and assessing what machines really learn

Sebastian Lapuschkin et al.

NATURE COMMUNICATIONS (2019)

Article Automation & Control Systems

A survey and critique of multiagent deep reinforcement learning

Pablo Hernandez-Leal et al.

AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS (2019)

Article Computer Science, Artificial Intelligence

Interactive machine learning: experimental evidence for the human in the algorithmic loop: A case study on Ant Colony Optimization

Andreas Holzinger et al.

APPLIED INTELLIGENCE (2019)

Review Surgery

Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery

Shane O'Sullivan et al.

INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY (2019)

Article Computer Science, Artificial Intelligence

Principles alone cannot guarantee ethical AI

Brent Mittelstadt

NATURE MACHINE INTELLIGENCE (2019)

Proceedings Paper Computer Science, Artificial Intelligence

Insights into Learning Competence Through Probabilistic Graphical Models

Anna Saranti et al.

MACHINE LEARNING AND KNOWLEDGE EXTRACTION, CD-MAKE 2019 (2019)

Proceedings Paper Computer Science, Artificial Intelligence

Global and Local Interpretability for Cardiac MRI Classification

James R. Clough et al.

MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT IV (2019)

Article Engineering, Electrical & Electronic

Methods for interpreting and understanding deep neural networks

Gregoire Montavon et al.

DIGITAL SIGNAL PROCESSING (2018)

Article Computer Science, Artificial Intelligence

Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations

Ranjay Krishna et al.

INTERNATIONAL JOURNAL OF COMPUTER VISION (2017)

Article Computer Science, Artificial Intelligence

Explaining nonlinear classification decisions with deep Taylor decomposition

Gregoire Montavon et al.

PATTERN RECOGNITION (2017)

Article Biochemical Research Methods

Interpretable deep neural networks for single-trial EEG classification

Irene Sturm et al.

JOURNAL OF NEUROSCIENCE METHODS (2016)

Article Computer Science, Artificial Intelligence

Explaining classifications for individual instances

Marko Robnik-Sikonja et al.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2008)

Article History & Philosophy Of Science

Causes and explanations: A structural-model approach. Part II: Explanations

Joseph Y. Halpern et al.

BRITISH JOURNAL FOR THE PHILOSOPHY OF SCIENCE (2005)