Related references
Note: only a subset of the references is listed.

GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
Qiang Huang et al.
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2023)
Finding and removing Clever Hans: Using explanation methods to debug and improve deep models
Christopher J. Anders et al.
INFORMATION FUSION (2022)
Explain and improve: LRP-inference fine-tuning for image captioning models
Jiamei Sun et al.
INFORMATION FUSION (2022)
CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations
Leila Arras et al.
INFORMATION FUSION (2022)
EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case
Natalia Diaz-Rodriguez et al.
INFORMATION FUSION (2022)
ECQ(x): Explainability-Driven Quantization for Low-Bit and Sparse DNNs
Daniel Becking et al.
XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (2022)
A Rate-Distortion Framework for Explaining Black-Box Model Decisions
Stefan Kolek et al.
XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (2022)
Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science
Antonios Mamalakis et al.
XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (2022)
Explaining the Predictions of Unsupervised Learning Models
Gregoire Montavon et al.
XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (2022)
Legal aspects of data cleansing in medical AI
Karl Stoeger et al.
COMPUTER LAW & SECURITY REVIEW (2021)
Toward Human-AI Interfaces to Support Explainability and Causability in Medical AI
Andreas Holzinger et al.
COMPUTER (2021)
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek et al.
PROCEEDINGS OF THE IEEE (2021)
Deep Learning for AI
Yoshua Bengio et al.
COMMUNICATIONS OF THE ACM (2021)
Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI
Andreas Holzinger et al.
INFORMATION FUSION (2021)
Pruning by explaining: A novel criterion for deep neural network pruning
Seul-Ki Yeom et al.
PATTERN RECOGNITION (2021)
Graph Neural Network: A Comprehensive Review on Non-Euclidean Space
Nurul A. Asif et al.
IEEE ACCESS (2021)
Towards explaining anomalies: A deep Taylor decomposition of one-class models
Jacob Kauffmann et al.
PATTERN RECOGNITION (2020)
Measuring the Quality of Explanations: The System Causability Scale (SCS) Comparing Human and Machine Explanations
Andreas Holzinger et al.
KUNSTLICHE INTELLIGENZ (2020)
From local explanations to global understanding with explainable AI for trees
Scott M. Lundberg et al.
NATURE MACHINE INTELLIGENCE (2020)
The Seven Tools of Causal Inference, with Reflections on Machine Learning
Judea Pearl
COMMUNICATIONS OF THE ACM (2019)
Unmasking Clever Hans predictors and assessing what machines really learn
Sebastian Lapuschkin et al.
NATURE COMMUNICATIONS (2019)
A survey and critique of multiagent deep reinforcement learning
Pablo Hernandez-Leal et al.
AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS (2019)
Interactive machine learning: experimental evidence for the human in the algorithmic loop: A case study on Ant Colony Optimization
Andreas Holzinger et al.
APPLIED INTELLIGENCE (2019)
Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery
Shane O'Sullivan et al.
INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY (2019)
Principles alone cannot guarantee ethical AI
Brent Mittelstadt
NATURE MACHINE INTELLIGENCE (2019)
Insights into Learning Competence Through Probabilistic Graphical Models
Anna Saranti et al.
MACHINE LEARNING AND KNOWLEDGE EXTRACTION, CD-MAKE 2019 (2019)
Global and Local Interpretability for Cardiac MRI Classification
James R. Clough et al.
MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT IV (2019)
Methods for interpreting and understanding deep neural networks
Gregoire Montavon et al.
DIGITAL SIGNAL PROCESSING (2018)
Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations
Ranjay Krishna et al.
INTERNATIONAL JOURNAL OF COMPUTER VISION (2017)
Explaining nonlinear classification decisions with deep Taylor decomposition
Gregoire Montavon et al.
PATTERN RECOGNITION (2017)
Interpretable deep neural networks for single-trial EEG classification
Irene Sturm et al.
JOURNAL OF NEUROSCIENCE METHODS (2016)
On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
Sebastian Bach et al.
PLOS ONE (2015)
Explaining classifications for individual instances
Marko Robnik-Sikonja et al.
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2008)
Causes and explanations: A structural-model approach. Part II: Explanations
Joseph Y. Halpern et al.
BRITISH JOURNAL FOR THE PHILOSOPHY OF SCIENCE (2005)