3.8 Proceedings Paper

xxAI - Beyond Explainable Artificial Intelligence

Related References

Note: only a partial list of references is shown; download the original text for complete reference information.
Article Computer Science, Artificial Intelligence

Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence

Andreas Holzinger et al.

Summary: Medical artificial intelligence systems have achieved significant success and are crucial for improving human health. To enhance their performance, it is essential to address uncertainty and errors while explaining how results are produced. Information fusion can help develop more robust and explainable machine learning models.

INFORMATION FUSION (2022)

Proceedings Paper Computer Science, Artificial Intelligence

Explainable AI Methods - A Brief Overview

Andreas Holzinger et al.

Summary: This article provides a brief overview of selected methods in the field of Explainable Artificial Intelligence (xAI), aiming to give beginners a quick summary of the current state of the art.

XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (2022)

Editorial Material Computer Science, Hardware & Architecture

Medical Artificial Intelligence: The European Legal Perspective

Karl Stoeger et al.

COMMUNICATIONS OF THE ACM (2021)

Review Engineering, Electrical & Electronic

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

Wojciech Samek et al.

Summary: Driven by the growing demand for explainable artificial intelligence (XAI) that has accompanied the success of machine learning, particularly deep neural networks, this work provides an overview of the field, evaluates interpretability algorithms, and demonstrates their use in application scenarios.

PROCEEDINGS OF THE IEEE (2021)

Article Computer Science, Hardware & Architecture

Deep Learning for AI

Yoshua Bengio et al.

Summary: Research on artificial neural networks is motivated by the observation that human intelligence emerges from parallel networks of simple non-linear neurons, leading to the question of how these networks can learn complicated internal representations.

COMMUNICATIONS OF THE ACM (2021)

Article Computer Science, Hardware & Architecture

The Ten Commandments of Ethical Medical AI

Heimo Mueller et al.

Summary: The proposed ten commandments serve as practical guidelines for those applying artificial intelligence, offering a concise checklist to a wide range of stakeholders.

COMPUTER (2021)

Article Computer Science, Artificial Intelligence

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Andreas Holzinger et al.

Summary: AI excels at certain tasks, but humans excel at multi-modal thinking and at building self-explanatory systems. The medical domain highlights the importance of various modalities contributing to one result. Using conceptual knowledge to guide model training can lead to more explainable, robust, and less biased machine learning models.

INFORMATION FUSION (2021)

Article Computer Science, Artificial Intelligence

Model complexity of deep learning: a survey

Xia Hu et al.

Summary: This paper provides a systematic overview of the latest studies on model complexity in deep learning, categorizing it into expressive capacity and effective model complexity. The paper reviews existing research on these two categories based on four important factors and discusses the applications of deep learning model complexity. Finally, the paper proposes several interesting future directions.

KNOWLEDGE AND INFORMATION SYSTEMS (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis

Samuele Poppi et al.

Summary: This paper focuses on Class Activation Mapping (CAM) approaches, which produce effective visualizations by taking weighted averages of activation maps. It introduces a novel set of metrics to quantify explanation maps, improving evaluation, and compares different CAM-based visualization methods on the entire ImageNet validation set to promote fair comparison and reproducibility.

2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021 (2021)
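The weighted-average construction that the CAM summary above mentions can be sketched as follows. This is a minimal illustration with NumPy, not the paper's implementation: the feature maps and class weights are hypothetical toy data, and a real CAM would take them from the last convolutional layer and the classifier head of a trained network.

```python
import numpy as np

def class_activation_map(activations, class_weights):
    """Build a CAM as a weighted average of activation maps.

    activations: array of shape (K, H, W) -- K feature maps
        (here: hypothetical toy data standing in for a conv layer).
    class_weights: array of shape (K,) -- the classifier weights
        linking each feature map to the target class.
    """
    # Weighted sum over the K feature maps -> one (H, W) heatmap.
    cam = np.tensordot(class_weights, activations, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)      # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()       # scale to [0, 1] for visualization
    return cam

# Toy example: 4 random feature maps of size 8x8.
rng = np.random.default_rng(0)
maps = rng.standard_normal((4, 8, 8))
weights = rng.standard_normal(4)
heatmap = class_activation_map(maps, weights)
print(heatmap.shape)  # (8, 8)
```

The resulting heatmap is what CAM-based methods overlay on the input image; the metrics proposed in the paper then score such explanation maps quantitatively.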

Article Computer Science, Artificial Intelligence

Measuring the Quality of Explanations: The System Causability Scale (SCS) Comparing Human and Machine Explanations

Andreas Holzinger et al.

KUNSTLICHE INTELLIGENZ (2020)

Proceedings Paper Computer Science, Artificial Intelligence

Network Dissection: Quantifying Interpretability of Deep Visual Representations

David Bau et al.

30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017) (2017)

Article Computer Science, Artificial Intelligence

Visualizing Deep Convolutional Neural Networks Using Natural Pre-images

Aravindh Mahendran et al.

INTERNATIONAL JOURNAL OF COMPUTER VISION (2016)

Proceedings Paper Computer Science, Artificial Intelligence

Top-Down Neural Attention by Excitation Backprop

Jianming Zhang et al.

COMPUTER VISION - ECCV 2016, PT IV (2016)

Article Multidisciplinary Sciences

Human-level control through deep reinforcement learning

Volodymyr Mnih et al.

NATURE (2015)