Article

A manifesto on explainability for artificial intelligence in medicine

Related references

Note: only a subset of the references is listed here; download the full text for the complete reference information.
Article Computer Science, Artificial Intelligence

AI in Combating the COVID-19 Pandemic

Longbing Cao

Summary: The SARS-CoV-2 virus and the resulting pandemic have had a tremendous impact on the world. The AI community has made significant efforts to combat the pandemic, but fundamental questions about the role and performance of AI in tackling COVID-19 remain unanswered and need to be addressed as a priority.

IEEE INTELLIGENT SYSTEMS (2022)

Article Computer Science, Interdisciplinary Applications

Evaluating pointwise reliability of machine learning prediction

Giovanna Nicora et al.

Summary: Interest in applying machine learning to clinical and biological problems is growing, but determining the reliability of individual predictions remains a challenge. This paper reviews methods for identifying unreliable predictions and proposes an integrative framework for evaluating prediction reliability in specific scenarios (a toy illustration follows this entry).

JOURNAL OF BIOMEDICAL INFORMATICS (2022)
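
One common family of pointwise-reliability checks flags a prediction as unreliable when its input lies far from the training data. The Python sketch below is a minimal density-based proxy for that idea, not the integrative framework proposed by Nicora et al.; the synthetic data, the choice of 10 neighbors, and the 95th-percentile threshold are all assumptions.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))            # stand-in training set (assumption)

nn = NearestNeighbors(n_neighbors=10).fit(X_train)

def reliability_score(x):
    """Mean distance from x to its 10 nearest training points (lower = more reliable)."""
    dist, _ = nn.kneighbors(x.reshape(1, -1))
    return dist.mean()

# Calibrate an unreliability threshold on the training data itself.
train_scores = np.array([reliability_score(x) for x in X_train])
threshold = np.percentile(train_scores, 95)    # assumed cut-off

x_new = rng.normal(size=8) * 3                 # deliberately out-of-distribution point
score = reliability_score(x_new)
print("score:", score, "unreliable:", score > threshold)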

Article Ethics

Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors?

Hendrik Kempt et al.

Summary: This paper examines the debate over standards of explainability for medical AI, distinguishes the importance of explainability for general AI use from its importance in medicine, proposes evaluating the interpretability of medical decisions on practical grounds, and ultimately suggests resolving the issue by focusing on the AI's certifiability and interpretability.

ETHICS AND INFORMATION TECHNOLOGY (2022)

Editorial Material Computer Science, Hardware & Architecture

Medical Artificial Intelligence: The European Legal Perspective

Karl Stoeger et al.

COMMUNICATIONS OF THE ACM (2021)

Article Computer Science, Hardware & Architecture

Toward Human-AI Interfaces to Support Explainability and Causability in Medical AI

Andreas Holzinger et al.

Summary: Causability measures the extent to which humans can understand machine explanations, a property that is particularly important in medical artificial intelligence (AI). The concept is used to develop and evaluate future human-AI interfaces.

COMPUTER (2021)

Article Computer Science, Cybernetics

Human-centered XAI: Developing design patterns for explanations of clinical decision support systems

Tjeerd A. J. Schoonderwoerd et al.

Summary: Research has focused on the transparency of machine learning models and on human-centered approaches to XAI. This paper presents a case study that applies a human-centered design approach to AI-generated explanations, aiming to integrate human factors into the development process.

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES (2021)

Editorial Material Medicine, General & Internal

Next-Generation Artificial Intelligence for Diagnosis: From Predicting Diagnostic Labels to Wayfinding

Julia Adler-Milstein et al.

JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION (2021)

Review Computer Science, Information Systems

Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics

Jianlong Zhou et al.

Summary: The paper provides a comprehensive overview of methods proposed in the current literature for evaluating ML explanations. It derives properties of explainability from a review of definitions and treats them as objectives that evaluation metrics should meet. The survey finds that quantitative metrics are used primarily to evaluate either the simplicity of an interpretation or the fidelity of an explanation, while subjective measures such as trust and confidence are key to the human-centered evaluation of explainable systems (a toy fidelity computation follows this entry).

ELECTRONICS (2021)
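
Among the quantitative metrics such surveys categorize, deletion-style fidelity is easy to illustrate: if an attribution ranks features well, masking the top-ranked features should degrade the prediction most. The sketch below assumes a linear model, mean-value masking, and a crude coefficient-based attribution; it is an illustrative instance of the metric family, not a method from the paper.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

x = X[0].copy()
baseline = X.mean(axis=0)                      # masking value: per-feature means
p0 = model.predict_proba([x])[0, 1]

# Toy attribution: coefficient * (value - mean), a crude linear saliency.
attribution = model.coef_[0] * (x - baseline)
order = np.argsort(-np.abs(attribution))       # most important features first

x_masked = x.copy()
drop = []
for i in order[:10]:                           # mask the top-10 features one by one
    x_masked[i] = baseline[i]
    drop.append(p0 - model.predict_proba([x_masked])[0, 1])

print("cumulative probability drop after masking top-10:", drop[-1])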

Article Computer Science, Artificial Intelligence

A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI

Erico Tjoa et al.

Summary: Artificial intelligence and machine learning have shown remarkable performance in various fields, but interpretability remains a challenge. The medical sector requires higher levels of interpretability to ensure the reliability of machine decisions, and a deeper understanding of the mechanisms behind machine-learning algorithms is needed to advance medical practice.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2021)

Article Computer Science, Artificial Intelligence

CEFEs: A CNN Explainable Framework for ECG Signals

Barbara Mukami Maweu et al.

Summary: In healthcare, explaining the behavior of deep learning models has become crucial, especially when interpreting ECG signals. This paper proposes CEFEs, a modular framework that gives users interpretable insight into the CNN models used to analyze medical time-series data. Evaluating the capacity and quality of such explainable models under different training regimes is essential for understanding how learned features relate to classification capability.

ARTIFICIAL INTELLIGENCE IN MEDICINE (2021)

Article Computer Science, Artificial Intelligence

Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions

Miroslav Hudec et al.

Summary: The study introduces a novel classification method that lets domain experts select the important observations for each attribute and exploits the variability of aggregation functions for machine learning. It demonstrates the steps of human-in-the-loop interactive machine learning with aggregation functions (a minimal ordinal-sum sketch follows this entry).

KNOWLEDGE-BASED SYSTEMS (2021)
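
The ordinal-sum construction underlying this line of work pieces together different conjunctive functions (t-norms) on sub-intervals of [0, 1], falling back to the minimum elsewhere. The Python sketch below illustrates the construction only; the split at 0.5 and the choice of product and Lukasiewicz t-norms are assumptions, not the paper's configuration.

def ordinal_sum(x, y, pieces=((0.0, 0.5, lambda u, v: u * v),                    # product t-norm
                              (0.5, 1.0, lambda u, v: max(u + v - 1.0, 0.0)))):  # Lukasiewicz t-norm
    """Ordinal sum of t-norms: each piece acts on its own square of [0, 1]^2."""
    for a, b, t in pieces:
        if a <= x <= b and a <= y <= b:
            w = b - a
            # Rescale into [0, 1], apply the local t-norm, rescale back.
            return a + w * t((x - a) / w, (y - a) / w)
    return min(x, y)                            # outside every square: minimum

print(ordinal_sum(0.2, 0.4))   # 0.16 -- product t-norm on [0, 0.5]
print(ordinal_sum(0.6, 0.9))   # 0.5  -- Lukasiewicz t-norm on [0.5, 1]
print(ordinal_sum(0.2, 0.9))   # 0.2  -- falls back to min across squares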

Article Computer Science, Artificial Intelligence

What do we want from Explainable Artificial Intelligence (XAI)? - A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research

Markus Langer et al.

Summary: This paper classifies the stakeholders of Explainable Artificial Intelligence (XAI) and their respective desiderata, and proposes a conceptual model that makes explicit the main concepts and relations needed to satisfy them.

ARTIFICIAL INTELLIGENCE (2021)

Article Computer Science, Artificial Intelligence

An explainable AI system for automated COVID-19 assessment and lesion categorization from CT-scans

Matteo Pennisi et al.

Summary: The study introduces an AI-powered pipeline for automated COVID-19 detection and lesion categorization from CT scans, achieving results comparable to those of expert radiologists. Prior lung and lobe segmentation improves the model's classification performance by more than 6 percentage points.

ARTIFICIAL INTELLIGENCE IN MEDICINE (2021)

Article Engineering, Biomedical

Machine Learning and XAI approaches for Allergy Diagnosis

Ramisetty Kavya et al.

Summary: This work presents a computer-aided framework for allergy diagnosis capable of handling comorbidities. Data-sampling techniques and machine learning algorithms are applied to improve efficiency, with cross-validation used to select the best model. The system's transparency and performance are validated; it is deployed on mobile devices and integrated as a source of information for clinicians to enhance diagnostic accuracy.

BIOMEDICAL SIGNAL PROCESSING AND CONTROL (2021)

Article Computer Science, Hardware & Architecture

The Ten Commandments of Ethical Medical AI

Heimo Mueller et al.

Summary: The proposed ten commandments serve as practical guidelines for those applying artificial intelligence, offering a concise checklist to a wide range of stakeholders.

COMPUTER (2021)

Article Computer Science, Artificial Intelligence

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Andreas Holzinger et al.

Summary: AI excels in certain tasks but humans excel at multi-modal thinking and building self-explanatory systems. The medical domain highlights the importance of various modalities contributing to one result. Using conceptual knowledge to guide model training can lead to more explainable, robust, and less biased machine learning models.

INFORMATION FUSION (2021)

Editorial Material Multidisciplinary Sciences

Beware explanations from AI in health care

Boris Babic et al.

SCIENCE (2021)

Review Oncology

Designing deep learning studies in cancer diagnostics

Andreas Kleppe et al.

Summary: The number of publications on deep learning for cancer diagnostics is increasing rapidly, but clinical translation is progressing slowly. The authors advocate estimating performance on external cohorts, defining a primary analysis in a standardized protocol stored online, and establishing recommended protocol items for the field to facilitate the transition to the clinic.

NATURE REVIEWS CANCER (2021)

Article Computer Science, Information Systems

Explaining Deep Learning-Based Traffic Classification Using a Genetic Algorithm

Seyoung Ahn et al.

Summary: This paper presents an XAI method based on a genetic algorithm for explaining the working mechanism of deep-learning-based traffic classifiers. The method quantifies the importance of each feature and uses the genetic algorithm to generate a feature-selection mask that retains only the important features; the resulting classifier achieves an accuracy of approximately 97.24% (a generic GA sketch follows this entry).

IEEE ACCESS (2021)
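
The mask-generation idea lends itself to a compact illustration. The sketch below evolves a binary feature mask with a generic genetic algorithm so that a fixed classifier keeps its accuracy with as few features as possible; the dataset, classifier, population size, mutation rate, and sparsity penalty are all assumptions, not the configuration used by Ahn et al.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

def fitness(mask):
    """Accuracy on masked inputs minus a small sparsity penalty (assumed weight)."""
    acc = (model.predict(X_te * mask) == y_te).mean()   # zero out masked-off features
    return acc - 0.01 * mask.sum()

pop = rng.integers(0, 2, size=(30, X.shape[1]))         # random initial masks
for _ in range(40):                                     # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(-scores)[:10]]             # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05            # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))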

Article Computer Science, Artificial Intelligence

Explainable AI meets persuasiveness: Translating reasoning results into behavioral change advice

Mauro Dragoni et al.

ARTIFICIAL INTELLIGENCE IN MEDICINE (2020)

Article Computer Science, Artificial Intelligence

A case-based ensemble learning system for explainable breast cancer recurrence prediction

Dongxiao Gu et al.

ARTIFICIAL INTELLIGENCE IN MEDICINE (2020)

Article Medical Informatics

Parental understanding of crucial medical jargon used in prenatal prematurity counseling

Nicole M. Rau et al.

BMC MEDICAL INFORMATICS AND DECISION MAKING (2020)

Proceedings Paper Computer Science, Artificial Intelligence

Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing

Inioluwa Deborah Raji et al.

FAT* '20: PROCEEDINGS OF THE 2020 CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY (2020)

Review Computer Science, Information Systems

Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review

Seyedeh Neelufar Payrovnaziri et al.

JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION (2020)

Article Computer Science, Artificial Intelligence

On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities

Mauricio Reyes et al.

RADIOLOGY-ARTIFICIAL INTELLIGENCE (2020)

Article Computer Science, Artificial Intelligence

Subspecialty-Level Deep Gray Matter Differential Diagnoses with Deep Learning and Bayesian Networks on Clinical Brain MRI: A Pilot Study

Jeffrey D. Rudie et al.

RADIOLOGY-ARTIFICIAL INTELLIGENCE (2020)

Article Computer Science, Theory & Methods

A Survey of Methods for Explaining Black Box Models

Riccardo Guidotti et al.

ACM COMPUTING SURVEYS (2019)

Article Computer Science, Artificial Intelligence

Explanation in artificial intelligence: Insights from the social sciences

Tim Miller

ARTIFICIAL INTELLIGENCE (2019)

Article Computer Science, Hardware & Architecture

The Seven Tools of Causal Inference, with Reflections on Machine Learning

Judea Pearl

COMMUNICATIONS OF THE ACM (2019)

Article Multidisciplinary Sciences

Evolution of resilience in protein interactomes across the tree of life

Marinka Zitnik et al.

PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA (2019)

Article Biochemistry & Molecular Biology

End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography

Diego Ardila et al.

NATURE MEDICINE (2019)

Editorial Material Multidisciplinary Sciences

In defense of the black box

Elizabeth A. Holm

SCIENCE (2019)

Article Engineering, Electrical & Electronic

Methods for interpreting and understanding deep neural networks

Gregoire Montavon et al.

DIGITAL SIGNAL PROCESSING (2018)

Article Computer Science, Information Systems

An Ontology-Based Interpretable Fuzzy Decision Support System for Diabetes Diagnosis

Shaker El-Sappagh et al.

IEEE ACCESS (2018)

Article Engineering, Biomedical

Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning

Ryan Poplin et al.

NATURE BIOMEDICAL ENGINEERING (2018)

Article Computer Science, Information Systems

A Methodological Framework for the Integrated Design of Decision-Intensive Care Pathways: An Application to the Management of COPD Patients

Carlo Combi et al.

JOURNAL OF HEALTHCARE INFORMATICS RESEARCH (2017)

Article Medicine, General & Internal

Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs

Varun Gulshan et al.

JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION (2016)

Review Computer Science, Information Systems

Predictive data mining in clinical medicine: Current issues and guidelines

Riccardo Bellazzi et al.

INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS (2008)