4.7 Review

How does the model make predictions? A systematic literature review on the explainability power of machine learning in healthcare

Related references

Note: Only a selection of the references is listed.
Editorial Material Health Care Sciences & Services

Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based

Liam G. McCoy et al.

Summary: This article examines the role of explainability in machine learning for healthcare. It argues that explainability is not required for such systems to be evidence-based, and calls instead for the development of robust empirical methods to evaluate inexplicable algorithmic systems.

JOURNAL OF CLINICAL EPIDEMIOLOGY (2022)

Review Radiology, Nuclear Medicine & Medical Imaging

A review of explainable and interpretable AI with applications in COVID-19 imaging

Jordan D. Fuhrman et al.

Summary: The development of medical imaging AI for evaluating COVID-19 patients shows potential to enhance clinical decision making, with developers using explainability techniques to increase user trust and improve prospects for clinical translation.

MEDICAL PHYSICS (2022)

Article Computer Science, Artificial Intelligence

The three ghosts of medical AI: Can the black-box present deliver?

Thomas P. Quinn et al.

Summary: This article uses the analogy of the three Christmas ghosts to guide readers through the past, present, and future of medical AI. It highlights modern machine learning's reliance on opaque models and discusses the implications for transparency in healthcare. The article argues that opaque models complicate quality assurance, undermine trust, and hinder physician-patient dialogue, and suggests upholding transparency in model design and validation to ensure the success of medical AI.

ARTIFICIAL INTELLIGENCE IN MEDICINE (2022)

Article Cardiac & Cardiovascular Systems

GENESIS: Gene-Specific Machine Learning Models for Variants of Uncertain Significance Found in Catecholaminergic Polymorphic Ventricular Tachycardia and Long QT Syndrome-Associated Genes

Rachel L. Draelos et al.

Summary: This study developed gene-specific machine learning models to predict the pathogenicity of variants in genes associated with cardiac channelopathies. The models achieved high predictive performance and may aid post-genetic-testing diagnostic analyses of variant pathogenicity.

CIRCULATION-ARRHYTHMIA AND ELECTROPHYSIOLOGY (2022)

Review Oncology

Explainable artificial intelligence in skin cancer recognition: A systematic review

Katja Hauser et al.

Summary: This study investigates the application of explainable artificial intelligence (XAI) in skin cancer detection. It found that XAI is commonly used during the development of new deep neural networks (DNNs), but there is a lack of systematic and rigorous evaluation of its usefulness in this scenario.

EUROPEAN JOURNAL OF CANCER (2022)

Article Computer Science, Artificial Intelligence

Explainable multiple abnormality classification of chest CT volumes

Rachel Lea Draelos et al.

Summary: Understanding model predictions in healthcare is crucial, and this research introduces the challenging task of explainable multiple abnormality classification in volumetric medical images. A novel multiple instance learning convolutional neural network, AxialNet, and attention mechanism HiResCAM are proposed, along with a new approach to automatically obtain 3D allowed regions.

ARTIFICIAL INTELLIGENCE IN MEDICINE (2022)
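The HiResCAM mechanism cited in this entry differs from the widely used Grad-CAM in one step: it multiplies gradients and activations element-wise before summing over channels, rather than first averaging the gradients into per-channel weights. A minimal NumPy sketch of that distinction on toy arrays (function names are ours, not the paper's; this is an illustration, not the authors' implementation):

```python
import numpy as np

def grad_cam_map(acts, grads):
    # Grad-CAM: average gradients over space into per-channel weights first
    weights = grads.mean(axis=(1, 2), keepdims=True)      # shape (C, 1, 1)
    return np.maximum((weights * acts).sum(axis=0), 0.0)  # ReLU on the map

def hirescam_map(acts, grads):
    # HiResCAM: element-wise gradient-activation product, summed over
    # channels, so the spatial structure of the gradients is preserved
    return (grads * acts).sum(axis=0)

rng = np.random.default_rng(0)
acts = rng.random((8, 4, 4))   # toy feature maps, shape (C, H, W)
grads = rng.random((8, 4, 4))  # toy gradients of the class score w.r.t. acts

print(hirescam_map(acts, grads).shape)  # (4, 4) attribution map
```

In a real pipeline, `acts` and `grads` would come from the last convolutional layer of the network via backpropagation; the only change relative to Grad-CAM is where the sum over channels happens.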

Article Computer Science, Artificial Intelligence

Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

Bas H. M. Van der Velden et al.

Summary: This survey examines the applications of explainable artificial intelligence (XAI) in deep learning-based medical image analysis. It introduces a framework for classifying deep learning-based medical image analysis methods based on XAI criteria. The survey also categorizes and investigates XAI techniques in medical image analysis according to the framework and anatomical location. The paper concludes by discussing future opportunities for XAI in medical image analysis.

MEDICAL IMAGE ANALYSIS (2022)

Article Medicine, General & Internal

Prediction of Tinnitus Perception Based on Daily Life mHealth Data Using Country Origin and Season

Johannes Allgaier et al.

Summary: Tinnitus is an auditory phantom perception that can severely affect quality of life. Multimodal data analyses, particularly from mHealth data sources, can provide new insights into tinnitus. Examining data from the TrackYourTinnitus mHealth platform, this study found differences in tinnitus symptoms based on the users' country of origin and the season.

JOURNAL OF CLINICAL MEDICINE (2022)

Review Radiology, Nuclear Medicine & Medical Imaging

Radiology subspecialisation in Africa: A review of the current status

Efosa P. Iyawe et al.

Summary: Research indicates that subspecialist radiology training programmes are scarce in African countries, with only a few countries having well-established programmes in place. Alternative models of subspecialist radiology training are suggested to address this deficit.

SA JOURNAL OF RADIOLOGY (2021)

Review Oncology

Deep learning in cancer pathology: a new generation of clinical biomarkers

Amelie Echle et al.

Summary: Clinical workflows in oncology rely on molecular biomarkers for prediction and prognosis. Deep learning can extract biomarkers directly from routine histology images, potentially enhancing clinical decision-making, but such approaches require rigorous external validation in clinical settings.

BRITISH JOURNAL OF CANCER (2021)

Review Physics, Multidisciplinary

Explainable AI: A Review of Machine Learning Interpretability Methods

Pantelis Linardatos et al.

Summary: Recent advances in artificial intelligence have led to widespread industrial adoption, with machine learning systems demonstrating superhuman performance. However, the complexity of these systems has made them difficult to explain, hindering their application in sensitive domains. Therefore, there is a renewed interest in the field of explainable artificial intelligence.

ENTROPY (2021)

Review Cardiac & Cardiovascular Systems

Artificial intelligence-enhanced electrocardiography in cardiovascular disease management

Konstantinos C. Siontis et al.

Summary: This review summarizes the use of artificial intelligence-enhanced electrocardiography in the detection of cardiovascular disease in at-risk populations, discussing its implications for clinical decision-making in patients with cardiovascular disease and critically appraising potential limitations and unknowns.

NATURE REVIEWS CARDIOLOGY (2021)

Article Chemistry, Analytical

Using Explainable Machine Learning to Improve Intensive Care Unit Alarm Systems

Jose A. Gonzalez-Novoa et al.

Summary: This paper introduces the use of explainable machine learning techniques for automated analysis of ICU patient data. The results show that the proposed model can effectively predict mortality rates for ICU patients in different age groups, and help improve alarm systems to enhance healthcare personnel's vigilance.

SENSORS (2021)

Review Gastroenterology & Hepatology

Artificial intelligence-assisted colonoscopy: A review of current state of practice and research

Mahsa Taghiakbari et al.

Summary: Colonoscopy is an effective screening procedure in colorectal cancer prevention programs, and AI-assisted decision support systems show promise in improving the detection and classification of colorectal polyps and cancer. However, challenges remain in determining their real-time application value in clinical practice due to limitations in model design, validation, and testing under real-life conditions.

WORLD JOURNAL OF GASTROENTEROLOGY (2021)

Article Medicine, General & Internal

The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

Matthew J. Page et al.

SYSTEMATIC REVIEWS (2021)

Article Computer Science, Artificial Intelligence

Using ontologies to enhance human understandability of global post-hoc explanations of black-box models

Roberto Confalonieri et al.

Summary: The use of ontologies can enhance the understandability of global post-hoc explanations, particularly when combined with domain knowledge. Results show that decision trees generated with ontologies are more understandable than standard models, without significantly compromising accuracy.

ARTIFICIAL INTELLIGENCE (2021)

Review Chemistry, Multidisciplinary

Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review

Anna Markella Antoniadi et al.

Summary: Machine learning and artificial intelligence have great potential to transform medicine, but the lack of transparency in AI applications, especially in Clinical Decision Support Systems (CDSS), can undermine appropriate reliance on their outputs. Explainable AI (XAI) provides a rationale that helps users understand AI outputs, yet there is a distinct lack of XAI applications and user studies in the context of CDSS. Further research is needed to explore the opportunities and challenges of implementing XAI in CDSS.

APPLIED SCIENCES-BASEL (2021)

Article Computer Science, Artificial Intelligence

Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability

Muhammad Rehman Zafar et al.

Summary: LIME is a popular technique for improving the interpretability of black-box ML algorithms, but its random perturbation step can lead to unstable explanations. To address this, the authors propose DLIME, which uses agglomerative hierarchical clustering (AHC) and k-nearest neighbours (KNN) to determine the cluster relevant to the explanation target, then trains a linear model on that cluster to generate explanations.

MACHINE LEARNING AND KNOWLEDGE EXTRACTION (2021)
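The core idea in this entry, replacing LIME's random perturbations with a deterministic neighbourhood chosen by clustering, can be sketched in a few lines. The following is a much-simplified illustration, not the authors' implementation: the naive single-linkage clustering, the toy linear black box, and all function names (`ahc`, `dlime_explain`) are our own assumptions.

```python
import numpy as np

def ahc(points, n_clusters):
    # Naive single-linkage agglomerative hierarchical clustering:
    # repeatedly merge the two clusters with the closest member pair.
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best, best_d = (0, 1), np.inf
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(np.linalg.norm(points[a] - points[b])
                        for a in clusters[i] for b in clusters[j])
                if d < best_d:
                    best_d, best = d, (i, j)
        i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

def dlime_explain(x, X, blackbox, n_clusters=2):
    # 1) Cluster the training data (deterministic: no random sampling).
    clusters = ahc(X, n_clusters)
    # 2) Nearest-neighbour step: pick the cluster of x's closest point.
    nn = np.argmin(np.linalg.norm(X - x, axis=1))
    members = next(c for c in clusters if nn in c)
    Xc = X[members]
    # 3) Fit a linear surrogate on that cluster against black-box outputs.
    A = np.hstack([Xc, np.ones((len(Xc), 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, blackbox(Xc), rcond=None)
    return coef[:-1]  # feature weights = the local explanation

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (6, 2)),   # two well-separated blobs
               rng.normal(3, 0.3, (6, 2))])
blackbox = lambda Z: Z @ np.array([2.0, -1.0])  # stand-in for an opaque model
w = dlime_explain(np.array([3.0, 3.0]), X, blackbox)
```

Because every step is deterministic, repeated calls with the same inputs return the same explanation, which is exactly the instability problem of random perturbation that DLIME targets.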

Article Computer Science, Artificial Intelligence

Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

Alejandro Barredo Arrieta et al.

INFORMATION FUSION (2020)

Article Health Care Sciences & Services

The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database

Stan Benjamens et al.

NPJ DIGITAL MEDICINE (2020)

Article Computer Science, Information Systems

How did you get to this number? Stakeholder needs for implementing predictive analytics: a pre-implementation qualitative study

Natalie C. Benda et al.

JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION (2020)

Review Imaging Science & Photographic Technology

Explainable Deep Learning Models in Medical Image Analysis

Amitojdeep Singh et al.

JOURNAL OF IMAGING (2020)

Article Computer Science, Artificial Intelligence

Explanation in artificial intelligence: Insights from the social sciences

Tim Miller

ARTIFICIAL INTELLIGENCE (2019)

Article Computer Science, Artificial Intelligence

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Cynthia Rudin

NATURE MACHINE INTELLIGENCE (2019)

Article Statistics & Probability

Distribution-Free Predictive Inference for Regression

Jing Lei et al.

JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION (2018)

Article Computer Science, Information Systems

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

Amina Adadi et al.

IEEE ACCESS (2018)

Review Oncology

Artificial intelligence in radiology

Ahmed Hosny et al.

NATURE REVIEWS CANCER (2018)

Article Multidisciplinary Sciences

Dermatologist-level classification of skin cancer with deep neural networks

Andre Esteva et al.

NATURE (2017)

Article Multidisciplinary Sciences

The FAIR Guiding Principles for scientific data management and stewardship

Mark D. Wilkinson et al.

SCIENTIFIC DATA (2016)

Article Statistics & Probability

Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation

Alex Goldstein et al.

JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS (2015)

Review Radiology, Nuclear Medicine & Medical Imaging

Cognitive and System Factors Contributing to Diagnostic Errors in Radiology

Cindy S. Lee et al.

AMERICAN JOURNAL OF ROENTGENOLOGY (2013)

Article Biochemical Research Methods

Permutation importance: a corrected feature importance measure

Andre Altmann et al.

BIOINFORMATICS (2010)