4.3 Article

Trust criteria for artificial intelligence in health: normative and epistemic considerations

Related references

Note: Only some of the references are listed.
Review Health Care Sciences & Services

Explainable artificial intelligence for mental health through transparency and interpretability for understandability

Dan W. Joyce et al.

Summary: The literature on AI and ML in mental health and psychiatry lacks consensus on the meaning of "explainability". In the more general XAI literature, there is some agreement on model-agnostic techniques that make complex models easier for humans to understand. However, in this study, the authors propose a different approach by defining model/algorithm explainability as understandability, which is a function of transparency and interpretability. They introduce the TIFU framework and argue that understanding AI/ML models is crucial in psychiatry due to the probabilistic relationships between symptoms, disorders, and their causes.

NPJ DIGITAL MEDICINE (2023)

Article Information Science & Library Science

Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users' Information Processing

Kevin Bauer et al.

Summary: Through two empirical studies, the researchers found that feature-based explanations provided by AI systems can change users' understanding of information and the world, thereby affecting their decision-making. However, these explanations may also lead to the accumulation of misconceptions and create spillover effects that alter user behavior in related domains. Therefore, when employing explainable AI methods, the potential side effects of such mental-model adjustments should be considered to avoid inadvertently manipulating user behavior, fostering discriminatory tendencies, and adding noise to decision-making.

INFORMATION SYSTEMS RESEARCH (2023)

Article Medicine, General & Internal

Artificial Hallucinations in ChatGPT: Implications in Scientific Writing

Hussam Alkaissi et al.

Summary: ChatGPT, a new chatbot technology, is set to have a significant impact on various industries, including healthcare, medical education, biomedical research, and scientific writing. However, its implications for academic writing remain largely unknown.

CUREUS JOURNAL OF MEDICAL SCIENCE (2023)

Article Psychology, Multidisciplinary

Human confidence in artificial intelligence and in themselves: The evolution and impact of confidence on adoption of AI advice

Leah Chong et al.

Summary: This research investigates the impact of human self-confidence on decisions to accept or reject AI suggestions, finding that self-confidence plays a significant role in whether AI advice is adopted.

COMPUTERS IN HUMAN BEHAVIOR (2022)

Article Health Care Sciences & Services

AI in the hands of imperfect users

Kristin M. M. Kostick-Quenet et al.

Summary: While bias in algorithms has received much attention, there is a need to address potential biases among human users of AI/ML and factors that influence user reliance. This article argues for a systematic approach to identifying user biases and calls for the development of interface design features informed by decision science and behavioral economics to promote critical decision-making using AI/ML.

NPJ DIGITAL MEDICINE (2022)

Article Computer Science, Artificial Intelligence

A manifesto on explainability for artificial intelligence in medicine

Carlo Combi et al.

Summary: This paper focuses on the importance of explainable artificial intelligence (XAI) in the field of biomedicine. By bringing together researchers with different roles and perspectives, it explores XAI in depth and presents a series of requirements for achieving explainability in AI.

ARTIFICIAL INTELLIGENCE IN MEDICINE (2022)

Article Ethics

Mitigating Racial Bias in Machine Learning

Kristin M. Kostick-Quenet et al.

Summary: AI-based applications in the health sector raise concerns about ethics, legality, and safety. Algorithms trained on data from majority populations may generate less accurate or reliable results for minorities and disadvantaged groups.

JOURNAL OF LAW MEDICINE & ETHICS (2022)

Review Computer Science, Hardware & Architecture

Documentation to facilitate communication between dataset creators and consumers

Timnit Gebru et al.

Summary: Data plays a critical role in machine learning; mismatched datasets can lead to unintended model behaviors and amplify societal biases. The World Economic Forum suggests documenting the provenance, creation, and use of machine learning datasets to prevent discriminatory outcomes.

COMMUNICATIONS OF THE ACM (2021)

Editorial Material Multidisciplinary Sciences

Beware explanations from AI in health care

Boris Babic et al.

SCIENCE (2021)

Article Health Care Sciences & Services

Do as AI say: susceptibility in deployment of clinical decision-aids

Susanne Gaube et al.

Summary: This study found that radiologists rated diagnostic advice as lower quality when it appeared to come from an AI system, whereas less experienced physicians showed no such bias. Diagnostic accuracy decreased significantly when participants received inaccurate advice, regardless of whether its purported source was an AI or a human expert. Careful consideration is therefore needed when deploying advice in clinical settings, whether it comes from AI or non-AI sources.

NPJ DIGITAL MEDICINE (2021)

Article Computer Science, Cybernetics

Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy

Ben Shneiderman

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION (2020)

Article Ethics

In AI We Trust: Ethics, Artificial Intelligence, and Reliability

Mark Ryan

SCIENCE AND ENGINEERING ETHICS (2020)

Review Business

Human Trust in Artificial Intelligence: Review of Empirical Research

Ella Glikson et al.

ACADEMY OF MANAGEMENT ANNALS (2020)

Editorial Material Multidisciplinary Sciences

Algorithms on regulatory lockdown in medicine

Boris Babic et al.

SCIENCE (2019)

Article Cardiac & Cardiovascular Systems

A Multisite Randomized Controlled Trial of a Patient-Centered Ventricular Assist Device Decision Aid (VADDA Trial)

Kristin M. Kostick et al.

JOURNAL OF CARDIAC FAILURE (2018)

Article Cardiac & Cardiovascular Systems

Development and validation of a patient-centered knowledge scale for left ventricular assist device placement

Kristin M. Kostick et al.

JOURNAL OF HEART AND LUNG TRANSPLANTATION (2016)

Review Behavioral Sciences

Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust

Kevin Anthony Hoff et al.

HUMAN FACTORS (2015)

Proceedings Paper Computer Science, Information Systems

The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems

Adrian Bussone et al.

2015 IEEE INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2015) (2015)

Review Clinical Neurology

Treatment of Dystonia

Mary Ann Thenganatt et al.

NEUROTHERAPEUTICS (2014)

Review Behavioral Sciences

Trust in automation: Designing for appropriate reliance

JD Lee et al.

HUMAN FACTORS (2004)