4.6 Article

Do stakeholder needs differ? - Designing stakeholder-tailored Explainable Artificial Intelligence (XAI) interfaces

Related references

Note: only a subset of the references is listed here; download the original article for the complete reference information.
Article Computer Science, Cybernetics

Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness

Yuri Nakao et al.

Summary: The researchers developed a user interface that enables data scientists and domain experts to investigate the fairness of AI models, demonstrated with loan-application data. Workshops and user evaluations showed the tool to be effective for investigating AI fairness.

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION (2023)

Article Computer Science, Cybernetics

What Are the Users' Needs? Design of a User-Centered Explainable Artificial Intelligence Diagnostic System

Xin He et al.

Summary: The paper discusses the increasing application of artificial intelligence (AI) systems in the high-risk field of medicine and the need for these systems to explain their decisions to different users. The lack of explainable AI (XAI) design practices for consumer users in the medical domain is highlighted. To address this, the study developed a library of XAI user needs in the medical domain and designed an XAI-based electrocardiogram diagnostic system prototype for consumer users. User evaluation of the prototype provided empirical experience and promoted consumer user-centered XAI practices.

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION (2023)

Article Computer Science, Artificial Intelligence

Quod erat demonstrandum? - Towards a typology of the concept of explanation for the design of explainable AI

Federico Cabitza et al.

Summary: This paper presents a framework for defining different types of explanations of AI systems and criteria for evaluating their quality. It proposes a structural view of constructing explanations and suggests a typology based on the explanandum, explanantia, and their relationship. The paper highlights the importance of epistemological and psychological perspectives in defining quality criteria and aims to support clear inventories, verification criteria, and validation methods for AI explainability.

EXPERT SYSTEMS WITH APPLICATIONS (2023)

Article Computer Science, Cybernetics

Designing an XAI interface for BCI experts: A contextual design for pragmatic explanation interface based on domain knowledge in a specific context

Sangyeon Kim et al.

Summary: Domain experts rely on AI algorithms in decision-support systems, while researchers in brain-computer interface have used deep learning algorithms for decoding neural signals. However, the complexity of these algorithms may result in low transparency. Explainable artificial intelligence (XAI) can provide a solution by making AI algorithms and decisions more interpretable, but the explanations must be designed to meet the users' contextual expectations. This study proposes an explanation interface for BCI experts, using a user-centered approach and scientific knowledge.

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES (2023)

Article Computer Science, Information Systems

Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities

Christian Meske et al.

Summary: Artificial Intelligence (AI) has permeated many aspects of our lives, and this research note discusses the risks of black-box AI, the need for explainability, and previous research on Explainable AI (XAI) in information systems research. The note also explores the origin, objectives, stakeholders, and quality criteria of personalized explanations in XAI, and concludes with an outlook on future XAI research.

INFORMATION SYSTEMS MANAGEMENT (2022)

Article Computer Science, Artificial Intelligence

Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond

Guang Yang et al.

Summary: XAI is an emerging research field in machine learning that aims to explain the decision-making process of AI systems. In healthcare, XAI is becoming increasingly important for improving the transparency and explainability of deep learning applications, although the lack of explainability in most AI systems may be a major barrier to successful implementation of AI tools in clinical practice.

INFORMATION FUSION (2022)

Article Computer Science, Artificial Intelligence

Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence

Andreas Holzinger et al.

Summary: Medical artificial intelligence systems have achieved significant success and are crucial for improving human health. To enhance performance, it is essential to address uncertainty and errors while explaining how results are produced. Information fusion can help develop more robust and explainable machine learning models.

INFORMATION FUSION (2022)

Article Engineering, Industrial

Examining Physicians' Explanatory Reasoning in Re-Diagnosis Scenarios for Improving AI Diagnostic Systems

Lamia Alam et al.

Summary: This study investigates the explanations used by physicians in the context of rediagnosis or a change in diagnosis and presents nine broad categories of explanations. Design recommendations are provided to improve user trust and satisfaction with medical diagnostic AI systems.

JOURNAL OF COGNITIVE ENGINEERING AND DECISION MAKING (2022)

Article Computer Science, Cybernetics

Who needs explanation and when? Juggling explainable AI and user epistemic uncertainty

Jinglu Jiang et al.

Summary: This study investigates the effects of providing three types of post-hoc explanations on user decision-making outcomes in the context of AI advice acceptance and adoption. The results show that users' epistemic uncertainty plays a significant role in understanding the impacts of AI explainability. Providing prediction rationale is beneficial when users' uncertainty increases, while alternative advice and prediction confidence scores may hinder advice acceptance.

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES (2022)

Review Chemistry, Multidisciplinary

XAI Systems Evaluation: A Review of Human and Computer-Centred Methods

Pedro Lopes et al.

Summary: The lack of transparency in powerful Machine Learning systems has led to the emergence of the XAI field. Researchers focus on developing explanation techniques to better understand the system's reasoning for a particular output. This paper presents a survey of Human-centred and Computer-centred methods to evaluate XAI systems, and proposes a new taxonomy for clearer categorization of these evaluation methods.

APPLIED SCIENCES-BASEL (2022)

Article Chemistry, Multidisciplinary

The Role of XAI in Advice-Taking from a Clinical Decision Support System: A Comparative User Study of Feature Contribution-Based and Example-Based Explanations

Yuhan Du et al.

Summary: This study compared two explainable artificial intelligence methods for clinical decision support systems based on a user study of healthcare practitioners. The results showed no significant difference between the two methods in terms of advice-taking, but both methods may lead to over-reliance. The study also found that different types of healthcare practitioners have differing preferences for explanations, suggesting that CDSS developers should choose XAI methods based on their target users.

APPLIED SCIENCES-BASEL (2022)

Article Computer Science, Information Systems

Personas for Artificial Intelligence (AI) an Open Source Toolbox

Andreas Holzinger et al.

Summary: This paper introduces how the personas method can be adapted to support the development of human-centered AI applications, with a demonstration in the medical field. The work aims to foster the development of novel human-AI interfaces that will be urgently needed in the near future.

IEEE ACCESS (2022)

Article Computer Science, Cybernetics

Human-centered XAI: Developing design patterns for explanations of clinical decision support systems

Tjeerd A. J. Schoonderwoerd et al.

Summary: Research focuses on the transparency of machine learning models and on human-centered approaches to XAI. The paper presents a case study applying a human-centered design approach to AI-generated explanations, aiming to integrate human factors into the development process.

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES (2021)

Article Audiology & Speech-Language Pathology

Dysarthria following acute ischemic stroke: Prospective evaluation of characteristics, type and severity

Elien De Cock et al.

Summary: The study identified common speech characteristics of dysarthria following acute ischemic stroke, with most patients showing mild impairments and achieving complete recovery within one week. The findings highlight the importance of early assessment and monitoring of dysarthria in stroke patients.

INTERNATIONAL JOURNAL OF LANGUAGE & COMMUNICATION DISORDERS (2021)

Article Computer Science, Artificial Intelligence

A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems

Sina Mohseni et al.

Summary: The demand for interpretable and accountable intelligent systems is increasing as artificial intelligence applications become more prevalent in everyday life. Researchers from various disciplines collaborate to define, design, and assess explainable AI systems. By categorizing XAI design goals and evaluation methods, this article aims to support different design objectives and evaluation methods in interdisciplinary XAI research.

ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS (2021)

Article Computer Science, Artificial Intelligence

What do we want from Explainable Artificial Intelligence (XAI)? - A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research

Markus Langer et al.

Summary: This paper discusses the classification of stakeholders and their desires in Explainable Artificial Intelligence (XAI), and proposes a model to explicitly explain the main concepts and relationships needed to fulfill stakeholders' desires.

ARTIFICIAL INTELLIGENCE (2021)

Article Medical Informatics

Examining the effect of explanation on satisfaction and trust in AI diagnostic systems

Lamia Alam et al.

Summary: Artificial intelligence has the potential to transform healthcare by assisting in medical diagnosis, but for success, AI systems need to provide explanations for diagnoses. Two simulation experiments showed that explanations can improve patient satisfaction and trust, especially during critical re-diagnosis periods, suggesting the importance of incorporating visual and example-based explanations into AI systems in healthcare.

BMC MEDICAL INFORMATICS AND DECISION MAKING (2021)

Article Computer Science, Artificial Intelligence

Recommender systems in the healthcare domain: state-of-the-art and research issues

Thi Ngoc Trang Tran et al.

Summary: This article discusses the importance of healthcare recommender systems in reducing difficulties for users seeking medical information and aiding medical professionals in making more accurate decisions. Through systematic overview and working examples, it delves into the practical application and challenges of recommender systems.

JOURNAL OF INTELLIGENT INFORMATION SYSTEMS (2021)

Article Computer Science, Cybernetics

Apps That Motivate: a Taxonomy of App Features Based on Self-Determination Theory

Gabriela Villalobos-Zuniga et al.

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES (2020)

Article Medical Informatics

Explainability for artificial intelligence in healthcare: a multidisciplinary perspective

Julia Amann et al.

BMC MEDICAL INFORMATICS AND DECISION MAKING (2020)

Editorial Material Oncology

Big data and machine learning algorithms for health-care delivery

Kee Yuan Ngiam et al.

LANCET ONCOLOGY (2019)

Article Computer Science, Information Systems

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

Amina Adadi et al.

IEEE ACCESS (2018)

Review Clinical Neurology

Artificial intelligence in healthcare: past, present and future

Fei Jiang et al.

STROKE AND VASCULAR NEUROLOGY (2017)

Article Cardiac & Cardiovascular Systems

Machine Learning in Medicine

Rahul C. Deo

CIRCULATION (2015)

Article Health Care Sciences & Services

Implementing a framework for goal setting in community based stroke rehabilitation: a process evaluation

Lesley Scobbie et al.

BMC HEALTH SERVICES RESEARCH (2013)

Article Public, Environmental & Occupational Health

Does treatment adherence correlates with health related quality of life? Findings from a cross sectional study

Fahad Saleem et al.

BMC PUBLIC HEALTH (2012)

Editorial Material Medicine, General & Internal

Shared Decision Making - The Pinnacle of Patient-Centered Care

Michael J. Barry et al.

NEW ENGLAND JOURNAL OF MEDICINE (2012)

Article Education, Scientific Disciplines

Self-regulation theory: Applications to medical education: AMEE Guide No. 58

John Sandars et al.

MEDICAL TEACHER (2011)

Review Audiology & Speech-Language Pathology

Principles of experience-dependent neural plasticity: Implications for rehabilitation after brain damage

Jeffrey A. Kleim et al.

JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH (2008)