Proceedings Paper

Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3544548.3580945

Keywords

Artificial Intelligence; Human-Computer Interaction (HCI); Explainable AI (XAI); Human-Centered Computing; Explainability; Transparency; AI bias

Abstract

Biases in Artificial Intelligence (AI) systems or their results are an important issue that demands AI explainability. Despite the prevalence of AI applications, the general public is not necessarily equipped to understand how black-box algorithms work or how to deal with biases. To inform designs for explainable AI (XAI), we conducted in-depth interviews with major stakeholders, both end-users (n = 24) and engineers (n = 15), to investigate how they made sense of AI applications and the associated biases in situations of high and low stakes. We discussed users' perceptions and attributions of AI biases and their desired levels and types of explainability. We found that personal relevance and boundaries, as well as the level of stakes, are two major dimensions for developing user trust, especially in biased situations, and for informing XAI designs.

