Article

In AI We Trust? Effects of Agency Locus and Transparency on Uncertainty Reduction in Human-AI Interaction

Journal

JOURNAL OF COMPUTER-MEDIATED COMMUNICATION
Volume 26, Issue 6, Pages 384-402

Publisher

OXFORD UNIV PRESS INC
DOI: 10.1093/jcmc/zmab013

Keywords

Machine Learning; Agency Locus; Agency Attribution; Transparency; Uncertainty; Trust

Funding

  1. Arthur W. Page Center for Integrity in Public Communication at the Donald P. Bellisario College of Communications at Pennsylvania State University, University Park


Abstract

Artificial intelligence (AI) is increasingly used to make decisions for humans. Unlike traditional AI that is programmed to follow human-made rules, machine-learning AI generates rules from data. These machine-generated rules are often unintelligible to humans. Will users feel more uncertainty about decisions governed by such rules? To what extent does rule transparency reduce uncertainty and increase users' trust? In a 2 × 3 × 2 between-subjects online experiment, 491 participants interacted with a website that was purported to be a decision-making AI system. Three factors of the AI system were manipulated: agency locus (human-made rules vs. machine-learned rules), transparency (no vs. placebic vs. real explanations), and task (detecting fake news vs. assessing personality). Results show that machine-learning AI triggered less social presence, which increased uncertainty and lowered trust. Transparency reduced uncertainty and enhanced trust, but the mechanisms for this effect differed between the two types of AI.

Lay Summary

Machine-learning AI systems are governed by rules that the system generates from its analysis of large databases. These rules are not predetermined by humans, and they can be difficult for humans to interpret. In this research, I ask whether users trust the judgments of systems driven by such machine-made rules. Compared with a traditional system programmed to follow human-made rules, machine-learning AI was perceived as less humanlike, which made users more uncertain about the decisions it produced and, in turn, lowered their trust in the system and their intention to use it. Making the rationales for the system's decisions transparent alleviated users' uncertainty and enhanced their trust, provided that the rationales were meaningful and informative.
