Article

Humans and Algorithms Detecting Fake News: Effects of Individual and Contextual Confidence on Trust in Algorithmic Advice

Journal

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION
Volume 39, Issue 7, Pages 1483-1494

Publisher

TAYLOR & FRANCIS INC
DOI: 10.1080/10447318.2022.2097601



Algorithms have become an integral part of our daily lives, but people are often reluctant to follow algorithmic advice due to a lack of trust. This study analyzes the influence of individual and contextual factors on people's willingness to request and follow algorithmic advice, and validates these influences in the context of fake news detection.
Algorithms have become part of our daily lives and have taken over many decision-making processes. It has often been argued and shown that algorithmic judgment can be as accurate as, or even more accurate than, human judgment. However, humans are reluctant to follow algorithmic advice, especially when they do not trust the algorithm to be better than they are themselves: self-confidence has been identified as one factor that influences the willingness to follow algorithmic advice. However, it is unknown whether this is an individual or a contextual characteristic. The current study analyzes whether individual or contextual factors determine whether humans are willing to request algorithmic advice, whether they follow it, and whether their performance improves once they receive it. We consider the use of algorithmic advice in fake news detection. Using data from 110 participants and 1610 news stories, of which almost half were fake, we find that humans without algorithmic advice correctly assess the news stories 64% of the time. This increases only marginally, to 66%, after they have received feedback from an algorithm that is itself 67% correct. The willingness to accept advice indeed decreases with participants' self-confidence in the initial assessment, but this effect is contextual rather than individual. That is, participants who are on average more confident accept advice just as often as those who are on average less confident. What does hold, however, is that a participant is less likely to accept algorithmic advice for the news stories about which that participant is more confident. We outline the implications of these findings for the design of experimental tests of algorithmic advice and give general guidelines for human-algorithm interaction that follow from our results.

