Journal
HUMAN BEHAVIOR AND EMERGING TECHNOLOGIES
Volume 1, Issue 1, Pages 48-61
Publisher
WILEY-HINDAWI
DOI: 10.1002/hbe2.115
Keywords
social influence; social media; social networking
Funding
- Defense Advanced Research Projects Agency [W911NF-12-1-0037, W911NF-17-C-0094]
- National Institutes of Health [5R01DA039928-03]
- Air Force Office of Scientific Research [FA9550-17-1-0327]
Abstract
The increased relevance of social media in our daily life has been accompanied by efforts to manipulate online conversations and opinions. Deceptive social bots (automated or semi-automated accounts designed to impersonate humans) have been successfully exploited for these kinds of abuse. Researchers have responded by developing artificial intelligence (AI) tools to arm the public in the fight against social bots. Here we review the literature on different types of bots, their impact, and detection methods. We use the case study of Botometer, a popular bot detection tool developed at Indiana University, to illustrate how people interact with AI countermeasures. A user experience survey suggests that bot detection has become an integral part of the social media experience for many users. However, barriers in interpreting the output of AI tools can lead to fundamental misunderstandings. The arms race between machine learning methods to develop sophisticated bots and effective countermeasures makes it necessary to update the training data and features of detection tools. We again use the Botometer case to illustrate both algorithmic and interpretability improvements of bot scores, designed to meet user expectations. We conclude by discussing how future AI developments may affect the fight between malicious bots and the public.
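The supervised-learning approach the abstract alludes to can be sketched as follows. This is an illustrative toy example, not Botometer's actual implementation: the feature names, toy data, and model choice are assumptions made for clarity, showing only the general pattern of training a classifier on account-level features and reporting a probability as a "bot score".

```python
# Illustrative sketch only (not Botometer's code): a supervised bot
# classifier trained on simple, hypothetical account-level features.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each row describes one account as
# [followers, friends, posts_per_day, account_age_days].
X_train = [
    [10, 2000, 300, 30],     # bot-like: few followers, hyperactive, young
    [5, 1500, 250, 10],      # bot-like
    [800, 400, 5, 2000],     # human-like
    [1200, 900, 8, 3000],    # human-like
]
y_train = [1, 1, 0, 0]       # label: 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# A "bot score" is the predicted probability of the bot class,
# analogous in spirit to the scores that detection tools report.
score = clf.predict_proba([[7, 1800, 280, 20]])[0][1]
print(round(score, 2))
```

The arms race described in the abstract corresponds here to retraining: as bot behavior evolves, both the labeled examples (`X_train`, `y_train`) and the feature set must be updated for the scores to remain meaningful.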