Article

AI model GPT-3 (dis)informs us better than humans

Journal

SCIENCE ADVANCES
Volume 9, Issue 26, Pages -

Publisher

AMER ASSOC ADVANCEMENT SCIENCE
DOI: 10.1126/sciadv.adh1850

Keywords

-


Summary

This study evaluates whether people can judge the credibility of information presented as tweets, and finds that the AI model GPT-3 can produce disinformation that is both accurate-sounding and compelling. Moreover, humans cannot distinguish tweets generated by GPT-3 from those written by real Twitter users. These findings highlight the danger of AI-driven disinformation and suggest the need for better information campaigns to protect global health.

Abstract

Artificial intelligence (AI) is changing the way we create and evaluate information, and this is happening during an infodemic that has had marked effects on global health. Here, we evaluate whether recruited individuals can distinguish disinformation from accurate information, structured in the form of tweets, and determine whether a tweet is organic or synthetic, i.e., whether it was written by a Twitter user or by the AI model GPT-3. The results of our preregistered study, including 697 participants, show that GPT-3 is a double-edged sword: Compared with humans, it can produce accurate information that is easier to understand, but it can also produce more compelling disinformation. We also show that humans cannot distinguish between tweets generated by GPT-3 and tweets written by real Twitter users. Starting from our results, we reflect on the dangers of AI for disinformation and on how information campaigns can be improved to benefit global health.

