Article

Defining and detecting toxicity on social media: context and knowledge are key

Journal

NEUROCOMPUTING
Volume 490, Issue -, Pages 312-318

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.11.095

Keywords

Toxicity; Cursing; Harassment; Extremism; Radicalization; Context

Funding

  1. National Science Foundation [1761931]
  2. Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems — Funding Source: National Science Foundation [1761931]

Abstract

This paper discusses the issue of toxic communication online and the challenges of detecting and analyzing it. The authors propose an approach that combines psychological and social theories and uses a statistical learning algorithm to address the multidimensional and ambiguous nature of online toxicity.
Online platforms have become an increasingly prominent means of communication. Despite the obvious benefits of expanded content distribution, the last decade has seen a disturbing rise in toxic communication, such as cyberbullying and harassment. Detecting online toxicity is challenging, however, due to its multi-dimensional, context-sensitive nature. As exposure to online toxicity can have serious social consequences, reliable models and algorithms are required for detecting and analyzing such communication across the vast and growing space of social media. In this paper, we draw on psychological and social theory to define toxicity. Then, we provide an approach that identifies multiple dimensions of toxicity and incorporates explicit knowledge in a statistical learning algorithm to resolve ambiguity across such dimensions. © 2021 Elsevier B.V. All rights reserved.
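The abstract's core idea, injecting explicit knowledge into a statistical learner that scores multiple toxicity dimensions, can be sketched in miniature. This is an illustrative sketch only: the paper's actual model, lexicons, and dimensions are not specified here, so the `LEXICONS` entries, the fixed weights, and the scoring rule below are all hypothetical stand-ins.

```python
# Hypothetical explicit knowledge: tiny curated lexicons, one per toxicity
# dimension. A real system would use much richer knowledge sources.
LEXICONS = {
    "harassment": {"idiot", "loser"},
    "extremism": {"purge", "traitor"},
}

def knowledge_features(tokens):
    """Count lexicon hits per dimension (the 'explicit knowledge' signal)."""
    return {dim: sum(t in lex for t in tokens) for dim, lex in LEXICONS.items()}

def classify(text, weights, bias=-1.0):
    """Combine knowledge features with per-dimension weights into scores.

    In a statistical learning setup, `weights` would be fit from labeled
    data (e.g. a logistic regression over these features plus text
    features); here they are fixed constants purely for illustration.
    """
    tokens = text.lower().split()
    feats = knowledge_features(tokens)
    return {dim: weights[dim] * feats[dim] + bias for dim in feats}

scores = classify("you are an idiot and a loser",
                  {"harassment": 1.5, "extremism": 1.5})
```

For this input, the harassment score exceeds the extremism score because the lexicon hits disambiguate which dimension of toxicity is present, which is the role the abstract assigns to explicit knowledge.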

