Journal
NEUROCOMPUTING
Volume 490, Pages 312-318
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2021.11.095
Keywords
Toxicity; Cursing; Harassment; Extremism; Radicalization; Context
Funding
- National Science Foundation [1761931]
- Div Of Information & Intelligent Systems
- Direct For Computer & Info Scie & Enginr [1761931] Funding Source: National Science Foundation
This paper discusses the issue of toxic communication online and the challenges of detecting and analyzing it. The authors propose an approach that combines psychological and social theories and uses a statistical learning algorithm to address the multidimensional and ambiguous nature of online toxicity.
Online platforms have become an increasingly prominent means of communication. Despite the obvious benefits of expanded content distribution, the last decade has seen a disturbing rise in toxic communication, such as cyberbullying and harassment. Detecting online toxicity is nevertheless challenging due to its multi-dimensional, context-sensitive nature. As exposure to online toxicity can have serious social consequences, reliable models and algorithms are required for detecting and analyzing such communication across the vast and growing space of social media. In this paper, we draw on psychological and social theory to define toxicity. Then, we provide an approach that identifies multiple dimensions of toxicity and incorporates explicit knowledge in a statistical learning algorithm to resolve ambiguity across such dimensions. © 2021 Elsevier B.V. All rights reserved.
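The abstract describes combining explicit knowledge with a statistical learner to score multiple dimensions of toxicity. The paper does not specify its model here, so the following is only a minimal illustrative sketch of that general idea: a bag-of-words statistical score per toxicity dimension, boosted where an explicit lexicon term supplies the "knowledge" signal. The lexicons, weights, and thresholds are invented for the example and are not from the paper.

```python
# Illustrative sketch only -- NOT the authors' model. Shows one way to
# inject explicit knowledge (per-dimension lexicons) into a statistical
# multi-label toxicity score. Lexicon contents are hypothetical.

TOXICITY_LEXICONS = {
    "harassment": {"idiot", "loser", "pathetic"},
    "extremism": {"purge", "traitors", "uprising"},
}

def score_dimensions(text, lexicons=TOXICITY_LEXICONS, boost=2.0):
    """Per-dimension score: statistical token-overlap rate, amplified
    when an explicit lexicon term appears (the knowledge signal)."""
    tokens = text.lower().split()
    scores = {}
    for dim, lexicon in lexicons.items():
        hits = sum(1 for t in tokens if t in lexicon)
        base = hits / max(len(tokens), 1)  # fraction of lexicon hits
        scores[dim] = base * (boost if hits else 1.0)
    return scores

def classify(text, threshold=0.1):
    """Multi-label decision: every dimension above threshold applies."""
    return [d for d, s in score_dimensions(text).items() if s >= threshold]
```

For example, `classify("you are an idiot loser")` flags only the harassment dimension, while neutral text yields an empty label set; a real system would replace the lexicon boost with learned features and a trained classifier.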