Article

Defining and detecting toxicity on social media: context and knowledge are key

Journal

NEUROCOMPUTING
Volume 490, Pages 312-318

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.11.095

Keywords

Toxicity; Cursing; Harassment; Extremism; Radicalization; Context

Funding

  1. National Science Foundation, Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems [1761931]


This paper discusses the issue of toxic communication online and the challenges of detecting and analyzing it. The authors propose an approach that combines psychological and social theories and uses a statistical learning algorithm to address the multidimensional and ambiguous nature of online toxicity.
Online platforms have become an increasingly prominent means of communication. Despite the obvious benefits of expanded content distribution, the last decade has seen a disturbing rise in toxic communication, such as cyberbullying and harassment. Detecting online toxicity remains challenging, however, due to its multi-dimensional, context-sensitive nature. Because exposure to online toxicity can have serious social consequences, reliable models and algorithms are required for detecting and analyzing such communication across the vast and growing space of social media. In this paper, we draw on psychological and social theory to define toxicity. We then present an approach that identifies multiple dimensions of toxicity and incorporates explicit knowledge into a statistical learning algorithm to resolve ambiguity across those dimensions. © 2021 Elsevier B.V. All rights reserved.
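To make the abstract's core idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of incorporating explicit knowledge into a statistical learner: hand-built lexicons stand in for per-dimension knowledge sources (e.g., cursing vs. harassment), and a simple perceptron plays the role of the statistical algorithm. All lexicon words and training examples are invented toy data.

```python
# Sketch only: lexicon-derived knowledge features per toxicity
# dimension, fed to a simple perceptron classifier. The lexicons and
# examples below are hypothetical, not from the paper.

LEXICONS = {  # assumed knowledge sources, one per toxicity dimension
    "cursing":    {"damn", "hell"},
    "harassment": {"loser", "stupid"},
}

def knowledge_features(text):
    """Count, per dimension, how many lexicon words the text contains."""
    words = text.lower().split()
    return [sum(w in lex for w in words) for lex in LEXICONS.values()]

def train_perceptron(examples, epochs=20, lr=1.0):
    """Fit weights and bias on (text, label) pairs; label 1 = toxic."""
    n = len(LEXICONS)
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for text, label in examples:
            x = knowledge_features(text)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred  # standard perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, text):
    w, b = model
    x = knowledge_features(text)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

train = [
    ("you are such a loser", 1),
    ("what a stupid idea loser", 1),
    ("have a great day", 0),
    ("thanks for the help", 0),
]
model = train_perceptron(train)
print(predict(model, "go away you loser"))   # expected: 1 (toxic)
print(predict(model, "nice work everyone"))  # expected: 0 (benign)
```

The design point mirrored here is that the knowledge source, not the learner, defines the dimensions: the classifier only weighs per-dimension evidence, which is what allows ambiguity across dimensions to be resolved statistically.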

