Article

ALBERT-based fine-tuning model for cyberbullying analysis

Journal

MULTIMEDIA SYSTEMS
Volume 28, Issue 6, Pages 1941-1949

Publisher

SPRINGER
DOI: 10.1007/s00530-020-00690-5

Keywords

ALBERT; Fine-tuning; Deep learning; wordVec; Gated recurrent unit; GRU; CNN


Summary

With the increasing use of online social media platforms, cyberbullying has become a growing concern. This paper focuses on textual comments and the challenge of contextual understanding in cyberbullying detection. The proposed fine-tuned ALBERT-based model achieves state-of-the-art results, surpassing existing approaches.
Abstract

As the world's interaction moves increasingly toward online social media platforms, cyberbullying has emerged alongside it. Cyberbullying takes multiple forms, from the more common text-based abuse to images and even videos; this paper focuses on the context of textual comments. Even within text-based data alone, several approaches have been explored, including n-grams, recurrent units, convolutional neural networks (CNNs), gated recurrent units (GRUs), and combinations of these architectures. While all of these produce workable results, the main point of contention is that true contextual understanding is a complex problem. These methods fall short for two reasons: (i) the lack of large datasets needed to properly utilize these architectures, and (ii) the fact that understanding context requires some mechanism for remembering history, which is present only in recurrent units. This paper explores recent approaches to the difficulties of contextual understanding and proposes an ALBERT-based fine-tuned model that achieves state-of-the-art results. ALBERT is a transformer-based architecture and thus, even in its untrained form, provides better contextual understanding than recurrent units. Moreover, because ALBERT is pre-trained on a large corpus, a smaller dataset suffices for fine-tuning: the pre-trained model already has a deep understanding of the complexities of human language. ALBERT posts high scores on multiple benchmarks such as GLUE and SQuAD, showing that a high level of contextual understanding is inherently present; fine-tuning for the specific case of cyberbullying lets us use this to our advantage. With this approach, we achieve an F1 score of 95%, surpassing current approaches such as the CNN + wordVec, CNN + GRU, and BERT implementations.
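The fine-tuning setup described in the abstract can be sketched with the Hugging Face `transformers` library, which provides an `AlbertForSequenceClassification` head. This is a minimal illustration, not the authors' implementation: a tiny randomly initialised configuration is used (so no pretrained weights are downloaded), the input IDs and labels are toy data, and all hyperparameter values shown are assumptions. In practice one would load a pretrained checkpoint (e.g. `albert-base-v2`), tokenize real comments, and run multiple epochs over a labeled cyberbullying dataset.

```python
import torch
from transformers import AlbertConfig, AlbertForSequenceClassification

# Toy ALBERT configuration (random weights, illustrative sizes only).
config = AlbertConfig(
    vocab_size=30000,
    embedding_size=32,      # ALBERT factorizes embeddings separately from hidden size
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    num_labels=2,           # binary task: bullying vs. not bullying
)
model = AlbertForSequenceClassification(config)

# Toy batch: 4 sequences of 16 token IDs with binary labels.
input_ids = torch.randint(0, config.vocab_size, (4, 16))
attention_mask = torch.ones_like(input_ids)
labels = torch.tensor([0, 1, 0, 1])

# One fine-tuning step: forward pass, cross-entropy loss, backprop, update.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
outputs.loss.backward()
optimizer.step()

print(tuple(outputs.logits.shape))  # one logit pair per sequence: (4, 2)
```

Because the classification head emits one logit per label, predictions come from `outputs.logits.argmax(dim=-1)`, and an F1 score (the metric the paper reports) can then be computed against the true labels on a held-out set.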

Authors

