Article

The Biases of Pre-Trained Language Models: An Empirical Study on Prompt-Based Sentiment Analysis and Emotion Detection

Journal

IEEE Transactions on Affective Computing
Volume 14, Issue 3, Pages 1743-1753

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TAFFC.2022.3204972

Keywords

Task analysis; Emotion recognition; Sentiment analysis; Computational modeling; Affective computing; Taxonomy; Analytical models; Emotion detection; pre-trained language model; prompt; sentiment analysis


Thanks to the breakthrough of large-scale pre-trained language model (PLM) technology, prompt-based classification tasks, e.g., sentiment analysis and emotion detection, have attracted increasing attention. Such tasks are formalized as masked language prediction tasks, which are in line with the pre-training objectives of most language models. Thus, one can use a PLM to infer the masked words in a downstream task and then obtain label predictions through manually defined label-word mapping templates. Prompt-based affective computing combines the advantages of neural network modeling and explainable symbolic representations. However, many issues related to the mechanisms of PLMs and prompt-based classification remain unclear. We conduct a systematic empirical study on prompt-based sentiment analysis and emotion detection to investigate the biases of PLMs in affective computing. We find that PLMs are biased in sentiment analysis and emotion detection tasks with respect to the number of label classes, emotional label-word selections, prompt templates and positions, and the word forms of emotion lexicons.
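The label-word mapping step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the mask-token probabilities below are hypothetical values standing in for what a real PLM would assign at the `[MASK]` position, and the `predict_label` helper and its label-word sets are assumptions for the sake of the example.

```python
def predict_label(mask_probs, label_words):
    """Map a PLM's mask-token probabilities to a class label.

    mask_probs: dict of vocabulary word -> probability at the [MASK] position
    label_words: dict of class label -> list of words mapped to that label

    Each class is scored by summing the probabilities of its label words;
    the highest-scoring class is returned.
    """
    scores = {
        label: sum(mask_probs.get(word, 0.0) for word in words)
        for label, words in label_words.items()
    }
    return max(scores, key=scores.get)


# Hypothetical probabilities a PLM might assign to the masked position in:
#   "The movie was thrilling. It was [MASK]."
mask_probs = {"great": 0.41, "good": 0.22, "terrible": 0.05, "bad": 0.03}

# A manually defined label-word mapping template (assumed for illustration).
label_words = {"positive": ["great", "good"], "negative": ["bad", "terrible"]}

print(predict_label(mask_probs, label_words))  # → positive
```

As the abstract notes, the predicted label is sensitive to which label words are chosen and how the prompt template is phrased and positioned, which is precisely the source of the biases the paper studies.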

