Journal
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS
Volume 13, Issue 12, Pages 3727-3742
Publisher
SPRINGER HEIDELBERG
DOI: 10.1007/s13042-022-01622-7
Keywords
Sentiment classification; Weakly-supervised learning; Contrastive learning; Noisy labels
Funding
- National Natural Science Foundation of China [61902316, 62133012, 61936006, 61876144, 61876145, 62073255, 62103314, 61973249, 62001381]
- Key Research and Development Program of Shaanxi [2020ZDLGY04-07, 2021ZDLGY02-06]
- Innovation Capability Support Program of Shaanxi [2021TD-05]
- Natural Science Basic Research Program of Shaanxi [2022JQ-675, 2021JQ-712]
This paper proposes a novel weakly-supervised anti-noise contrastive learning framework for sentiment classification, which learns robust representations through pre-training and fine-tuning, and demonstrates its superiority on multiple datasets.
Sentiment classification aims to identify the sentiment orientation of an opinionated text and is widely used in market research, product recommendation, and other applications. Supervised deep learning approaches are prominent in sentiment classification and have shown strong representation-learning ability; however, such methods rely on costly human annotation. Massive user-tagged opinionated texts on the Internet, such as tweets with emojis, provide a new source of annotation. However, these texts may contain noisy labels, which can cause ambiguity during training. In this paper, we propose a novel Weakly-supervised Anti-noise Contrastive Learning framework for sentiment classification, named WACL. We first adopt a supervised contrastive training strategy during the pre-training phase to fully explore the potential contrast patterns of weakly-labeled data and learn robust representations. We then design a simple dropping-layer strategy that removes the top layers of the pre-trained model, which are the most susceptible to noisy data. Finally, we add a classification layer on top of the remaining model and fine-tune it with labeled data. The proposed framework can learn rich contrastive sentiment patterns in the presence of label noise and is applicable to a variety of deep encoders. Experimental results on the Amazon product review, Twitter, and SST5 datasets demonstrate the superiority of our method.
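The abstract does not spell out the pre-training objective; a standard supervised contrastive loss (in the style of SupCon) is a plausible reading of "supervised contrastive training strategy". The following is a minimal NumPy sketch, assuming L2-normalized encoder features and weak (possibly noisy) class labels; the function name and toy data are illustrative, not the authors' code.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of encoder features.

    For each anchor, same-label samples in the batch are positives and
    all other samples are contrast candidates; the loss is the negative
    mean log-probability of the positives under a softmax over
    temperature-scaled cosine similarities.
    """
    # Cosine similarity via L2-normalized features.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature

    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)                       # exclude self-contrast
    pos = (labels[:, None] == labels[None, :]) & not_self   # same-label pairs

    # Log-softmax over all non-self samples for each anchor.
    logits = np.where(not_self, sim, -np.inf)
    log_prob = sim - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Average over positives, for anchors that have at least one positive.
    has_pos = pos.sum(axis=1) > 0
    per_anchor = -(log_prob * pos).sum(axis=1)[has_pos] / pos.sum(axis=1)[has_pos]
    return per_anchor.mean()

# Toy example: two weak classes with well-separated feature clusters.
labels = np.array([0, 0, 1, 1])
feats = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
loss = supcon_loss(feats, labels)
```

On clustered features the loss is low when labels agree with the clusters and higher when they do not, which is the signal that pulls same-sentiment texts together during pre-training.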