Article

Retrieval Contrastive Learning for Aspect-Level Sentiment Classification

Journal

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.ipm.2023.103539

Keywords

Natural language processing; Aspect-level sentiment classification; Information retrieval; Contrastive learning


Aspect-Level Sentiment Classification (ALSC) is a crucial challenge in Natural Language Processing (NLP). Most existing methods fail to consider the correlations between different instances, leading to a lack of global viewpoint. To address this issue, we propose a Retrieval Contrastive Learning (RCL) framework that extracts intrinsic knowledge across instances for improved instance representation. Experimental results demonstrate that training ALSC models with RCL leads to substantial performance improvements.
Aspect-Level Sentiment Classification (ALSC) aims to assign specific sentiments to a sentence toward different aspects, and is one of the crucial challenges in the field of Natural Language Processing (NLP). Although numerous approaches have been proposed and have achieved prominent results, most of them focus on leveraging the relationships between the aspect and opinion words within a single instance while ignoring correlations with other instances, which inevitably traps models in local optima due to the absence of a global viewpoint. An instance representation derived from a single instance is, on the one hand, informationally insufficient because it lacks descriptions from other perspectives; on the other hand, it stores redundant knowledge because extraneous content cannot be filtered out. To obtain a polished instance representation, we develop a Retrieval Contrastive Learning (RCL) framework that subtly extracts intrinsic knowledge across instances. RCL consists of two modules: (a) obtaining retrieval instances with a sparse retriever and a dense retriever, and (b) extracting and learning the knowledge of the retrieved instances using Contrastive Learning (CL). To demonstrate the superiority of RCL, five ALSC models are employed to conduct comprehensive experiments on three widely used benchmarks. Compared with the baselines, ALSC models achieve substantial improvements when trained with RCL. In particular, ABSA-DeBERTa with RCL obtains new state-of-the-art results, outperforming advanced methods by 0.92%, 0.23%, and 0.47% in terms of Macro-F1 gains on Laptops, Restaurants, and Twitter, respectively.
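The abstract describes two ingredients: retrieving related instances and learning from them with contrastive learning. As a rough illustration only, a minimal NumPy sketch of a dense (cosine-similarity) retriever paired with an InfoNCE-style contrastive loss might look like the following; all function names, the temperature value, and the loss formulation are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two 2-D arrays."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def retrieve(query_vec, corpus_vecs, k=2):
    """Dense retrieval: indices of the k corpus instances most similar to the query."""
    sims = cosine_sim(query_vec[None, :], corpus_vecs)[0]
    return np.argsort(-sims)[:k]

def info_nce_loss(anchor, positives, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: pull retrieved same-label instances
    (positives) toward the anchor, push the rest (negatives) away."""
    pos = cosine_sim(anchor[None, :], positives)[0] / temperature
    neg = cosine_sim(anchor[None, :], negatives)[0] / temperature
    log_denom = np.log(np.sum(np.exp(np.concatenate([pos, neg]))))
    return float(np.mean(log_denom - pos))  # average over positives
```

In a setup like this, instances retrieved for the anchor would be split into positives and negatives (e.g., by shared sentiment label), and minimizing the loss tightens the anchor's representation toward informative neighbors.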

Authors

