Article

O-2-Bert: Two-Stage Target-Based Sentiment Analysis

Journal

COGNITIVE COMPUTATION
Volume -, Issue -, Pages -

Publisher

SPRINGER
DOI: 10.1007/s12559-023-10191

Keywords

O-2-Bert; OTE-Bert; OSC-Bert; Entity number prediction; Entity starting annotation; Entity length prediction


The paper proposes a framework called O-2-Bert for target-based sentiment analysis. It consists of two stages: Opinion target extraction (OTE-Bert) and Opinion sentiment classification (OSC-Bert). Experimental results demonstrate competitive performance of the framework on both target extraction and sentiment classification in the restaurant and laptop domains.
Target-based sentiment analysis (TBSA) is one of the most important NLP research topics, with widespread applications. However, the task is challenging, especially when targets contain multiple words or are absent from the sequences. Conventional approaches cannot accurately extract (target, sentiment) pairs due to the limitations of fixed end-to-end architecture designs. In this paper, we propose a framework named O-2-Bert, which consists of Opinion target extraction (OTE-Bert) and Opinion sentiment classification (OSC-Bert) and completes the task in two stages. More specifically, we divide OTE-Bert into three modules. First, an entity number prediction module predicts the number of entities in a sequence, even in the extreme case where no entities are present. Next, given the predicted number of entities, an entity starting annotation module predicts their starting positions. Finally, an entity length prediction module predicts the lengths of these entities, completing target extraction. In OSC-Bert, the sentiment polarities of the targets extracted by OTE-Bert are then classified. Owing to the characteristics of BERT encoders, our framework can be applied to short English sequences without domain limitations; for other languages, our approach might work by altering the tokenization. Experimental results on the SemEval 2014-16 benchmarks show that the proposed model achieves competitive performance on both domains (restaurants and laptops) and both tasks (target extraction and sentiment classification), with F1-score as the evaluation metric. Specifically, OTE-Bert achieves 84.63%, 89.20%, 83.16%, and 86.88% F1 scores for target extraction, while OSC-Bert achieves 82.90%, 80.73%, 76.94%, and 83.58% F1 scores for sentiment classification on the chosen benchmarks. These results validate the effectiveness and robustness of our approach and the new two-stage paradigm.
In future work, we will explore more possibilities of the new paradigm on other NLP tasks.
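The two-stage pipeline described in the abstract can be sketched in a few lines. This is a hypothetical illustration only: the function names (`extract_targets`, `classify_pairs`) and the stub predictors are assumptions for clarity, standing in for the paper's actual BERT-based entity number, starting annotation, entity length, and sentiment modules.

```python
# Hypothetical sketch of the O-2-Bert two-stage flow (not the authors' code).
# Stage 1 (OTE-Bert): count entities, locate their starts, predict their lengths.
# Stage 2 (OSC-Bert): classify the polarity of each extracted target.

def extract_targets(tokens, predict_entity_count, predict_starts, predict_lengths):
    """Stage 1: assemble target spans from the three module predictions."""
    n = predict_entity_count(tokens)           # may be 0: no targets in sequence
    if n == 0:
        return []
    starts = predict_starts(tokens, n)         # one start index per entity
    lengths = predict_lengths(tokens, starts)  # span length for each start
    return [tokens[s:s + l] for s, l in zip(starts, lengths)]

def classify_pairs(tokens, targets, predict_sentiment):
    """Stage 2: produce the final (target, sentiment) pairs."""
    return [(" ".join(t), predict_sentiment(tokens, t)) for t in targets]

# Toy stand-ins for the learned modules, for a single restaurant-domain sentence.
tokens = "the spicy tuna roll was great".split()
targets = extract_targets(
    tokens,
    predict_entity_count=lambda toks: 1,
    predict_starts=lambda toks, n: [1],
    predict_lengths=lambda toks, starts: [3],
)
pairs = classify_pairs(tokens, targets, lambda toks, t: "positive")
print(pairs)  # [('spicy tuna roll', 'positive')]
```

The point of the sketch is the decoupling: because the target spans are fixed before classification, multi-word targets and sequences with no targets at all are handled by the first stage rather than by a fixed end-to-end tagging scheme.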

