4.6 Article

Learn from structural scope: Improving aspect-level sentiment analysis with hybrid graph convolutional networks

Journal

NEUROCOMPUTING
Volume 518, Pages 373-383

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.10.071

Keywords

Aspect-level sentiment analysis; Syntactic parse tree; Hybrid graph convolutional network; Cascaded model


This study proposes a hybrid graph convolutional network (HGCN) that synthesizes information from the constituency tree and the dependency tree to learn structural text regions related to specific targets and to predict sentiment polarity. Experimental results show that the proposed method outperforms current state-of-the-art baselines on five public datasets.
Aspect-level sentiment analysis aims to determine the sentiment polarity towards a specific target in a sentence. The main challenge of this task is to effectively model the relation between targets and sentiments so as to filter out noisy opinion words from irrelevant targets. Most recent efforts capture relations through target-sentiment pairs or opinion spans from a word-level or phrase-level perspective. Based on the observation that targets and sentiments essentially establish relations following the grammatical hierarchy of phrase-clause-sentence structure, it is promising to exploit comprehensive syntactic information to better guide the learning process. Therefore, we introduce the concept of Scope, which outlines a structural text region related to a specific target. To jointly learn the structural Scope and predict the sentiment polarity, we propose a hybrid graph convolutional network (HGCN) to synthesize information from the constituency tree and the dependency tree, exploring the potential of linking the two syntax parsing methods to enrich the representation. Experimental results on five public datasets illustrate that our HGCN model outperforms current state-of-the-art baselines. More specifically, the average accuracy/F1 score improvements of our HGCN over the baseline models on Restaurant 14, 15 and 16 are 2.46%/5.36%, 2.25%/5.70% and 1.73%/5.50%, while the improvements are 3.32%/4.30% and 2.50%/3.08% on the Laptop and Twitter datasets, respectively. Furthermore, when cascaded with five models, our method significantly improves their performance by simplifying the sentence from multiple targets to a single one. (c) 2022 Elsevier B.V. All rights reserved.
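
The abstract describes the hybrid design only at a high level. As a rough illustration of the general idea, the sketch below shows one possible form of a graph-convolution layer that mixes messages propagated over a dependency-tree adjacency matrix and a constituency-derived adjacency matrix. This is not the authors' HGCN; the class name, tensor shapes, and the mixing weight alpha are assumptions made for illustration only.

# Minimal sketch (not the authors' implementation): one graph-convolution layer
# that combines messages from two syntactic views of the same sentence --
# a dependency-tree adjacency and a constituency-derived adjacency.
# All names, shapes, and the mixing weight `alpha` are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridGCNLayer(nn.Module):
    """Graph convolution over two word-level syntactic graphs."""

    def __init__(self, dim: int, alpha: float = 0.5):
        super().__init__()
        self.w_dep = nn.Linear(dim, dim)   # transform for dependency-graph messages
        self.w_con = nn.Linear(dim, dim)   # transform for constituency-graph messages
        self.alpha = alpha                 # assumed weight balancing the two views

    def forward(self, h, adj_dep, adj_con):
        # h:       (batch, n_words, dim) word representations (e.g., from a BiLSTM/BERT encoder)
        # adj_dep: (batch, n_words, n_words) row-normalized dependency-tree adjacency
        # adj_con: (batch, n_words, n_words) adjacency linking words in shared constituents
        msg_dep = torch.bmm(adj_dep, self.w_dep(h))
        msg_con = torch.bmm(adj_con, self.w_con(h))
        return F.relu(self.alpha * msg_dep + (1 - self.alpha) * msg_con)

if __name__ == "__main__":
    batch, n, dim = 2, 10, 64
    h = torch.randn(batch, n, dim)
    adj_dep = torch.softmax(torch.randn(batch, n, n), dim=-1)  # stand-in adjacencies
    adj_con = torch.softmax(torch.randn(batch, n, n), dim=-1)
    layer = HybridGCNLayer(dim)
    print(layer(h, adj_dep, adj_con).shape)  # torch.Size([2, 10, 64])

The intuition behind combining the two views, as the abstract suggests, is that dependency edges link a target directly to candidate opinion words, while constituency structure delimits the phrase- and clause-level region (the Scope) within which those links should be trusted.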

