Article

Aspect-based sentiment analysis with attention-assisted graph and variational sentence representation

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 258

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109975

Keywords

Aspect-based sentiment analysis; Graph neural network; Encoder-decoder; Self-attention

Funding

  1. National Natural Science Foundation of China (NSFC)
  2. Scientific and Technological Developing Scheme of Jilin Province
  3. Energy Administration of Jilin Province

Grant numbers: 61876071; 20180201003SF; 20190701031GH; 3D516L921421


Aspect-based sentiment analysis (ABSA) is a fine-grained task that detects the sentiment polarities of particular aspect words in a sentence. With the rise of graph convolutional networks (GCNs), current ABSA models mostly adopt graph-based methods. These methods construct a dependency tree for each sentence and regard each word as a node; specifically, they update aspect representations with GCNs and perform classification on those representations rather than on sentence representations. However, such methods rely heavily on the quality of the dependency tree and may lose global sentence information, which is also helpful for classification. To address these issues, we design a new ABSA model, AG-VSR. Two kinds of representations are proposed for the final classification: an Attention-assisted Graph-based Representation (A2GR) and a Variational Sentence Representation (VSR). A2GR is produced by a GCN module that takes as input a dependency tree reweighted by an attention mechanism, while VSR is sampled from a distribution learned by a VAE-like encoder-decoder structure. Extensive experiments show that AG-VSR achieves competitive results. Our code and data are available at https://github.com/wangbing1416/VAGR. (c) 2022 Elsevier B.V. All rights reserved.
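To make the two branches concrete, the sketch below shows one way such a forward pass could be wired together in PyTorch: a self-attention map reweights the dependency adjacency before a GCN layer produces the aspect representation (A2GR), and a VAE-like encoder samples a variational sentence representation (VSR) that is concatenated with A2GR for classification. This is a minimal sketch under stated assumptions; all class names, dimensions, pooling choices, and the reconstruction head are illustrative and not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical sketch of an AG-VSR-style forward pass (not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AGVSRSketch(nn.Module):
    def __init__(self, hidden_dim=128, latent_dim=64, num_classes=3):
        super().__init__()
        # Self-attention whose weights are used to reweight the dependency adjacency.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        # One GCN layer: neighborhood aggregation followed by a linear transform.
        self.gcn = nn.Linear(hidden_dim, hidden_dim)
        # VAE-like encoder/decoder over a pooled sentence vector.
        self.enc_mu = nn.Linear(hidden_dim, latent_dim)
        self.enc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, hidden_dim)  # reconstruction head
        self.classifier = nn.Linear(hidden_dim + latent_dim, num_classes)

    def forward(self, h, adj, aspect_mask):
        # h: (B, T, D) contextual word embeddings; adj: (B, T, T) dependency adjacency;
        # aspect_mask: (B, T), 1.0 at aspect-word positions.
        _, attn_w = self.attn(h, h, h)                    # (B, T, T) attention weights
        soft_adj = adj * attn_w                           # attention-assisted graph
        deg = soft_adj.sum(-1, keepdim=True).clamp(min=1e-6)
        g = F.relu(self.gcn(soft_adj @ h / deg))          # one graph-convolution step
        # A2GR: average the GCN outputs over aspect positions.
        m = aspect_mask.float().unsqueeze(-1)
        a2gr = (g * m).sum(1) / m.sum(1).clamp(min=1e-6)
        # VSR: reparameterized sample from the learned sentence distribution.
        s = h.mean(1)                                     # mean-pooled sentence vector
        mu, logvar = self.enc_mu(s), self.enc_logvar(s)
        vsr = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(vsr)                         # for a reconstruction loss
        logits = self.classifier(torch.cat([a2gr, vsr], dim=-1))
        return logits, recon, mu, logvar

# Example usage with random tensors: batch of 2 sentences, 10 tokens, dim 128.
model = AGVSRSketch()
h = torch.randn(2, 10, 128)
adj = torch.eye(10).unsqueeze(0).repeat(2, 1, 1)
mask = torch.zeros(2, 10)
mask[:, 3] = 1.0
logits, recon, mu, logvar = model(h, adj, mask)
```

In training, such a model would typically combine the classification loss with a KL term on (mu, logvar) and a reconstruction loss on recon, as is standard for VAE-style objectives; the exact losses and weighting used by AG-VSR are described in the paper and repository.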

