Journal
KNOWLEDGE-BASED SYSTEMS
Volume 258
Publisher
ELSEVIER
DOI: 10.1016/j.knosys.2022.109975
Keywords
Aspect-based sentiment analysis; Graph neural network; Encoder-decoder; Self-attention
Funding
- National Natural Science Foundation of China (NSFC) [61876071]
- Scientific and Technological Developing Scheme of Jilin Province [20180201003SF, 20190701031GH]
- Energy Administration of Jilin Province [3D516L921421]
Abstract
Aspect-based sentiment analysis (ABSA) is a fine-grained task that detects the sentiment polarities of particular aspect words in a sentence. With the rise of graph convolutional networks (GCNs), current ABSA models mostly adopt graph-based methods. These methods construct a dependency tree for each sentence, treat each word as a node, and perform classification using aspect representations rather than whole-sentence representations, updating them with GCNs. However, such methods rely heavily on the quality of the dependency tree and may lose global sentence information, which is also helpful for classification. To address these issues, we design a new ABSA model, AG-VSR. Two kinds of representations are combined for the final classification: an Attention-assisted Graph-based Representation (A2GR) and a Variational Sentence Representation (VSR). A2GR is produced by a GCN module whose input is a dependency tree modified by the attention mechanism, while VSR is sampled from a distribution learned by a VAE-like encoder-decoder structure. Extensive experiments show that AG-VSR achieves competitive results. Our code and data are released at https://github.com/wangbing1416/VAGR.
(c) 2022 Elsevier B.V. All rights reserved.
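The two components described in the abstract can be sketched in a few lines. The following is a minimal numpy illustration, not the authors' implementation: the dependency-tree adjacency is softened with self-attention scores before a single GCN layer (a rough stand-in for A2GR), and the sentence representation is sampled with the standard VAE reparameterization trick (a rough stand-in for VSR). All shapes, the mixing weight `lam`, and the choice of aspect token are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy sentence of n tokens with d-dimensional embeddings (hypothetical sizes).
n, d = 5, 8
X = rng.normal(size=(n, d))

# Dependency-tree adjacency with self-loops (a simple chain here).
A_dep = np.eye(n)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A_dep[i, j] = A_dep[j, i] = 1.0

# Self-attention scores give a soft, fully connected graph that can
# compensate for errors in the parsed dependency tree.
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
A_att = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))

# Attention-modified adjacency: a simple convex combination (an assumption,
# not necessarily the paper's exact modification scheme).
lam = 0.5
A = lam * A_dep / A_dep.sum(axis=1, keepdims=True) + (1 - lam) * A_att

# One GCN layer: H = ReLU(A X W); pick the aspect word's node (e.g. token 2).
Wg = rng.normal(size=(d, d))
H = np.maximum(A @ X @ Wg, 0.0)
a2gr = H[2]

# VSR: an encoder maps the pooled sentence to (mu, log_var), and the
# reparameterization trick samples z = mu + sigma * eps.
W_mu, W_lv = rng.normal(size=(d, d)), rng.normal(size=(d, d))
pooled = X.mean(axis=0)
mu, log_var = pooled @ W_mu, pooled @ W_lv
z = mu + np.exp(0.5 * log_var) * rng.normal(size=d)

# The final classifier would consume both representations together.
feat = np.concatenate([a2gr, z])
print(feat.shape)  # (16,)
```

In a real model the encoder-decoder would be trained with a reconstruction loss plus a KL term, and the GCN would be stacked over contextual (e.g. BERT) embeddings; this sketch only shows how the graph and variational pieces fit together.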