Article

Gated attention fusion network for multimodal sentiment classification

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 240

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2021.108107

Keywords

Multimodal sentiment classification; Gated attention mechanism; Convolutional neural network; Feature fusion

Funding

  1. Beijing Natural Science Foundation, China [4212013]
  2. National Key R&D Program of China [2019YFC1906002]


Abstract
Sentiment classification explores the opinions people express and helps them make better decisions. With the growing amount of multimodal content on the web, such as text, images, audio and video, making full use of it has become important in many tasks, including sentiment classification. This paper focuses on text and images. Previous work cannot capture fine-grained image features, and those models introduce considerable noise during feature fusion. In this work, we propose a novel multimodal sentiment classification model based on a gated attention mechanism. The attention mechanism uses image features to emphasize text segments, allowing the model to focus on the text that affects sentiment polarity. Moreover, the gating mechanism enables the model to retain useful image information while ignoring the noise introduced when fusing image and text. Experimental results on the Yelp multimodal dataset show that our model outperforms the previous state-of-the-art model, and ablation experiments further confirm the effectiveness of the individual strategies in the proposed model. (C) 2022 Elsevier B.V. All rights reserved.
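The two components the abstract describes — image-guided attention over text segments, followed by a gate that controls how much image information is retained — can be illustrated with a minimal NumPy sketch. This is not the paper's exact architecture; the function and parameter names (`gated_attention_fusion`, `W_gate`, `b_gate`) and the dot-product attention form are assumptions chosen for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_attention_fusion(text_feats, img_feat, W_gate, b_gate):
    """Illustrative gated attention fusion (names/shapes are assumptions).

    text_feats: (n_segments, d) text segment features
    img_feat:   (d,) image feature used as the attention query
    W_gate:     (d, 2d) learned gate weights; b_gate: (d,) gate bias
    """
    d = text_feats.shape[1]
    # Attention: image feature scores each text segment, so segments
    # relevant to the image (and its sentiment cues) are emphasized.
    scores = text_feats @ img_feat / np.sqrt(d)      # (n_segments,)
    weights = softmax(scores)                        # attention weights
    attended_text = weights @ text_feats             # (d,)
    # Gate: sigmoid over the concatenated features decides, per dimension,
    # how much image information to keep versus the attended text,
    # suppressing noise introduced by the fusion.
    z = W_gate @ np.concatenate([attended_text, img_feat]) + b_gate
    gate = 1.0 / (1.0 + np.exp(-z))                  # (d,) in (0, 1)
    fused = gate * img_feat + (1.0 - gate) * attended_text
    return fused, weights
```

In a full model, `fused` would feed a classification head, and `W_gate`/`b_gate` would be trained end to end; here they are random placeholders.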


