Article

Visual-Textual Sentiment Analysis Enhanced by Hierarchical Cross-Modality Interaction

Journal

IEEE SYSTEMS JOURNAL
Volume 15, Issue 3, Pages 4303-4314

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JSYST.2020.3026879

Keywords

Sentiment analysis; Semantics; Visualization; Analytical models; Task analysis; Convolutional neural networks; Learning systems; Attention mechanism; Multimodal convolutional neural networks; Transfer learning

Funding

  1. National Natural Science Foundation of China [61772133, 61972087]
  2. National Social Science Foundation of China [19@ZH014]
  3. Jiangsu Provincial Key Project [BE2018706]
  4. Natural Science Foundation of Jiangsu Province [SBK2019022870]
  5. Jiangsu Provincial Key Laboratory of Computer Networking Technology
  6. Jiangsu Provincial Key Laboratory of Network and Information Security [BM2003201]
  7. Key Laboratory of Computer Network and Information Integration of Ministry of Education of China [93K-9]


This article proposes a hierarchical cross-modality interaction model for visual-textual sentiment analysis, emphasizing consistency and correlation across modalities to address noise and joint-understanding issues. In experiments, the framework outperformed existing methods, with phrase-level text fragments playing an important role in joint visual-textual sentiment analysis.
Visual-textual sentiment analysis could benefit user understanding in online social networks and enable many useful applications, such as user profiling and recommendation. However, it faces a set of new challenges, namely an exacerbated noise problem caused by irrelevant or redundant information in the different modalities, and a gap in the joint understanding of multimodal sentiment. In this article, we propose a hierarchical cross-modality interaction model for visual-textual sentiment analysis. Our model emphasizes the consistency and correlation across modalities by extracting the semantic and sentiment interactions between image and text in a hierarchical way, which addresses the noise and joint-understanding issues, respectively. A hierarchical attention mechanism is first adopted to capture the semantic interaction and purify the information in one modality with the help of the other. Then, a multimodal convolutional neural network, which can fully exploit the cross-modality sentiment interaction, is incorporated, and a better joint visual-textual representation is generated. A transfer learning method is further designed to alleviate the impact of noise in real social data. Through extensive experiments on two datasets, we show that our proposed framework greatly surpasses state-of-the-art approaches. In particular, phrase-level text fragments play an important role in interacting with image regions for joint visual-textual sentiment analysis.
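
As a rough illustration of the cross-modality interaction described in the abstract, the sketch below shows phrase-level text features attending over image-region features, with the attended visual context fused by a small convolutional layer before sentiment classification. It is a minimal PyTorch sketch under assumed dimensions, layer choices, and names; it is not the authors' published implementation.

    # Illustrative sketch only: cross-modal attention where text phrases query
    # image regions, followed by a simple fused sentiment classifier. All
    # dimensions and layers are assumptions for demonstration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossModalAttention(nn.Module):
        """Phrase-level text features query image-region features."""
        def __init__(self, text_dim=300, img_dim=2048, hidden_dim=512):
            super().__init__()
            self.q = nn.Linear(text_dim, hidden_dim)   # phrase queries
            self.k = nn.Linear(img_dim, hidden_dim)    # region keys
            self.v = nn.Linear(img_dim, hidden_dim)    # region values

        def forward(self, phrases, regions):
            # phrases: (B, P, text_dim); regions: (B, R, img_dim)
            q, k, v = self.q(phrases), self.k(regions), self.v(regions)
            attn = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
            return attn @ v                            # attended visual context (B, P, H)

    class VisualTextualSentiment(nn.Module):
        """Fuses attended visual context with phrase features and predicts sentiment."""
        def __init__(self, text_dim=300, img_dim=2048, hidden_dim=512, n_classes=2):
            super().__init__()
            self.cross_attn = CrossModalAttention(text_dim, img_dim, hidden_dim)
            self.text_proj = nn.Linear(text_dim, hidden_dim)
            # A 1-D convolution over the phrase axis stands in for the multimodal CNN.
            self.fuse_conv = nn.Conv1d(2 * hidden_dim, hidden_dim, kernel_size=3, padding=1)
            self.classifier = nn.Linear(hidden_dim, n_classes)

        def forward(self, phrases, regions):
            visual_ctx = self.cross_attn(phrases, regions)          # (B, P, H)
            fused = torch.cat([self.text_proj(phrases), visual_ctx], dim=-1)
            fused = F.relu(self.fuse_conv(fused.transpose(1, 2)))   # (B, H, P)
            pooled = fused.max(dim=-1).values                       # max-pool over phrases
            return self.classifier(pooled)                          # sentiment logits

    # Toy usage with random phrase and region features (batch of 4 posts).
    model = VisualTextualSentiment()
    logits = model(torch.randn(4, 12, 300), torch.randn(4, 36, 2048))
    print(logits.shape)  # torch.Size([4, 2])

The max-pooling over the phrase axis keeps the strongest fused phrase-region signal per post, which loosely mirrors the paper's emphasis on phrase-level fragments interacting with image regions.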
