4.5 Article

A Deep Multi-task Contextual Attention Framework for Multi-modal Affect Analysis

Journal

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3380744

Keywords

Multi-task learning; multi-modal analysis; sentiment analysis; sentiment intensity prediction; emotion analysis; emotion intensity prediction; inter-modal attention

Funding

  1. Sevak - An Intelligent Indian Language Chatbot
  2. SERB, Govt. of India [IMP/2018/002072]
  3. Skymap Global Private Limited
  4. Ministry of Electronics and Information Technology (MeitY), Government of India


Multi-modal affect analysis (e.g., sentiment and emotion analysis) is an interdisciplinary study and has become an emerging and prominent field in Natural Language Processing and Computer Vision. The effective fusion of multiple modalities (e.g., text, acoustic, or visual frames) is a non-trivial task, as these modalities often carry distinct and diverse information and do not contribute equally. The issue escalates further when the data contain noise. In this article, we study the concept of multi-task learning for multi-modal affect analysis and explore a contextual inter-modal attention framework that aims to leverage the association among neighboring utterances and their multi-modal information. In general, sentiments and emotions are inter-dependent (e.g., anger implies negative sentiment, happiness implies positive sentiment). In our current work, we exploit the relatedness among the participating tasks in the multi-task framework. We define three different multi-task setups, each having two tasks: sentiment & emotion classification, sentiment classification & sentiment intensity prediction, and emotion classification & emotion intensity prediction. Our evaluation of the proposed system on the CMU Multi-modal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) benchmark dataset suggests that, in comparison with the single-task learning framework, our multi-task framework yields better performance for the inter-related participating tasks. Further, comparative studies show that our proposed approach attains state-of-the-art performance in most of the cases.
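To make the modeling idea above concrete, the following minimal PyTorch sketch (an assumption-laden illustration, not the authors' published implementation) shows a contextual inter-modal attention block over utterance sequences and a shared encoder feeding two task heads, mirroring the sentiment & emotion classification setup. For brevity it uses only two modalities (text and acoustic); all layer sizes, the bidirectional-GRU context encoders, the scaled dot-product attention, the concatenation-based fusion, and the loss choices are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextualInterModalAttention(nn.Module):
    """Attend from one modality's utterance sequence to another's, so each
    utterance can borrow context from neighboring utterances of the other
    modality. Scaled dot-product scoring is an illustrative assumption."""

    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** -0.5

    def forward(self, x, y):
        # x, y: (batch, num_utterances, dim)
        scores = torch.matmul(x, y.transpose(1, 2)) * self.scale  # (B, U, U)
        attn = F.softmax(scores, dim=-1)
        attended = torch.matmul(attn, y)                          # (B, U, dim)
        return torch.cat([x, attended], dim=-1)                   # simple fusion


class MultiTaskAffectModel(nn.Module):
    """Two related tasks (sentiment and emotion classification) sharing a
    contextual inter-modal attention encoder; sizes are assumptions."""

    def __init__(self, text_dim, acoustic_dim, hidden=128,
                 n_sentiments=3, n_emotions=6):
        super().__init__()
        # Bidirectional GRUs capture context among neighboring utterances.
        self.text_rnn = nn.GRU(text_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.acoustic_rnn = nn.GRU(acoustic_dim, hidden, batch_first=True,
                                   bidirectional=True)
        self.attn = ContextualInterModalAttention(2 * hidden)
        # The shared, attention-fused representation feeds both task heads.
        self.sentiment_head = nn.Linear(4 * hidden, n_sentiments)
        self.emotion_head = nn.Linear(4 * hidden, n_emotions)

    def forward(self, text, acoustic):
        t, _ = self.text_rnn(text)          # (B, U, 2*hidden)
        a, _ = self.acoustic_rnn(acoustic)  # (B, U, 2*hidden)
        fused = self.attn(t, a)             # (B, U, 4*hidden)
        return self.sentiment_head(fused), self.emotion_head(fused)


# Joint training with random toy inputs: 2 videos, 10 utterances each.
model = MultiTaskAffectModel(text_dim=300, acoustic_dim=74)
text = torch.randn(2, 10, 300)
acoustic = torch.randn(2, 10, 74)
sent_logits, emo_logits = model(text, acoustic)
loss = (F.cross_entropy(sent_logits.view(-1, 3), torch.randint(0, 3, (20,)))
        + F.binary_cross_entropy_with_logits(emo_logits, torch.rand(2, 10, 6)))
loss.backward()

Jointly optimizing both heads on the shared, attention-fused representation is what allows the inter-related tasks to inform each other, which is the effect the multi-task setups above aim to exploit.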
