Article

CTNet: Conversational Transformer Network for Emotion Recognition

Journal

IEEE/ACM Transactions on Audio, Speech, and Language Processing

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TASLP.2021.3049898

Keywords

Emotion recognition; Context modeling; Feature extraction; Fuses; Speech processing; Data models; Bidirectional control; Context-sensitive modeling; conversational transformer network (CTNet); conversational emotion recognition; multimodal fusion; speaker-sensitive modeling

Funding

  1. National Key Research and Development Plan of China [2018YFB1005003]
  2. National Natural Science Foundation of China (NSFC) [61831022, 61771472, 61901473, 61773379]
  3. Inria-CAS Joint Research Project [173211KYSB20170061, 173211KYSB20190049]

The study proposes a multimodal learning framework for conversational emotion recognition, named the conversational transformer network (CTNet). CTNet models intra-modal and cross-modal interactions with a transformer-based structure, captures temporal information from word-level lexical and segment-level acoustic features, and models context-sensitive and speaker-sensitive dependencies with a multi-head attention based bi-directional GRU component and speaker embeddings. Experimental results demonstrate the effectiveness of the method, which achieves an absolute improvement of 2.1% to 6.2% in weighted average F1 over state-of-the-art strategies.
Emotion recognition in conversation is a crucial topic because of its widespread applications in human-computer interaction. Unlike vanilla emotion recognition of individual utterances, conversational emotion recognition requires modeling both context-sensitive and speaker-sensitive dependencies. Despite the promising results of recent works, they generally do not leverage advanced fusion techniques to generate the multimodal representations of an utterance; as a result, they are limited in modeling intra-modal and cross-modal interactions. To address these problems, we propose a multimodal learning framework for conversational emotion recognition, called the conversational transformer network (CTNet). Specifically, we propose to use a transformer-based structure to model intra-modal and cross-modal interactions among multimodal features. Meanwhile, we utilize word-level lexical features and segment-level acoustic features as the inputs, which enables us to capture temporal information in the utterance. Additionally, to model context-sensitive and speaker-sensitive dependencies, we propose to use a multi-head attention based bi-directional GRU component and speaker embeddings. Experimental results on the IEMOCAP and MELD datasets demonstrate the effectiveness of the proposed method. Our method shows an absolute improvement of 2.1% to 6.2% in weighted average F1 over state-of-the-art strategies.
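The abstract describes the architecture only at a high level. Below is a minimal, hypothetical PyTorch sketch of a CTNet-style model, assuming 128-dimensional features, one cross-modal block per direction, mean pooling over time, and a single-conversation batch; all module names, dimensions, layer counts, and class counts are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a CTNet-style model as outlined in the abstract.
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    """Transformer block whose queries come from one modality and
    keys/values from the other (cross-modal attention)."""

    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, query_mod, kv_mod):
        attended, _ = self.attn(query_mod, kv_mod, kv_mod)
        x = self.norm1(query_mod + attended)
        return self.norm2(x + self.ff(x))


class CTNetSketch(nn.Module):
    """Utterance-level fusion (intra-/cross-modal transformers) followed by a
    conversation-level bi-directional GRU with speaker embeddings."""

    def __init__(self, d_model=128, n_heads=4, n_speakers=2, n_classes=6):
        super().__init__()
        # Intra-modal encoders: self-attention within each modality.
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.text_enc = nn.TransformerEncoder(enc_layer, num_layers=1)
        self.audio_enc = nn.TransformerEncoder(enc_layer, num_layers=1)
        # Cross-modal blocks in both directions (text->audio, audio->text).
        self.t2a = CrossModalBlock(d_model, n_heads)
        self.a2t = CrossModalBlock(d_model, n_heads)
        # Conversation-level context modeling with speaker embeddings.
        self.speaker_emb = nn.Embedding(n_speakers, d_model)
        self.context_gru = nn.GRU(2 * d_model, d_model, batch_first=True,
                                  bidirectional=True)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, text_feats, audio_feats, speaker_ids):
        # text_feats:  (utterances, words,    d_model)  word-level lexical features
        # audio_feats: (utterances, segments, d_model)  segment-level acoustic features
        # speaker_ids: (utterances,)                    integer speaker indices
        t = self.text_enc(text_feats)
        a = self.audio_enc(audio_feats)
        # Cross-modal interaction, then pool over time for utterance vectors.
        t_fused = self.t2a(t, a).mean(dim=1)
        a_fused = self.a2t(a, t).mean(dim=1)
        utt = torch.cat([t_fused, a_fused], dim=-1) + \
              torch.cat([self.speaker_emb(speaker_ids)] * 2, dim=-1)
        # Treat the utterance sequence as one conversation (batch of 1).
        context, _ = self.context_gru(utt.unsqueeze(0))
        return self.classifier(context.squeeze(0))


if __name__ == "__main__":
    model = CTNetSketch()
    logits = model(torch.randn(5, 20, 128),   # 5 utterances, 20 words each
                   torch.randn(5, 30, 128),   # 30 acoustic segments each
                   torch.tensor([0, 1, 0, 1, 0]))
    print(logits.shape)  # -> torch.Size([5, 6])
```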
