4.6 Article

CTNet: Conversational Transformer Network for Emotion Recognition

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TASLP.2021.3049898

Keywords

Emotion recognition; Context modeling; Feature extraction; Fuses; Speech processing; Data models; Bidirectional control; Context-sensitive modeling; conversational transformer network (CTNet); conversational emotion recognition; multimodal fusion; speaker-sensitive modeling

Funding

  1. National Key Research and Development Plan of China [2018YFB1005003]
  2. National Natural Science Foundation of China (NSFC) [61831022, 61771472, 61901473, 61773379]
  3. Inria-CAS Joint Research Project [173211KYSB20170061, 173211KYSB20190049]

Abstract

The study proposes a multimodal learning framework for conversational emotion recognition, named the conversational transformer network (CTNet). The model uses transformer-based structures to capture intra-modal and cross-modal interactions, takes word-level lexical and segment-level acoustic features as input to capture temporal information within each utterance, and combines a multi-head-attention-based bi-directional GRU with speaker embeddings to model context-sensitive and speaker-sensitive dependencies. Experimental results demonstrate the effectiveness of the method, with an absolute improvement of 2.1% to 6.2% in weighted average F1 over state-of-the-art strategies.
Emotion recognition in conversation is a crucial topic because of its widespread applications in human-computer interaction. Unlike vanilla emotion recognition of individual utterances, conversational emotion recognition requires modeling both context-sensitive and speaker-sensitive dependencies. Despite the promising results of recent works, they generally do not leverage advanced fusion techniques to generate the multimodal representations of an utterance, and therefore have limitations in modeling intra-modal and cross-modal interactions. To address these problems, we propose a multimodal learning framework for conversational emotion recognition, called the conversational transformer network (CTNet). Specifically, we propose to use a transformer-based structure to model intra-modal and cross-modal interactions among multimodal features. Meanwhile, we utilize word-level lexical features and segment-level acoustic features as the inputs, enabling us to capture temporal information within the utterance. Additionally, to model context-sensitive and speaker-sensitive dependencies, we propose to use a multi-head attention based bi-directional GRU component and speaker embeddings. Experimental results on the IEMOCAP and MELD datasets demonstrate the effectiveness of the proposed method. Our method shows an absolute 2.1%-6.2% performance improvement in weighted average F1 over state-of-the-art strategies.
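Since the abstract walks through the architecture (intra-modal transformer encoders, cross-modal attention between the lexical and acoustic streams, a multi-head-attention-based bi-directional GRU over the dialogue, and speaker embeddings), a minimal PyTorch sketch of how such a pipeline could be wired is given below. All module names, dimensions, pooling choices, and the fusion order are illustrative assumptions based only on the abstract, not the authors' implementation.

# Minimal sketch of a CTNet-style pipeline (illustrative only; layer names,
# dimensions, and fusion details are assumptions, not the authors' code).
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    """Transformer block whose queries come from one modality and whose
    keys/values come from the other (cross-modal interaction)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, query_seq, context_seq):
        attn_out, _ = self.attn(query_seq, context_seq, context_seq)
        x = self.norm1(query_seq + attn_out)
        return self.norm2(x + self.ff(x))


class CTNetSketch(nn.Module):
    """Utterance-level multimodal encoding followed by conversation-level
    context modeling with speaker embeddings."""
    def __init__(self, dim=128, n_speakers=2, n_classes=6):
        super().__init__()
        # Intra-modal encoders: one transformer encoder per modality.
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.text_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.audio_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Cross-modal blocks: text attends to audio and vice versa.
        self.t2a = CrossModalBlock(dim)
        self.a2t = CrossModalBlock(dim)
        # Context modeling over the dialogue: BiGRU plus multi-head self-attention.
        self.context_gru = nn.GRU(2 * dim, dim, bidirectional=True, batch_first=True)
        self.context_attn = nn.MultiheadAttention(2 * dim, 4, batch_first=True)
        # Speaker embedding added to each utterance representation.
        self.speaker_emb = nn.Embedding(n_speakers, 2 * dim)
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, text_feats, audio_feats, speaker_ids):
        # text_feats:  (n_utt, n_words, dim)    word-level lexical features
        # audio_feats: (n_utt, n_segments, dim) segment-level acoustic features
        t = self.text_enc(text_feats)
        a = self.audio_enc(audio_feats)
        t_cross = self.t2a(t, a).mean(dim=1)        # text enriched by audio
        a_cross = self.a2t(a, t).mean(dim=1)        # audio enriched by text
        utt = torch.cat([t_cross, a_cross], dim=-1)  # (n_utt, 2*dim)
        utt = utt + self.speaker_emb(speaker_ids)    # speaker-sensitive
        ctx, _ = self.context_gru(utt.unsqueeze(0))  # context-sensitive
        ctx, _ = self.context_attn(ctx, ctx, ctx)
        return self.classifier(ctx.squeeze(0))       # per-utterance logits


# Example: a 5-utterance dialogue with 2 speakers.
logits = CTNetSketch()(torch.randn(5, 20, 128), torch.randn(5, 30, 128),
                       torch.tensor([0, 1, 0, 1, 0]))
print(logits.shape)  # torch.Size([5, 6])

The sketch separates the two stages the abstract describes: utterance-level fusion of lexical and acoustic sequences, then dialogue-level modeling where the BiGRU, attention, and speaker embeddings supply context- and speaker-sensitive information before classification.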

