Article

Dawn of the Transformer Era in Speech Emotion Recognition: Closing the Valence Gap

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2023.3263585

Keywords

Transformers; Emotion recognition; Speech recognition; Robustness; Computer architecture; Task analysis; Data models; Affective computing; speech emotion recognition; transformers

Abstract

Recent advances in transformer-based architectures have shown promise in several machine learning tasks. In the audio domain, such architectures have been successfully utilised in the field of speech emotion recognition (SER). However, existing works have not evaluated the influence of model size and pre-training data on downstream performance, and have shown limited attention to generalisation, robustness, fairness, and efficiency. The present contribution conducts a thorough analysis of these aspects on several pre-trained variants of wav2vec 2.0 and HuBERT that we fine-tuned on the dimensions arousal, dominance, and valence of MSP-Podcast, while additionally using IEMOCAP and MOSI to test cross-corpus generalisation. To the best of our knowledge, we obtain the top performance for valence prediction without use of explicit linguistic information, with a concordance correlation coefficient (CCC) of .638 on MSP-Podcast. Our investigations reveal that transformer-based architectures are more robust compared to a CNN-based baseline and fair with respect to gender groups, but not towards individual speakers. Finally, we show that their success on valence is based on implicit linguistic information, which explains why they perform on-par with recent multimodal approaches that explicitly utilise textual information. To make our findings reproducible, we release the best performing model to the community.
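For context, the concordance correlation coefficient (CCC) reported above measures both correlation and agreement in mean and scale between predictions and labels. A minimal NumPy sketch (function and variable names are illustrative, not taken from the authors' released code):

import numpy as np

def concordance_cc(pred, true):
    # CCC = 2*cov(pred, true) / (var(pred) + var(true) + (mean(pred) - mean(true))^2)
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    mean_p, mean_t = pred.mean(), true.mean()
    cov = ((pred - mean_p) * (true - mean_t)).mean()
    return 2 * cov / (pred.var() + true.var() + (mean_p - mean_t) ** 2)

The fine-tuning setup described in the abstract (a pre-trained wav2vec 2.0 or HuBERT encoder topped with a small regression head for arousal, dominance, and valence) can be approximated with the Hugging Face transformers API. The sketch below assumes mean pooling over time and a linear output layer; the checkpoint name and head design are placeholders and may differ from the released model:

import torch
from transformers import Wav2Vec2Model

class DimensionalSER(torch.nn.Module):
    def __init__(self, checkpoint="facebook/wav2vec2-large-robust"):  # assumed checkpoint
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(checkpoint)
        # three continuous outputs: arousal, dominance, valence
        self.head = torch.nn.Linear(self.encoder.config.hidden_size, 3)

    def forward(self, input_values):
        hidden = self.encoder(input_values).last_hidden_state  # (batch, time, hidden)
        pooled = hidden.mean(dim=1)                            # average over time frames
        return self.head(pooled)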
