Journal
2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU)
Pages 350-357
Publisher
IEEE
DOI: 10.1109/ASRU51503.2021.9688036
Keywords
Emotion recognition; disentanglement representation learning; deep learning; multimodality; wav2vec 2.0
The study proposes a novel cross-representation speech model and a CNN-based text emotion recognition model to address overfitting and learning from superficial cues in emotion recognition. By combining the speech-based and text-based results via score fusion, the method surpasses prior work on speech-only, text-only, and multimodal emotion recognition on the IEMOCAP dataset.
Automatic emotion recognition is one of the central concerns of the Human-Computer Interaction field as it can bridge the gap between humans and machines. Current works train deep learning models on low-level data representations to solve the emotion recognition task. Since emotion datasets often have a limited amount of data, these approaches may suffer from overfitting, and they may learn based on superficial cues. To address these issues, we propose a novel cross-representation speech model, inspired by disentanglement representation learning, to perform emotion recognition on wav2vec 2.0 speech features. We also train a CNN-based model to recognize emotions from text features extracted with Transformer-based models. We further combine the speech-based and text-based results with a score fusion approach. Our method is evaluated on the IEMOCAP dataset in a 4-class classification problem, and it surpasses current works on speech-only, text-only, and multimodal emotion recognition.
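The abstract does not spell out the fusion rule; a common form of score (late) fusion is a weighted average of the per-class posteriors produced by each unimodal model. The sketch below assumes softmax outputs over the 4-class IEMOCAP setup and a hypothetical mixing weight `alpha`, neither of which is specified in the source:

```python
import numpy as np

# 4-class IEMOCAP setup mentioned in the abstract (class order is illustrative)
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def score_fusion(speech_probs, text_probs, alpha=0.5):
    """Late fusion: weighted average of class posteriors from the
    speech and text models. `alpha` is a hypothetical weight, not
    a value taken from the paper."""
    speech_probs = np.asarray(speech_probs, dtype=float)
    text_probs = np.asarray(text_probs, dtype=float)
    fused = alpha * speech_probs + (1.0 - alpha) * text_probs
    return fused / fused.sum()  # renormalize to a valid distribution

# Example: speech model leans "angry", text model leans "neutral"
speech = [0.6, 0.1, 0.2, 0.1]
text = [0.2, 0.1, 0.5, 0.2]
fused = score_fusion(speech, text, alpha=0.5)
prediction = EMOTIONS[int(np.argmax(fused))]
print(prediction)  # → angry
```

With equal weights the fused scores are [0.4, 0.1, 0.35, 0.15], so the speech model's stronger evidence for "angry" wins; tuning `alpha` on a validation set is the usual way to balance the two modalities.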