Proceedings Paper

CONTRASTIVE UNSUPERVISED LEARNING FOR SPEECH EMOTION RECOGNITION

Publisher

IEEE
DOI: 10.1109/ICASSP39728.2021.9413910

Keywords

Speech emotion recognition; Contrastive predictive coding; Unsupervised pre-training

This study investigates how unsupervised representation learning on unlabeled datasets can benefit speech emotion recognition (SER). SER is a key technology for enabling more natural human-machine communication, but it has long suffered from a lack of large-scale public labeled datasets. To circumvent this problem, we show that contrastive predictive coding (CPC) can learn salient representations from unlabeled data, which improves emotion recognition performance. In our experiments, this method achieved state-of-the-art concordance correlation coefficient (CCC) performance for all emotion primitives (activation, valence, and dominance) on IEMOCAP. On the MSP-Podcast dataset, our method also obtained considerable performance improvements over the baselines.
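The abstract reports results as the concordance correlation coefficient (CCC), which rewards both high correlation and agreement in scale and mean between predicted and reference emotion ratings. As a rough illustration (not the paper's code), a minimal pure-Python computation of CCC might look like this; the function name and sequence inputs are assumptions for the sketch:

```python
from statistics import mean

def concordance_cc(pred, true):
    """Concordance correlation coefficient (CCC) between two
    equal-length rating sequences (illustrative sketch only).
    CCC = 2*cov / (var_p + var_t + (mu_p - mu_t)**2)."""
    mu_p, mu_t = mean(pred), mean(true)
    # population (biased) variances and covariance
    var_p = mean((p - mu_p) ** 2 for p in pred)
    var_t = mean((t - mu_t) ** 2 for t in true)
    cov = mean((p - mu_p) * (t - mu_t) for p, t in zip(pred, true))
    return 2 * cov / (var_p + var_t + (mu_p - mu_t) ** 2)
```

Unlike plain Pearson correlation, CCC penalizes systematic offsets: predictions that are perfectly correlated with the labels but shifted in mean still score below 1.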
