Proceedings Paper

Speech SimCLR: Combining Contrastive and Reconstruction Objective for Self-supervised Speech Representation Learning

Journal

INTERSPEECH 2021
Pages 1544-1548

Publisher

ISCA (International Speech Communication Association)
DOI: 10.21437/Interspeech.2021-391

Keywords

unsupervised pretraining; speech recognition; speech emotion recognition; SimCLR; reconstruction objective

Abstract

Self-supervised visual pretraining has shown significant progress recently. Among these methods, SimCLR greatly advanced the state of the art in self-supervised and semi-supervised learning on ImageNet. Since the input feature representations for speech and visual tasks are both continuous, it is natural to consider applying a similar objective to speech representation learning. In this paper, we propose Speech SimCLR, a new self-supervised objective for speech representation learning. During training, Speech SimCLR applies augmentation to raw speech and its spectrogram. Its objective combines a contrastive loss, which maximizes agreement between differently augmented samples in the latent space, with a reconstruction loss on the input representation. The proposed method achieved competitive results on speech emotion recognition and speech recognition.
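
The abstract describes the objective only at a high level: a SimCLR-style contrastive (NT-Xent) loss over two augmented views, combined with a reconstruction loss on the input representation. The PyTorch sketch below illustrates one plausible form of that combined objective; the `encoder`, `projector`, and `decoder` modules, the temperature, and the weighting factor `alpha` are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """SimCLR NT-Xent loss over a batch of paired embeddings of shape (N, D)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a sample is not its own positive
    # the positive for row i is the other augmented view of the same utterance
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def speech_simclr_loss(encoder, projector, decoder, x1, x2, alpha=1.0):
    """Combined Speech SimCLR-style objective for two augmented views.

    x1, x2: tensors of shape (N, T, F) holding two different augmentations
    of the same batch of utterances; alpha weights the reconstruction term.
    """
    h1, h2 = encoder(x1), encoder(x2)                   # latent representations
    contrastive = nt_xent_loss(projector(h1), projector(h2))
    # reconstruction loss of the input representation, one term per view
    reconstruction = F.mse_loss(decoder(h1), x1) + F.mse_loss(decoder(h2), x2)
    return contrastive + alpha * reconstruction
```

A training step would compute `speech_simclr_loss` on two independently augmented views of the same minibatch and backpropagate through the encoder, projector, and decoder jointly, as in the original SimCLR recipe.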


