Article

Speech emotion recognition based on transfer learning from the FaceNet framework

Journal

JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA
Volume 149, Issue 2, Pages 1338-1345

Publisher

Acoustical Society of America / American Institute of Physics
DOI: 10.1121/10.0003530

Keywords

-

Funding

  1. Jilin Provincial Science and Technology Department [20180201003GX]
  2. Jilin Province Development and Reform Commission [2019C053-4]


Speech plays an important role in human-computer emotional interaction. FaceNet, widely used in face recognition, achieves great success owing to its excellent feature extraction. In this study, we adopt the FaceNet model and adapt it for speech emotion recognition. To apply the model to our task, speech signals are divided into segments at a fixed time interval, and each segment is transformed into a discrete waveform diagram and a spectrogram. The waveform and spectrogram representations are then separately fed into FaceNet for end-to-end training. Our empirical study shows that pretraining FaceNet is effective on the spectrogram representation. We therefore pretrain the network on the CASIA dataset and then fine-tune it on the IEMOCAP dataset with waveforms. This derives the maximum transfer-learning benefit from the CASIA dataset, whose high accuracy may be attributed to its clean signals. Our preliminary experiments yield accuracies of 68.96% and 90% on the benchmark emotion datasets IEMOCAP and CASIA, respectively. Cross-training is then conducted on the datasets, and comprehensive experiments are performed. Experimental results indicate that the proposed approach outperforms state-of-the-art single-modal methods on the IEMOCAP dataset.
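As an illustration of the pipeline the abstract describes (fixed-interval segmentation, spectrogram images, and pretrain-then-fine-tune transfer), the following is a minimal sketch, not the authors' code. The segment length, sample rate, class count, and file path are assumptions, and a torchvision ResNet is used only as a stand-in for the paper's FaceNet-style backbone.

```python
# Hypothetical sketch of segmenting speech, building spectrogram images,
# and fine-tuning a pretrained image backbone for emotion classification.
import numpy as np
import librosa
import torch
import torch.nn as nn
from torchvision import models

SEGMENT_SEC = 2.0      # assumed fixed time interval per segment
SAMPLE_RATE = 16000    # assumed sample rate
NUM_EMOTIONS = 4       # IEMOCAP is commonly evaluated on four classes


def utterance_to_spectrograms(wav_path):
    """Split an utterance into fixed-length segments and return one
    log-mel spectrogram (as a 3-channel image tensor) per segment."""
    y, sr = librosa.load(wav_path, sr=SAMPLE_RATE)
    seg_len = int(SEGMENT_SEC * sr)
    images = []
    for start in range(0, len(y), seg_len):
        seg = y[start:start + seg_len]
        if len(seg) < seg_len:                        # zero-pad the tail
            seg = np.pad(seg, (0, seg_len - len(seg)))
        mel = librosa.feature.melspectrogram(y=seg, sr=sr, n_mels=128)
        logmel = librosa.power_to_db(mel, ref=np.max)
        # Normalise to [0, 1] and replicate to 3 channels so an
        # image-classification backbone can consume it.
        logmel = (logmel - logmel.min()) / (logmel.max() - logmel.min() + 1e-8)
        img = torch.from_numpy(logmel).float().unsqueeze(0).repeat(3, 1, 1)
        images.append(img)
    return torch.stack(images)


def build_finetune_model():
    """Load a pretrained backbone and swap in a new classification head,
    mimicking the pretrain-then-fine-tune transfer step."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_EMOTIONS)
    return backbone


# Usage (path is a placeholder):
# model = build_finetune_model()
# specs = utterance_to_spectrograms("iemocap_sample.wav")  # (n_seg, 3, 128, T)
# logits = model(specs)                                    # per-segment emotion scores
```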
