Journal
TRAITEMENT DU SIGNAL
Volume 37, Issue 1, Pages 51-57
Publisher
INT INFORMATION & ENGINEERING TECHNOLOGY ASSOC
DOI: 10.18280/ts.370107
Keywords
speech emotion recognition; Deep Neural Network (DNN); Convolutional Neural Network (CNN); deep learning algorithm; Mel-Frequency Cepstrum Coefficients (MFCC)
Funding
- Konya Technical University Scientific Research Projects
- Selcuk University Scientific Research Projects
- TUBITAK
This paper develops an approach to emotion recognition from speech data using deep learning algorithms, a problem that has gained importance in recent years. Traditional speech emotion recognition methods relied heavily on manual feature extraction and feature selection steps. In contrast, deep learning algorithms can be applied to the data without any such reduction. The study used two three-emotion groups from the EmoDB corpus: Boredom, Neutral, and Sadness (BNS); and Anger, Happiness, and Fear (AHF). First, spectrogram images derived from the preprocessed signal data were classified with AlexNet. Second, these results were compared with those of a Deep Neural Network (DNN) trained on Mel-Frequency Cepstrum Coefficients (MFCC) obtained by manual feature extraction. This comparison probes whether manual feature extraction remains important and necessary in deep learning, where it is still a central part of emotion recognition. The experimental results show that applying the AlexNet architecture to spectrogram images was more discriminative than applying a DNN to the manually extracted features.
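As an illustration of the manual feature-extraction path contrasted above, the following NumPy-only sketch computes MFCC features from a raw waveform. The frame length, hop size, filter count, and 16 kHz sample rate are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    # Frame the signal, apply a Hamming window, take the power spectrum
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies, then log-compress and decorrelate with DCT-II
    energies = np.maximum(power @ mel_filterbank(n_filters, n_fft, sr).T, 1e-10)
    log_e = np.log(energies)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_filters)))
    return log_e @ dct.T  # shape: (n_frames, n_ceps)
```

The spectrogram path for the AlexNet branch can reuse the same framed power spectrum directly (before the mel filterbank and DCT), rendered as an image per utterance.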