Article

A Hybrid Time-Distributed Deep Neural Architecture for Speech Emotion Recognition

Journal

Publisher

WORLD SCIENTIFIC PUBL CO PTE LTD
DOI: 10.1142/S0129065722500241

Keywords

Speech emotion recognition; convolutional neural networks; recurrent neural networks; long short-term memory; Mel-frequency cepstral coefficients; Mel spectrogram

Funding

  1. FEDER funds through MINECO Project [TIN2017-85827-P]
  2. European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant [777720]

Speech emotion recognition (SER) is a significant research area in human-machine interaction, and this paper proposes a novel approach combining a time-distributed convolutional neural network (TD-CNN) and a long short-term memory (LSTM) network. The proposed hybrid architecture achieves high recognition accuracy on a publicly distributed database for SER benchmarking, outperforming state-of-the-art deep learning models and conventional machine learning techniques.
In recent years, speech emotion recognition (SER) has emerged as one of the most active research areas in human-machine interaction. Innovative electronic devices, services and applications increasingly aim to assess the user's emotional state, either to issue alerts under predefined conditions or to adapt the system's responses to the user's emotions. Voice expression is a rich and noninvasive source of information for emotion assessment. This paper presents a novel SER approach based on a hybrid of a time-distributed convolutional neural network (TD-CNN) and a long short-term memory (LSTM) network. Mel-frequency log-power spectrograms (MFLPSs) extracted from the audio recordings are parsed by a sliding window that selects the input for the TD-CNN. The TD-CNN transforms the input image data into a sequence of high-level features that are fed to the LSTM, which carries out the overall signal interpretation. To reduce overfitting, the MFLPS representation allows innovative image data augmentation techniques that have no immediate equivalent on the original audio signal. In validation, the proposed hybrid architecture achieves an average recognition accuracy of 73.98% on the most widely used and hardest publicly distributed database for SER benchmarking. A permutation test confirms that this result is significantly different from random classification (p < 0.001). The proposed architecture outperforms state-of-the-art deep learning models as well as conventional machine learning techniques evaluated on the same database when identifying the same number of emotions.
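The sliding-window step described above can be sketched in a few lines: the MFLPS, a 2D array of mel bands by time frames, is cut into overlapping fixed-size patches, producing the image sequence a time-distributed CNN would consume before the LSTM integrates the per-window features. This is a minimal illustrative sketch; the window length (`win_frames`) and hop (`hop_frames`) below are assumed values, not the ones used in the paper.

```python
import numpy as np

def mflps_windows(mflps, win_frames=64, hop_frames=32):
    """Slice an MFLPS of shape (n_mels, n_frames) into overlapping
    windows, returning an array of shape (n_windows, n_mels, win_frames).
    Window and hop sizes are illustrative assumptions."""
    n_mels, n_frames = mflps.shape
    starts = range(0, n_frames - win_frames + 1, hop_frames)
    # Each window is one "image" for the time-distributed CNN.
    return np.stack([mflps[:, s:s + win_frames] for s in starts])

# Toy example: 40 mel bands, 200 time frames of random log-power values.
rng = np.random.default_rng(0)
spec = rng.standard_normal((40, 200))
seq = mflps_windows(spec)
print(seq.shape)  # → (5, 40, 64)
```

In a Keras-style implementation, `seq` (with a trailing channel axis added) would be fed through a `TimeDistributed`-wrapped CNN so the same convolutional weights are applied to every window, and the resulting feature sequence would then drive the LSTM.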

Authors
