Article

End-to-End Multimodal Emotion Recognition Using Deep Neural Networks

Journal

IEEE Journal of Selected Topics in Signal Processing

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSTSP.2017.2764438

Keywords

End-to-end learning; emotion recognition; deep learning; affective computing

Funding

  1. EPSRC Centre for Doctoral Training in High Performance Embedded and Distributed Systems (HiPEDS) [EP/L016796/1]
  2. Google Fellowship in Machine Perception, Speech Technology and Computer Vision
  3. EU (RIA ARIA VALUSPA) [645378]
  4. EU (IA SEWA) [645094]
  5. FiDiPro Program of Tekes [1849/31/2015]

Abstract

Automatic affect recognition is a challenging task due to the various modalities through which emotions can be expressed. Applications can be found in many domains, including multimedia retrieval and human-computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content for various styles of speaking, robust features need to be extracted. To this end, we utilize a convolutional neural network (CNN) to extract features from the speech, while for the visual modality a deep residual network of 50 layers is used. In addition to the importance of feature extraction, a machine learning algorithm also needs to be insensitive to outliers while being able to model the context. To tackle this problem, long short-term memory networks are utilized. The system is then trained in an end-to-end fashion where, by also taking advantage of the correlations between the streams, we manage to significantly outperform, in terms of concordance correlation coefficient, traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.
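The abstract describes a two-stream architecture: a 1D CNN over raw speech, a 50-layer residual network (ResNet-50) over face images, an LSTM that models temporal context over the fused per-frame features, and end-to-end training evaluated with the concordance correlation coefficient, CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). The PyTorch sketch below is not the authors' implementation; the class and function names (MultimodalEmotionNet, ccc_loss), all layer sizes, sequence shapes, and the use of 1 - CCC as the training loss are illustrative assumptions made only to show how such a pipeline fits together.

# Minimal sketch of the pipeline described in the abstract (assumed shapes and sizes).
import torch
import torch.nn as nn
from torchvision import models


def ccc_loss(pred, gold, eps=1e-8):
    """1 - CCC, averaged over the batch; pred and gold are (batch, time)."""
    pred_mean, gold_mean = pred.mean(dim=1), gold.mean(dim=1)
    pred_var = pred.var(dim=1, unbiased=False)
    gold_var = gold.var(dim=1, unbiased=False)
    cov = ((pred - pred_mean.unsqueeze(1)) * (gold - gold_mean.unsqueeze(1))).mean(dim=1)
    ccc = 2 * cov / (pred_var + gold_var + (pred_mean - gold_mean) ** 2 + eps)
    return (1 - ccc).mean()


class MultimodalEmotionNet(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=2048, hidden=256):
        super().__init__()
        # Audio branch: 1D convolutions over the raw waveform of each frame.
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, audio_dim, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Visual branch: ResNet-50 backbone with the classification head removed.
        resnet = models.resnet50()
        resnet.fc = nn.Identity()
        self.visual_cnn = resnet
        # Temporal model over the concatenated per-frame audio and visual features.
        self.lstm = nn.LSTM(audio_dim + visual_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # one output per affect dimension (e.g. arousal, valence)

    def forward(self, audio, faces):
        # audio: (batch, time, samples); faces: (batch, time, 3, 224, 224)
        b, t = audio.shape[:2]
        a = self.audio_cnn(audio.reshape(b * t, 1, -1)).squeeze(-1)   # (b*t, audio_dim)
        v = self.visual_cnn(faces.reshape(b * t, *faces.shape[2:]))   # (b*t, 2048)
        feats = torch.cat([a, v], dim=-1).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                                         # (b, t, 2)


# Example forward/backward pass on random tensors.
model = MultimodalEmotionNet()
audio = torch.randn(2, 4, 16000)            # 2 sequences, 4 frames of raw audio each
faces = torch.randn(2, 4, 3, 224, 224)      # matching face crops
pred = model(audio, faces)
loss = ccc_loss(pred[..., 0], torch.rand(2, 4))  # loss on one affect dimension
loss.backward()

A real training loop would iterate over RECOLA sequences and optimize both affect dimensions; the training schedule, any pre-training of the branches, and the data handling used in the paper are not reproduced here.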

