4.7 Article

Audio-Visual Emotion Recognition in Video Clips

Journal

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 10, Issue 1, Pages 60-75

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAFFC.2017.2713783

Keywords

Multimodal emotion recognition; classifier fusion; data fusion; convolutional neural networks

Funding

  1. Estonian Research Grant [PUT638]
  2. European Commission
  3. Estonian Centre of Excellence in IT (EXCITE) - European Regional Development Fund
  4. European Network on Integrating Vision and Language (iV&L Net) ICT COST Action [IC1307]
  5. [TIN2013-43478-P]
  6. [TIN2016-74946-P]

Abstract

This paper presents a multimodal emotion recognition system based on the analysis of audio and visual cues. From the audio channel, Mel-Frequency Cepstral Coefficients, Filter Bank Energies, and prosodic features are extracted. For the visual part, two strategies are considered. First, geometric relations between facial landmarks, i.e., distances and angles, are computed. Second, each emotional video is summarized into a reduced set of key-frames, and a convolutional neural network is trained on these key-frames to visually discriminate between the emotions. Finally, the confidence outputs of all the classifiers from all the modalities are used to define a new feature space, which is learned for final emotion label prediction in a late fusion/stacking fashion. Experiments conducted on the SAVEE, eNTERFACE'05, and RML databases show significant performance improvements of the proposed system over current alternatives, setting the state of the art on all three databases.
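The fusion step described in the abstract, in which the class-confidence outputs of the per-modality classifiers define a new feature space for a final predictor, can be illustrated with a short sketch. The code below is a minimal stacking example in Python with scikit-learn, not the authors' implementation; the feature matrices (X_audio, X_geom, X_cnn), the choice of base classifiers, and the logistic-regression meta-classifier are all assumptions made for illustration.

```python
# Minimal sketch of late fusion / stacking over per-modality confidences.
# X_audio (MFCC/FBE/prosodic), X_geom (landmark distances/angles), and
# X_cnn (key-frame CNN features) are hypothetical pre-extracted matrices.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def stacked_fusion(X_by_modality, y, n_folds=5):
    """Build a meta-feature space from each base classifier's confidence
    outputs, then train a final classifier on those confidences."""
    base_models = {
        "audio": SVC(probability=True, kernel="rbf"),
        "geometry": RandomForestClassifier(n_estimators=200),
        "keyframes": SVC(probability=True, kernel="linear"),
    }
    meta_features = []
    for name, model in base_models.items():
        # Out-of-fold probability estimates keep the meta-level feature
        # space from leaking the training labels of the base classifiers.
        probs = cross_val_predict(model, X_by_modality[name], y,
                                  cv=n_folds, method="predict_proba")
        meta_features.append(probs)
        model.fit(X_by_modality[name], y)   # refit base model on all data
    Z = np.hstack(meta_features)            # new confidence feature space
    meta_clf = LogisticRegression(max_iter=1000).fit(Z, y)
    return base_models, meta_clf
```

At test time, each base model would produce its confidence vector for a new clip, the vectors would be concatenated in the same order, and the meta-classifier would output the final emotion label.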

