Journal: IEEE Transactions on Multimedia
Volume 24, Pages 1313-1324
Publisher: IEEE-Inst Electrical Electronics Engineers Inc
DOI: 10.1109/TMM.2021.3063612
Keywords
Emotion recognition; Feature extraction; Long short-term memory; Visualization; Streaming media; Convolution; Two-dimensional displays; Auto-encoder; Dimensional emotion recognition; Multimodal emotion recognition
This paper proposes a novel deep neural network architecture for integrating visual and audio signal streams for emotion recognition, achieving state-of-the-art performance.
Multimodal dimensional emotion recognition has drawn great attention from the affective computing community, and numerous schemes have been investigated, making significant progress in this area. However, several questions remain unanswered by most existing approaches, including: (i) how to simultaneously learn compact yet representative features from multimodal data, (ii) how to effectively capture complementary features from multimodal streams, and (iii) how to perform all of these tasks in an end-to-end manner. To address these challenges, this paper proposes a novel deep neural network architecture consisting of a two-stream auto-encoder and a long short-term memory (LSTM) network for effectively integrating visual and audio signal streams for emotion recognition. To validate the robustness of the proposed architecture, extensive experiments are carried out on a multimodal emotion-in-the-wild dataset, RECOLA. Experimental results show that the proposed method achieves state-of-the-art recognition performance.
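The pipeline the abstract describes can be sketched as: each modality is compressed by its own encoder stream, the two latent codes are fused, and an LSTM regresses the dimensional emotion labels (valence and arousal) over time. The sketch below uses NumPy with illustrative dimensions; the abstract gives no layer sizes, loss functions, or training details, so every dimension, weight, and function name here is a hypothetical stand-in, and the auto-encoder's reconstruction decoders are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical feature/latent/hidden sizes (not specified in the abstract).
D_VIS, D_AUD, D_LAT, D_HID = 64, 40, 16, 32
T = 25  # frames in one sequence

# Two-stream auto-encoder: one encoder per modality (decoders omitted).
Wv = rng.standard_normal((D_VIS, D_LAT)) * 0.1
Wa = rng.standard_normal((D_AUD, D_LAT)) * 0.1

def encode_and_fuse(vis, aud):
    """Compress each stream, then concatenate the latent codes per frame."""
    zv = np.tanh(vis @ Wv)            # (T, D_LAT) visual latent
    za = np.tanh(aud @ Wa)            # (T, D_LAT) audio latent
    return np.concatenate([zv, za], axis=-1)  # (T, 2 * D_LAT) fused

# Minimal single-cell LSTM over the fused sequence.
Z = 2 * D_LAT
Wx = rng.standard_normal((Z, 4 * D_HID)) * 0.1
Wh = rng.standard_normal((D_HID, 4 * D_HID)) * 0.1
b = np.zeros(4 * D_HID)
Wo = rng.standard_normal((D_HID, 2)) * 0.1  # 2 outputs: valence, arousal
bo = np.zeros(2)

def lstm_predict(fused):
    h = np.zeros(D_HID)
    c = np.zeros(D_HID)
    outs = []
    for x in fused:
        gates = x @ Wx + h @ Wh + b
        i, f, g, o = np.split(gates, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)    # cell state update
        h = o * np.tanh(c)            # hidden state
        outs.append(h @ Wo + bo)      # per-frame valence/arousal
    return np.stack(outs)             # (T, 2)

vis = rng.standard_normal((T, D_VIS))  # stand-in visual frame features
aud = rng.standard_normal((T, D_AUD))  # stand-in audio frame features
preds = lstm_predict(encode_and_fuse(vis, aud))
print(preds.shape)  # (25, 2)
```

In an end-to-end setup, the encoder, decoder, and LSTM weights would be trained jointly, so the latent codes are shaped both by reconstruction quality and by the emotion-regression objective.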