Journal
PATTERN RECOGNITION LETTERS
Volume 146, Pages 1-7
Publisher
ELSEVIER
DOI: 10.1016/j.patrec.2021.03.007
Keywords
Human behavior recognition; Audiovisual emotion recognition; Video sequences; Deep learning
Funding
- Powder, a deep tech startup
Abstract
Emotional expressions are the behaviors that communicate our emotional state or attitude to others. They are expressed through verbal and non-verbal communication. Complex human behavior can be understood by studying physical features from multiple modalities, mainly facial expressions, vocal cues, and body gestures. Recently, spontaneous multi-modal emotion recognition has been extensively studied for human behavior analysis. In this paper, we propose a new deep learning-based approach for audio-visual emotion recognition. Our approach leverages recent advances in deep learning, such as knowledge distillation and high-performing deep architectures. The deep feature representations of the audio and visual modalities are fused based on a model-level fusion strategy. A recurrent neural network is then used to capture the temporal dynamics. Our proposed approach substantially outperforms state-of-the-art approaches in predicting valence on the RECOLA dataset. Moreover, our proposed visual facial expression feature extraction network outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets. (c) 2021 Elsevier B.V. All rights reserved.
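The pipeline the abstract describes — per-frame audio and visual feature vectors fused at the model level, then passed through a recurrent network to produce a continuous valence prediction — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the feature dimensions, the plain tanh RNN (standing in for whatever recurrent architecture the paper uses), and the linear valence head are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame deep features; the dimensions below are
# illustrative, not the dimensions used in the paper.
T, d_audio, d_visual, d_hidden = 5, 8, 16, 4
audio_feats = rng.standard_normal((T, d_audio))    # audio-branch embeddings
visual_feats = rng.standard_normal((T, d_visual))  # visual-branch embeddings

# Model-level fusion: concatenate the two modality embeddings per frame,
# so the recurrent layer sees a single joint representation.
fused = np.concatenate([audio_feats, visual_feats], axis=1)  # shape (T, 24)

# Minimal tanh RNN over the fused sequence (stand-in for the paper's RNN).
W_in = rng.standard_normal((d_audio + d_visual, d_hidden)) * 0.1
W_h = rng.standard_normal((d_hidden, d_hidden)) * 0.1
h = np.zeros(d_hidden)
for x_t in fused:
    h = np.tanh(x_t @ W_in + h @ W_h)  # update hidden state frame by frame

# Hypothetical linear head mapping the final hidden state to a valence value.
w_out = rng.standard_normal(d_hidden) * 0.1
valence = float(h @ w_out)
print(fused.shape, valence)
```

In practice the two feature extractors would be deep networks (the paper's facial-expression network and an audio network) trained end to end, but the fusion step itself is just this concatenation of modality embeddings before the recurrent layer.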