Journal
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Volume 32, Issue 3, Pages 1034-1047
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2021.3072412
Keywords
Emotion recognition; Real-time systems; Streaming media; Brain modeling; Reinforcement learning; Visualization; Context modeling; Multimodal emotion recognition; reinforcement learning; domain knowledge; real-time video conversation
Funding
- National Natural Science Foundation of China [61871470, U1801262, 61976179]
- Fundamental Research Funds for the Central Universities [3102019HTXM005, 3102017HQZZ003]
The ERLDK model proposed in this paper uses reinforcement learning and domain knowledge for multimodal emotion recognition in conversational videos, enabling real-time operation. The model composes history utterances into emotion-pairs that represent the context of the subsequent utterance. Experimental results demonstrate that ERLDK achieves state-of-the-art performance on the weighted-average metric and on most specific emotion categories.
Multimodal emotion recognition in conversational videos (ERC) has developed rapidly in recent years. To fully extract the relevant context from video clips, most studies build their models on entire dialogues, which deprives them of real-time ERC ability. Unlike related research, this paper proposes a novel multimodal emotion recognition model for conversational videos based on reinforcement learning and domain knowledge (ERLDK). In ERLDK, a reinforcement learning algorithm is introduced to conduct ERC in real time as the conversation unfolds. The collected history utterances are composed into an emotion-pair that represents the multimodal context of the next utterance to be recognized. A dueling deep-Q-network (DDQN) built on gated recurrent unit (GRU) layers is designed to learn the correct action from the candidate emotion categories. Domain knowledge is extracted from a public dataset based on the preceding information of emotion-pairs; the extracted knowledge is used to revise the results of the RL module and is transferred to another dataset to examine its rationality. Experimental results show that ERLDK achieves state-of-the-art results on the weighted-average metric and on most of the specific emotion categories.
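The dueling deep-Q-network mentioned in the abstract combines a scalar state value with per-action advantages to form Q-values. The sketch below illustrates only that standard dueling aggregation, Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a'); the GRU encoder, the emotion-pair state representation, and the example category labels are assumptions for illustration, not details taken from the paper.

```python
# Sketch of the dueling Q-value aggregation used in a dueling deep-Q-network
# (DDQN). Here the state would be an encoded emotion-pair context and the
# actions the candidate emotion categories; the GRU encoder that produces
# state_value and advantages is omitted for brevity.

def dueling_q_values(state_value, advantages):
    """Combine a scalar state value V(s) with per-action advantages A(s, a):
    Q(s, a) = V(s) + A(s, a) - mean over actions of A(s, a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

# Toy example with six hypothetical emotion categories
# (e.g. happy, sad, neutral, angry, excited, frustrated -- labels assumed).
q = dueling_q_values(0.5, [1.0, -0.5, 0.0, 0.25, -0.25, 0.5])
best_action = max(range(len(q)), key=q.__getitem__)  # greedy action choice
```

Subtracting the mean advantage keeps the decomposition identifiable (adding a constant to all advantages leaves the Q-values unchanged), which is why the dueling architecture uses it rather than the raw advantages.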