Article

Facial Expression and EEG Fusion for Investigating Continuous Emotions of Deaf Subjects

Journal

IEEE SENSORS JOURNAL
Volume 21, Issue 15, Pages 16894-16903

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSEN.2021.3078087

Keywords

Emotion recognition; Electroencephalography; Feature extraction; Brain modeling; Psychology; Sensors; Motion pictures; Continuous emotion recognition; EEG; facial expression; deaf

Funding

  1. Fundamental Research on Advanced Technology and Engineering Application Team, Tianjin, China [20160524]
  2. Natural Science Foundation of Tianjin [18JCYBJC87700]


This study proposes a multimodal continuous emotion recognition method based on facial expressions and EEG signals for deaf subjects. The results show that EEG signals are more effective than facial expressions for continuous emotion recognition, and that combining the two modalities further improves performance. The neural activities of deaf subjects are closely related to the processing of different emotions.
Emotion recognition has received increasing attention in human-computer interaction (HCI) and psychological assessment. Compared with single-modal emotion recognition, the multimodal paradigm performs better because it introduces complementary information. However, current research focuses mainly on hearing people, while deaf subjects also need to understand emotional changes in daily life. In this paper, we propose a multimodal continuous emotion recognition method for deaf subjects based on facial expressions and electroencephalograph (EEG) signals. Twelve emotional movie clips were selected as stimuli and annotated by ten postgraduates majoring in psychology. The EEG signals and facial expressions of deaf subjects were collected while they watched the stimulus clips. Differential entropy (DE) features were extracted from the EEG by time-frequency analysis, and six facial features were extracted from facial landmarks. Long short-term memory (LSTM) networks were used to perform feature-level fusion and to capture the temporal dynamics of emotions. The results show that EEG captures the dynamics of deaf subjects' emotions better than facial expressions in continuous emotion recognition, and that multimodal fusion compensates for the limitations of each single modality, achieving better performance. Finally, analysis of the neural activities of deaf subjects reveals that the prefrontal lobe region may be strongly related to negative emotion processing, while the lateral temporal lobe region may be strongly related to positive emotion processing.
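The differential entropy features mentioned in the abstract are, in the EEG emotion-recognition literature, typically computed per frequency band under a Gaussian assumption, where DE = ½ ln(2πeσ²). A minimal sketch of this idea follows; the band definitions and the FFT-mask band filtering here are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def differential_entropy(signal):
    # Under a Gaussian assumption, DE = 0.5 * ln(2 * pi * e * sigma^2),
    # where sigma^2 is the variance of the (band-limited) signal.
    var = np.var(signal)
    return 0.5 * np.log(2 * np.pi * np.e * var)

def band_de_features(eeg, fs, bands):
    # eeg: array of shape (channels, samples); fs: sampling rate in Hz.
    # bands: dict mapping band name -> (low_hz, high_hz).
    # Band limiting via an FFT mask is a simplification; a real pipeline
    # would more likely use a proper band-pass filter or STFT.
    n = eeg.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(eeg, axis=1)
    feats = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        band_signal = np.fft.irfft(spectrum * mask, n=n, axis=1)
        feats[name] = np.array([differential_entropy(ch) for ch in band_signal])
    return feats

# Example: DE per channel in the classic alpha and beta bands (illustrative).
eeg = np.random.RandomState(0).randn(2, 1024)
feats = band_de_features(eeg, fs=128, bands={"alpha": (8, 13), "beta": (13, 30)})
```

A unit-variance signal gives DE = ½ ln(2πe) ≈ 1.42, which is a handy sanity check when implementing this feature.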
