Article

The Fusion of Electroencephalography and Facial Expression for Continuous Emotion Recognition

Journal

IEEE ACCESS
Volume 7, Pages 155724-155736

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/ACCESS.2019.2949707

Keywords

Emotion recognition; Electroencephalography; Feature extraction; Brain modeling; Motion pictures; Human computer interaction; Electrodes; Continuous emotion recognition; EEG; facial expressions; signal processing; decision level fusion; temporal dynamics

Funding

  1. Fundamental Research on Advanced Technology and Engineering Application Team, Tianjin, China [20160524]
  2. Natural Science Foundation of Tianjin [18JCYBJC87700]

The study of emotion recognition has recently received increasing attention, driven by the rapid development of noninvasive sensor technologies, machine learning algorithms, and the computing capability of computers. Compared with single-modal emotion recognition, the multimodal paradigm introduces complementary information. Hence, in this work, we present a decision-level fusion framework for detecting emotions continuously by fusing electroencephalography (EEG) and facial expressions. Three types of movie clips (positive, negative, and neutral) were used to elicit specific emotions in subjects, while the EEG and facial expression signals were recorded simultaneously. Power spectral density (PSD) features of the EEG were extracted by time-frequency analysis, and a subset of these features was then selected for regression. For the facial expressions, geometric features were computed via facial landmark localization. Long short-term memory (LSTM) networks were used to perform the decision-level fusion and to capture the temporal dynamics of emotion. The results show that the proposed method achieves outstanding performance for continuous emotion recognition, yielding a concordance correlation coefficient (CCC) of 0.625 ± 0.029. The fusion of the two modalities outperformed either EEG or facial expressions alone. Furthermore, different numbers of LSTM time steps were evaluated to analyze how temporal dynamics are captured.
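The concordance correlation coefficient (CCC) reported above is a standard metric for continuous emotion regression; it rewards both correlation with and closeness to the ground-truth annotation trace. A minimal sketch of the standard (Lin's) CCC, using population statistics, is shown below. This is a generic illustration of the metric, not the authors' evaluation code.

```python
from statistics import fmean

def ccc(x, y):
    """Lin's concordance correlation coefficient between two sequences.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2),
    computed with population (biased) variance and covariance.
    """
    mx, my = fmean(x), fmean(y)
    vx = fmean([(a - mx) ** 2 for a in x])
    vy = fmean([(b - my) ** 2 for b in y])
    cov = fmean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Identical traces give a perfect score of 1.0; a constant offset
# between prediction and annotation lowers the CCC even when the
# Pearson correlation stays at 1.
print(ccc([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(ccc([1, 2, 3], [2, 3, 4]))        # 4/7 ≈ 0.571
```

Unlike plain Pearson correlation, the mean-difference term in the denominator penalizes systematic bias in the predictions, which is why CCC is the usual choice for continuous valence/arousal benchmarks.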

