Article

Investigating of Deaf Emotion Cognition Pattern By EEG and Facial Expression Combination

Journal

IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
Volume 26, Issue 2, Pages 589-599

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/JBHI.2021.3092412

Keywords

Emotion recognition; EEG; facial expression; deaf

Funding

  1. Fundamental Research on Advanced Technology and Engineering Application Team, Tianjin, China [20160524]
  2. Natural Science Foundation of Tianjin [18JCYBJC87700]

Abstract

With the development of sensor technology and learning algorithms, multimodal emotion recognition has attracted widespread attention. Most existing studies on emotion recognition have focused on hearing people; deaf people, who cannot express emotions through words due to hearing loss, may have a greater need for emotion recognition. In this paper, a deep belief network (DBN) was used to classify three emotion categories from electroencephalograph (EEG) signals and facial expressions. Signals from 15 deaf subjects were recorded while they watched emotional movie clips. The system segments the EEG signals with a 1-s non-overlapping window, decomposes them into five frequency bands, and extracts the differential entropy (DE) feature. The EEG DE features and facial expression images serve as multimodal input for subject-dependent emotion recognition. To avoid feature redundancy, the top 12 EEG electrode channels (FP2, FP1, FT7, FPZ, F7, T8, F8, CB2, CB1, FT8, T7, TP8) in the gamma band and 30 facial expression features (the areas around the eyes and eyebrows) were selected by the largest weight values. The results show a classification accuracy of 99.92% with feature selection in deaf emotion recognition. Moreover, investigations of brain activity reveal that deaf brain activity changes mainly in the beta and gamma bands, and the brain regions affected by emotions are mainly distributed in the prefrontal and outer temporal lobes.
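The DE feature pipeline described in the abstract (non-overlapping 1-s windows, five frequency bands, differential entropy per window and band) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the band edges, sampling rate, and FFT-based band filtering are assumptions chosen for clarity, and the DE formula uses the standard Gaussian closed form, 0.5 * ln(2πe·σ²).

```python
import numpy as np

# Illustrative band definitions (Hz); the paper does not specify edges here.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def de_features(eeg, fs=200):
    """Differential-entropy features for one EEG channel.

    eeg: 1-D array of samples; fs: sampling rate (assumed 200 Hz).
    Returns an array of shape (n_windows, n_bands), one DE value per
    non-overlapping 1-s window and frequency band.
    """
    n_win = len(eeg) // fs  # 1-s windows, no overlap
    freqs = np.fft.rfftfreq(fs, d=1.0 / fs)
    feats = np.empty((n_win, len(BANDS)))
    for w in range(n_win):
        seg = eeg[w * fs:(w + 1) * fs]
        spec = np.fft.rfft(seg)
        for b, (lo, hi) in enumerate(BANDS.values()):
            # Crude band-pass: zero all FFT bins outside [lo, hi).
            band_spec = np.where((freqs >= lo) & (freqs < hi), spec, 0)
            band_sig = np.fft.irfft(band_spec, n=fs)
            # Under a Gaussian assumption, DE = 0.5 * ln(2*pi*e*variance).
            feats[w, b] = 0.5 * np.log(2 * np.pi * np.e * np.var(band_sig))
    return feats
```

In practice the selected features (e.g. the 12 gamma-band channels named above) would be concatenated with facial-expression features and fed to the DBN classifier.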

