Article

Investigating of Deaf Emotion Cognition Pattern By EEG and Facial Expression Combination

Journal

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JBHI.2021.3092412

Keywords

Emotion recognition; EEG; facial expression; deaf

Funding

  1. Fundamental Research on Advanced Technology and Engineering Application Team, Tianjin, China [20160524]
  2. Natural Science Foundation of Tianjin [18JCYBJC87700]


With the development of sensor technology and learning algorithms, multimodal emotion recognition has attracted widespread attention. Most existing studies on emotion recognition have focused on normal-hearing people. Moreover, because of hearing loss, deaf people cannot express emotions through words and may therefore have a greater need for emotion recognition. In this paper, a deep belief network (DBN) was used to classify three categories of emotion from electroencephalograph (EEG) signals and facial expressions. Signals from 15 deaf subjects were recorded while they watched emotional movie clips. The system segments the EEG signals into non-overlapping 1-s windows in five frequency bands and extracts the differential entropy (DE) feature from each window. The EEG DE features and the facial expression images serve as multimodal input for subject-dependent emotion recognition. To avoid feature redundancy, the top 12 EEG electrode channels (FP2, FP1, FT7, FPZ, F7, T8, F8, CB2, CB1, FT8, T7, TP8) in the gamma band and the top 30 facial expression features (the areas around the eyes and eyebrows) are selected according to the largest weight values. The results show that the classification accuracy reaches 99.92% with feature selection in deaf emotion recognition. Moreover, investigations of brain activity reveal that deaf brain activity changes mainly in the beta and gamma bands, and that the brain regions affected by emotion are mainly distributed in the prefrontal and outer temporal lobes.
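The DE feature pipeline described in the abstract (band-pass filtering into five frequency bands, non-overlapping 1-s windows, differential entropy per window and band) can be sketched as below. This is a minimal illustration under a Gaussian assumption, not the authors' code: the sampling rate, filter order, and exact band edges are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Assumed sampling rate and conventional EEG band edges (not from the paper).
FS = 200
BANDS = {
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
    "beta": (14, 31), "gamma": (31, 50),
}

def differential_entropy(seg, axis=-1):
    # DE of a Gaussian signal: 0.5 * log(2 * pi * e * variance)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(seg, axis=axis))

def de_features(eeg, fs=FS, win_sec=1):
    """eeg: (n_channels, n_samples) -> (n_windows, n_channels, n_bands)."""
    win = fs * win_sec
    n_win = eeg.shape[1] // win
    feats = np.empty((n_win, eeg.shape[0], len(BANDS)))
    for b, (lo, hi) in enumerate(BANDS.values()):
        # 4th-order Butterworth band-pass, zero-phase filtered per channel.
        num, den = butter(4, [lo, hi], btype="band", fs=fs)
        filtered = filtfilt(num, den, eeg, axis=1)
        for w in range(n_win):
            seg = filtered[:, w * win:(w + 1) * win]
            feats[w, :, b] = differential_entropy(seg, axis=1)
    return feats
```

In the paper's setup, features such as these would then be concatenated with facial expression features and fed to the DBN classifier.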

Authors


Reviews

Primary Rating

4.6
Not enough ratings
