Article

Emotion Recognition From Multi-Channel EEG via Deep Forest

Journal

IEEE Journal of Biomedical and Health Informatics

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JBHI.2020.2995767

Keywords

Deep neural networks (DNNs); emotion recognition; multi-channel EEG; deep forest; spatiotemporal information

Funding

  1. National Key R&D Program of China [2017YFB1002802]
  2. National Natural Science Foundation of China [61922075, 41901350, 61701160, 61701158]
  3. Fundamental Research Funds for the Central Universities [JZ2019HGBZ0151, JZ2020HGPA0111]

Abstract

The article proposes a deep-forest-based method for multi-channel EEG emotion recognition that preprocesses the EEG signals with baseline removal and exploits the spatial position relationship across channels. Experimental results show that the proposed method achieves higher accuracy than existing methods.
Recently, deep neural networks (DNNs) have been applied to emotion recognition tasks based on electroencephalography (EEG) and have achieved better performance than traditional algorithms. However, DNNs still have the disadvantages of requiring many hyperparameters and large amounts of training data. To overcome these shortcomings, in this article we propose a method for multi-channel EEG-based emotion recognition using deep forest. First, we account for the effect of the baseline signal by preprocessing the raw, artifact-eliminated EEG signal with baseline removal. Second, we construct 2D frame sequences by taking the spatial position relationship across channels into account. Finally, the 2D frame sequences are input into a classification model built on deep forest, which can mine the spatial and temporal information of EEG signals to classify EEG emotions. The proposed method eliminates the need for the feature extraction used in traditional methods, and the classification model is insensitive to hyperparameter settings, which greatly reduces the complexity of emotion recognition. To verify the feasibility of the proposed model, experiments were conducted on two public databases, DEAP and DREAMER. On the DEAP database, the average accuracies reach 97.69% and 97.53% for valence and arousal, respectively; on the DREAMER database, the average accuracies reach 89.03%, 90.41%, and 89.89% for valence, arousal, and dominance, respectively. These results show that the proposed method exhibits higher accuracy than state-of-the-art methods.
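The two preprocessing steps described in the abstract, baseline removal and mapping channels onto a 2D grid that preserves electrode topology, can be sketched as below. This is a minimal illustration, not the paper's actual implementation: the grid shape, the channel-to-coordinate mapping, and the function names are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical (row, col) grid positions for a few 10-20 system electrodes;
# the paper's actual channel-to-grid mapping may differ.
CHANNEL_GRID = {
    "Fp1": (0, 1), "Fp2": (0, 3),
    "F3":  (1, 1), "Fz":  (1, 2), "F4": (1, 3),
    "C3":  (2, 1), "Cz":  (2, 2), "C4": (2, 3),
    "P3":  (3, 1), "Pz":  (3, 2), "P4": (3, 3),
}
GRID_SHAPE = (4, 5)

def remove_baseline(trial, baseline):
    """Subtract the per-channel mean of the pre-stimulus baseline segment.

    trial:    (n_channels, n_samples) EEG recorded during the stimulus
    baseline: (n_channels, n_baseline_samples) pre-stimulus EEG
    """
    return trial - baseline.mean(axis=1, keepdims=True)

def to_frame_sequence(trial, channel_names):
    """Turn each time sample into a 2D frame reflecting electrode layout.

    Returns an array of shape (n_samples, rows, cols); grid positions
    without an electrode remain zero.
    """
    n_channels, n_samples = trial.shape
    frames = np.zeros((n_samples, *GRID_SHAPE))
    for ch, name in enumerate(channel_names):
        r, c = CHANNEL_GRID[name]
        frames[:, r, c] = trial[ch]
    return frames
```

The resulting frame sequence is what the abstract describes feeding into the deep-forest classifier, so that both the spatial arrangement of electrodes and the temporal order of samples are preserved in the input.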
