Article

Investigating EEG-based functional connectivity patterns for multimodal emotion recognition

Journal

JOURNAL OF NEURAL ENGINEERING
Volume 19, Issue 1, Pages -

Publisher

IOP Publishing Ltd
DOI: 10.1088/1741-2552/ac49a7

Keywords

affective brain-computer interface; EEG; eye movement; brain functional connectivity network; multimodal emotion recognition; multimodal deep learning

Funding

  1. National Natural Science Foundation of China [61976135]
  2. SJTU Trans-Med Awards Research [WF540162605]
  3. Fundamental Research Funds for the Central Universities
  4. 111 Project
  5. GuangCi Professorship Program of RuiJin Hospital Shanghai Jiao Tong University School of Medicine

Abstract

This study proposes a novel algorithm for selecting emotion-relevant critical subnetworks and investigates three EEG functional connectivity network features. The results show that these EEG connectivity features achieve high classification accuracy in emotion recognition.
Objective. Previous studies on emotion recognition from electroencephalography (EEG) mainly rely on single-channel feature extraction methods, which ignore the functional connectivity between brain regions. In this paper, we therefore propose a novel emotion-relevant critical subnetwork selection algorithm and investigate three EEG functional connectivity network features: strength, clustering coefficient, and eigenvector centrality.

Approach. After constructing brain networks from the correlations between pairs of EEG signals, we compute critical subnetworks by averaging the brain network matrices that share an emotion label, thereby eliminating weak associations. The three network features are then fed, together with eye movement features, into a multimodal emotion recognition model based on deep canonical correlation analysis. The discriminative power of the EEG connectivity features is evaluated on three public datasets: SEED, SEED-V, and DEAP.

Main results. The experimental results reveal that the strength feature outperforms state-of-the-art features based on single-channel analysis. The multimodal classification accuracies are 95.08 ± 6.42% on SEED, 84.51 ± 5.11% on SEED-V, and 85.34 ± 2.90% and 86.61 ± 3.76% for arousal and valence on DEAP, respectively, all of which achieve the best performance. In addition, brain networks constructed with only 18 channels achieve performance comparable to that of the 62-channel networks and enable easier setups in real scenarios.

Significance. The EEG functional connectivity networks, combined with the proposed emotion-relevant critical subnetwork selection algorithm, represent a successful exploration of the information carried between channels.
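The pipeline described in the Approach section is compact enough to sketch. The following Python sketch, assuming NumPy and NetworkX, is not the authors' code: the function names, the use of absolute Pearson correlation, and the keep_ratio edge threshold are illustrative assumptions. It only shows how the correlation-based brain networks, the per-emotion critical subnetwork, and the three node-level features (strength, clustering coefficient, eigenvector centrality) fit together.

```python
# Minimal sketch (not the authors' code) of the pipeline in the abstract:
# build correlation-based brain networks, average the networks sharing an
# emotion label, keep only the strongest links as the "critical subnetwork",
# and read off three node-level features per channel.
import numpy as np
import networkx as nx

def build_network(eeg_trial):
    """Absolute Pearson correlation between every pair of channels.

    eeg_trial: array of shape (n_channels, n_samples).
    """
    conn = np.abs(np.corrcoef(eeg_trial))  # symmetric (n_channels, n_channels)
    np.fill_diagonal(conn, 0.0)            # no self-connections
    return conn

def critical_subnetwork(networks, keep_ratio=0.2):
    """Average the networks of one emotion class and drop weak edges.

    networks: list of (n_channels, n_channels) matrices with the same label.
    keep_ratio: fraction of strongest edges to retain (an assumed parameter).
    """
    mean_net = np.mean(networks, axis=0)
    upper = mean_net[np.triu_indices_from(mean_net, k=1)]
    thresh = np.quantile(upper, 1.0 - keep_ratio)
    return mean_net >= thresh  # boolean adjacency of emotion-relevant links

def network_features(conn, mask):
    """Strength, clustering coefficient, eigenvector centrality per channel."""
    conn = conn * mask                     # restrict to the critical subnetwork
    g = nx.from_numpy_array(conn)
    strength = conn.sum(axis=1)            # weighted degree of each node
    clustering = np.array(list(nx.clustering(g, weight="weight").values()))
    eigcent = np.array(list(
        nx.eigenvector_centrality_numpy(g, weight="weight").values()))
    return np.concatenate([strength, clustering, eigcent])

# Toy usage: 5 trials of 62-channel EEG, all carrying the same emotion label.
rng = np.random.default_rng(0)
trials = [rng.standard_normal((62, 1000)) for _ in range(5)]
nets = [build_network(t) for t in trials]
mask = critical_subnetwork(nets)
features = network_features(nets[0], mask)  # feature vector for the first trial
print(features.shape)                       # (186,) = 3 features x 62 channels
```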
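For the fusion step, the paper uses deep canonical correlation analysis (DCCA) to combine the two modalities. As a hedged stand-in, the snippet below swaps in plain linear CCA from scikit-learn to show the underlying idea: project the EEG network features and the eye movement features into a shared space where they are maximally correlated, then concatenate the projections for a downstream classifier. The feature dimensions and n_components are invented for illustration.

```python
# Linear CCA as an illustrative stand-in for the DCCA fusion in the paper.
from sklearn.cross_decomposition import CCA
import numpy as np

rng = np.random.default_rng(1)
eeg_feats = rng.standard_normal((200, 186))  # 200 trials of network features
eye_feats = rng.standard_normal((200, 33))   # eye-movement dims are assumed

cca = CCA(n_components=20)
eeg_proj, eye_proj = cca.fit_transform(eeg_feats, eye_feats)
fused = np.concatenate([eeg_proj, eye_proj], axis=1)  # input to a classifier
```

In the actual DCCA model, the two linear projections are replaced by modality-specific deep networks trained to maximize the same correlation objective.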
