Article

Fusing Frequency-Domain Features and Brain Connectivity Features for Cross-Subject Emotion Recognition

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIM.2022.3168927

Keywords

Feature extraction; Emotion recognition; Electroencephalography; Frequency-domain analysis; Support vector machines; Motion pictures; Kernel; Approximate empirical kernel map (AEKM); brain connectivity (BC) features; cross-subject emotion recognition; frequency-domain (FD) features; fused features

Funding

  1. Guangdong Basic and Applied Basic Research Foundation [2020A1515111154]
  2. Technology Development Project of Guangdong Province [2020ZDZX3018]
  3. Special Fund for Science and Technology of Guangdong Province [2020182]
  4. Educational Commission of Guangdong Province [2021KTSCX136]
  5. Hong Kong and Macao Joint Research Project [2019WGALH16]
  6. Jiangmen Science and Technology Project [2021030100050004285]
  7. European Union [031019]
  8. Science and Technology Development Fund, Macau [0045/2019/AFJ]
  9. Wuyi University [2019WGALH16]

Abstract

Frequency-domain (FD) features reveal the activated patterns of individual local brain regions responding to different emotions, whereas brain connectivity (BC) features involve the coordination of multiple brain regions for generating emotional responses; these two types of features are complementary to each other. To date, the fusion of these two types of features for electroencephalography (EEG)-based cross-subject emotion recognition remains to be fully investigated due to the intersubject variability in EEG signals. In this article, we first attempt to investigate these fused features for cross-subject emotion recognition from multiple perspectives, including critical frequency bands, complementary characteristics for each emotional state, critical channels, and crucial connections, using a fast and robust approximate empirical kernel map-fusion-based support vector machine (AEKM-Fusion-SVM) method. The experimental results on the SJTU emotion EEG dataset (SEED), BCI2020-A, and BCI2020-B datasets reveal that: 1) the AEKM-fusion method improves the effectiveness and efficiency of the fusion of features of different dimensions; 2) the recognition accuracy of the fused features outperforms each individual feature, and this outperformance is more significant in the high-frequency bands (i.e., the beta and gamma bands); 3) the fused features significantly enhance the classification performance for negative emotion; and 4) the fused features built with 27 selected channels achieve comparable performance to that of the fused features built with the full number of channels (i.e., 62 channels), allowing for easier establishment of brain-computer interface (BCI) systems in real-world scenarios. Our study enriches the research of emotion-related brain mechanisms and provides new insight into affective computing.
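The core of the AEKM-fusion step can be illustrated as follows. This is a minimal NumPy sketch, not the authors' implementation: it assumes a Nyström-style approximate empirical kernel map (phi(x) = K(x, L) K(L, L)^{-1/2} over a set of landmark points L), applies it separately to the two feature types so both maps share the landmark dimension, and concatenates the maps for a downstream linear SVM. The feature dimensions, gamma values, and landmark count are illustrative, not the paper's settings.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def aekm(X, landmarks, gamma):
    """Approximate empirical kernel map (Nystrom-style):
    phi(x) = K(x, L) @ K(L, L)^{-1/2}, so phi(x) @ phi(y).T ~ k(x, y)."""
    K_nm = rbf_kernel(X, landmarks, gamma)
    K_mm = rbf_kernel(landmarks, landmarks, gamma)
    # symmetric inverse square root of K_mm via eigendecomposition
    w, V = np.linalg.eigh(K_mm)
    w = np.maximum(w, 1e-10)  # clip tiny eigenvalues for stability
    K_mm_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return K_nm @ K_mm_inv_sqrt

rng = np.random.default_rng(0)
X_fd = rng.normal(size=(100, 310))   # hypothetical FD features (e.g., 62 channels x 5 bands)
X_bc = rng.normal(size=(100, 1891))  # hypothetical BC features (62 * 61 / 2 channel pairs)

m = 20  # landmark points per feature type; both maps get the same dimension m
phi_fd = aekm(X_fd, X_fd[:m], gamma=1.0 / X_fd.shape[1])
phi_bc = aekm(X_bc, X_bc[:m], gamma=1.0 / X_bc.shape[1])

# fusion: concatenate the two equal-dimensional kernel maps;
# the fused representation would then be fed to a linear SVM
X_fused = np.concatenate([phi_fd, phi_bc], axis=1)
print(X_fused.shape)  # (100, 40)
```

The point of the map is that features of very different raw dimensionalities (310 vs. 1891 above) are projected into spaces of equal, small dimension before fusion, so neither feature type dominates the concatenation and the downstream SVM stays cheap to train.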

