4.7 Article

Joint Feature Adaptation and Graph Adaptive Label Propagation for Cross-Subject Emotion Recognition From EEG Signals

Journal

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 13, Issue 4, Pages 1941-1958

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAFFC.2022.3189222

Keywords

Electroencephalogram (EEG); emotion recognition; feature adaptation; graph learning; label propagation

Funding

  1. National Natural Science Foundation of China [61971173, U20B2074]
  2. National Key Research and Development Program of China [2017YFE0116800]
  3. Fundamental Research Funds for the Provincial Universities of Zhejiang [GK209907299001-008]
  4. Natural Science Foundation of Zhejiang Province [LY21F030005]
  5. China Postdoctoral Science Foundation [2017M620470]
  6. CAAC Key Laboratory of Flight Techniques and Flight Safety [FZ2021KF16]
  7. Guangxi Key Laboratory of Optoelectronic Information Processing (Guilin University of Electronic Technology) [GD21202]


In this paper, a joint feature adaptation and graph adaptive label propagation model (JAGP) is proposed for cross-subject emotion recognition from EEG signals. By unifying the previously separate components of domain-invariant feature learning, emotional state estimation, and optimal graph learning into a single objective, the model greatly improves recognition performance and can automatically identify the critical frequency bands and brain regions.
Although the electroencephalogram (EEG) can objectively reflect human emotional states, its weak, non-stationary, and low signal-to-noise-ratio properties easily give rise to individual differences. To enhance the universality of affective brain-computer interface systems, transfer learning has been widely used to alleviate data distribution discrepancies among subjects. However, most existing approaches focus mainly on domain-invariant feature learning, which is not unified with the recognition process. In this paper, we propose a joint feature adaptation and graph adaptive label propagation model (JAGP) for cross-subject emotion recognition from EEG signals, which seamlessly unifies the three components of domain-invariant feature learning, emotional state estimation, and optimal graph learning into a single objective. We conduct extensive experiments on the two benchmark data sets SEED_IV and SEED_V, and the results reveal that 1) the recognition performance is greatly improved, indicating the effectiveness of the triple unification; 2) the emotion metric of EEG samples is gradually optimized during model training, showing the necessity of optimal graph learning; and 3) a projection matrix-induced feature importance is obtained, based on which the critical frequency bands and brain regions corresponding to subject-invariant features can be automatically identified, demonstrating the superiority of the learned shared subspace.
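The record does not include code, so the following is only a minimal sketch of the label-propagation idea underlying the "emotional state estimation" component: labelled source-subject samples and unlabelled target-subject samples share a similarity graph, and labels diffuse over the symmetrically normalized graph via the classic update F ← αSF + (1 − α)Y0. The RBF graph construction, function names, and parameter values here are illustrative assumptions, not the JAGP formulation, which additionally learns the projection (shared subspace) and the graph jointly with the labels.

```python
import numpy as np

def rbf_graph(X, sigma=1.0):
    """Dense RBF similarity graph over the rows of X (hypothetical helper)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T      # squared distances
    W = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                              # no self-loops
    return W

def propagate_labels(W, Y_source, n_target, alpha=0.99, n_iter=100):
    """Classic graph label propagation on a fixed graph W:
    F <- alpha * S @ F + (1 - alpha) * Y0, with S = D^{-1/2} W D^{-1/2}."""
    n_source, n_classes = Y_source.shape
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    Y0 = np.zeros((n_source + n_target, n_classes))
    Y0[:n_source] = Y_source            # clamp one-hot source-subject labels
    F = Y0.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1.0 - alpha) * Y0
    return F[n_source:].argmax(axis=1)  # predicted emotions for the target subject

# Toy usage: 3 emotion classes, random features standing in for EEG features
# (e.g., differential entropy) projected into a shared subspace.
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(size=(60, 10)), rng.normal(size=(20, 10))
Ys = np.eye(3)[rng.integers(0, 3, size=60)]
W = rbf_graph(np.vstack([Xs, Xt]), sigma=2.0)
pred = propagate_labels(W, Ys, n_target=20)
print(pred.shape)  # (20,)
```

The full JAGP model also yields a projection matrix whose induced feature importance, as stated in the abstract, is used to rank frequency bands and brain regions; that joint optimization is not reproduced in this sketch.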
