Article

MI-CAT: A transformer-based domain adaptation network for motor imagery classification

Journal

NEURAL NETWORKS
Volume 165, Pages 451-462

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2023.06.005

Keywords

Electroencephalograph (EEG); Transformer; Domain adaptation; Motor imagery (MI); Brain-computer interfaces (BCIs)

Abstract

Due to its convenience and safety, electroencephalography (EEG) is one of the most widely used signals in motor imagery (MI) brain-computer interfaces (BCIs). In recent years, deep learning methods have been widely applied to BCIs, and some studies have begun to apply the Transformer to EEG signal decoding because of its strong ability to capture global information. However, EEG signals vary from subject to subject, and how to effectively use data from other subjects (source domain) to improve the classification performance of a single subject (target domain) with a Transformer remains a challenge. To fill this gap, we propose a novel architecture called MI-CAT. The architecture utilizes the Transformer's self-attention and cross-attention mechanisms to let features from different domains interact and thereby resolve the distribution differences between them. Specifically, we adopt a patch embedding layer that divides the extracted source and target features into multiple patches. Then, we attend to both intra-domain and inter-domain features by stacking multiple Cross-Transformer Blocks (CTBs), which adaptively conduct bidirectional knowledge transfer and information exchange between domains. Furthermore, we utilize two non-shared domain-based attention blocks to efficiently capture domain-dependent information, optimizing the features extracted from the source and target domains to assist in feature alignment. To evaluate our method, we conduct extensive experiments on two real public EEG datasets, Dataset IIb and Dataset IIa, achieving competitive performance with average classification accuracies of 85.26% and 76.81%, respectively. Experimental results demonstrate that our method is a powerful model for decoding EEG signals and facilitates the development of the Transformer for BCIs. © 2023 Elsevier Ltd. All rights reserved.
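As an illustration of the cross-interaction idea described in the abstract, below is a minimal PyTorch sketch in which each domain's patch features attend to themselves (intra-domain self-attention) and to the other domain (inter-domain cross-attention). The class, layer layout, and parameter names (CrossTransformerBlock, d_model, n_heads) are assumptions for illustration only; the abstract does not specify the paper's exact block design or the non-shared domain-based attention blocks.

import torch
import torch.nn as nn

class CrossTransformerBlock(nn.Module):
    """Hypothetical sketch of a CTB: per-domain self-attention plus
    bidirectional cross-attention between source and target features."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.self_attn_src = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.self_attn_tgt = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn_src = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn_tgt = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_src = nn.LayerNorm(d_model)
        self.norm_tgt = nn.LayerNorm(d_model)

    def forward(self, src, tgt):
        # Intra-domain attention: each domain refines its own patch features.
        src = self.norm_src(src + self.self_attn_src(src, src, src)[0])
        tgt = self.norm_tgt(tgt + self.self_attn_tgt(tgt, tgt, tgt)[0])
        # Inter-domain attention: source queries target and vice versa,
        # enabling bidirectional information exchange between the domains.
        src_out = src + self.cross_attn_src(src, tgt, tgt)[0]
        tgt_out = tgt + self.cross_attn_tgt(tgt, src, src)[0]
        return src_out, tgt_out

# Usage with dummy patch embeddings of shape (batch, num_patches, d_model).
src = torch.randn(8, 16, 64)
tgt = torch.randn(8, 16, 64)
src_out, tgt_out = CrossTransformerBlock()(src, tgt)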
