Article

Adaptive transfer learning for EEG motor imagery classification with deep Convolutional Neural Network

Journal

NEURAL NETWORKS
Volume 136, Pages 1-10

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2020.12.013

Keywords

Transfer learning; Brain-computer interface (BCI); Electroencephalography (EEG); Convolutional Neural Network (CNN)

Funding

  1. RIE2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund, Singapore [A20G8b0102]
  2. Institute of Information & Communications Technology Planning & Evaluation (IITP) - Korea government [2017-0-00451, 2019-0-00079]
  3. National Research Foundation of Korea [5199991014614] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)


In this paper, five adaptation schemes for deep convolutional neural networks are proposed to improve the performance of Brain-Computer Interface systems. The proposed schemes achieve the highest subject-independent performance in decoding hand motor imagery, with a significant improvement over baseline models, showing that transfer learning can yield better decoding accuracy.
In recent years, deep learning has emerged as a powerful tool for developing Brain-Computer Interface (BCI) systems. However, for deep learning models trained entirely on data from a specific individual, performance gains have been only marginal owing to the limited availability of subject-specific data. To overcome this, many transfer-based approaches have been proposed, in which deep networks are trained using pre-existing data from other subjects and evaluated on new target subjects. This mode of transfer learning, however, faces the challenge of substantial inter-subject variability in brain data. Addressing this, in this paper we propose five schemes for adaptation of a deep convolutional neural network (CNN)-based electroencephalography (EEG)-BCI system for decoding hand motor imagery (MI). Each scheme fine-tunes an extensively trained, pre-trained model and adapts it to enhance the evaluation performance on a target subject. We report the highest subject-independent performance, with an average (N = 54) accuracy of 84.19% (+/- 9.98%) for two-class motor imagery, compared with the best previously reported accuracy on this dataset of 74.15% (+/- 15.83%). Further, we obtain a statistically significant improvement (p = 0.005) in classification using the proposed adaptation schemes compared to the baseline subject-independent model. (C) 2020 Elsevier Ltd. All rights reserved.
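The core idea the abstract describes, keeping a source-trained network and fine-tuning part of it on a new target subject's few labelled trials, can be sketched in miniature. The following is a hypothetical illustration only (it is not one of the paper's five schemes): a fixed random projection stands in for the frozen convolutional backbone, synthetic two-class data stands in for the target subject's EEG trials, and only a logistic-regression head is re-trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen backbone of a source-trained CNN: a fixed random
# projection from 64 EEG features to a 16-dim latent space. (Toy model;
# the paper's actual networks are deep CNNs trained on other subjects.)
W_frozen = rng.normal(size=(64, 16)) / 8.0  # scaled to keep tanh unsaturated

def extract(X):
    """Frozen feature extractor; its weights are never updated."""
    return np.tanh(X @ W_frozen)

# Synthetic "target subject" trials: two motor-imagery classes whose
# feature means differ, mimicking class-discriminative EEG features.
n = 100
X = rng.normal(size=(n, 64))
y = (rng.random(n) < 0.5).astype(float)
X[y == 1] += 0.5  # class-dependent mean shift

# Adaptation step: re-train only the classifier head (logistic regression)
# on the target subject's labelled trials, leaving the backbone frozen.
Z = extract(X)
w, b = np.zeros(16), 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # sigmoid predictions
    w -= lr * (Z.T @ (p - y)) / n           # logistic-loss gradient step
    b -= lr * np.mean(p - y)

acc = float(np.mean(((Z @ w + b) > 0) == (y == 1)))
print(f"target-subject accuracy after head-only adaptation: {acc:.2f}")
```

In a full pipeline, the frozen projection would be the convolutional layers of the pre-trained model, and the adaptation schemes would differ in which layers are unfrozen and fine-tuned on the target subject.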

