Journal
NEURAL NETWORKS
Volume 136, Issue -, Pages 1-10
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2020.12.013
Keywords
Transfer learning; Brain-computer interface (BCI); Electroencephalography (EEG); Convolutional Neural Network (CNN)
Funding
- RIE2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund, Singapore [A20G8b0102]
- Institute of Information & Communications Technology Planning & Evaluation (IITP) - Korea government [2017-0-00451, 2019-0-00079]
- National Research Foundation of Korea [5199991014614] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)
In this paper, five adaptation schemes for deep convolutional neural networks are proposed to improve the performance of brain-computer interface systems. The schemes achieve the highest subject-independent performance reported for decoding hand motor imagery on this dataset, with a statistically significant improvement over baseline models, demonstrating that transfer learning can substantially raise decoding accuracy.
In recent years, deep learning has emerged as a powerful tool for developing Brain-Computer Interface (BCI) systems. However, for deep learning models trained entirely on the data from a specific individual, the performance increase has only been marginal owing to the limited availability of subject-specific data. To overcome this, many transfer-based approaches have been proposed, in which deep networks are trained using pre-existing data from other subjects and evaluated on new target subjects. This mode of transfer learning, however, faces the challenge of substantial inter-subject variability in brain data. Addressing this, in this paper, we propose five schemes for adaptation of a deep convolutional neural network (CNN) based electroencephalography (EEG)-BCI system for decoding hand motor imagery (MI). Each scheme fine-tunes an extensively trained, pre-trained model and adapts it to enhance the evaluation performance on a target subject. We report the highest subject-independent performance, with an average (N = 54) accuracy of 84.19% (+/- 9.98%) for two-class motor imagery, while the best accuracy previously reported on this dataset in the literature is 74.15% (+/- 15.83%). Further, we obtain a statistically significant improvement (p = 0.005) in classification using the proposed adaptation schemes compared to the baseline subject-independent model. (C) 2020 Elsevier Ltd. All rights reserved.
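The adaptation described in the abstract — fine-tuning a pre-trained, subject-independent CNN on a target subject's small calibration set — can be sketched as follows. This is a minimal illustration, not the paper's actual architecture or any of its five specific schemes: the network layout, the channel/sample counts (62 channels, 400 samples), and the freeze-the-feature-extractor scheme shown here are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class EEGCNN(nn.Module):
    """Hypothetical compact CNN for two-class EEG motor-imagery decoding.
    Input shape: (batch, 1, n_channels, n_samples)."""
    def __init__(self, n_channels=62, n_samples=400, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 25)),           # temporal filtering
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),  # spatial filtering
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 15)),
        )
        # Infer the flattened feature size with a dummy pass.
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def adapt_to_subject(model, lr=1e-4):
    """One possible adaptation scheme: freeze the convolutional feature
    extractor learned from other subjects and fine-tune only the final
    classifier on the target subject's calibration data."""
    for p in model.features.parameters():
        p.requires_grad = False
    return torch.optim.Adam(model.classifier.parameters(), lr=lr)
```

A fine-tuning loop would then load the pre-trained weights, call `adapt_to_subject`, and run a few epochs of cross-entropy training on the target subject's trials; other schemes could instead unfreeze deeper portions of the network or use a smaller learning rate for the frozen-then-released layers.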