Article

Standardization-refinement domain adaptation method for cross-subject EEG-based classification in imagined speech recognition

Journal

PATTERN RECOGNITION LETTERS
Volume 141, Pages 54-60

Publisher

ELSEVIER
DOI: 10.1016/j.patrec.2020.11.013

Keywords

Unsupervised domain adaptation; EEG; Classification; Imagined speech; Deep learning

Funding

  1. National Council of Science and Technology in Mexico (CONACYT) [554740]


The study introduces a novel domain adaptation method for imagined speech classification based on EEG signals, named SRDA, which combines AdaBN with a new loss function based on the variation of information (VOI). Applied to two datasets, SRDA outperforms standard classifiers and existing D-UDA methods for imagined speech recognition, achieving higher classification accuracy.
Recent advances in imagined speech recognition from EEG signals have shown their capability of enabling a new natural form of communication, which is poised to improve the lives of subjects with motor disabilities. However, differences among subjects may be an obstacle to the applicability of a previously trained classifier to new users, since a significant amount of labeled samples must be acquired for each new user, making this process tedious and time-consuming. In this sense, unsupervised domain adaptation (UDA) methods, especially those based on deep learning (D-UDA), arise as a potential solution to this issue by reducing the differences among the feature distributions of subjects. It has been shown that the divergence in both the marginal and conditional distributions must be reduced to encourage similar feature distributions. However, current D-UDA methods may become sensitive under adaptation scenarios where the feature space has low discriminability among classes, reducing the accuracy of the classifier. To address this issue, we introduce a D-UDA method, named Standardization-Refinement Domain Adaptation (SRDA), which combines Adaptive Batch Normalization (AdaBN) with a novel loss function based on the variation of information (VOI), in order to build an adaptive classifier on EEG data corresponding to imagined speech. Applied to two imagined speech datasets, SRDA outperformed standard classifiers for BCI and existing D-UDA methods, achieving accuracies of 61.02 ± 8.14% and 62.99 ± 4.78%, assessed using leave-one-out cross-validation. © 2020 Elsevier B.V. All rights reserved.
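The abstract's standardization step builds on AdaBN, which swaps a trained network's source-domain batch-normalization statistics for statistics computed on the target subject's data, while keeping the learned affine parameters fixed. A minimal sketch of that idea (a generic AdaBN re-normalization, not the authors' exact SRDA implementation; the function name and shapes are illustrative assumptions):

```python
import numpy as np

def adabn_normalize(target_features, gamma, beta, eps=1e-5):
    """AdaBN sketch: normalize target-domain features with their OWN
    mean/variance (instead of source-domain running statistics),
    keeping the source-learned affine parameters gamma/beta fixed.

    target_features: (n_samples, n_features) layer activations from
    the target subject; gamma, beta: per-feature affine parameters
    learned during training on the source subjects.
    """
    mu = target_features.mean(axis=0)   # target-domain per-feature mean
    var = target_features.var(axis=0)   # target-domain per-feature variance
    x_hat = (target_features - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Hypothetical usage: re-standardize features from a new (shifted) subject.
rng = np.random.default_rng(0)
target = rng.normal(loc=3.0, scale=2.0, size=(128, 8))  # domain-shifted data
gamma, beta = np.ones(8), np.zeros(8)                   # identity affine
out = adabn_normalize(target, gamma, beta)
# After AdaBN, each feature is approximately zero-mean, unit-variance,
# matching the statistics the downstream layers saw during training.
```

The design point is that no target labels are needed: only first- and second-order feature statistics are estimated, which is what makes the standardization step unsupervised.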

