Article

Multimodal Multitask Neural Network for Motor Imagery Classification With EEG and fNIRS Signals

Journal

IEEE SENSORS JOURNAL
Volume 22, Issue 21, Pages 20695-20706

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/JSEN.2022.3205956

Keywords

Electroencephalography; Functional near-infrared spectroscopy; Feature extraction; Multitasking; Task analysis; Brain modeling; Spatial resolution; Brain-computer interface (BCI); motor imagery (MI); multimodal; multitask learning (MTL)

Funding

  1. National Natural Science Foundation of China [U20A20192, 62076216]
  2. Hebei Innovation Capability Improvement Plan Project [22567619H]


Abstract

This research proposes a multimodal neural network model that combines EEG and fNIRS signals and uses multitask learning to improve the recognition rate and generalization ability of motor imagery. Experimental results show that this method outperforms other methods in classification accuracy on a public dataset.

Brain-computer interfaces (BCIs) based on motor imagery (MI) can control external applications by decoding brain physiological signals such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Traditional unimodal MI decoding methods cannot achieve satisfactory classification performance because of the limited representational ability of EEG or fNIRS signals alone. Different brain signals are typically complementary, with different sensitivities to different MI patterns. To improve the recognition rate and generalization ability of MI, we propose a novel end-to-end multimodal multitask neural network (M2NN) model that fuses EEG and fNIRS signals. The M2NN method integrates a spatial-temporal feature extraction module, a multimodal feature fusion module, and a multitask learning (MTL) module. Specifically, the MTL module comprises two learning tasks: a main classification task for MI and an auxiliary task based on deep metric learning. The approach was evaluated on a public multimodal dataset; experimental results show that M2NN improved classification accuracy by 8.92%, 6.97%, and 8.62% over the multitask unimodal EEG signal model (MEEG), the multitask unimodal HbR signal model (MHbR), and the multimodal single-task model (MDNN), respectively. The classification accuracies of the multitask methods MEEG, MHbR, and M2NN improved by 4.8%, 4.37%, and 8.62% over the single-task methods EEG, HbR, and MDNN, respectively. M2NN achieved the best classification performance of the six methods, with an average accuracy across 29 subjects of 82.11% ± 7.25%. These results verify the effectiveness of multimodal fusion and MTL; M2NN outperforms both baseline and state-of-the-art (SOTA) methods.
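The abstract's multitask objective (a main MI classification task plus an auxiliary deep-metric-learning task on fused EEG/fNIRS features) can be sketched minimally in NumPy. This is an illustrative sketch only: the concatenation fusion, the contrastive margin loss, and the trade-off weight `lam` are assumptions standing in for the paper's actual modules, and the features are random placeholders rather than real EEG/HbR embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial embeddings produced by separate EEG and fNIRS
# feature extractors; here fused by simple concatenation (one possible
# fusion strategy, not necessarily the paper's fusion module).
n_trials, d_eeg, d_fnirs, n_classes = 8, 16, 8, 2
eeg_feat = rng.standard_normal((n_trials, d_eeg))
fnirs_feat = rng.standard_normal((n_trials, d_fnirs))
fused = np.concatenate([eeg_feat, fnirs_feat], axis=1)  # shape (8, 24)

labels = rng.integers(0, n_classes, size=n_trials)

# Main task: softmax cross-entropy classification on the fused features.
W = rng.standard_normal((fused.shape[1], n_classes)) * 0.1
logits = fused @ W
logits -= logits.max(axis=1, keepdims=True)  # numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
ce_loss = -np.log(probs[np.arange(n_trials), labels]).mean()

# Auxiliary task: a simple contrastive metric loss that pulls same-class
# embeddings together and pushes different-class pairs beyond a margin.
def contrastive_loss(x, y, margin=1.0):
    total, pairs = 0.0, 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            d = np.linalg.norm(x[i] - x[j])
            total += d ** 2 if y[i] == y[j] else max(0.0, margin - d) ** 2
            pairs += 1
    return total / pairs

metric_loss = contrastive_loss(fused, labels)

# Multitask objective: weighted sum of main and auxiliary losses.
# (lam is a hypothetical trade-off weight, not a value from the paper.)
lam = 0.5
total_loss = ce_loss + lam * metric_loss
```

In an actual end-to-end model both losses would backpropagate into shared extraction and fusion layers, which is the mechanism MTL relies on to regularize the main MI classifier.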


