Article

Cross-subject and cross-experimental classification of mental fatigue based on two-stream self-attention network

Journal

BIOMEDICAL SIGNAL PROCESSING AND CONTROL
Volume 88

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.bspc.2023.105638

Keywords

Electroencephalography; Mental fatigue; Deep transfer learning; Attention mechanism

Abstract

Mental fatigue detection based on electroencephalography (EEG) is an objective and effective approach. However, inter-individual variability and differences between mental fatigue experimental paradigms limit how well classification models generalize across subjects and experiments. This paper proposes a Spatio-Temporal Transformer (STTransformer) architecture based on a two-stream attention network. Using datasets drawn from three different mental fatigue experimental tasks and from multiple individuals, STTransformer performs cross-task and cross-subject mental fatigue transfer learning and achieves promising results. The architecture follows a model-transfer strategy: deep network parameters are pre-trained in the source domain to acquire prior knowledge, part of the network is then frozen, and the model is transferred to a target domain containing similar samples for fine-tuning. By using multiple attention mechanisms to capture features shared across individuals and experimental paradigms, the architecture achieves good transfer performance across multiple individuals and in two mental fatigue experiments. We also use the attention mechanism to visualize part of the feature maps, revealing two characteristics of mental fatigue and providing a step toward deep-learning interpretability.
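The pre-train/freeze/fine-tune recipe described in the abstract can be illustrated with a minimal NumPy sketch. All names, shapes, and the toy linear head below are illustrative assumptions, not the paper's actual STTransformer: the point is only that gradients update the task head while the "pre-trained" feature weights stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" weights: W_feat stands in for the frozen attention/feature
# layers learned on the source-domain task; W_head is the task head that
# is fine-tuned on the target domain. Shapes are arbitrary toy values.
W_feat = rng.normal(size=(8, 4))   # frozen after pre-training
W_head = rng.normal(size=(4, 2))   # fine-tuned on target-domain data

# One fine-tuning step on a batch of target-domain samples: the gradient
# is applied only to the head, so the pre-trained features stay fixed.
x = rng.normal(size=(16, 8))
y = rng.integers(0, 2, size=16)

h = np.tanh(x @ W_feat)                 # frozen feature extraction
logits = h @ W_head
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
grad_logits = probs.copy()
grad_logits[np.arange(16), y] -= 1.0    # softmax cross-entropy gradient
grad_head = h.T @ grad_logits / 16

W_feat_before = W_feat.copy()
W_head_before = W_head.copy()
W_head -= 0.1 * grad_head               # update the head only

assert np.allclose(W_feat, W_feat_before)   # frozen layers untouched
```

In a deep-learning framework the same effect is typically achieved by disabling gradient tracking on the frozen layers before fine-tuning on the target domain.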
