Article

Multi-View Spatial-Temporal Graph Convolutional Networks With Domain Generalization for Sleep Stage Classification

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNSRE.2021.3110665

Keywords

Sleep; Brain modeling; Convolution; Feature extraction; Convolutional neural networks; Adaptation models; Transfer learning; Sleep stage classification; spatial-temporal graph convolution; domain generalization

Funding

  1. National Natural Science Foundation of China [61603029]
  2. Swarma-Kaifeng Workshop - Swarma Club
  3. Kaifeng Foundation
  4. NIH [R01EB030362]

Abstract

Sleep stage classification is crucial for assessing sleep quality and diagnosing diseases, but challenges remain in effectively utilizing brain signals, handling individual differences, and ensuring the interpretability of deep learning methods. The proposed MSTGCN model addresses these challenges by combining spatial-temporal feature extraction, an attention mechanism, and domain generalization, outperforming previous baselines on two public datasets.
Sleep stage classification is essential for sleep assessment and disease diagnosis. Although previous attempts to classify sleep stages have achieved high classification performance, several challenges remain open: 1) How to effectively utilize time-varying spatial and temporal features from multi-channel brain signals remains challenging; prior works have not fully exploited the spatial topological information among brain regions. 2) Because biological signals vary considerably across individuals, overcoming inter-subject differences and improving the generalization of deep neural networks is important. 3) Most deep learning methods ignore the interpretability of the model with respect to the brain. To address these challenges, we propose a multi-view spatial-temporal graph convolutional network (MSTGCN) with domain generalization for sleep stage classification. Specifically, we construct two brain-view graphs for MSTGCN based on the functional connectivity and the physical-distance proximity of the brain regions. The MSTGCN consists of graph convolutions for extracting spatial features and temporal convolutions for capturing the transition rules among sleep stages. In addition, an attention mechanism is employed to capture the spatial-temporal information most relevant to sleep stage classification. Finally, domain generalization and MSTGCN are integrated into a unified framework to extract subject-invariant sleep features. Experiments on two public datasets demonstrate that the proposed model outperforms state-of-the-art baselines.
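The two brain-view graphs described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's exact formulation: it assumes the functional-connectivity view is derived from pairwise correlation of channel time series and the physical-distance view from a Gaussian kernel over electrode coordinates; the function name, the use of absolute Pearson correlation, and the kernel bandwidth `sigma` are all illustrative assumptions.

```python
import numpy as np

def build_view_graphs(signals, coords, sigma=1.0):
    """Sketch of the two brain-view adjacency matrices.

    signals: (channels, time) array of multi-channel brain signals.
    coords:  (channels, 3) array of electrode positions.
    Returns (A_func, A_dist), both (channels, channels), symmetric,
    with self-loops removed.
    """
    n = signals.shape[0]

    # Functional-connectivity view: absolute Pearson correlation
    # between channel time series (an assumed connectivity measure).
    corr = np.abs(np.corrcoef(signals))
    a_func = corr - np.eye(n)  # zero out the diagonal

    # Physical-distance view: nearer electrode pairs get larger
    # edge weights via a Gaussian kernel on Euclidean distance.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    a_dist = np.exp(-(d ** 2) / (2 * sigma ** 2)) - np.eye(n)

    return a_func, a_dist
```

Each adjacency matrix would then drive a separate graph-convolution branch, with the temporal convolutions and attention applied on top, as the abstract outlines.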

