Journal
IEEE SENSORS JOURNAL
Volume 22, Issue 4, Pages 3464-3471
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSEN.2022.3140383
Keywords
Sleep; Electroencephalography; Feature extraction; Electrooculography; Brain modeling; Physiology; Adversarial machine learning; Multi-modal physiological signals; electroencephalography (EEG); electrooculogram (EOG); squeeze-and-excitation network; sleep stage classification
Funding
- Fundamental Research Funds for the Central Universities [2020YJS025]
- China Scholarship Council [202007090056]
This study proposes a sleep staging method based on multi-modal physiological signals, which captures the features of electroencephalogram (EEG) and electrooculogram (EOG) and extracts subject-invariant sleep features through adaptive utilization of multi-modal signals and domain adversarial learning. Experimental results demonstrate that this method outperforms baseline models in sleep staging tasks.
Sleep staging is the basis of sleep medicine for diagnosing psychiatric and neurodegenerative diseases. However, existing sleep staging methods ignore the fact that multi-modal physiological signals are heterogeneous, and that different modalities contribute to sleep staging with distinct impacts on specific stages. Therefore, how to model the heterogeneity of multi-modal signals and adaptively utilize them for sleep staging remains challenging. Moreover, existing methods suffer from the individual variance of physiological signals, so generalizing a sleep staging model across subjects is also challenging. To address these challenges, we propose the multi-modal physiological signals based Squeeze-and-Excitation Network with Domain Adversarial Learning (SEN-DAL), which captures features of the electroencephalogram (EEG) and electrooculogram (EOG) for sleep staging. SEN-DAL consists of two independent feature extraction networks for modeling the heterogeneity, a Multi-modal Squeeze-and-Excitation feature fusion module for adaptively utilizing the multi-modal signals, and a Domain Adversarial Learning module for extracting subject-invariant sleep features. Experiments demonstrate that SEN-DAL outperforms the baseline models on a public sleep staging dataset, reaching an F1-Score of 82.1%, which is comparable to human experts. Through ablation experiments, we found that the proposed mechanisms, including modality-independent feature extraction, adaptive utilization of multi-modal signals, and domain adversarial learning, are all effective for sleep staging. The code of SEN-DAL is available at https://github.com/xiyangcai/SEN-DAL.
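To make the fusion idea concrete, the sketch below illustrates how a squeeze-and-excitation style gate can re-weight per-modality feature vectors (e.g. one for EEG, one for EOG) by a learned importance score. This is a minimal numpy illustration of the general SE mechanism, not the authors' SEN-DAL implementation: the function name `se_fusion`, the weight shapes, and the reduction ratio are all illustrative assumptions; see the linked repository for the actual model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_fusion(features, w1, w2):
    """Squeeze-and-Excitation gating over modality channels.

    features: array of shape (num_modalities, feature_dim),
              e.g. row 0 = EEG features, row 1 = EOG features.
    w1, w2:   bottleneck weights of the excitation MLP (illustrative shapes).
    """
    # Squeeze: summarize each modality with a global average
    z = features.mean(axis=1)                    # (num_modalities,)
    # Excitation: bottleneck MLP + sigmoid yields a gate in (0, 1) per modality
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))    # (num_modalities,)
    # Re-weight each modality by its learned importance before fusion
    return features * s[:, None]

rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 8))   # toy EEG and EOG feature vectors
w1 = rng.standard_normal((1, 2))      # reduction to a 1-d bottleneck
w2 = rng.standard_normal((2, 1))      # expansion back to 2 modality gates
fused = se_fusion(feats, w1, w2)
print(fused.shape)                    # (2, 8): same shape, gated per modality
```

Because the sigmoid gate lies in (0, 1), each modality's features are attenuated in proportion to its (learned) relevance, which is the sense in which the network "adaptively utilizes" the multi-modal signals; the domain adversarial component (a gradient-reversal objective) would additionally push these features to be indistinguishable across subjects.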