Article

Competition convolutional neural network for sleep stage classification

Journal

BIOMEDICAL SIGNAL PROCESSING AND CONTROL
Volume 64

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.bspc.2020.102318

Keywords

Electroencephalography; Sleep stage; Convolutional neural network; Unsupervised learning; Competitive learning

Funding

  1. National Science Foundation of China [61973177]
  2. Science and Technology Project in Henan Province [202102210127]
  3. Henan International Joint Laboratory of Behavior Optimization Control for Smart Robots [[2018]19]


A new unsupervised competition convolutional neural network (C-CNN) model is proposed in this study and achieves better performance than baseline models in sleep stage classification. By combining convolution, competition, and pooling operators, the C-CNN model can effectively extract features from EEG signals and learn the distribution of the input samples.
Although convolutional neural networks (CNNs) have become very popular and have been applied to the sleep stage classification problem, almost all existing studies on sleep stage classification require large amounts of labeled data. Obtaining labeled data is a subjective and difficult task; moreover, because of their different knowledge backgrounds, different experts may assign different sleep stage labels to the same recordings. Therefore, a new unsupervised competition convolutional neural network (C-CNN) is proposed in this study. It consists of alternating layers containing a convolution operator, a competitive operator, and a pooling operator. The convolution operator extracts features from the EEG signals, and the competitive layer iteratively adjusts the weight vectors of the winning neurons according to competitive learning rules. In this way, the learned weight vectors come to reflect the distribution of the input samples. To evaluate the C-CNN model, two common datasets (UCD and Sleep-EDF) are used. The proposed model obtains a classification performance of 77.2% and 83.4% on the UCD and Sleep-EDF datasets, respectively, outperforming the baseline models by 4.3% and 9.47%. This work provides avenues for further studies of unsupervised deep learning models.
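The abstract does not give implementation details, but the winner-take-all weight update it refers to is standard competitive learning. The following minimal NumPy sketch illustrates that general rule, assuming unit-normalized feature vectors coming out of the convolution and pooling stages; the layer sizes, learning rate, and normalization below are illustrative assumptions, not the paper's actual C-CNN configuration.

import numpy as np

rng = np.random.default_rng(0)

n_features = 64   # assumed length of a feature vector from the conv/pooling stage
n_neurons = 5     # assumed number of competitive neurons (e.g., one per sleep stage)
lr = 0.05         # assumed learning rate

# Weight vectors of the competitive layer, one per neuron, unit-normalized.
W = rng.normal(size=(n_neurons, n_features))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def competitive_update(W, x, lr):
    """One winner-take-all step: the neuron closest to x moves toward x."""
    winner = np.argmin(np.linalg.norm(W - x, axis=1))   # competition
    W[winner] += lr * (x - W[winner])                   # pull the winner toward the sample
    W[winner] /= np.linalg.norm(W[winner])              # keep the weight vector normalized
    return winner

# Toy training loop over random stand-ins for EEG feature vectors.
for _ in range(1000):
    x = rng.normal(size=n_features)
    x /= np.linalg.norm(x)
    competitive_update(W, x, lr)

# After many updates, the weight vectors approximate the distribution of the
# inputs, which is the property the abstract attributes to the competitive layer.

Normalizing both the samples and the weight vectors is a common design choice in competitive learning because it makes the competition depend on direction (similarity) rather than magnitude; whether the paper applies the same normalization is not stated in the abstract.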
