Article

SparseDGCNN: Recognizing Emotion From Multichannel EEG Signals

Journal

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 14, Issue 1, Pages 537-548

Publisher

IEEE
DOI: 10.1109/TAFFC.2021.3051332

Keywords

Electroencephalography; Electrodes; Brain modeling; Emotion recognition; Feature extraction; Physiology; Convolution; multichannel EEG signals; graph convolutional neural network; sparse constraints


In this article, a sparse DGCNN model is proposed that improves emotion recognition performance by imposing a sparseness constraint on the graph G. The motivation is that different brain regions may serve different functions, so the functional relations among electrodes are likely highly localized and sparse. Experiments show that sparse DGCNN achieves consistently better accuracy than representative methods and scales well.
Emotion recognition from EEG signals has attracted much attention in affective computing. Recently, a novel dynamic graph convolutional neural network (DGCNN) model was proposed that simultaneously optimizes the network parameters and a weighted graph G characterizing the strength of the functional relation between each pair of electrodes in the EEG recording equipment. In this article, we propose a sparse DGCNN model that modifies DGCNN by imposing a sparseness constraint on G, improving emotion recognition performance. Our work is based on an important observation: tomography studies reveal that different brain regions sampled by EEG electrodes may be related to different functions of the brain, so the functional relations among electrodes are likely highly localized and sparse. However, introducing the sparseness constraint into the graph G makes the loss function of sparse DGCNN non-differentiable at some singular points. To ensure that the training process of sparse DGCNN converges, we apply the forward-backward splitting method. To evaluate the performance of sparse DGCNN, we compare it with four representative recognition methods (SVM, DBN, GELM, and DGCNN). In addition to comparing recognition methods, our experiments also compare different features and spectral bands, including time-frequency-domain EEG features (DE, PSD, DASM, RASM, ASM, and DCAU on different bands) extracted from four representative EEG datasets (SEED, DEAP, DREAMER, and CMEED). The results show that (1) sparse DGCNN achieves consistently better accuracy than the representative methods and scales well, and (2) DE, PSD, and ASM features on the γ band convey the most discriminative emotional information, and fusing separate features and frequency bands can further improve recognition performance.
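The forward-backward splitting method mentioned above can be sketched as a proximal-gradient update on the graph weights: a standard gradient step on the smooth part of the loss, followed by the proximal operator of the non-smooth sparseness term, which remains well-defined at the singular points where the loss is non-differentiable. The sketch below assumes an L1 sparseness penalty and illustrative function names; it is not the paper's exact formulation.

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of t * ||.||_1: shrinks entries toward zero,
    setting entries with magnitude below t exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def forward_backward_step(W, grad_W, lr, lam):
    """One forward-backward update on the graph weight matrix W.

    Forward step: gradient descent on the smooth part of the loss.
    Backward step: proximal mapping of the L1 sparseness term (assumed
    penalty), which handles the non-differentiable points of the loss.
    """
    W_forward = W - lr * grad_W                 # forward (gradient) step
    return soft_threshold(W_forward, lr * lam)  # backward (proximal) step

# Toy illustration on a random 5x5 "adjacency" matrix: small entries are
# driven exactly to zero, yielding a sparse graph.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(5, 5))
grad = rng.normal(scale=0.1, size=(5, 5))
W_new = forward_backward_step(W, grad, lr=0.5, lam=0.1)
print(np.count_nonzero(W_new), "of", W.size, "entries remain nonzero")
```

The key property is that the proximal step produces exact zeros rather than merely small values, so the learned graph G becomes genuinely sparse, matching the localized functional relations among electrodes.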

