Article

EEG-based emotion recognition with deep convolutional neural networks

Journal

Biomedical Engineering / Biomedizinische Technik
Volume 66, Issue 1, Pages 43-57

Publisher

Walter de Gruyter GmbH
DOI: 10.1515/bmt-2019-0306

Keywords

azimuthal equidistant projection technique; brain mapping; deep learning; EEG images; electroencephalogram; emotion estimation

Funding

  1. Izmir Katip Celebi University Scientific Research Projects Coordination Unit [2019-ONAP-MUMF-0001]

This paper proposes a novel method for emotion recognition that applies deep convolutional neural networks to multi-channel EEG signals from the DEAP database. By preserving the temporal, spectral, and spatial information of the EEG signals, the method achieves significant improvements in classification accuracy compared with other studies.
The emotional state of people plays a key role in physiological and behavioral human interaction. Emotional state analysis spans many fields, such as neuroscience, cognitive science, and biomedical engineering, because the parameters of interest reflect the complex neuronal activity of the brain. Electroencephalogram (EEG) signals are processed to interface brain activity with external systems and to make predictions about emotional states. This paper proposes a novel method for emotion recognition based on deep convolutional neural networks (CNNs) that classify the Valence, Arousal, Dominance, and Liking emotional states. The approach operates on time series of multi-channel EEG signals from the Database for Emotion Analysis using Physiological Signals (DEAP). We estimate emotional states by applying CNN-based classification to multi-spectral topology images obtained from the EEG signals. In contrast to most EEG-based approaches, which discard the spatial information of the EEG signals, converting the EEG signals into a sequence of multi-spectral topology images preserves their temporal, spectral, and spatial information. A deep recurrent convolutional network is trained to learn important representations from the sequence of three-channel topographical images. We achieved test accuracies of 90.62% for negative vs. positive Valence, 86.13% for high vs. low Arousal, 88.48% for high vs. low Dominance, and 86.23% for like vs. unlike. Evaluations of this method on the emotion recognition problem revealed significant improvements in classification accuracy when compared with other studies using deep neural networks (DNNs) and one-dimensional CNNs.
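
The abstract does not spell out how multi-channel EEG is turned into three-channel topographical images. The snippet below is a minimal sketch of that general technique, not the authors' exact pipeline: it assumes 3-D electrode positions on a unit sphere, Welch band power in the theta, alpha, and beta bands, cubic interpolation onto a 32x32 grid, and the 128 Hz sampling rate of the downsampled DEAP recordings. All function names, band edges, and grid sizes are illustrative assumptions.

```python
# Sketch: EEG window -> multi-spectral topographical image via
# azimuthal equidistant projection (assumed details, not the paper's exact code).
import numpy as np
from scipy.signal import welch
from scipy.interpolate import griddata

def azim_equidist_proj(xyz):
    """Project 3-D electrode positions (unit sphere, z pointing to the vertex)
    onto a 2-D plane with the azimuthal equidistant projection."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    rho = np.arccos(np.clip(z / np.linalg.norm(xyz, axis=1), -1.0, 1.0))  # angular distance from vertex
    theta = np.arctan2(y, x)                                              # azimuth
    return np.stack([rho * np.cos(theta), rho * np.sin(theta)], axis=1)

def band_power(sig, fs, band):
    """Mean Welch power of one channel within a frequency band (Hz)."""
    f, pxx = welch(sig, fs=fs, nperseg=fs)
    lo, hi = band
    return pxx[(f >= lo) & (f < hi)].mean()

def eeg_to_topo_image(eeg, fs, locs2d, grid=32,
                      bands=((4, 8), (8, 13), (13, 30))):  # theta, alpha, beta
    """Turn one EEG window (channels x samples) into a grid x grid x 3 image,
    one colour channel per frequency band, interpolated over the scalp plane."""
    xs = np.linspace(locs2d[:, 0].min(), locs2d[:, 0].max(), grid)
    ys = np.linspace(locs2d[:, 1].min(), locs2d[:, 1].max(), grid)
    gx, gy = np.meshgrid(xs, ys)
    img = np.zeros((grid, grid, len(bands)))
    for b, band in enumerate(bands):
        powers = np.array([band_power(ch, fs, band) for ch in eeg])
        img[:, :, b] = griddata(locs2d, powers, (gx, gy),
                                method='cubic', fill_value=0.0)
    return img

# Toy usage with synthetic data: 32 channels, 1 s at 128 Hz.
rng = np.random.default_rng(0)
xyz = rng.normal(size=(32, 3))
xyz[:, 2] = np.abs(xyz[:, 2])                       # hypothetical positions on the upper hemisphere
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)
locs2d = azim_equidist_proj(xyz)
eeg = rng.normal(size=(32, 128))
image = eeg_to_topo_image(eeg, fs=128, locs2d=locs2d)
print(image.shape)                                  # (32, 32, 3)
```

Stacking such images over successive time windows yields the sequence of three-channel topographical images that a recurrent convolutional network, as described in the abstract, would consume.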
