Article

Improving Emotion Recognition Systems by Exploiting the Spatial Information of EEG Sensors

Journal

IEEE ACCESS
Volume 11, Pages 39544-39554

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2023.3268233

Keywords

Electroencephalography; Emotion recognition; Feature extraction; Videos; Task analysis; Sensors; Convolutional neural networks; Spatial information representation

Abstract

Electroencephalography (EEG)-based emotion recognition is gaining importance due to its potential applications in various scientific fields, ranging from psychophysiology to neuromarketing. A number of approaches have been proposed that use machine learning (ML) to achieve high recognition performance, relying on features engineered from brain activity dynamics. Since ML performance can be improved by a 2D feature representation that exploits the spatial relationships among the features, here we propose a novel input representation that re-arranges EEG features as an image reflecting the top view of the subject's scalp. This approach enables emotion recognition through image-based ML methods such as pre-trained deep neural networks or trained-from-scratch convolutional neural networks. We employ both of these techniques to demonstrate the effectiveness of the proposed input representation, and we compare their recognition performance against state-of-the-art tabular data analysis approaches, which do not utilize the spatial relationships between the sensors. We test the proposed approach on two publicly available benchmark datasets for EEG-based emotion recognition, namely DEAP and MAHNOB-HCI. Our results show that the trained-from-scratch convolutional neural network outperforms the best approaches in the literature, achieving 97.8% and 98.3% accuracy in valence and arousal classification on MAHNOB-HCI, and 91% and 90.4% on DEAP, respectively.
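The core idea of the abstract — re-arranging per-channel EEG features into an image that mirrors the top view of the scalp — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 9x9 grid size and the 10-20 channel-to-pixel layout below are assumptions chosen for clarity, and only a subset of channels is mapped.

```python
import numpy as np

# Hypothetical 10-20 layout: each channel name maps to a (row, col) pixel
# on a 9x9 grid approximating the scalp seen from above. The exact grid
# and layout used in the paper are assumptions here.
CHANNEL_POS = {
    "Fp1": (0, 3), "Fp2": (0, 5),
    "F7": (2, 0), "F3": (2, 2), "Fz": (2, 4), "F4": (2, 6), "F8": (2, 8),
    "T7": (4, 0), "C3": (4, 2), "Cz": (4, 4), "C4": (4, 6), "T8": (4, 8),
    "P7": (6, 0), "P3": (6, 2), "Pz": (6, 4), "P4": (6, 6), "P8": (6, 8),
    "O1": (8, 3), "O2": (8, 5),
}

def features_to_image(features, size=9):
    """Place one scalar feature per channel onto a size x size image.

    Grid positions without a sensor stay at zero. The resulting image can
    be fed to a CNN; stacking one image per frequency band would give a
    multi-channel input.
    """
    img = np.zeros((size, size), dtype=np.float32)
    for ch, value in features.items():
        row, col = CHANNEL_POS[ch]
        img[row, col] = value
    return img

# Example: arbitrary band-power values for three channels.
img = features_to_image({"Fp1": 0.7, "Cz": 1.2, "O2": 0.4})
```

In practice one such image would be built per trial (or per time window), optionally with one image per frequency band stacked along the channel axis, before training a CNN on the resulting tensors.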
