Article

Time series-dependent feature of EEG signals for improved visually evoked emotion classification using EmotionCapsNet

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 34, Issue 16, Pages 13291-13303

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-022-06942-x

Keywords

Time series data; Spatiotemporal; Spectrogram; Electroencephalograph (EEG); EmotionCapsNet

Summary

Machine learning and deep learning techniques, specifically convolutional neural networks (CNNs), have been explored for EEG-based emotion classification. However, CNNs often require complex feature extraction and struggle to capture the natural relationships among different EEG channels. This study proposes an advanced CNN called EmotionCapsNet for multi-channel EEG classification, which achieves better accuracy by extracting descriptive and complex features from raw EEG signals. The proposed system outperforms conventional machine learning and deep learning-based CNN models, achieving high accuracy on several datasets.

Abstract

In recent studies, machine learning and deep learning strategies have been explored in many EEG-based applications for better performance. More specifically, convolutional neural networks (CNNs) have demonstrated remarkable capability in electroencephalograph (EEG)-evoked emotion classification tasks. In existing approaches, CNN-based emotion classification techniques using EEG signals mostly involve a moderately intricate feature-extraction phase before any network model is applied. Moreover, CNNs cannot adequately describe the natural interrelations among the various EEG channels, which provide essential information for classifying different emotional states. In this paper, an efficient and advanced CNN variant called the Emotion-based Capsule Network (EmotionCapsNet) is presented for multi-channel EEG-based emotion classification, achieving better classification accuracy. EmotionCapsNet is applied both to raw EEG signals and to 2D image representations generated from them, and it can extract descriptive and complex features from the EEG signals to distinguish the different emotional states. The proposed system is then compared with conventional machine learning and deep learning-based CNN models. Our strategy achieves average accuracies of 77.50%, 78.44% and 79.38% for valence, arousal and dominance on DEAP, 79.06%, 78.90% and 79.69% on AMIGOS, and 80.34%, 83.04% and 82.50% on DREAMER, respectively. These outcomes demonstrate that the adopted strategy yields comparable accuracy on raw EEG signals and provides better classification results on spatiotemporal features of EEG signals for the emotion classification task.
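The abstract names two concrete steps: converting multi-channel EEG into 2D spectrogram images, and classifying them with a capsule network. The paper's architecture is not reproduced on this page, so the sketch below is only a minimal illustration of those two ideas, using SciPy for the spectrograms and a standard dynamic-routing capsule layer (routing-by-agreement, Sabour et al., 2017) in PyTorch. The names `eeg_to_spectrograms` and `EmotionCapsNetSketch`, and all layer sizes and hyperparameters, are assumptions for illustration, not the published EmotionCapsNet.

```python
# Hedged sketch: EEG trial -> per-channel spectrograms -> capsule classifier.
# NOT the published EmotionCapsNet; all sizes/hyperparameters are assumed.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import spectrogram


def eeg_to_spectrograms(eeg, fs=128, nperseg=128, noverlap=64):
    """Turn a (channels, samples) EEG trial into a stack of per-channel
    log-power spectrograms (an assumed preprocessing, not the paper's exact one)."""
    images = []
    for ch in eeg:
        _, _, sxx = spectrogram(ch, fs=fs, nperseg=nperseg, noverlap=noverlap)
        images.append(np.log1p(sxx))
    return np.stack(images)  # (channels, freq_bins, time_bins)


def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing non-linearity from Sabour et al. (2017)."""
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)


class CapsuleLayer(nn.Module):
    """Fully connected capsule layer with dynamic routing-by-agreement."""

    def __init__(self, in_caps, in_dim, out_caps, out_dim, iters=3):
        super().__init__()
        self.iters = iters
        # One (out_dim x in_dim) prediction matrix per (output, input) capsule pair.
        self.W = nn.Parameter(0.01 * torch.randn(out_caps, in_caps, out_dim, in_dim))

    def forward(self, u):  # u: (B, in_caps, in_dim)
        u_hat = torch.einsum('oidk,bik->boid', self.W, u)  # per-pair predictions
        b = torch.zeros(u.size(0), self.W.size(0), self.W.size(1), device=u.device)
        for _ in range(self.iters):
            c = F.softmax(b, dim=1)                            # coupling over output capsules
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=2))   # (B, out_caps, out_dim)
            b = b + (u_hat * v.unsqueeze(2)).sum(dim=-1)       # agreement update
        return v


class EmotionCapsNetSketch(nn.Module):
    """Conv feature extractor -> primary capsules -> one output capsule per class;
    the length of each output capsule serves as the class score."""

    def __init__(self, in_shape, n_classes=2, prim_dim=8, out_dim=16):
        super().__init__()
        self.prim_dim = prim_dim
        self.conv = nn.Sequential(
            nn.Conv2d(in_shape[0], 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 8 * prim_dim, kernel_size=5, stride=2), nn.ReLU(),
        )
        with torch.no_grad():  # probe once to size the primary-capsule layer
            n_prim = self.conv(torch.zeros(1, *in_shape)).numel() // prim_dim
        self.caps = CapsuleLayer(n_prim, prim_dim, n_classes, out_dim)

    def forward(self, x):  # x: (B, channels, freq_bins, time_bins)
        feat = self.conv(x)
        u = squash(feat.view(x.size(0), -1, self.prim_dim))  # primary capsules
        return self.caps(u).norm(dim=-1)                     # (B, n_classes) scores


# Example with one simulated 32-channel trial, 8 s at 128 Hz (shapes illustrative).
eeg = np.random.randn(32, 8 * 128)
spec = eeg_to_spectrograms(eeg)                       # (32, 65, 15)
model = EmotionCapsNetSketch(in_shape=spec.shape, n_classes=2)
scores = model(torch.from_numpy(spec).float().unsqueeze(0))
print(scores.shape)                                   # torch.Size([1, 2])
```

Here the capsule lengths play the role of per-class scores; in a real pipeline they would typically be trained with a margin loss, as in the original CapsNet formulation, though the loss used in this paper is not stated on this page.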
