Article

A comparative analysis of machine learning methods for emotion recognition using EEG and peripheral physiological signals

Journal

JOURNAL OF BIG DATA
Volume 7, Issue 1

Publisher

Springer Nature
DOI: 10.1186/s40537-020-00289-7

Keywords

Emotion recognition; Multi-channel EEG; Machine learning; Classification

Funding

  1. College of Science and Mathematics
  2. Amazon Web Services

Abstract

Emotion recognition using brain signals has the potential to change the way we identify and treat some health conditions. Difficulties and limitations may arise in general emotion-recognition software because of the restricted number of facial-expression triggers, the dissembling of emotions, or among people with alexithymia. Such triggers can instead be identified by studying the continuous brainwaves generated by the human brain. Electroencephalogram (EEG) signals from the brain give a more diverse insight into emotional states that a person may not be able to express. Brainwave EEG signals reflect changes in electrical potential resulting from communication networks between neurons. This research analyzes the epoch data from EEG sensor channels and performs a comparative analysis of multiple machine learning techniques, namely Support Vector Machine (SVM), K-nearest neighbor, Linear Discriminant Analysis, Logistic Regression, and Decision Trees; each of these models was tested with and without principal component analysis (PCA) for dimensionality reduction. Grid search was also utilized for hyper-parameter tuning of each of the tested machine learning models, run over a Spark cluster to lower execution time. The study used the DEAP dataset, a multimodal dataset for the analysis of human affective states. Predictions were based on the labels given by the participants for each of the 40 one-minute-long music video excerpts. Participants rated each video in terms of the level of arousal, valence, like/dislike, dominance, and familiarity. The binary-class classifiers were trained on time-segmented, 15-second intervals of epoch data, individually for each of the four classes. PCA with SVM performed best, producing an F1-score of 84.73% with 98.01% recall on the 30th- to 45th-second segment. For each time segment and binary training class, a different classification model converges to better accuracy and recall than the others. The results indicate that different classification models should be used to identify different emotional states.
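For illustration only, the sketch below shows the kind of PCA + SVM pipeline with grid-search hyper-parameter tuning that the abstract describes, using scikit-learn. It is not the authors' code: the feature matrix X, labels y, and parameter grid are hypothetical placeholders rather than values from the paper (which extracts features from DEAP 15-second EEG segments), and the grid search here runs locally rather than over a Spark cluster as in the study.

```python
# Minimal sketch, assuming standard scikit-learn; not the paper's implementation.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import f1_score, recall_score

rng = np.random.default_rng(0)
X = rng.standard_normal((1280, 160))   # placeholder features (e.g., per 15 s EEG segment)
y = rng.integers(0, 2, size=1280)      # placeholder binary labels (e.g., high/low arousal)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Standardize, reduce dimensionality with PCA, then classify with an SVM.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA()),
    ("svm", SVC()),
])

# Illustrative hyper-parameter grid; the paper's exact search space is not given here.
param_grid = {
    "pca__n_components": [10, 20, 40],
    "svm__C": [0.1, 1, 10],
    "svm__kernel": ["rbf", "linear"],
}

search = GridSearchCV(pipeline, param_grid, scoring="f1", cv=5, n_jobs=-1)
search.fit(X_train, y_train)

y_pred = search.predict(X_test)
print("best params:", search.best_params_)
print("F1: %.4f  recall: %.4f" % (f1_score(y_test, y_pred), recall_score(y_test, y_pred)))
```

The same Pipeline/GridSearchCV pattern can be repeated with the other classifiers mentioned in the abstract (K-nearest neighbor, Linear Discriminant Analysis, Logistic Regression, Decision Trees), with and without the PCA step, to reproduce the style of comparison the study performs for each binary class and time segment.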
