Article

A novel multi-modal machine learning based approach for automatic classification of EEG recordings in dementia

Journal

NEURAL NETWORKS
Volume 123, Pages 176-190

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2019.12.006

Keywords

Machine learning; Continuous wavelet transform; Bispectrum; Alzheimer's disease; Mild cognitive impairment; Data fusion

Funding

  1. Italian Ministry of Health [GR-2011-02351397]
  2. UK Engineering and Physical Sciences Research Council (EPSRC) [EP/M026981/1]

Abstract

Electroencephalographic (EEG) recordings generate an electrical map of the human brain that is useful for clinical inspection of patients and in biomedical smart Internet-of-Things (IoT) and Brain-Computer Interface (BCI) applications. From a signal processing perspective, EEGs yield a nonlinear, nonstationary, multivariate representation of the underlying neural circuitry interactions. In this paper, a novel multi-modal Machine Learning (ML) based approach is proposed to integrate engineered EEG features for automatic classification of brain states. EEGs are acquired from neurological patients with Mild Cognitive Impairment (MCI) or Alzheimer's disease (AD), and the aim is to discriminate Healthy Control (HC) subjects from patients. Specifically, in order to cope effectively with nonstationarities, 19-channel EEG signals are projected into the time-frequency (TF) domain by means of the Continuous Wavelet Transform (CWT), and a set of appropriate features (denoted as CWT features) is extracted from the delta, theta, alpha1, alpha2 and beta EEG sub-bands. Furthermore, to exploit nonlinear phase-coupling information in the EEG signals, higher-order statistics (HOS) are extracted from the bispectrum (BiS) representation. The BiS yields a second set of features (denoted as BiS features), which is also evaluated in the five EEG sub-bands. The CWT and BiS features are fed into a number of ML classifiers to perform both 2-way (AD vs. HC, AD vs. MCI, MCI vs. HC) and 3-way (AD vs. MCI vs. HC) classification. As an experimental benchmark, a balanced EEG dataset that includes 63 AD, 63 MCI and 63 HC subjects is analyzed. Comparative results show that when the concatenation of CWT and BiS features (denoted as multi-modal (CWT+BiS) features) is used as input, the Multi-Layer Perceptron (MLP) classifier outperforms all other models, specifically the Autoencoder (AE), Logistic Regression (LR) and Support Vector Machine (SVM). Consequently, our proposed multi-modal ML scheme can be considered a viable alternative to state-of-the-art, computationally intensive deep learning approaches. (C) 2019 Elsevier Ltd. All rights reserved.
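The pipeline summarized above (per-channel CWT sub-band features, bispectral HOS features, feature concatenation, and an MLP classifier) can be illustrated with a minimal Python sketch. The sampling rate, sub-band edges, summary statistics and network size below are illustrative assumptions rather than the authors' exact settings, and the bispectrum is estimated with a simple direct FFT-based method, not necessarily the one used in the paper.

```python
# Minimal sketch of a multi-modal (CWT + bispectrum) EEG feature pipeline.
# All numeric settings (FS, band edges, nfft, MLP size) are illustrative assumptions.
import numpy as np
import pywt                                     # PyWavelets, for the CWT
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 256.0                                      # assumed sampling rate (Hz)
SUB_BANDS = {                                   # delta, theta, alpha1, alpha2, beta (approximate edges)
    "delta": (0.5, 4.0), "theta": (4.0, 8.0),
    "alpha1": (8.0, 10.5), "alpha2": (10.5, 13.0), "beta": (13.0, 30.0),
}

def cwt_band_features(sig, fs=FS, wavelet="morl"):
    """Mean/std of CWT magnitude in each EEG sub-band (one channel)."""
    target_freqs = np.arange(0.5, 30.5, 0.5)
    scales = pywt.central_frequency(wavelet) * fs / target_freqs   # map target freqs to scales
    coefs, cwt_freqs = pywt.cwt(sig, scales, wavelet, sampling_period=1.0 / fs)
    power = np.abs(coefs)
    feats = []
    for lo, hi in SUB_BANDS.values():
        band = (cwt_freqs >= lo) & (cwt_freqs < hi)
        feats += [power[band].mean(), power[band].std()]
    return np.array(feats)

def bispectrum_features(sig, fs=FS, nfft=256):
    """Summary statistics of a direct FFT-based bispectrum estimate (one channel)."""
    segs = np.array_split(sig, max(1, len(sig) // nfft))
    idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
    acc = np.zeros((nfft, nfft), dtype=complex)
    for s in segs:
        X = np.fft.fft(s - s.mean(), nfft)
        acc += X[:, None] * X[None, :] * np.conj(X[idx])           # B(f1,f2) = X(f1)X(f2)X*(f1+f2)
    B = np.abs(acc) / len(segs)
    freqs = np.fft.fftfreq(nfft, d=1.0 / fs)
    feats = []
    for lo, hi in SUB_BANDS.values():
        m = (freqs >= lo) & (freqs < hi)
        sub = B[np.ix_(m, m)]
        feats += [sub.mean(), sub.max()]
    return np.array(feats)

def multimodal_features(epoch):
    """Concatenate CWT and BiS features over all channels of one epoch (n_channels, n_samples)."""
    per_channel = [np.concatenate([cwt_band_features(ch), bispectrum_features(ch)])
                   for ch in epoch]
    return np.concatenate(per_channel)

# Hypothetical usage: X_epochs is a list of (19, n_samples) arrays, y labels in {"AD", "MCI", "HC"}.
# clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
# clf.fit(np.stack([multimodal_features(e) for e in X_epochs]), y)
```

Swapping the final MLP for LogisticRegression, an SVM, or an autoencoder-based model reproduces the kind of comparison reported in the abstract; only the classifier stage changes, the multi-modal feature vector stays the same.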
