Article

Methodological Issues in Evaluating Machine Learning Models for EEG Seizure Prediction: Good Cross-Validation Accuracy Does Not Guarantee Generalization to New Patients

Journal

APPLIED SCIENCES-BASEL
Volume 13, Issue 7

Publisher

MDPI
DOI: 10.3390/app13074262

Keywords

seizure prediction; epilepsy; electroencephalography; feature extraction; machine learning; signal processing; artificial intelligence; model validation

Abstract

There is an increasing interest in applying artificial intelligence techniques to forecast epileptic seizures. In particular, machine learning algorithms could extract nonlinear statistical regularities from electroencephalographic (EEG) time series that can anticipate abnormal brain activity. The recent literature reports promising results in seizure detection and prediction tasks using machine and deep learning methods. However, performance evaluation is often based on questionable randomized cross-validation schemes, which can introduce correlated signals (e.g., EEG data recorded from the same patient during nearby periods of the day) into the partitioning of training and test sets. The present study demonstrates that the use of more stringent evaluation strategies, such as those based on leave-one-patient-out partitioning, leads to a drop in accuracy from about 80% to 50% for a standard eXtreme Gradient Boosting (XGBoost) classifier on two different data sets. Our findings suggest that the definition of rigorous evaluation protocols is crucial to ensure the generalizability of predictive models before proceeding to clinical trials.
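As a rough illustration of the leakage mechanism described in the abstract, the sketch below contrasts a randomized k-fold split with a leave-one-patient-out split for an XGBoost classifier. The feature matrix, labels, and group structure are synthetic and purely illustrative; they do not reproduce the paper's EEG pipeline, feature extraction, or data sets.

```python
# Minimal sketch, assuming a feature matrix X (one row per EEG segment),
# binary labels y (preictal vs. interictal), and a `groups` array giving the
# patient ID of each segment. All data here are synthetic and illustrative.
import numpy as np
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_patients, segments_per_patient, n_features = 10, 100, 20

# Each patient gets a private feature offset, so segments from the same
# patient are correlated with one another.
groups = np.repeat(np.arange(n_patients), segments_per_patient)
patient_offset = rng.normal(scale=2.0, size=(n_patients, n_features))
X = rng.normal(size=(n_patients * segments_per_patient, n_features)) + patient_offset[groups]

# Patient-specific label rates create within-patient label correlation,
# the kind of shortcut a leaky split lets the model exploit.
p_preictal = rng.uniform(0.2, 0.8, size=n_patients)
y = (rng.random(n_patients * segments_per_patient) < p_preictal[groups]).astype(int)

clf = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")

# Randomized k-fold: segments from the same patient can land in both the
# training and the test fold, inflating the estimated accuracy.
kfold_acc = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Leave-one-patient-out: every test fold contains only unseen patients,
# so the score reflects generalization to new individuals.
lopo_acc = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)

print(f"Randomized 5-fold accuracy:     {kfold_acc.mean():.2f}")
print(f"Leave-one-patient-out accuracy: {lopo_acc.mean():.2f}")
```

On data of this kind, the randomized split typically reports an optimistic score because the classifier can memorize patient-specific feature offsets and reuse them at test time, whereas the patient-wise split stays close to chance, mirroring the ~80% to ~50% drop reported in the abstract.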

