Article

A deep learning approach for decoding visually imagined digits and letters using time-frequency-spatial representation of EEG signals

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 203

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2022.117417

Keywords

Electroencephalography; Time-frequency-spatial representation; Visual imagery; Deep learning; Convolutional neural networks; Brain-computer interface

Funding

  1. Seed-Grant program at the German Jordanian University [SEEIT 02/2020]


This paper presents a two-phase approach for decoding visually imagined digits and letters from EEG signals. The first phase constructs a joint time-frequency-spatial representation of the EEG signals; the second phase uses a deep learning framework to automatically extract features and decode the imagined digits and letters. The proposed approach outperforms several alternative techniques, achieving an average accuracy of 95.47%.
The recent advances in developing assistive devices have attracted researchers to use visual imagery (VI) mental tasks as a control paradigm to design brain-computer interfaces that can produce a large number of control signals. Consequently, this can facilitate the design of control mechanisms that allow locked-in individuals to interact with the surrounding world. This paper presents a two-phase approach for decoding visually imagined digits and letters using electroencephalography (EEG) signals. The first phase employs the Choi-Williams time-frequency distribution (CWD) to construct a joint time, frequency, and spatial (TFS) representation of the EEG signals. The constructed joint TFS representation characterizes the variations in the energy encapsulated within the EEG signals over the TFS domains. The second phase presents a novel deep learning (DL) framework to automatically extract features from the constructed joint TFS representation of the EEG signals and decode the imagined digits and letters. The performance of our approach is assessed using an EEG dataset that was acquired for 16 healthy participants while imagining decimal digits and uppercase English letters. Our approach achieved an average +/- standard deviation accuracy of 95.47 +/- 2.3%, which significantly outperforms the accuracies obtained when the CWD is replaced with two alternative time-frequency analysis techniques, the accuracies obtained using four pre-trained DL models, and the accuracies obtained using CWD-based handcrafted features classified with four conventional classifiers. Moreover, the results of our proposed approach outperform those reported by several previous studies with regard to the accuracy and number of classes.
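The first phase rests on the Choi-Williams distribution, a Cohen-class time-frequency representation whose exponential kernel suppresses cross-terms. As a rough illustration of the idea only (not the authors' implementation), the sketch below computes a simplified discrete Choi-Williams distribution of a single signal with numpy; the `sigma` kernel parameter and the unit-sum window normalisation are simplifying assumptions:

```python
import numpy as np

def choi_williams(x, sigma=1.0):
    """Simplified discrete Choi-Williams distribution of a 1-D signal.

    Returns an (N, N) real array: rows index frequency bins, columns
    index time samples. Illustrative and unoptimised; the exact lag
    windowing and normalisation differ across implementations."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    tfr = np.zeros((N, N), dtype=complex)
    for n in range(N):
        tfr[0, n] = np.abs(x[n]) ** 2          # zero-lag term
        taumax = min(n, N - 1 - n, N // 2 - 1)
        for tau in range(1, taumax + 1):
            mumax = min(n - tau, N - 1 - n - tau)
            mu = np.arange(-mumax, mumax + 1)
            # Choi-Williams exponential smoothing window in the
            # (time, lag) domain; normalised to unit sum (a
            # simplification of the 1/|tau| scaling in the true kernel)
            g = np.exp(-sigma * mu ** 2 / (4.0 * tau ** 2))
            g /= g.sum()
            r = np.sum(g * x[n + mu + tau] * np.conj(x[n + mu - tau]))
            tfr[tau, n] = r
            tfr[N - tau, n] = np.conj(r)       # Hermitian symmetry in lag
    # FFT over the lag axis gives the time-frequency energy map
    return np.real(np.fft.fft(tfr, axis=0))
```

Applied per channel, such maps can be stacked across electrodes to form a joint time-frequency-spatial tensor of the kind the paper feeds to its CNN. Note that, as in the Wigner-Ville distribution, a pure tone at normalised frequency f concentrates energy at bin 2fN because the lag runs over tau rather than tau/2.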

