Article

MEG-Based Detection of Voluntary Eye Fixations Used to Control a Computer

Journal

FRONTIERS IN NEUROSCIENCE
Volume 15

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnins.2021.619591

Keywords

MEG; brain-computer interface; hybrid brain-computer interface; gaze-based interaction; convolutional neural network; classification; intention

Funding

  1. Russian Science Foundation [18-19-00593]
  2. NRC Kurchatov Institute [1361/25.06.2019]

Researchers demonstrate the potential of discriminating voluntary from spontaneous eye fixations using short segments of MEG data. Applying CNNs for binary classification of fixation-related MEG signals yields promising results in distinguishing voluntary from spontaneous fixations, supporting the improvement of gaze-based interfaces.
Gaze-based input is an efficient way of hands-free human-computer interaction. However, it suffers from the inability of gaze-based interfaces to discriminate voluntary and spontaneous gaze behaviors, which are overtly similar. Here, we demonstrate that voluntary eye fixations can be discriminated from spontaneous ones using short segments of magnetoencephalography (MEG) data measured immediately after fixation onset. Two recently proposed convolutional neural networks (CNNs), the linear finite impulse response filters CNN (LF-CNN) and the vector autoregressive CNN (VAR-CNN), were applied for binary classification of MEG signals related to spontaneous and voluntary eye fixations collected from healthy participants (n = 25) who performed a game-like task by fixating on targets voluntarily for 500 ms or longer. Voluntary fixations were identified as those followed by a fixation in a special confirmatory area. Spontaneous vs. voluntary fixation-related single-trial 700 ms MEG segments were classified above chance level in the majority of participants, with a group-average cross-validated ROC AUC of 0.66 +/- 0.07 for LF-CNN and 0.67 +/- 0.07 for VAR-CNN (M +/- SD). When the time interval from which the MEG data were taken was extended beyond the onset of the visual feedback, the group-average classification performance increased up to 0.91. Analysis of the spatial patterns contributing to classification did not reveal signs of a significant eye-movement impact on the classification results. We conclude that the classification of MEG signals has a certain potential to support gaze-based interfaces by avoiding false responses to spontaneous eye fixations on a single-trial basis. However, current results for intention detection prior to the gaze-based interface's feedback are not sufficient for online single-trial eye fixation classification using MEG data alone, and further work is needed to determine whether this approach could be used in practical applications.
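As a rough illustration of the evaluation protocol described in the abstract (cross-validated ROC AUC over single-trial binary classification of fixation-related segments), here is a minimal, self-contained Python sketch. Everything in it is synthetic and assumed: the one-number "feature" per trial, the 0.6 amplitude offset for voluntary fixations, and the nearest-class-mean scorer that stands in for the paper's actual LF-CNN/VAR-CNN classifiers. Only the metric, cross-validated ROC AUC, mirrors what the authors report.

```python
import random

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive trial scores above a randomly chosen negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def make_trial(rng, voluntary):
    """Synthetic stand-in for a single-trial MEG segment, summarized as one
    mean-amplitude feature; voluntary fixations get a small additive offset
    (a hypothetical evoked difference -- not from the paper)."""
    return rng.gauss(0.0, 1.0) + (0.6 if voluntary else 0.0)

def cross_validated_auc(features, labels, k=5):
    """k-fold cross-validation of a simple nearest-class-mean scorer."""
    n = len(features)
    fold_aucs = []
    for fold in range(k):
        test_idx = set(range(fold, n, k))
        train = [(x, y) for i, (x, y) in enumerate(zip(features, labels))
                 if i not in test_idx]
        mu1 = sum(x for x, y in train if y == 1) / sum(y for _, y in train)
        mu0 = sum(x for x, y in train if y == 0) / sum(1 - y for _, y in train)
        # Score = closeness to the 'voluntary' mean minus closeness to the
        # 'spontaneous' mean (higher -> more likely voluntary).
        scores = [(features[i] - mu0) ** 2 - (features[i] - mu1) ** 2
                  for i in sorted(test_idx)]
        fold_labels = [labels[i] for i in sorted(test_idx)]
        fold_aucs.append(roc_auc(scores, fold_labels))
    return sum(fold_aucs) / k

rng = random.Random(0)
labels = [i % 2 for i in range(200)]       # alternate spontaneous/voluntary
features = [make_trial(rng, y) for y in labels]
auc = cross_validated_auc(features, labels)
print(f"cross-validated ROC AUC: {auc:.2f}")
```

The point of the sketch is the shape of the evaluation, not the classifier: scores are computed only on held-out folds, and AUC is averaged across folds, which is how a chance-level result (AUC near 0.5) is distinguished from the above-chance performance the paper reports.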
