Journal
INFORMATION
Volume 12, Issue 6
Publisher
MDPI
DOI: 10.3390/info12060226
Keywords
augmented reality; neural networks; eye tracking; classification; attention; EEG
Funding
- Zentrale Forschungsförderung of the University of Bremen, Attention-driven Interaction Systems in Augmented Reality
- Open Access Initiative of the University of Bremen
- DFG
The study used machine learning techniques to classify EEG and eye tracking data in augmented reality scenarios to determine whether a target is real or virtual. It found that a multimodal late fusion approach significantly improved classification accuracy. The reliability of the brain-computer interface is high enough to be considered a useful input mechanism for augmented reality applications.
Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye tracking data collected in augmented reality scenarios. A shallow convolutional neural network classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late fusion approach that included the recorded eye tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain-computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
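The late fusion idea described in the abstract can be sketched as follows: each modality (EEG, eye tracking) is classified separately, and only the resulting class probabilities are combined. The weighted-average rule, the weight value, and the variable names below are illustrative assumptions, not the authors' exact fusion scheme.

```python
import numpy as np

def late_fusion(p_eeg, p_eye, w_eeg=0.5):
    """Combine per-class probabilities from two unimodal classifiers
    by a weighted average (one common late-fusion scheme; the paper's
    exact fusion rule may differ)."""
    p_eeg = np.asarray(p_eeg, dtype=float)
    p_eye = np.asarray(p_eye, dtype=float)
    fused = w_eeg * p_eeg + (1.0 - w_eeg) * p_eye
    # Renormalize so the fused scores form a probability distribution
    return fused / fused.sum(axis=-1, keepdims=True)

# Hypothetical softmax outputs for the binary task (real vs. virtual target)
p_eeg = [0.62, 0.38]   # EEG-based CNN output
p_eye = [0.45, 0.55]   # eye-tracking classifier output
fused = late_fusion(p_eeg, p_eye)
pred = int(np.argmax(fused))  # 0 = "real", 1 = "virtual"
```

Because fusion happens after each classifier has made its prediction, either modality can be trained, tuned, or replaced independently, which is one reason late fusion is a popular choice for combining EEG with eye tracking.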
Authors