Article

Deep Convolutional Symmetric Encoder-Decoder Neural Networks to Predict Students' Visual Attention

Journal

SYMMETRY-BASEL
Volume 13, Issue 12

Publisher

MDPI
DOI: 10.3390/sym13122246

Keywords

cognition; deep learning; convolutional network; encoder-decoder; visual attention; saliency map; solving quizzes

Funding

  1. Pedagogical University of Krakow
  2. Polish Ministry of Science and Higher Education

Abstract
Prediction of visual attention is a new and challenging subject, and to the best of our knowledge, little research has been devoted to anticipating students' cognition when solving tests. The aim of this paper is to propose, implement, and evaluate a machine learning method capable of predicting the saliency maps of students who participate in a learning task in the form of quizzes, based on the quiz questionnaire images. Our proposal utilizes several symmetric deep encoder-decoder schemas trained on a large set of saliency maps generated with eye-tracking technology. The eye-tracking data were acquired from students who solved various tasks in the sciences and natural sciences (computer science, mathematics, physics, and biology). The proposed deep convolutional encoder-decoder network produces accurate predictions of students' visual attention when solving quizzes. Our evaluation showed that the predictions are moderately positively correlated with the actual data, with a coefficient of 0.547 +/- 0.109, and that they correlate better with the real saliency maps than state-of-the-art methods. Visual analyses of the obtained saliency maps also correspond with our experience and expectations in this field. Both the source code and the data from our research can be downloaded to reproduce our results.
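The evaluation metric quoted in the abstract — a linear correlation of 0.547 +/- 0.109 between predicted and ground-truth saliency maps — can be sketched as Pearson's correlation coefficient computed over flattened maps. The snippet below is a minimal, self-contained illustration of that metric; the toy map values and the function name `pearson_cc` are assumptions for illustration, not the authors' code or data.

```python
# Illustrative sketch (not the authors' code): Pearson correlation
# between a predicted saliency map and an eye-tracking ground truth,
# both flattened to 1-D lists of pixel intensities.
import math

def pearson_cc(pred, truth):
    """Pearson correlation coefficient between two flattened saliency maps."""
    n = len(pred)
    mp = sum(pred) / n
    mt = sum(truth) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, truth))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in truth))
    return cov / (sp * st)

# Toy 2x3 maps, flattened row by row (values in [0, 1]); made-up numbers.
predicted = [0.1, 0.8, 0.3, 0.2, 0.9, 0.1]
observed  = [0.0, 0.9, 0.2, 0.3, 1.0, 0.2]
print(round(pearson_cc(predicted, observed), 3))  # → 0.974
```

A value of 1.0 would mean the predicted map is a perfect linear match to the observed fixation map; the paper's reported mean of 0.547 corresponds to a moderate positive correlation on this scale.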
