Article

A Visual Attention Model Based on Eye Tracking in 3D Scene Maps

Journal

ISPRS International Journal of Geo-Information

Publisher

MDPI
DOI: 10.3390/ijgi10100664

Keywords

visual attention; eye tracking; map cognition; visual cognition

Funding

  1. Research Start-up Fund for Distinguished Professors of Zhengzhou University [135-32310276]

Abstract

Visual attention plays a crucial role in the map-reading process and is closely related to the map cognitive process. Eye-tracking data contain a wealth of visual information that can be used to identify cognitive behavior during map reading; nevertheless, few researchers have applied these data to quantifying visual attention. This study proposes a method for quantitatively calculating visual attention from eye-tracking data for 3D scene maps. First, eye-tracking technology was used to capture differences in participants' gaze behavior while browsing a street-view map in a desktop environment, and to establish a quantitative relationship between eye-movement indices and visual saliency. Then, experiments using vector 3D scene maps as stimulus material were carried out to determine the quantitative relationship between visual saliency and visual factors. Finally, a visual attention model was obtained by fitting the data. The results show that a combination of three visual factors (color, shape, and size) can represent the visual attention value of a 3D scene map, with a goodness of fit (R²) greater than 0.699. This research helps to determine and quantify the allocation of visual attention during map reading, laying a foundation for automated machine mapping.
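
The abstract does not specify the model's functional form or its code. As a rough illustration of the fitting workflow it describes, the Python sketch below derives a saliency proxy from fixation durations and fits a toy weighted combination of color, shape, and size factors, then reports R². All variable names, sample values, and the linear model form are assumptions made for illustration, not the authors' published method or data.

    """Illustrative sketch only: fits a toy attention model of the form
    attention = w_c*color + w_s*shape + w_z*size + b, in the spirit of the
    paper's color/shape/size model. Data and model form are hypothetical."""
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical per-AOI eye-movement measurements: total fixation
    # duration (seconds) serves as a saliency proxy, normalized to [0, 1].
    fixation_duration = np.array([2.1, 0.8, 3.4, 1.2, 2.9, 0.5, 1.7, 2.4])
    saliency = fixation_duration / fixation_duration.max()

    # Hypothetical visual-factor scores per AOI, each pre-scaled to [0, 1]
    # (e.g., color contrast, shape complexity, on-screen size).
    color = np.array([0.9, 0.2, 0.8, 0.4, 0.7, 0.1, 0.5, 0.6])
    shape = np.array([0.6, 0.3, 0.9, 0.2, 0.8, 0.2, 0.4, 0.7])
    size  = np.array([0.7, 0.1, 0.9, 0.3, 0.8, 0.2, 0.5, 0.6])

    def attention_model(X, w_c, w_s, w_z, b):
        """Weighted linear combination of the three visual factors."""
        c, s, z = X
        return w_c * c + w_s * s + w_z * z + b

    # Fit the weights against the eye-tracking-derived saliency values.
    params, _ = curve_fit(attention_model, (color, shape, size), saliency)
    predicted = attention_model((color, shape, size), *params)

    # Goodness of fit (R²), the statistic the paper reports (> 0.699).
    ss_res = np.sum((saliency - predicted) ** 2)
    ss_tot = np.sum((saliency - saliency.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot

    print("weights (color, shape, size):", params[:3])
    print("intercept:", params[3])
    print("R^2:", round(r_squared, 3))

If the paper's actual model is nonlinear, a different expression could be substituted in attention_model without changing the rest of this fitting workflow.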
