4.6 Article

Look twice: A generalist computational model predicts return fixations across tasks and species

Journal

PLOS Computational Biology
Volume 18, Issue 11

Publisher

Public Library of Science
DOI: 10.1371/journal.pcbi.1010654

Funding

  1. NIH [R01EY026025]
  2. NRF [AISG2-RP-2021-025]
  3. Center for Brains, Minds and Machines - NSF Science and Technology Centers Award [CCF-1231216]
  4. CFAR Early Career Investigatorship
  5. Agency for Science, Technology and Research [C210415012]
  6. Research Foundation Flanders (FWO) [1230521N]

Primates constantly explore their surroundings through saccadic eye movements and frequently revisit previously foveated locations. Return fixation locations are consistent across subjects, and returns tend to occur within short temporal offsets. A neural network model incorporating five key modules can replicate the properties of return fixations.
Primates constantly explore their surroundings via saccadic eye movements that bring different parts of an image into high resolution. In addition to exploring new regions in the visual field, primates also make frequent return fixations, revisiting previously foveated locations. We systematically studied a total of 44,328 return fixations out of 217,440 fixations. Return fixations were ubiquitous across different behavioral tasks, in monkeys and humans, both when subjects viewed static images and when subjects performed natural behaviors. Return fixation locations were consistent across subjects, tended to occur within short temporal offsets, and typically followed a 180-degree turn in saccadic direction. To understand the origin of return fixations, we propose a proof-of-principle, biologically inspired, image-computable neural network model. The model combines five key modules: an image feature extractor, bottom-up saliency cues, task-relevant visual features, finite inhibition-of-return, and saccade size constraints. Even though no free parameters are fine-tuned for each specific task, species, or condition, the model produces fixation sequences resembling the universal properties of return fixations. These results provide initial steps towards a mechanistic understanding of the trade-off between rapid foveal recognition and the need to scrutinize previous fixation locations.

Author summary

We move our eyes several times a second, bringing the center of gaze into focus and high resolution. While we typically assume that we can rapidly recognize the contents at each fixation, it turns out that we often move our eyes back to previously visited locations. These return fixations are ubiquitous across different tasks, conditions, and species.
A computational model captures these eye movements and return fixations by using four key mechanisms: extraction of salient parts of an image, incorporation of task goals such as the target during visual search, a constraint against making large eye movements, and a forgetful memory of previous locations. Neither the extreme of getting stuck at a single location nor the extreme of never revisiting previous locations seems adequate for visual processing. Instead, the combination of these four mechanisms allows the visual system to strike a happy medium during scene understanding.
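The modules described in the abstract can be caricatured as a priority-map loop: combine saliency and task relevance into a single map, subtract a decaying inhibition-of-return trace, penalize large saccades, and fixate the maximum. The sketch below is not the authors' implementation; the grid size, random "saliency" values, target location, decay constant, and penalty weights are all invented for illustration. It only shows, in principle, how finite (forgetful) inhibition plus a saccade-size penalty lets revisits of earlier locations emerge.

```python
# Minimal, hypothetical sketch of a priority-map fixation loop.
# All numbers here (grid size, DECAY, SACCADE_COST, inhibition strength)
# are illustrative assumptions, not values from the paper.
import math
import random

random.seed(0)
W = H = 8

# Stand-in for image features + bottom-up saliency: a random map.
saliency = [[random.random() for _ in range(W)] for _ in range(H)]

# Task relevance: boost one hypothetical search target.
target = (5, 5)
task = [[1.0 if (r, c) == target else 0.0 for c in range(W)] for r in range(H)]

# Finite inhibition-of-return: inhibition decays each step, so
# previously fixated locations become eligible again ("forgetful memory").
inhibition = [[0.0] * W for _ in range(H)]
DECAY = 0.5          # fraction of inhibition surviving each step
SACCADE_COST = 0.1   # penalty per unit of saccade amplitude

def next_fixation(cur):
    """Pick the location maximizing priority from the current fixation."""
    best, best_val = cur, -math.inf
    for r in range(H):
        for c in range(W):
            dist = math.hypot(r - cur[0], c - cur[1])
            val = (saliency[r][c] + task[r][c]
                   - inhibition[r][c] - SACCADE_COST * dist)
            if val > best_val:
                best, best_val = (r, c), val
    return best

fix = (0, 0)
trace = [fix]
for _ in range(20):
    # Decay old inhibition (finite memory), then inhibit the current spot.
    for r in range(H):
        for c in range(W):
            inhibition[r][c] *= DECAY
    inhibition[fix[0]][fix[1]] += 2.0
    fix = next_fixation(fix)
    trace.append(fix)

# Revisited locations in the scanpath are "return fixations".
returns = len(trace) - len(set(trace))
print("fixation sequence:", trace)
print("number of return fixations:", returns)
```

Because the inhibition trace decays rather than persisting forever, high-priority locations (such as the target) regain their advantage after a few steps and get refixated; setting DECAY to 1.0 (permanent inhibition) or 0.0 (no memory) reproduces the two inadequate extremes mentioned in the summary.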

