Article

Using sensory weighting to model the influence of canal, otolith and visual cues on spatial orientation and eye movements

Journal

BIOLOGICAL CYBERNETICS
Volume 86, Issue 3, Pages 209-230

Publisher

SPRINGER
DOI: 10.1007/s00422-001-0290-1


Funding

  1. NIDCD NIH HHS [DC04644, DC03065] Funding Source: Medline
  2. NATIONAL INSTITUTE ON DEAFNESS AND OTHER COMMUNICATION DISORDERS [R03DC004644, R13DC003065] Funding Source: NIH RePORTER

Abstract

The sensory weighting model is a general model of sensory integration that consists of three processing layers. First, each sensor provides the central nervous system (CNS) with information regarding a specific physical variable. Due to sensor dynamics, this measure is only reliable for the frequency range over which the sensor is accurate. Therefore, we hypothesize that the CNS improves on the reliability of the individual sensor outside this frequency range by using information from other sensors, a process referred to as frequency completion. Frequency completion uses internal models of sensory dynamics. This improved sensory signal is designated as the sensory estimate of the physical variable. Second, before being combined, information with different physical meanings is first transformed into a common representation; sensory estimates are converted to intermediate estimates. This conversion uses internal models of body dynamics and physical relationships. Third, several sensory systems may provide information about the same physical variable (e.g., semicircular canals and vision both measure self-rotation). Therefore, we hypothesize that the central estimate of a physical variable is computed as a weighted sum of all available intermediate estimates of this physical variable, a process referred to as multicue weighted averaging. The resulting central estimate is fed back to the first two layers. The sensory weighting model is applied to three-dimensional (3D) visual-vestibular interactions and their associated eye movements and perceptual responses. The model inputs are 3D angular and translational stimuli. The sensory inputs are the 3D sensory signals coming from the semicircular canals, otolith organs, and the visual system. The angular and translational components of visual movement are assumed to be available as separate stimuli measured by the visual system using retinal slip and image deformation. 
In addition, both tonic (regular) and phasic (irregular) otolithic afferents are implemented. Whereas neither tonic nor phasic otolithic afferents distinguish gravity from linear acceleration, the model uses tonic afferents to estimate gravity and phasic afferents to estimate linear acceleration. The model outputs are the internal estimates of physical motion variables and 3D slow-phase eye movements. The model also includes a smooth pursuit module. The model matches eye responses and perceptual effects measured during various motion paradigms in darkness (e.g., centered and eccentric yaw rotation about an earth-vertical axis, yaw rotation about an earth-horizontal axis) and with visual cues (e.g., stabilized visual stimulation or optokinetic stimulation).
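The two weighting ideas named in the abstract can be illustrated in code. The sketch below is a hypothetical simplification, not the authors' actual model: "frequency completion" is rendered as a first-order complementary filter that trusts one cue at high frequencies and another at low frequencies, and "multicue weighted averaging" as a weighted sum of intermediate estimates of the same physical variable. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def complementary_filter(high_freq_cue, low_freq_cue, alpha=0.95):
    """Hypothetical frequency-completion sketch (not the paper's model).

    Integrates increments of the high-frequency-reliable cue (e.g., canal
    signal) while slowly pulling the estimate toward the low-frequency-
    reliable cue (e.g., visual signal). alpha near 1 lets the fast cue
    dominate transients; the slow cue anchors the steady state.
    """
    estimate = float(low_freq_cue[0])
    out = []
    increments = np.diff(high_freq_cue, prepend=high_freq_cue[0])
    for hf_delta, lf in zip(increments, low_freq_cue):
        estimate = alpha * (estimate + hf_delta) + (1.0 - alpha) * lf
        out.append(estimate)
    return np.array(out)

def weighted_average(estimates, weights):
    """Multicue weighted averaging: the central estimate of a physical
    variable as a weighted sum of all intermediate estimates of it."""
    w = np.asarray(weights, dtype=float)
    x = np.asarray(estimates, dtype=float)
    return float(np.dot(w, x) / w.sum())

# Example: canal- and vision-derived estimates of the same self-rotation,
# combined with illustrative weights.
central = weighted_average([12.0, 8.0], [0.75, 0.25])  # -> 11.0
```

In the actual model the weights and the feedback of the central estimate to earlier processing layers are part of the model structure; this sketch only conveys the arithmetic of the third layer and the complementary character of the first.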

