Article

A Spiking Neural Network Model of Rodent Head Direction Calibrated With Landmark-Free Learning

Journal

FRONTIERS IN NEUROROBOTICS
Volume 16

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnbot.2022.867019

Keywords

spiking neural network; pyNEST; head direction; predictive coding; localization; continuous attractor

Funding

  1. European Union [945539]


Maintaining a stable estimate of head direction requires both self-motion (idiothetic) information and environmental (allothetic) anchoring. In unfamiliar or dark environments, idiothetic drive can maintain a rough estimate of heading but is subject to inaccuracy; visual information is required to stabilize the head direction estimate. When learning to associate visual scenes with head angle, animals do not have access to the 'ground truth' of their head direction and must rely on imprecise, egocentrically derived head direction estimates. We use both discriminative and generative methods of visual processing to learn these associations without extracting explicit landmarks from a natural visual scene, finding that all are sufficiently capable of providing a corrective signal. Further, we present a spiking continuous attractor model of head direction (SNN) which, when driven by idiothetic input alone, is subject to drift. We show that head direction predictions made by the chosen model-free visual learning algorithms can correct for this drift, even when trained on a small training set of estimated head angles self-generated by the SNN. We validate this model against experimental work by reproducing cue rotation experiments that demonstrate visual control of the head direction signal.
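The interplay the abstract describes — path integration that drifts, corrected by a learned visual heading prediction — can be sketched as a minimal rate-level toy, well short of the paper's pyNEST spiking model. In this sketch a constant bias on the angular-velocity signal stands in for idiothetic inaccuracy, and a simple gain toward an allothetic heading estimate stands in for the visual correction; all names, parameters, and the use of the true heading as the "visual" signal are illustrative assumptions, not the authors' method.

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def simulate(steps=200, dt=0.1, omega=0.5, bias=0.05, visual_gain=0.0):
    """Integrate head direction from a biased angular-velocity signal.

    The bias term models the inaccuracy of idiothetic (self-motion)
    input; visual_gain > 0 nudges the estimate toward an allothetic
    heading each step, standing in for the learned visual prediction
    (hypothetical stand-in, not the paper's spiking implementation).
    Returns the absolute heading error (radians) after `steps` updates.
    """
    true_hd, est_hd = 0.0, 0.0
    for _ in range(steps):
        true_hd = wrap(true_hd + omega * dt)         # actual head turn
        est_hd = wrap(est_hd + (omega + bias) * dt)  # drifting path integration
        est_hd = wrap(est_hd + visual_gain * wrap(true_hd - est_hd))  # correction
    return abs(wrap(true_hd - est_hd))

drift_only = simulate(visual_gain=0.0)  # error grows linearly with the bias
corrected = simulate(visual_gain=0.2)   # error settles near bias*dt*(1-g)/g
```

With these numbers the uncorrected error accumulates to about 1 rad, while the corrected estimate settles to a small steady-state error, illustrating why even an imprecise allothetic signal is enough to bound drift. In the paper the corrective signal instead comes from discriminative or generative predictions over natural scenes and acts on a spiking attractor network.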

