Article

A Spiking Neural Network Model of Rodent Head Direction Calibrated With Landmark Free Learning

Journal

FRONTIERS IN NEUROROBOTICS
Volume 16, Issue -, Pages -

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnbot.2022.867019

Keywords

spiking neural network; pyNEST; head direction; predictive coding; localization; continuous attractor

Funding

  1. European Union [945539]


Abstract

Maintaining a stable estimate of head direction requires both self-motion (idiothetic) information and environmental (allothetic) anchoring. In unfamiliar or dark environments, idiothetic drive can maintain a rough estimate of heading but is subject to inaccuracy; visual information is therefore required to stabilize the head direction estimate. When learning to associate visual scenes with head angle, animals do not have access to the 'ground truth' of their head direction and must rely on imprecise, egocentrically derived head direction estimates. We use both discriminative and generative methods of visual processing to learn these associations without extracting explicit landmarks from a natural visual scene, finding all are sufficiently capable of providing a corrective signal. Further, we present a spiking continuous attractor model of head direction (SNN) which, when driven by idiothetic input, is subject to drift. We show that head direction predictions made by the chosen model-free visual learning algorithms can correct for this drift, even when trained on a small set of estimated head angles self-generated by the SNN. We validate this model against experimental work by reproducing cue rotation experiments that demonstrate visual control of the head direction signal.
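The drift-and-correction loop described in the abstract can be illustrated with a minimal rate-based sketch in NumPy: a ring of head direction cells holds a bump of activity, a systematically biased velocity signal makes the decoded heading drift, and blending in a bump at the visually predicted heading pulls the estimate back. This is an illustrative toy, not the authors' pyNEST spiking model; the velocity bias, von Mises bump, and 50/50 blending weight are assumptions made for the sketch.

```python
import numpy as np

# Ring of N "head direction cells" with evenly spaced preferred angles.
N = 120
angles = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

def bump(center, kappa=8.0):
    """Normalised von Mises activity profile centred on `center`."""
    a = np.exp(kappa * np.cos(angles - center))
    return a / a.sum()

def decode(activity):
    """Population-vector decoding of the bump centre, in (-pi, pi]."""
    return np.angle(np.sum(activity * np.exp(1j * angles)))

def wrap(x):
    """Signed angular difference wrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * x))

v, bias, steps = 0.05, 0.002, 200  # true angular velocity, idiothetic bias
true_heading = 0.0
est = bump(0.0)
for _ in range(steps):
    true_heading = (true_heading + v) % (2 * np.pi)
    # Path integration with a systematic velocity error: the bump drifts.
    est = bump(decode(est) + v + bias)

drift = wrap(decode(est) - true_heading)  # accumulates to ~steps * bias

# Visual correction: blend in a bump at the visually predicted heading
# (assumed accurate here), pulling the estimate back toward the truth.
visual = bump(true_heading)
corrected = 0.5 * est + 0.5 * visual
err_after = wrap(decode(corrected) - true_heading)
```

With equal blending weights the population vectors of the two identical-shaped bumps average, so one correction step roughly halves the accumulated error; in the paper the corrective bump would come from the model-free visual predictions, not from ground truth.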

