Article

Linking global top-down views to first-person views in the brain

Publisher

NATL ACAD SCIENCES
DOI: 10.1073/pnas.2202024119

Keywords

cognitive map; head direction cells; place cells; robotics; variational autoencoders

Funding

  1. Air Force Office of Scientific Research [FA9550-19-1-0306]
  2. National Science Foundation, Division of Information & Intelligent Systems, Robust Intelligence [1813785]
  3. National Science Foundation, Division of Information & Intelligent Systems, Neural and Cognitive Systems Foundations Award [2024633]


Humans and animals have the ability to translate their position from one spatial frame of reference to another, and seamlessly switch between top-down and first-person views. The medial temporal lobe and other cortical regions are found to contribute to this function. By using variational autoencoders to reconstruct views, researchers gain insights into how the neural system carries out these computations.
Humans and other animals have a remarkable capacity to translate their position from one spatial frame of reference to another. The ability to seamlessly move between top-down and first-person views is important for navigation, memory formation, and other cognitive tasks. Evidence suggests that the medial temporal lobe and other cortical regions contribute to this function. To understand how a neural system might carry out these computations, we used variational autoencoders (VAEs) to reconstruct the first-person view from the top-down view of a robot simulation, and vice versa. Many latent variables in the VAEs had similar responses to those seen in neuron recordings, including location-specific activity, head direction tuning, and encoding of distance to local objects. Place-specific responses were prominent when reconstructing a first-person view from a top-down view, but head direction-specific responses were prominent when reconstructing a top-down view from a first-person view. In both cases, the model could recover from perturbations not by retraining but through remapping. These results could advance our understanding of how brain regions support viewpoint linkages and transformations.
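To make the cross-view reconstruction setup concrete, below is a minimal sketch of a conditional VAE that encodes a top-down view into a latent code and decodes a first-person view from it. This is not the authors' implementation: the 64x64 resolution, layer widths, class and function names, and the use of PyTorch are all illustrative assumptions; the reverse (first-person to top-down) mapping would be trained symmetrically with the roles of the two views swapped.

```python
# Minimal cross-view VAE sketch (assumed architecture, not the paper's model):
# encode a top-down view into a latent Gaussian, decode a first-person view.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        # Encoder: top-down view (3 x 64 x 64) -> flattened feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32 x 32 x 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 64 x 16 x 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 128 x 8 x 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # Decoder: latent code -> first-person view (3 x 64 x 64)
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, top_down):
        h = self.encoder(top_down)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients through mu, logvar
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        first_person = self.decoder(self.fc_dec(z).view(-1, 128, 8, 8))
        return first_person, mu, logvar

def vae_loss(recon, target, mu, logvar, beta=1.0):
    # Reconstruction error against the *other* view plus KL to the unit Gaussian prior
    recon_err = F.mse_loss(recon, target, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + beta * kl

# Usage on a dummy batch of paired views from a simulated agent
model = CrossViewVAE()
top_down = torch.rand(8, 3, 64, 64)
first_person = torch.rand(8, 3, 64, 64)
recon, mu, logvar = model(top_down)
loss = vae_loss(recon, first_person, mu, logvar)
loss.backward()
```

In such a setup, the latent variables are the quantities one would inspect for place-like or head direction-like tuning as the simulated agent moves through the environment.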

