Article

Neural representational geometry underlies few-shot concept learning

Publisher

NATL ACAD SCIENCES
DOI: 10.1073/pnas.2200800119

Keywords

few-shot learning; neural networks; ventral visual stream; population coding

Funding

  1. Gatsby Charitable Foundation
  2. Swartz Foundation
  3. NIH [1U19NS104653]
  4. Stanford Graduate Fellowship
  5. Simons Foundation
  6. James S. McDonnell Foundation
  7. NSF CAREER award

Abstract

This article proposes a simple, biologically feasible neural mechanism for learning new concepts from few examples. It posits that natural concepts correspond to compact manifolds of neural activity in higher-order sensory areas, and that a downstream neuron can learn to discriminate new concepts through a simple plasticity rule. Numerical simulations demonstrate the high accuracy of this mechanism, and a mathematical theory is developed to predict few-shot learning performance.
Understanding the neural basis of the remarkable human cognitive capacity to learn novel concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. We further posit that a single plastic downstream readout neuron learns to discriminate new concepts based on few examples using a simple plasticity rule. We demonstrate the computational power of our proposal by showing that it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations and can even learn novel visual concepts specified only through linguistic descriptors. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to predictions about behavioral outcomes by delineating several fundamental and measurable geometric properties of neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our numerical simulations. This theory reveals, for instance, that high-dimensional manifolds enhance the ability to learn new concepts from few examples. Intriguingly, we observe striking mismatches between the geometry of manifolds in the primate visual pathway and in trained DNNs. We discuss testable predictions of our theory for psychophysics and neurophysiological experiments.
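The proposed mechanism — a single downstream readout neuron trained on a few examples per concept — can be sketched as a prototype-based linear classifier. The code below is a minimal illustration, not the authors' implementation: it substitutes synthetic Gaussian clouds for the IT-cortex or DNN concept manifolds, and all function names and parameters (`make_manifold`, `noise`, `dim`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_manifold(center, n_samples, noise=0.5, dim=200):
    """Synthetic stand-in for a concept manifold: noisy points around a center
    in a high-dimensional firing-rate space."""
    return center + noise * rng.standard_normal((n_samples, dim))

def few_shot_readout(train_a, train_b):
    """One-layer readout learned from few examples: the weight vector points
    from the prototype (mean) of concept B to the prototype of concept A."""
    proto_a, proto_b = train_a.mean(axis=0), train_b.mean(axis=0)
    w = proto_a - proto_b                     # readout weights
    b = -w @ (proto_a + proto_b) / 2          # bias: decision boundary at the midpoint
    return w, b

def accuracy(w, b, test_a, test_b):
    """Fraction of held-out points classified to the correct concept."""
    correct = (test_a @ w + b > 0).sum() + (test_b @ w + b < 0).sum()
    return correct / (len(test_a) + len(test_b))

dim = 200
center_a = rng.standard_normal(dim)
center_b = rng.standard_normal(dim)

# 5-shot training: five example representations per concept
w, b = few_shot_readout(make_manifold(center_a, 5), make_manifold(center_b, 5))
acc = accuracy(w, b, make_manifold(center_a, 500), make_manifold(center_b, 500))
print(f"5-shot accuracy: {acc:.3f}")
```

In this toy setting the two clouds are well separated relative to their radii, so even a 5-shot prototype readout generalizes almost perfectly; shrinking `dim` or raising `noise` degrades accuracy, mirroring the paper's claim that manifold geometry governs few-shot performance.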
