4.8 Article

Neural representational geometry underlies few-shot concept learning

Publisher

National Academy of Sciences
DOI: 10.1073/pnas.2200800119

Keywords

few-shot learning; neural networks; ventral visual stream; population coding

Funding

  1. Gatsby Charitable Foundation
  2. Swartz Foundation
  3. NIH [1U19NS104653]
  4. Stanford Graduate Fellowship
  5. Simons Foundation
  6. James S. McDonnell Foundation
  7. NSF CAREER Award

Abstract

This article proposes a simple, biologically plausible neural mechanism for learning new concepts from just a few examples. It posits that concepts learnable from few examples correspond to tightly circumscribed manifolds of neural activity in higher-order sensory areas, and that a single downstream readout neuron can learn to discriminate new concepts through a simple plasticity rule. Numerical simulations demonstrate the high accuracy of this mechanism, and a mathematical theory is developed that predicts few-shot learning performance from the geometry of the neural representations.
Understanding the neural basis of the remarkable human cognitive capacity to learn novel concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. We further posit that a single plastic downstream readout neuron learns to discriminate new concepts based on few examples using a simple plasticity rule. We demonstrate the computational power of our proposal by showing that it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations and can even learn novel visual concepts specified only through linguistic descriptors. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to predictions about behavioral outcomes by delineating several fundamental and measurable geometric properties of neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our numerical simulations. This theory reveals, for instance, that high-dimensional manifolds enhance the ability to learn new concepts from few examples. Intriguingly, we observe striking mismatches between the geometry of manifolds in the primate visual pathway and in trained DNNs. We discuss testable predictions of our theory for psychophysics and neurophysiological experiments.
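The core mechanism described in the abstract lends itself to a compact illustration. Below is a minimal sketch, not the authors' code: two concept "manifolds" are modeled as Gaussian point clouds in a firing-rate space (an illustrative assumption standing in for IT-cortex or DNN representations), and a single readout neuron is trained by a simple Hebbian-style rule in which each example moves the weights by (label x activity), which for balanced classes reduces to the difference of class prototypes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for higher-order sensory representations: each concept is a
# Gaussian point cloud ("manifold") in an N-dimensional firing-rate space.
# All parameters here are illustrative assumptions, not the paper's data.
N, noise, n_test = 200, 4.0, 2000
center_a = rng.normal(size=N)
center_b = rng.normal(size=N)

def sample(center, n):
    return center + noise * rng.normal(size=(n, N))

for m in (1, 2, 5, 10):  # number of training examples per concept
    X_a, X_b = sample(center_a, m), sample(center_b, m)

    # Single plastic readout neuron, trained by a simple Hebbian-style rule:
    # each example moves the weights by (label * activity). For balanced
    # classes this reduces to the difference of class prototypes.
    proto_a, proto_b = X_a.mean(axis=0), X_b.mean(axis=0)
    w = proto_a - proto_b
    b = -w @ (proto_a + proto_b) / 2.0  # threshold midway between prototypes

    # Few-shot generalization on held-out samples from both manifolds.
    T_a, T_b = sample(center_a, n_test), sample(center_b, n_test)
    acc = 0.5 * ((T_a @ w + b > 0).mean() + (T_b @ w + b < 0).mean())
    print(f"m = {m:2d} shots: accuracy = {acc:.3f}")
```

Running this shows accuracy well above chance even at m = 1 and improving with m, mirroring the qualitative behavior the abstract describes; the absolute numbers depend entirely on the assumed toy geometry.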
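The abstract further claims that measurable geometric properties of the manifolds, such as their effective dimensionality, predict few-shot performance, with higher-dimensional manifolds learned more easily. One standard estimator of effective dimensionality that can be computed directly from population recordings or DNN activations is the participation ratio; the sketch below uses it on two assumed toy manifolds, and is an illustrative choice rather than a reproduction of the paper's analysis pipeline.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality D = (sum_i lam_i)^2 / sum_i lam_i^2, where
    lam_i are eigenvalues of the covariance of X (rows = exemplars of one
    concept, columns = neurons)."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values of centered data
    lam = s**2 / (Xc.shape[0] - 1)           # covariance eigenvalues
    return lam.sum() ** 2 / (lam**2).sum()

rng = np.random.default_rng(1)
n_exemplars, n_neurons = 1000, 500

# Variance confined to 5 random directions -> D is at most 5.
low_d = rng.normal(size=(n_exemplars, 5)) @ rng.normal(size=(5, n_neurons))
# Isotropic variance -> D is far larger (finite sampling keeps it below 500).
high_d = rng.normal(size=(n_exemplars, n_neurons))

print(f"low-D manifold:  D = {participation_ratio(low_d):.1f}")
print(f"high-D manifold: D = {participation_ratio(high_d):.1f}")
```

Computing the eigenvalues via the singular values of the centered data avoids forming the full neuron-by-neuron covariance matrix explicitly.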

Authors

Ben Sorscher, Surya Ganguli, and Haim Sompolinsky
