Article

The Successor Representation: Its Computational Logic and Neural Substrates

Journal

Journal of Neuroscience
Volume 38, Issue 33, Pages 7193-7200

Publisher

Society for Neuroscience
DOI: 10.1523/JNEUROSCI.0151-18.2018

Keywords

cognitive map; dopamine; hippocampus; reinforcement learning; reward

Funding

  1. National Institutes of Health [CRCNS R01-1207833]
  2. Office of Naval Research [N000141712984]
  3. Alfred P. Sloan Research Fellowship

Abstract

Reinforcement learning is the process by which an agent learns to predict long-term future reward. We now understand a great deal about the brain's reinforcement learning algorithms, but we know considerably less about the representations of states and actions over which these algorithms operate. A useful starting point is asking what kinds of representations we would want the brain to have, given the constraints on its computational architecture. Following this logic leads to the idea of the successor representation, which encodes states of the environment in terms of their predictive relationships with other states. Recent behavioral and neural studies have provided evidence for the successor representation, and computational studies have explored ways to extend the original idea. This paper reviews progress on these fronts, organizing them within a broader framework for understanding how the brain negotiates tradeoffs between efficiency and flexibility for reinforcement learning.
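The abstract's core idea can be made concrete with a small worked example. Below is a minimal sketch of the successor representation (SR) for a hypothetical 3-state chain MDP (the environment, states, and reward vector are illustrative assumptions, not an example from the paper). Under a fixed policy with transition matrix T, the SR is M = Σ_k γ^k T^k, the discounted expected future occupancy of each state s' starting from each state s. Values then factor as V = M r, which is what gives the SR its efficiency/flexibility tradeoff: a change in reward updates V immediately through the same M, without relearning the transition structure.

```python
GAMMA = 0.9  # discount factor (assumed value for illustration)

# T[s][s2]: probability of moving from state s to s2 under the policy.
# A simple chain 0 -> 1 -> 2, with state 2 absorbing.
T = [
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.0, 0.0, 1.0],
]

def successor_matrix(T, gamma, n_iters=500):
    """Approximate M = (I - gamma*T)^(-1) by iterating the
    fixed-point recursion M <- I + gamma * T @ M (pure Python,
    no external dependencies)."""
    n = len(T)
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(n_iters):
        M = [
            [
                (1.0 if i == j else 0.0)
                + gamma * sum(T[i][k] * M[k][j] for k in range(n))
                for j in range(n)
            ]
            for i in range(n)
        ]
    return M

M = successor_matrix(T, GAMMA)

# One-step rewards: reward of 1 for occupying state 2.
r = [0.0, 0.0, 1.0]

# Value of each state factors through the SR: V = M r.
V = [sum(M[s][s2] * r[s2] for s2 in range(3)) for s in range(3)]
# With gamma = 0.9: V ≈ [8.1, 9.0, 10.0]
```

Replacing r (say, rewarding state 1 instead) yields new values by a single matrix-vector product with the unchanged M, illustrating the reward-revaluation flexibility discussed in the review.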
