Article

The successor representation in human reinforcement learning

Journal

NATURE HUMAN BEHAVIOUR
Volume 1, Issue 9, Pages 680-692

Publisher

NATURE PUBLISHING GROUP
DOI: 10.1038/s41562-017-0180-8

Funding

  1. National Institutes of Health Collaborative Research in Computational Neuroscience award [1R01MH109177]
  2. National Institutes of Health under R. L. Kirschstein National Research Service Award [1F31MH110111-01]
  3. John Templeton Foundation

Abstract

Theories of reward learning in neuroscience have focused on two families of algorithms thought to capture deliberative versus habitual choice. 'Model-based' algorithms compute the value of candidate actions from scratch, whereas 'model-free' algorithms make choice more efficient but less flexible by storing pre-computed action values. We examine an intermediate algorithmic family, the successor representation, which balances flexibility and efficiency by storing partially computed action values: predictions about future events. These pre-computation strategies differ in how they update their choices following changes in a task. The successor representation's reliance on stored predictions about future states predicts a unique signature of insensitivity to changes in the task's sequence of events, but flexible adjustment following changes to rewards. We provide evidence for such differential sensitivity in two behavioural studies with humans. These results suggest that the successor representation is a computational substrate for semi-flexible choice in humans, introducing a subtler, more cognitive notion of habit.
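The abstract's central idea, caching predictions about future states rather than raw action values, can be illustrated with a minimal sketch. The code below is not the paper's implementation; it is a toy temporal-difference learner of the successor matrix on a hypothetical three-state chain, with illustrative parameter values, showing why reward changes are absorbed immediately while transition changes would require relearning the cached predictions.

```python
import numpy as np

# Toy sketch of successor-representation (SR) learning on a small
# deterministic chain task: s0 -> s1 -> s2 (terminal). All names and
# parameter values here are illustrative, not taken from the paper.

n_states = 3
alpha = 0.1    # SR learning rate (illustrative)
gamma = 0.95   # discount factor (illustrative)

# Successor matrix: M[s, s'] estimates the expected discounted number
# of future visits to s' when starting from s.
M = np.eye(n_states)

def sr_td_update(M, s, s_next, alpha, gamma):
    """One temporal-difference update of the SR after observing s -> s_next."""
    onehot = np.eye(len(M))[s]
    target = onehot + gamma * M[s_next]
    M[s] += alpha * (target - M[s])
    return M

# Learn the SR from repeated traversals of the chain.
for _ in range(1000):
    for s, s_next in [(0, 1), (1, 2)]:
        M = sr_td_update(M, s, s_next, alpha, gamma)

# Values combine the cached predictions (M) with the current rewards (R):
R = np.array([0.0, 0.0, 1.0])
V = M @ R

# A change in reward is absorbed immediately (flexibility): scaling the
# reward rescales the values without any relearning of M ...
R_new = np.array([0.0, 0.0, 10.0])
V_new = M @ R_new
# ... whereas a change in the transition structure would leave M (and
# hence V) stale until the SR itself is relearned -- the signature of
# insensitivity to transition changes described in the abstract.
```

Because values are a linear readout of the cached successor matrix, revaluing a reward updates behaviour in one step, while rewiring the state sequence does not: exactly the differential sensitivity the two behavioural studies test.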
