Article

States versus Rewards: Dissociable Neural Prediction Error Signals Underlying Model-Based and Model-Free Reinforcement Learning

Journal

Neuron
Volume 66, Issue 4, Pages 585-595

Publisher

Cell Press
DOI: 10.1016/j.neuron.2010.04.016

Funding

  1. Akademie der Naturforscher Leopoldina LPD [9901/8-140]
  2. National Institute of Mental Health
  3. Gordon and Betty Moore Foundation
  4. Caltech Brain Imaging Center
  5. Gatsby Charitable Foundation
  6. National Science Foundation, Directorate for Biological Sciences, Division of Biological Infrastructure [0922982]

Abstract

Reinforcement learning (RL) uses sequential experience with situations (states) and outcomes to assess actions. Whereas model-free RL uses this experience directly, in the form of a reward prediction error (RPE), model-based RL uses it indirectly, building a model of the state transition and outcome structure of the environment, and evaluating actions by searching this model. A state prediction error (SPE) plays a central role, reporting discrepancies between the current model and the observed state transitions. Using functional magnetic resonance imaging in humans solving a probabilistic Markov decision task, we found the neural signature of an SPE in the intraparietal sulcus and lateral prefrontal cortex, in addition to the previously well-characterized RPE in the ventral striatum. This finding supports the existence of two unique forms of learning signal in humans, which may form the basis of distinct computational strategies for guiding behavior.
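The two error signals contrasted in the abstract can be illustrated with a small tabular sketch. The code below is a hedged, illustrative implementation, not the authors' fitted computational model: a SARSA-style model-free learner whose reward prediction error (RPE) updates action values, and a model-based learner whose state prediction error (SPE) updates a learned state-transition model that can then be searched to evaluate actions. All names and parameters (n_states, alpha, eta, gamma, the helper functions) are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative tabular setting (assumed sizes and learning rates).
n_states, n_actions = 5, 2
gamma = 0.9   # discount factor
alpha = 0.1   # model-free learning rate
eta = 0.1     # model-based (transition/outcome) learning rate

Q = np.zeros((n_states, n_actions))                           # model-free action values
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)  # learned transition model
R = np.zeros(n_states)                                        # learned outcome estimates


def model_free_update(s, a, r, s_next, a_next):
    """SARSA-style update: the RPE compares obtained reward (plus the value
    of the next state-action pair) against the current estimate."""
    rpe = r + gamma * Q[s_next, a_next] - Q[s, a]
    Q[s, a] += alpha * rpe
    return rpe


def model_based_update(s, a, s_next, r):
    """Transition-model update: the SPE reports how surprising the observed
    state transition was under the current model."""
    spe = 1.0 - T[s, a, s_next]          # discrepancy for the observed successor
    T[s, a, s_next] += eta * spe
    T[s, a, :] /= T[s, a, :].sum()       # keep the row a probability distribution
    R[s_next] += eta * (r - R[s_next])   # running estimate of the outcome in s_next
    return spe


def model_based_values():
    """Evaluate actions by a one-step search through the learned model."""
    return T @ R                          # shape (n_states, n_actions)


# Example: one observed transition s=0, a=1 -> s'=3 with reward r=1
rpe = model_free_update(0, 1, 1.0, 3, 0)
spe = model_based_update(0, 1, 3, 1.0)
```

In this sketch the RPE is driven by rewards while the SPE is driven purely by the observed state transition, mirroring the dissociation the study reports between ventral striatal (RPE) and intraparietal/lateral prefrontal (SPE) signals.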

