Review

The ubiquity of model-based reinforcement learning

Journal

Current Opinion in Neurobiology
Volume 22, Issue 6, Pages 1075-1081

Publisher

Current Biology Ltd
DOI: 10.1016/j.conb.2012.08.003

Funding

  1. McKnight Foundation
  2. McDonnell Foundation
  3. NIMH [1R01MH087882-01]
  4. NINDS [1R01NS078784-01]

Abstract

The reward prediction error (RPE) theory of dopamine (DA) function has enjoyed great success in the neuroscience of learning and decision-making. This theory is derived from model-free reinforcement learning (RL), in which choices are made simply on the basis of previously realized rewards. Recently, attention has turned to correlates of more flexible, albeit computationally complex, model-based methods in the brain. These methods are distinguished from model-free learning by their evaluation of candidate actions using expected future outcomes according to a world model. Puzzlingly, signatures of these computations seem to be pervasive in the very same regions previously thought to support model-free learning. Here, we review recent behavioral and neural evidence about these two systems, in an attempt to reconcile their enigmatic cohabitation in the brain.
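
To make the distinction in the abstract concrete, here is a minimal sketch (not code from the paper) contrasting the two algorithm families on a small tabular MDP: a model-free temporal-difference update driven by a reward prediction error, and a model-based evaluation that computes action values by iterating lookahead through a world model (plain value iteration, used as a simple stand-in). All names (`model_free_update`, `model_based_values`, `q`, `T`, `R`) and the numbers in the demo are illustrative assumptions.

```python
import numpy as np

def model_free_update(q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One TD/Q-learning update: adjust a cached value using only the
    realized reward, via a reward prediction error (RPE) -- the quantity
    the dopamine literature associates with phasic DA signals."""
    rpe = r + gamma * np.max(q[s_next]) - q[s, a]  # reward prediction error
    q[s, a] += alpha * rpe                         # correct the cached value
    return q, rpe

def model_based_values(transition, reward, gamma=0.9, n_iter=50):
    """Evaluate candidate actions from expected future outcomes under a
    world model: transition[s, a, s2] = P(s2 | s, a), reward[s, a] = E[r].
    Repeated lookahead (value iteration) rather than cached values."""
    n_states, n_actions = reward.shape
    q = np.zeros((n_states, n_actions))
    for _ in range(n_iter):
        v = q.max(axis=1)                    # best attainable value per state
        q = reward + gamma * transition @ v  # one step of model lookahead
    return q

# Tiny 2-state, 2-action demo with arbitrary, purely illustrative numbers.
T = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.5, 0.5], [0.8, 0.2]]])    # shape: (state, action, next state)
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])                  # expected reward per (state, action)
q_mb = model_based_values(T, R)             # planned values from the model
q_mf, rpe = model_free_update(np.zeros((2, 2)), s=0, a=0, r=1.0, s_next=1)
```

The contrast illustrates the tradeoff the abstract alludes to: the model-free routine is cheap, needing only one cached update per experienced transition, but inflexible when the environment changes; the model-based routine adapts immediately to a changed model but requires knowing (or learning) the full transition and reward structure and performing repeated lookahead at decision time.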
