Article

Rethinking dopamine as generalized prediction error

Journal

Proceedings of the Royal Society B: Biological Sciences

Publisher

Royal Society
DOI: 10.1098/rspb.2018.1645

Keywords

reinforcement learning; successor representation; temporal difference learning

Funding

  1. National Institutes of Health [CRCNS 1R01MH109177]
  2. Intramural Research Program at NIDA [ZIA-DA000587]

Abstract

Midbrain dopamine neurons are commonly thought to report a reward prediction error (RPE), as hypothesized by reinforcement learning (RL) theory. While this theory has been highly successful, several lines of evidence suggest that dopamine activity also encodes sensory prediction errors unrelated to reward. Here, we develop a new theory of dopamine function that embraces a broader conceptualization of prediction errors. By signalling errors in both sensory and reward predictions, dopamine supports a form of RL that lies between model-based and model-free algorithms. This account remains consistent with current canon regarding the correspondence between dopamine transients and RPEs, while also accounting for new data suggesting a role for these signals in phenomena such as sensory preconditioning and identity unblocking, which ostensibly draw upon knowledge beyond reward predictions.
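The intermediate algorithm the abstract alludes to (see the keywords) is the successor representation (SR), in which temporal-difference errors are computed over predicted future state occupancies rather than over reward alone. A minimal tabular sketch, with an illustrative 3-state chain and parameter values that are assumptions, not taken from the paper:

```python
import numpy as np

# Hedged sketch of tabular successor-representation TD learning.
# The environment (a 3-state chain 0 -> 1 -> 2, reward only in the
# final state) and the learning-rate/discount values are illustrative.

n_states = 3
gamma = 0.9                    # discount factor (assumed)
alpha = 0.1                    # learning rate (assumed)
M = np.eye(n_states)           # successor matrix, initialized to identity
w = np.array([0.0, 0.0, 1.0])  # reward weights: reward only in state 2

# one pass through the chain: transitions 0 -> 1 and 1 -> 2
for s, s_next in [(0, 1), (1, 2)]:
    onehot = np.eye(n_states)[s]
    # vector-valued (sensory) prediction error over future state occupancies
    delta = onehot + gamma * M[s_next] - M[s]
    M[s] += alpha * delta
    # the classic scalar RPE is recovered by projecting onto reward weights
    rpe = delta @ w

V = M @ w  # state values follow from the SR times the reward weights
```

Because values factor as `V = M @ w`, the scalar RPE of model-free TD learning is one projection of the richer vector error `delta`, which is the sense in which the SR sits between model-free and model-based RL.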

Authors

Matthew P. H. Gardner, Geoffrey Schoenbaum, Samuel J. Gershman
