Journal
CURRENT OPINION IN BEHAVIORAL SCIENCES
Volume 38, Pages 74-82
Publisher
ELSEVIER
DOI: 10.1016/j.cobeha.2020.10.010
Funding
- Wellcome Trust [214314/Z/18/Z, 202831/Z/16/Z]
The involvement of dopamine in model-based reinforcement learning could be explained by its role in carrying prediction errors that update successor representations, or by the combination of two well-established aspects of dopaminergic activity: reward prediction errors and surprise signals.
Experiments have implicated dopamine in model-based reinforcement learning (RL). These findings are unexpected because dopamine is thought to encode a reward prediction error (RPE), the key teaching signal in model-free RL. Here we examine two possible accounts of dopamine's involvement in model-based RL: first, that dopamine neurons carry a prediction error used to update a type of predictive state representation called a successor representation; second, that two well-established aspects of dopaminergic activity, RPEs and surprise signals, can together explain dopamine's involvement in model-based RL.
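The first account above centers on a successor representation (SR) being updated by a prediction error. A minimal sketch of how such an update works in a tabular setting is below; the state space, learning rate, and discount factor are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Illustrative tabular SR learner (assumed toy MDP, not from the paper).
n_states = 4
gamma, alpha = 0.9, 0.1   # discount factor and learning rate (assumed values)

M = np.zeros((n_states, n_states))  # SR matrix: expected discounted future state occupancy
w = np.zeros(n_states)              # learned per-state reward weights

def sr_update(s, s_next, reward):
    """One transition's worth of SR learning driven by prediction errors."""
    # SR prediction error: current occupancy (one-hot) plus discounted
    # successor row of the next state, minus the current estimate.
    one_hot = np.eye(n_states)[s]
    delta_M = one_hot + gamma * M[s_next] - M[s]
    M[s] += alpha * delta_M
    # Reward prediction error updates the reward weights.
    delta_r = reward - w[s]
    w[s] += alpha * delta_r

def value(s):
    """State value factorizes as SR row times reward weights."""
    return M[s] @ w
```

Because value is the product of the SR and the reward weights, updating either factor with its own prediction error lets values adapt when the reward structure changes without relearning the transition structure, which is why the SR is often described as intermediate between model-free and model-based RL.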