Journal
CURRENT OPINION IN BEHAVIORAL SCIENCES
Volume 38, Pages 74-82
Publisher
ELSEVIER
DOI: 10.1016/j.cobeha.2020.10.010
Keywords
-
Funding
- Wellcome Trust [214314/Z/18/Z, 202831/Z/16/Z]
The involvement of dopamine in model-based reinforcement learning could be explained by its role in carrying prediction errors to update successor representations, or by the combination of well-established aspects of dopaminergic activity, reward prediction errors, and surprise signals.
Experiments have implicated dopamine in model-based reinforcement learning (RL). These findings are unexpected because dopamine is thought to encode a reward prediction error (RPE), the key teaching signal in model-free RL. Here we examine two possible accounts of dopamine's involvement in model-based RL: the first, that dopamine neurons carry a prediction error used to update a type of predictive state representation called a successor representation; the second, that two well-established aspects of dopaminergic activity, RPEs and surprise signals, can together explain dopamine's involvement in model-based RL.
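The first account hinges on the idea that successor representations (SRs) can be learned with a TD-style prediction error, structurally analogous to the RPE. The sketch below illustrates this mechanism in a tabular setting; all variable names, parameter values, and the toy environment are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.9):
    """One TD update of a tabular successor representation.

    M[s, j] estimates the expected discounted future occupancy of
    state j when starting from state s. The update is driven by a
    vector-valued prediction error, analogous to the scalar RPE
    used in model-free TD learning.
    """
    n = M.shape[0]
    onehot = np.eye(n)[s]
    # SR prediction error: current occupancy plus the discounted
    # successor prediction at the next state, minus the prediction.
    delta = onehot + gamma * M[s_next] - M[s]
    M[s] = M[s] + alpha * delta
    return M, delta

# Toy 3-state chain 0 -> 1 -> 2, with state 2 absorbing.
M = np.zeros((3, 3))
for _ in range(200):
    for s, s_next in [(0, 1), (1, 2), (2, 2)]:
        M, _ = sr_td_update(M, s, s_next)

# Values follow as a linear readout of the SR with a reward vector,
# so changing the rewards re-values states without relearning M.
w = np.array([0.0, 0.0, 1.0])  # reward only in state 2
V = M @ w                      # V rises toward the rewarded state
```

Because the learned `M` is reward-independent, this readout step is what gives SR learning its model-based flavor: revaluation after a reward change requires only a new `w`, not new experience.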