Article

What is dopamine doing in model-based reinforcement learning?

Journal

CURRENT OPINION IN BEHAVIORAL SCIENCES
Volume 38, Issue -, Pages 74-82

Publisher

ELSEVIER
DOI: 10.1016/j.cobeha.2020.10.010

Keywords

-

Funding

  Wellcome Trust [214314/Z/18/Z, 202831/Z/16/Z]

Abstract

The involvement of dopamine in model-based reinforcement learning could be explained by its role in carrying prediction errors that update successor representations, or by the combination of two well-established aspects of dopaminergic activity: reward prediction errors and surprise signals.
Experiments have implicated dopamine in model-based reinforcement learning (RL). These findings are unexpected because dopamine is thought to encode a reward prediction error (RPE), which is the key teaching signal in model-free RL. Here we examine two possible accounts of dopamine's involvement in model-based RL: the first, that dopamine neurons carry a prediction error used to update a type of predictive state representation called a successor representation; the second, that two well-established aspects of dopaminergic activity, RPEs and surprise signals, can together explain dopamine's involvement in model-based RL.
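
The successor-representation account in the abstract turns on a different kind of teaching signal: a vector-valued state prediction error that updates expected future state occupancies, from which values can be read out as V = Mw, in contrast to the scalar RPE of model-free temporal-difference learning. The sketch below is not taken from the paper; it uses generic textbook tabular update rules, and the state space, learning rate, and discount factor are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative sketch only: tabular successor-representation (SR) learning
# driven by a vector-valued prediction error, alongside the scalar reward
# prediction error (RPE) of model-free TD(0). Sizes and rates are arbitrary.

n_states = 5
gamma, alpha = 0.9, 0.1

M = np.eye(n_states)      # SR matrix: expected discounted future occupancy of each state
w = np.zeros(n_states)    # per-state reward estimates
V = np.zeros(n_states)    # model-free state values (used for the classic scalar RPE)

def learn_transition(s, s_next, r):
    """Apply both kinds of update for one observed transition s -> s_next with reward r."""
    # Account 1: a vector-valued prediction error updates the SR row for state s.
    onehot_s = np.eye(n_states)[s]
    sr_error = onehot_s + gamma * M[s_next] - M[s]   # successor prediction error
    M[s] += alpha * sr_error
    w[s_next] += alpha * (r - w[s_next])             # learn where reward is delivered
    v_from_sr = M @ w                                # SR-based values: V = M w

    # Account 2 ingredient: the familiar scalar RPE of model-free TD(0).
    rpe = r + gamma * V[s_next] - V[s]
    V[s] += alpha * rpe
    return sr_error, rpe, v_from_sr

# Example: a single transition from state 0 to state 1 that delivers reward 1.
sr_error, rpe, v_from_sr = learn_transition(0, 1, 1.0)
```

Under the first account, a dopaminergic signal would resemble the vector `sr_error`; under the second, the scalar `rpe` combined with a separate surprise signal would do the work. The variable names are labels for this sketch, not terms defined by the paper.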
