4.5 Review

Multiplexing signals in reinforcement learning with internal models and dopamine

Journal

CURRENT OPINION IN NEUROBIOLOGY
Volume 25, Issue -, Pages 123-129

Publisher

CURRENT BIOLOGY LTD
DOI: 10.1016/j.conb.2014.01.001

Keywords

-

Funding

  1. KAKENHI from the Ministry of Education, Culture, Sports, Science and Technology of Japan [24120523]
  2. Grants-in-Aid for Scientific Research [24120523, 26120732] Funding Source: KAKEN

Abstract

A fundamental challenge for computational and cognitive neuroscience is to understand how reward-based learning and decision-making occur and how accrued knowledge and internal models of the environment are incorporated into them. Remarkable progress has been made in the field, guided by the midbrain dopamine reward prediction error hypothesis and the underlying reinforcement learning framework, which does not involve internal models ('model-free'). Recent studies, however, have begun not only to address more complex decision-making processes that are integrated with model-free decision-making, but also to incorporate internal models of environmental reward structures and of the minds of other agents, including model-based reinforcement learning and the use of generalized prediction errors. Even dopamine, a classic model-free signal, may carry multiplexed signals that draw on model-based information and contribute to representational learning of reward structure.
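The abstract contrasts model-free reinforcement learning, in which cached values are updated by a reward prediction error (the signal classically attributed to midbrain dopamine), with model-based reinforcement learning, in which values are recomputed from an internal model of the environment's reward structure. The sketch below illustrates that distinction on a toy three-state task; the state space, rewards, and parameters are illustrative assumptions and are not taken from the review.

```python
import numpy as np

# Hypothetical three-state cyclic task used only for illustration;
# none of the names or numbers below come from the review.
n_states = 3
gamma = 0.9   # discount factor
alpha = 0.1   # model-free learning rate

# --- Model-free ('cached') values, updated by a dopamine-like RPE ---
V_mf = np.zeros(n_states)

def td_update(s, r, s_next):
    """One TD(0) step: delta = r + gamma*V(s') - V(s) is the reward prediction error."""
    delta = r + gamma * V_mf[s_next] - V_mf[s]
    V_mf[s] += alpha * delta
    return delta

# --- Model-based values, recomputed from an internal model (T, R) ---
T = np.array([[0.0, 1.0, 0.0],   # T[s, s']: state-transition model
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
R = np.array([0.0, 0.0, 1.0])    # expected immediate reward in each state

def model_based_values(T, R, n_sweeps=200):
    """Value iteration on the internal model: V <- R + gamma * T @ V."""
    V = np.zeros(len(R))
    for _ in range(n_sweeps):
        V = R + gamma * T @ V
    return V

if __name__ == "__main__":
    # Repeatedly traverse the chain 0 -> 1 -> 2 -> 0 with model-free updates.
    for _ in range(500):
        for s, s_next in [(0, 1), (1, 2), (2, 0)]:
            td_update(s, R[s], s_next)
    print("model-free V :", np.round(V_mf, 3))
    print("model-based V:", np.round(model_based_values(T, R), 3))
```

Both estimates converge to the same values here, but they are computed differently: the model-free update needs only the scalar prediction error, whereas the model-based computation requires explicit knowledge of T and R — the two kinds of information that, per the abstract, dopaminergic signals may multiplex.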
