Journal
NEUROCOMPUTING
Volume 32, Pages 679-684
Publisher
ELSEVIER SCIENCE BV
DOI: 10.1016/S0925-2312(00)00232-0
Keywords
dopamine; exponential discounting; temporal-difference learning
Recently there has been much interest in modeling the activity of primate midbrain dopamine neurons as signalling reward prediction error. But since the models are based on temporal-difference (TD) learning, they assume an exponential decline with time in the value of delayed reinforcers, an assumption long known to conflict with animal behavior. We show that a variant of TD learning that tracks variations in the average reward per timestep rather than cumulative discounted reward preserves the models' success at explaining neurophysiological data while significantly increasing their applicability to behavioral data. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
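The abstract contrasts standard TD learning, which discounts delayed reward exponentially, with an average-reward variant that instead subtracts a running estimate of the reward rate per timestep. As a rough illustration of that distinction (a minimal sketch, not the paper's actual model; the function names and parameter values are illustrative assumptions):

```python
def discounted_td_error(r, v_s, v_next, gamma=0.9):
    # Standard TD error: delta = r + gamma * V(s') - V(s).
    # gamma < 1 imposes an exponential decline in the value of
    # delayed reinforcers, the assumption the paper argues against.
    return r + gamma * v_next - v_s

def average_reward_td_error(r, v_s, v_next, rho):
    # Average-reward TD error: delta = r - rho + V(s') - V(s).
    # rho is a tracked estimate of the average reward per timestep;
    # no exponential discount factor appears.
    return r - rho + v_next - v_s

def update_rho(rho, r, beta=0.05):
    # rho itself is typically tracked by a simple running average
    # of observed rewards (beta is an assumed learning rate).
    return rho + beta * (r - rho)
```

Both error signals can serve as the prediction-error term that the dopamine models discussed in the abstract interpret neurally; the average-reward form simply replaces discounting with the reward-rate baseline `rho`.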