Article; Proceedings Paper

Behavioral considerations suggest an average reward TD model of the dopamine system

Journal

NEUROCOMPUTING
Volume 32, Pages 679-684

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/S0925-2312(00)00232-0

Keywords

dopamine; exponential discounting; temporal-difference learning


Recently there has been much interest in modeling the activity of primate midbrain dopamine neurons as signalling reward prediction error. But since the models are based on temporal-difference (TD) learning, they assume an exponential decline with time in the value of delayed reinforcers, an assumption long known to conflict with animal behavior. We show that a variant of TD learning that tracks variations in the average reward per timestep rather than cumulative discounted reward preserves the models' success at explaining neurophysiological data while significantly increasing their applicability to behavioral data. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
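The distinction the abstract draws is between the standard discounted TD prediction error, delta_t = r_t + gamma*V(s_{t+1}) - V(s_t), and an average-reward variant in which the discount factor is dropped and an estimate of the average reward per timestep is subtracted instead. Below is a minimal sketch of that contrast, not the authors' code; the parameter names (gamma, sigma), the running-average update for the reward rate, and the toy reward trace are illustrative assumptions.

```python
import numpy as np


def discounted_td_errors(rewards, values, gamma=0.98):
    """Standard TD error: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)."""
    deltas = []
    for t in range(len(rewards) - 1):
        deltas.append(rewards[t] + gamma * values[t + 1] - values[t])
    return np.array(deltas)


def average_reward_td_errors(rewards, values, sigma=0.05):
    """Average-reward TD error: delta_t = r_t - rho + V(s_{t+1}) - V(s_t).

    Instead of discounting future reward, rho tracks the average reward per
    timestep and is subtracted from each immediate reward (sigma is an
    assumed learning rate for that running average).
    """
    rho = 0.0
    deltas = []
    for t in range(len(rewards) - 1):
        delta = rewards[t] - rho + values[t + 1] - values[t]
        deltas.append(delta)
        rho += sigma * (rewards[t] - rho)  # update running estimate of average reward
    return np.array(deltas)


if __name__ == "__main__":
    # Toy trace: one unit of reward every 10 steps, with a placeholder value function.
    T = 100
    rewards = np.zeros(T)
    rewards[9::10] = 1.0
    values = np.zeros(T)
    print("discounted:", discounted_td_errors(rewards, values)[:12])
    print("avg-reward:", average_reward_td_errors(rewards, values)[:12])
```

In this sketch the only structural change from discounted TD is the substitution of the tracked average reward rho for the discount factor, which is what lets the error signal remain well defined without an exponential decline in the value of delayed reinforcers.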
