Article

Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans

Journal

NATURE
Volume 442, Issue 7106, Pages 1042-1045

Publisher

NATURE PUBLISHING GROUP
DOI: 10.1038/nature05051

Keywords

-

Funding

  1. Wellcome Trust [078865] Funding Source: Medline

Abstract

Theories of instrumental learning are centred on understanding how success and failure are used to improve future decisions(1). These theories highlight a central role for reward prediction errors in updating the values associated with available actions(2). In animals, substantial evidence indicates that the neurotransmitter dopamine might have a key function in this type of learning, through its ability to modulate cortico-striatal synaptic efficacy(3). However, no direct evidence links dopamine, striatal activity and behavioural choice in humans. Here we show that, during instrumental learning, the magnitude of reward prediction error expressed in the striatum is modulated by the administration of drugs enhancing (3,4-dihydroxy-L-phenylalanine; L-DOPA) or reducing (haloperidol) dopaminergic function. Accordingly, subjects treated with L-DOPA have a greater propensity to choose the most rewarding action relative to subjects treated with haloperidol. Furthermore, incorporating the magnitude of the prediction errors into a standard action-value learning algorithm accurately reproduced subjects' behavioural choices under the different drug conditions. We conclude that dopamine-dependent modulation of striatal activity can account for how the human brain uses reward prediction errors to improve future decisions.
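
The abstract refers to a "standard action-value learning algorithm" driven by reward prediction errors. The sketch below is a minimal, generic illustration of such a model, assuming a simple Q-learning update with a softmax choice rule; the function and parameter names (run_session, alpha, beta) and the values used are illustrative assumptions, not the authors' fitted model. A drug effect on prediction-error magnitude could, in this kind of sketch, be mimicked by scaling the error term or the learning rate.

```python
# Illustrative sketch only: a generic action-value (Q-learning) update with a
# softmax choice rule. Parameter names and values are assumptions for
# illustration, not the model reported in the paper.
import math
import random


def softmax_choice(q_values, beta=3.0):
    """Pick an action with probability proportional to exp(beta * Q)."""
    weights = [math.exp(beta * q) for q in q_values]
    total = sum(weights)
    r = random.uniform(0, total)
    cumulative = 0.0
    for action, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            return action
    return len(weights) - 1


def run_session(reward_probs, trials=60, alpha=0.3, beta=3.0):
    """Simulate one learning session.

    reward_probs: probability of reward for each action.
    alpha: learning rate applied to the reward prediction error.
    beta: softmax inverse temperature (choice stochasticity).
    """
    q = [0.0] * len(reward_probs)
    choices = []
    for _ in range(trials):
        action = softmax_choice(q, beta)
        reward = 1.0 if random.random() < reward_probs[action] else 0.0
        prediction_error = reward - q[action]   # reward prediction error
        q[action] += alpha * prediction_error   # value update
        choices.append(action)
    return q, choices


if __name__ == "__main__":
    # Hypothetical example: with two options rewarded at 80% and 20%,
    # the agent gradually prefers the richer option.
    random.seed(0)
    q_values, choices = run_session(reward_probs=[0.8, 0.2], alpha=0.3)
    print(q_values, choices.count(0) / len(choices))
```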
