Journal
NETWORK-COMPUTATION IN NEURAL SYSTEMS
Volume 17, Issue 1, Pages 61-84
Publisher
TAYLOR & FRANCIS INC
DOI: 10.1080/09548980500361624
Keywords
dopamine; prediction error; associative learning; blocking; latent inhibition; overshadowing; schizophrenia; reinforcement learning; incentive salience; motivated behavior; temporal difference algorithm; Rescorla-Wagner learning rule; psychosis
The notion of prediction error has established itself at the heart of formal models of animal learning and current hypotheses of dopamine function. Several interpretations of prediction error have been offered, including the model-free reinforcement learning method known as temporal difference learning (TD), and the important Rescorla-Wagner (RW) learning rule. Here, we present a model-based adaptation of these ideas that provides a good account of empirical data pertaining to dopamine neuron firing patterns and associative learning paradigms such as latent inhibition, Kamin blocking and overshadowing. Our departure from model-free reinforcement learning also offers: 1) a parsimonious distinction between tonic and phasic dopamine functions; 2) a potential generalization of the role of phasic dopamine from valence-dependent reward processing to valence-independent salience processing; 3) an explanation for the selectivity of certain dopamine manipulations on motivation for distal rewards; and 4) a plausible link between formal notions of prediction error and accounts of disturbances of thought in schizophrenia (in which dopamine dysfunction is strongly implicated). The model distinguishes itself from existing accounts by offering novel predictions pertaining to the firing of dopamine neurons in various untested behavioral scenarios.
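As background for the abstract above, the two model-free prediction-error rules it contrasts with its model-based account can be written in a few lines. This is an illustrative sketch only, not the authors' model; the function names and parameter values (learning rate, discount factor) are assumptions chosen for the example.

```python
def rescorla_wagner(V, reward, alpha=0.1):
    """Rescorla-Wagner rule: every cue present on a trial is updated by a
    shared prediction error, reward minus the summed prediction of all cues."""
    error = reward - sum(V.values())
    return {cue: v + alpha * error for cue, v in V.items()}

def td_error(reward, V_next, V_curr, gamma=0.95):
    """Temporal-difference prediction error: delta = r + gamma*V(s') - V(s),
    the quantity commonly linked to phasic dopamine firing."""
    return reward + gamma * V_next - V_curr

# Kamin blocking as RW describes it: cue A is pre-trained to fully predict
# the reward, so when the compound AB is reinforced the prediction error is
# zero and the novel cue B acquires no associative strength.
V = {"A": 1.0, "B": 0.0}
V = rescorla_wagner(V, reward=1.0)
```

Because the summed prediction already equals the reward, the update leaves `V["B"]` at zero, which is the blocking effect the abstract refers to.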