Article

Learning to represent reward structure: A key to adapting to complex environments

Journal

NEUROSCIENCE RESEARCH
Volume 74, Issue 3-4, Pages 177-183

Publisher

ELSEVIER IRELAND LTD
DOI: 10.1016/j.neures.2012.09.007

Keywords

Reward; Dopamine; Reinforcement learning; Decision; Value; Salience; Structure

Funding

  1. KAKENHI [21300129, 24120522]
  2. Grants-in-Aid for Scientific Research [24120523, 21300129, 24120522] Funding Source: KAKEN

Abstract

Predicting outcomes is a critical ability of humans and animals. The dopamine reward prediction error hypothesis, the driving force behind recent progress in neural value-based decision making, states that dopamine activity encodes the signal for learning to predict reward, that is, the difference between the actual and predicted reward, called the reward prediction error. However, this hypothesis and its underlying assumptions treat the prediction and its error as reactively triggered by momentary environmental events. Reviewing these assumptions and some of the latest findings, we suggest that the internal state representation is learned to reflect the environmental reward structure, and we propose a new hypothesis - the dopamine reward structural learning hypothesis - in which dopamine activity encodes multiplex signals for learning to represent the reward structure in the internal state, leading to better reward prediction. © 2012 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
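The reward prediction error described in the abstract is commonly formalized as a temporal-difference (TD) error. The following is a minimal illustrative sketch of TD(0) learning, not the authors' model; the toy environment, states, and parameters are assumptions made for the example:

```python
# Minimal temporal-difference (TD) learning sketch illustrating the
# reward prediction error: delta = r + gamma * V(s') - V(s).
# The toy chain environment and parameter values below are illustrative
# assumptions, not taken from the paper.

def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) update; delta plays the role of the dopamine RPE signal."""
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta                 # move the value estimate toward the target
    return delta

# Toy chain: state 0 -> state 1 -> terminal, with reward 1.0 on the last step.
V = {0: 0.0, 1: 0.0, "end": 0.0}
for _ in range(100):
    td_update(V, 0, 0.0, 1)       # no reward on the first transition
    td_update(V, 1, 1.0, "end")   # reward delivered on the second transition

# After learning, V[1] approaches 1.0 and V[0] approaches gamma * V[1] = 0.9,
# while the prediction error delta shrinks toward zero as predictions improve.
print(round(V[0], 2), round(V[1], 2))
```

As the value estimates converge, the TD error vanishes for fully predicted rewards, mirroring the classic observation that dopamine responses transfer from the reward itself to its earliest reliable predictor.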
