4.6 Article

Structure Learning in Human Sequential Decision-Making

Journal

PLOS Computational Biology
Volume 6, Issue 12

Publisher

Public Library of Science
DOI: 10.1371/journal.pcbi.1001003

Funding

  1. Office of Naval Research [N00014-07-1-0937]
  2. National Institutes of Health (NIH) [1R90 DK71500-04]
  3. CONICYT-FIC-World Bank Fellowship [05-DOCFIC-BANCO-01]
  4. Center for Cognitive Sciences of the University of Minnesota

Abstract

Studies of sequential decision-making in humans frequently find suboptimal performance relative to an ideal actor that has perfect knowledge of the model of how rewards and events are generated in the environment. Rather than indicating suboptimality, we argue that the learning problem humans face is more complex, in that it also involves learning the structure of reward generation in the environment. We formulate the problem of structure learning in sequential decision tasks using Bayesian reinforcement learning, and show that learning the generative model for rewards qualitatively changes the behavior of an optimal learning agent. To test whether people exhibit structure learning, we performed experiments involving a mixture of one-armed and two-armed bandit reward models, where structure learning produces many of the qualitative behaviors deemed suboptimal in previous studies. Our results demonstrate that humans can perform structure learning in a near-optimal manner.
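
As a hedged illustration of the kind of model the abstract describes, the sketch below implements a minimal Bayesian structure learner for a two-arm task: it maintains a posterior over two candidate reward structures (a two-armed bandit with two unknown payoff probabilities versus a one-armed bandit whose second arm pays a fixed, known amount) alongside conjugate Beta posteriors over the arm parameters. The class name, the FIXED_REWARD value, and the greedy action rule are illustrative assumptions, not details taken from the paper.

import numpy as np

# Illustrative sketch only: simplified Bayesian structure learning over two
# candidate reward models for a 2-arm task. Not the paper's exact model.

FIXED_REWARD = 0.5          # assumed known payoff probability of the "safe" arm under the one-armed model
rng = np.random.default_rng(0)

class StructureLearner:
    def __init__(self):
        # Beta(1, 1) priors on each arm's payoff probability under the two-armed
        # model, and on arm 0 under the one-armed model.
        self.ab_two = np.ones((2, 2))   # [arm, (alpha, beta)]
        self.ab_one = np.ones(2)        # (alpha, beta) for arm 0 only
        self.log_p = np.log([0.5, 0.5]) # log prior over [two-armed, one-armed]

    def choose(self):
        # Expected value of each arm, marginalized over the structure posterior.
        p_struct = np.exp(self.log_p - np.logaddexp(*self.log_p))
        ev_two = self.ab_two[:, 0] / self.ab_two.sum(axis=1)
        ev_one = np.array([self.ab_one[0] / self.ab_one.sum(), FIXED_REWARD])
        return int(np.argmax(p_struct[0] * ev_two + p_struct[1] * ev_one))

    def update(self, arm, reward):
        # Posterior-predictive likelihood of the observation under each structure.
        p_two = self.ab_two[arm, 0] / self.ab_two[arm].sum()
        if arm == 0:
            p_one = self.ab_one[0] / self.ab_one.sum()
        else:
            p_one = FIXED_REWARD  # known arm under the one-armed hypothesis
        lik = lambda p: p if reward else 1.0 - p
        self.log_p += np.log([lik(p_two), lik(p_one)])
        # Conjugate Beta updates of the arm parameters.
        self.ab_two[arm, 0 if reward else 1] += 1
        if arm == 0:
            self.ab_one[0 if reward else 1] += 1

# Usage: simulate a true two-armed environment and let the learner infer its structure.
true_p = [0.3, 0.8]
agent = StructureLearner()
for _ in range(200):
    a = agent.choose()
    r = int(rng.random() < true_p[a])
    agent.update(a, r)
print("posterior over [two-armed, one-armed]:",
      np.round(np.exp(agent.log_p - np.logaddexp(*agent.log_p)), 3))

The key design choice this sketch highlights is that action values are computed by marginalizing over the structure posterior, so beliefs about how rewards are generated directly reshape choice behavior, which is the qualitative point the abstract makes about an optimal structure-learning agent.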
