Article

Learning from other minds: an optimistic critique of reinforcement learning models of social learning

Journal

CURRENT OPINION IN BEHAVIORAL SCIENCES
Volume 38, Pages 110-115

Publisher

ELSEVIER
DOI: 10.1016/j.cobeha.2021.01.006

Keywords

-


Reinforcement learning models have been productively applied to identify neural correlates of the value of social information. However, by operationalizing social information as a lean, reward-predictive cue, this literature underestimates the richness of human social learning: humans readily go beyond action-outcome mappings and can draw flexible inferences from a single observation. We argue that computational models of social learning need minds, that is, a generative model of how others' unobservable mental states cause their observable actions. Recent advances in inferential social learning suggest that even young children learn from others by using an intuitive, generative model of other minds. Bridging developmental, Bayesian, and reinforcement learning perspectives can enrich our understanding of the neural bases of distinctively human social learning.
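The abstract's central idea, that observers invert a generative model linking hidden mental states to observable actions, can be sketched as Bayesian inverse planning. The toy setup below (two candidate goals, two actions, a softmax choice rule, and the `beta` rationality parameter) is entirely illustrative and not taken from the article; it only shows how a single observed action can support a strong goal inference.

```python
import math

# Toy inverse planning: infer an agent's hidden goal from one
# observed action, assuming the agent chooses actions noisily
# rationally (softmax over action utilities). All names and
# utilities here are illustrative assumptions.

GOALS = ["apple", "banana"]          # candidate hidden goals
ACTIONS = ["go_left", "go_right"]    # observable actions

# Utility of each action under each goal: going left reaches the
# apple, going right reaches the banana.
UTILITY = {
    ("apple", "go_left"): 1.0, ("apple", "go_right"): 0.0,
    ("banana", "go_left"): 0.0, ("banana", "go_right"): 1.0,
}

def action_likelihood(action, goal, beta=3.0):
    """P(action | goal) under a softmax (noisily rational) policy."""
    exps = {a: math.exp(beta * UTILITY[(goal, a)]) for a in ACTIONS}
    return exps[action] / sum(exps.values())

def infer_goal(action, prior=None):
    """P(goal | action) by Bayes' rule over the candidate goals."""
    prior = prior or {g: 1.0 / len(GOALS) for g in GOALS}
    joint = {g: prior[g] * action_likelihood(action, g) for g in GOALS}
    z = sum(joint.values())
    return {g: p / z for g, p in joint.items()}

# A single observation ("go_left") already shifts belief strongly
# toward the goal it is rational for ("apple").
posterior = infer_goal("go_left")
print(posterior)
```

Note how the inference is driven by the assumed rationality of the agent rather than by repeated reward feedback, which is the contrast the abstract draws with lean reward-predictive cues.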


