Editorial Material

Bridging the data gap between children and large language models

Journal

TRENDS IN COGNITIVE SCIENCES
Volume 27, Issue 11, Pages 990-992

Publisher

CELL PRESS
DOI: 10.1016/j.tics.2023.08.007

Keywords

-

Large language models (LLMs) show intriguing emergent behaviors, yet they receive around four or five orders of magnitude more language data than human children. What accounts for this vast difference in sample efficiency? Candidate explanations include children's pre-existing conceptual knowledge, their use of multimodal grounding, and the interactive, social nature of their input.
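
A rough back-of-envelope calculation shows where "four or five orders of magnitude" comes from. The figures below are illustrative assumptions (order-of-magnitude estimates of a child's cumulative language input and of typical LLM training-corpus sizes), not values taken from the editorial:

```python
import math

# Back-of-envelope check of the "four or five orders of magnitude" gap.
# All numbers are rough assumptions, not figures from the editorial:
# - a child hears on the order of 1e7-1e8 words of language input
#   by early adolescence;
# - contemporary LLMs are trained on roughly 3e11-1.4e12 tokens.

child_words_low, child_words_high = 1e7, 1e8      # cumulative child input
llm_tokens_low, llm_tokens_high = 3e11, 1.4e12    # LLM training corpus

# Smallest and largest plausible gaps, in orders of magnitude (log10).
gap_min = math.log10(llm_tokens_low / child_words_high)
gap_max = math.log10(llm_tokens_high / child_words_low)

print(f"data gap: {gap_min:.1f} to {gap_max:.1f} orders of magnitude")
# -> data gap: 3.5 to 5.1 orders of magnitude
```

Under these assumptions the gap lands between roughly 3.5 and 5 orders of magnitude, consistent with the abstract's framing of the sample-efficiency difference.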
