Article

Reading your own lips: Common-coding theory and visual speech perception

Journal

PSYCHONOMIC BULLETIN & REVIEW
Volume 20, Issue 1, Pages 115-119

Publisher

SPRINGER
DOI: 10.3758/s13423-012-0328-5

Keywords

Visual word recognition; Models of visual word recognition and priming; Motor control; Motor planning/programming

Funding

NIA NIH HHS [AG018029, R01 AG018029] Funding Source: Medline

Abstract

Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.
