Article

Rapid access to speech gestures in perception: Evidence from choice and simple response time tasks

Journal

Journal of Memory and Language
Volume 49, Issue 3, Pages 396-413

Publisher

Academic Press Inc. (Elsevier Science)
DOI: 10.1016/S0749-596X(03)00072-X

Keywords

direct realism; motor theory of speech perception; acoustic theories of speech perception; choice reaction time; simple reaction time

Funding

  1. NICHD NIH HHS [P01 HD001994, P01 HD001994-37] Funding Source: Medline
  2. NIDCD NIH HHS [R01 DC003782, R01 DC003782-04] Funding Source: Medline

Abstract

Participants took part in two speech tests. In both tests, a model speaker produced vowel-consonant-vowels (VCVs) in which the initial vowel varied unpredictably in duration. In the simple response task, participants shadowed the initial vowel; when the model shifted to production of any of three CVs (/pa/, /ta/, or /ka/), participants produced the single CV they had been assigned to say (one of /pa/, /ta/, or /ka/). In the choice task, participants shadowed the initial vowel; when the model shifted to a CV, they shadowed that too. We found that, measured from the model's onset of closure for the consonant to the participant's closure onset, response times in the choice task exceeded those in the simple task by just 26 ms. This is much shorter than the canonical difference between simple and choice latencies [100-150 ms according to Luce (1986)] and is near the fastest simple times that Luce reports. The findings imply rapid access to articulatory speech information in the choice task. A second experiment found much longer choice times when the perception-production link for speech could not be exploited. A third experiment and an acoustic analysis verified that our measurement from closure in Experiment 1 provided a valid marker of speakers' onsets of consonant production. A final experiment showed that shadowing responses are imitations of the model's speech. We interpret the findings as evidence that listeners rapidly extract information about speakers' articulatory gestures. © 2003 Elsevier Science (USA). All rights reserved.
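To make the latency measure concrete: each response time is the participant's closure onset minus the model's closure onset, and the reported 26 ms effect is the difference between the two task means. The short Python sketch below illustrates only that arithmetic; the trial timestamps and names are hypothetical stand-ins, not data from the study.

# Illustrative sketch of the latency measure described in the abstract:
# response time = participant's closure onset - model's closure onset,
# averaged per task. All timestamps here are made up; only the ~26 ms
# choice-simple difference echoes the figure the paper reports.

def mean_latency_ms(trials):
    """Mean latency from model closure onset to participant closure onset."""
    return sum(participant - model for model, participant in trials) / len(trials)

# (model_closure_onset_ms, participant_closure_onset_ms) per trial
simple_trials = [(1000, 1155), (1200, 1348), (980, 1142)]
choice_trials = [(1000, 1181), (1200, 1378), (980, 1165)]

simple_rt = mean_latency_ms(simple_trials)
choice_rt = mean_latency_ms(choice_trials)

print(f"simple task mean RT: {simple_rt:.0f} ms")
print(f"choice task mean RT: {choice_rt:.0f} ms")
print(f"choice - simple:     {choice_rt - simple_rt:.0f} ms")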
