Article; Proceedings Paper

Auditory-visual speech perception and aging

Journal

EAR AND HEARING
Volume 23, Issue 5, Pages 439-449

Publisher

LIPPINCOTT WILLIAMS & WILKINS
DOI: 10.1097/00003446-200210000-00006

Funding

  1. NIDCD NIH HHS [001-DC 00110] Funding Source: Medline

Objective: This experiment was designed to assess the integration of auditory and visual information for speech perception in older adults. The integration of place and voicing information across modalities was assessed using the McGurk effect. Two questions were addressed: 1) Are older adults as successful as younger adults at integrating auditory and visual information for speech perception? 2) Is successful integration of this information related to lipreading performance?

Design: The performance of three groups of participants was compared: young adults with normal hearing and vision, older adults with normal to near-normal hearing and vision, and young controls whose hearing thresholds were shifted with noise to match those of the older adults. Each participant completed a lipreading test as well as auditory and auditory-plus-visual identification of syllables with conflicting auditory and visual cues.

Results: On average, older adults were as successful as young adults at integrating auditory and visual information for speech perception at the syllable level. The number of fused responses for the CV tokens did not differ across the ages tested. Although there were no significant group differences in integration at the syllable level, the groups differed in the response alternatives they chose: young adults with normal peripheral sensitivity often chose an auditory alternative, whereas older adults and control participants leaned toward visual alternatives. In addition, older adults demonstrated poorer lipreading performance than their younger counterparts, although this was not related to successful integration of information at the syllable level.

Conclusions: Based on the findings of this study, when auditory and visual integration of speech information fails to occur, producing a nonfused response, participants select an alternative response from the modality with the least ambiguous signal.

