Journal
NEUROIMAGE
Volume 44, Issue 2, Pages 509-519
Publisher
ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.neuroimage.2008.09.015
Keywords
EEG; Explicit learning; Prosody; Speech segmentation; Steady-state response
Funding
- McDonnell foundation [ECOS C04B02]
Abstract
To learn a spoken language, humans must discover words in a continuous signal. Streams of artificial monotonous speech can be readily segmented based on a statistical analysis of the syllables' distribution. This parsing improves considerably when acoustic cues, such as subliminal pauses, are added, suggesting that a different mechanism is involved. Here we used a frequency-tagging approach to explore the neural mechanisms underlying word learning while listening to continuous speech. High-density EEG was recorded in adults listening to a concatenation of either random syllables or tri-syllabic artificial words, with or without subliminal pauses added every three syllables. Peaks in the EEG power spectrum at the one-syllable and three-syllable occurrence frequencies were used to tag the perception of a monosyllabic or trisyllabic structure, respectively. Word streams elicited the suppression of the one-syllable frequency peak, steadily present during random streams, suggesting that syllables are no longer perceived as isolated segments but bound to adjacent syllables. Crucially, three-syllable frequency peaks were observed only during word streams with pauses, and were positively correlated with explicit recall of the detected words. This result shows that pauses facilitate a fast, explicit and successful extraction of words from continuous speech, and that the frequency-tagging approach is a powerful tool to track brain responses to different hierarchical units of the speech structure. (C) 2008 Elsevier Inc. All rights reserved.
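The frequency-tagging logic described in the abstract can be illustrated with a toy simulation: a signal containing responses locked to each syllable and to each trisyllabic word should show power-spectrum peaks at the syllable rate and at one third of that rate. All parameters below (a 4 Hz syllable rate, sampling rate, noise level) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Assumed parameters (the paper's actual rates are not given in the
# abstract): syllables at 4 Hz, so trisyllabic words recur at 4/3 Hz.
fs = 250.0            # sampling rate (Hz)
duration = 60.0       # seconds of simulated signal
t = np.arange(0, duration, 1.0 / fs)
syll_rate = 4.0       # one syllable every 250 ms
word_rate = syll_rate / 3.0

# Toy "EEG": a response locked to every syllable plus a weaker
# response locked to every word onset, buried in white noise.
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * syll_rate * t)
          + 0.5 * np.sin(2 * np.pi * word_rate * t)
          + rng.normal(0.0, 1.0, t.size))

# Frequency tagging: compute the power spectrum and read out the
# power at the two tagged frequencies.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def peak_power(f):
    """Power at the spectral bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print(peak_power(syll_rate) / np.median(spectrum))  # syllable-rate peak
print(peak_power(word_rate) / np.median(spectrum))  # word-rate peak
```

Both tagged frequencies stand far above the noise floor here; in the study, the presence or absence of such peaks indexed whether listeners perceived the stream as isolated syllables or as trisyllabic words.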