Journal
COGNITION
Volume 187, Pages 178-187
Publisher
ELSEVIER SCIENCE BV
DOI: 10.1016/j.cognition.2019.03.004
Keywords
Sensorimotor integration; Multisensory integration; Lexical tones; Gesture
Funding
- National Natural Science Foundation of China [31871131]
- Major Program of Science and Technology Commission of Shanghai Municipality (STCSM) [17JC1404104]
- Program of Introducing Talents of Discipline to Universities [B16018]
- New York University Global Seed Grants for Collaborative Research [85-65701-G0757-R4551]
- JRI Seed Grants for Research Collaboration from NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai
Abstract
Action and perception interact in complex ways to shape how we learn. In the context of language acquisition, for example, hand gestures can facilitate learning novel sound-to-meaning mappings that are critical to successfully understanding a second language. However, the mechanisms by which motor and visual information influence auditory learning are still unclear. We hypothesize that the extent to which cross-modal learning occurs is directly related to the common representational format of perceptual features across motor, visual, and auditory domains (i.e., the extent to which changes in one domain trigger similar changes in another). Furthermore, to the extent that information across modalities can be mapped onto a common representation, training in one domain may lead to learning in another domain. To test this hypothesis, we taught native English speakers Mandarin tones using directional pitch gestures. Watching or performing gestures that were congruent with pitch direction (e.g., an up gesture moving up, and a down gesture moving down, in the vertical plane) significantly enhanced tone category learning, compared to auditory-only training. Moreover, when gestures were rotated (e.g., an up gesture moving away from the body, and a down gesture moving toward the body, in the horizontal plane), performing the gestures resulted in significantly better learning, compared to watching the rotated gestures. Our results suggest that when a common representational mapping can be established between motor and sensory modalities, auditory perceptual learning is likely to be enhanced.