Article

Does dynamic information about the speaker's face contribute to semantic speech processing? ERP evidence

Journal

CORTEX
Volume 104, Pages 12-25

Publisher

ELSEVIER MASSON
DOI: 10.1016/j.cortex.2018.03.031

Keywords

Language; Multimodal processing; Social neuroscience; Late posterior positivity; N400

Funding

  1. Structured graduate program "Self-Regulation Dynamics Across Adulthood and Old Age: Potentials and Limits"
  2. DFG excellence initiative "Talking Heads" from the DAAD [57049661]
  3. Spanish Ministerio de Economia y Competitividad [PSI2013-43107-P]


Face-to-face interactions characterize communication in social contexts. These situations are typically multimodal, requiring the integration of linguistic auditory input with facial information from the speaker. In particular, eye gaze and visual speech provide the listener with social and linguistic information, respectively. Despite the importance of this context for an ecological study of language, research on audiovisual integration has mainly focused on the phonological level, leaving aside effects on semantic comprehension. Here we used event-related potentials (ERPs) to investigate the influence of dynamic facial information on the semantic processing of connected speech. Participants were presented with either a video or a still picture of the speaker, concurrently with auditory sentences. Across three experiments, we manipulated the presence or absence of the speaker's dynamic facial features (mouth and eyes) and compared the amplitudes of the semantic N400 elicited by unexpected words. Contrary to our predictions, the N400 was not modulated by dynamic facial information; semantic processing therefore seems to be unaffected by the speaker's gaze and visual speech. However, during the processing of expected words, dynamic faces elicited a long-lasting late posterior positivity compared to the static condition. This effect was significantly reduced when the mouth of the speaker was covered. Our findings may indicate increased attentional processing in richer communicative contexts. The present findings also demonstrate that in natural face-to-face communicative encounters, perceiving the face of a speaker in motion provides supplementary information that is taken into account by the listener, especially when auditory comprehension is undemanding. (C) 2018 Elsevier Ltd. All rights reserved.

