Article

Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience

Journal

FRONTIERS IN HUMAN NEUROSCIENCE
Volume 17

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnhum.2023.1108354

Keywords

multimodal communication; face-to-face interactions; social actions; lateral cortical processing pathway; psycholinguistics; sensory neuroscience


In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, and hand gestures, speech, and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only those signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat reliably and efficiently? To address this question, we need to move the study of human communication further beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts, and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective (lateral processing pathway). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling, and artificial intelligence for future empirical testing of our model.
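The binding/segregation challenge described in the abstract can be made concrete with a toy sketch. The code below is purely illustrative and is not the authors' proposed model: it groups multimodal signal events (hypothetical modality names and onset times) into candidate communicative actions using a simple temporal-proximity heuristic, a common stand-in for multimodal binding in computational demonstrations.

```python
# Illustrative toy sketch (not the paper's neurocognitive model):
# bind multimodal signal events into candidate communicative actions
# by temporal proximity, segregating events that fall outside the window.

from dataclasses import dataclass

@dataclass
class SignalEvent:
    modality: str   # e.g. "speech", "hand_gesture", "head_movement"
    onset: float    # onset time in seconds

def bind_by_temporal_window(events, window=0.3):
    """Group events whose onsets lie within `window` seconds of the
    previous event into one candidate communicative action."""
    groups = []
    for ev in sorted(events, key=lambda e: e.onset):
        if groups and ev.onset - groups[-1][-1].onset <= window:
            groups[-1].append(ev)   # bind: same communicative action
        else:
            groups.append([ev])     # segregate: start a new action
    return groups

# Hypothetical event stream: a gesture-speech-head cluster, then an
# unrelated later utterance that should be segregated.
events = [
    SignalEvent("hand_gesture", 0.00),
    SignalEvent("speech", 0.15),
    SignalEvent("head_movement", 0.25),
    SignalEvent("speech", 1.50),
]
actions = bind_by_temporal_window(events)
print([len(g) for g in actions])  # → [3, 1]
```

A fixed window is of course far too crude for real face-to-face signals; the paper's point is precisely that binding likely relies on multimodal gestalts and multilevel predictions rather than a single temporal threshold.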

