Article

Perception of facial expressions and voices and of their combination in the human brain

Journal

CORTEX
Volume 41, Issue 1, Pages 49-59

Publisher

Elsevier Masson
DOI: 10.1016/S0010-9452(08)70177-1

Keywords

face expression; multisensory integration; audio-visual perception; affective process; emotion; convergence region; PET; middle temporal gyrus; amygdala

Abstract

Using positron emission tomography, we explored brain regions activated during the perception of facial expressions, emotional voices, and combined audio-visual pairs. A convergence region in the left lateral temporal cortex was more strongly activated by bimodal stimuli than by either visual-only or auditory-only stimuli. Separate analyses for the emotions happiness and fear revealed supplementary convergence areas, located mainly anteriorly in the left hemisphere for happy pairings and in the right hemisphere for fear pairings, indicating different neuroanatomical substrates for the multisensory integration of positive versus negative emotions. Activation in the right extended amygdala was observed for fearful faces and fearful audio-visual pairs, but not for fearful voices alone. These results suggest that during the multisensory perception of emotion, affective information from face and voice converges in heteromodal regions of the human brain.
