Journal
ACM TRANSACTIONS ON COMPUTER-HUMAN INTERACTION
Volume 28, Issue 6
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3469232
Keywords
Multimodal emotional expression; artificial agent; social influence; smiling
Funding
- European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant [713567]
- ADAPT Centre for Digital Content Technology under the SFI Research Centres Programme [13/RC/2016]
- European Regional Development Fund
- Science Foundation Ireland, Game Face project [13/CDA/2135]
- Science Foundation Ireland (SFI) [13/CDA/2135] Funding Source: Science Foundation Ireland (SFI)
Emotional expressivity is essential for human interactions, informing both perception and decision-making. Here, we examine whether creating an audio-visual emotional channel mismatch influences decision-making in a cooperative task with a virtual character. We created a virtual character that was either congruent in its emotional expression (smiling in both face and voice) or incongruent (smiling in only one channel). Participants (N = 98) evaluated the character in terms of valence and arousal in an online study; then, visitors to a museum played the lunar survival task with the character across three experiments (N = 597, 78, and 101, respectively). Exploratory results suggest that multimodal expressions are perceived, and reacted to, differently than unimodal expressions, supporting previous theories of audio-visual integration.