4.2 Article

Co-linguistic content inferences: From gestures to sound effects and emoji

Journal

Quarterly Journal of Experimental Psychology
Volume 75, Issue 10, Pages 1828-1843

Publisher

SAGE Publications Ltd
DOI: 10.1177/17470218221080645

Keywords

Co-linguistic content; gesture; emoji; semantics; pragmatics

Funding

  1. Western Sydney University, through the University's Research Theme Champion funding
  2. DFG [387623969]

Abstract

Among other uses, co-speech gestures can contribute additional semantic content to the spoken utterances with which they coincide. A growing body of research is dedicated to understanding how inferences from gestures interact with logical operators in speech, including negation (not/n't), modals (e.g., might), and quantifiers (e.g., each, none, exactly one). A related but less addressed question is what kinds of meaningful content other than gestures can evince this same behaviour; this is in turn connected to the much broader question of what properties of gestures are responsible for how they interact with logical operators. We present two experiments investigating sentences with co-speech sound effects and co-text emoji in lieu of gestures, revealing a remarkably similar inference pattern to that of co-speech gestures. The results suggest that gestural inferences do not behave the way they do because of any traits specific to gestures, and that the inference pattern extends to a much broader range of content.

