Article; Proceedings Paper

Causal inference and temporal predictions in audiovisual perception of speech and music

Journal

Annals of the New York Academy of Sciences
Volume 1423, Issue 1, Pages 102-116

Publisher

Wiley
DOI: 10.1111/nyas.13615

Keywords

audiovisual; speech; music; prediction error; Bayesian causal inference

Funding

  1. ERC

Abstract

To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism to arbitrate between integration and segregation of signals from different senses.
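
The abstract names Bayesian causal inference as the mechanism that arbitrates between integration and segregation, but does not spell out the model. As a rough illustration only, the sketch below implements the standard Gaussian causal-inference posterior over a common versus two independent causes (in the spirit of Körding et al., 2007); the function name, parameter values, and units are invented for the example and are not taken from this paper.

```python
import math

def bci_posterior_common(x_a, x_v, sigma_a, sigma_v, sigma_p, mu_p=0.0, p_common=0.5):
    """Posterior probability that auditory and visual measurements x_a, x_v
    share a common cause, under a standard Gaussian causal-inference model.
    All parameters are illustrative assumptions, not values from the paper."""
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of (x_a, x_v) under a single common source, with the source
    # integrated out against the prior N(mu_p, var_p).
    denom_c1 = var_a * var_v + var_a * var_p + var_v * var_p
    quad_c1 = ((x_a - x_v)**2 * var_p
               + (x_a - mu_p)**2 * var_v
               + (x_v - mu_p)**2 * var_a) / denom_c1
    like_c1 = math.exp(-0.5 * quad_c1) / (2 * math.pi * math.sqrt(denom_c1))

    # Likelihood under two independent sources: each signal is explained
    # separately by its own source drawn from the same prior.
    like_c2 = (math.exp(-0.5 * ((x_a - mu_p)**2 / (var_a + var_p)
                                + (x_v - mu_p)**2 / (var_v + var_p)))
               / (2 * math.pi * math.sqrt((var_a + var_p) * (var_v + var_p))))

    # Bayes' rule over the two causal structures.
    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

# A small audiovisual discrepancy (arbitrary units) favours integration;
# a large one favours segregation.
print(bci_posterior_common(x_a=10.0, x_v=12.0, sigma_a=20.0, sigma_v=5.0, sigma_p=50.0))
print(bci_posterior_common(x_a=10.0, x_v=150.0, sigma_a=20.0, sigma_v=5.0, sigma_p=50.0))
```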
