Journal
ANNALS OF THE NEW YORK ACADEMY OF SCIENCES
Volume 1423, Issue 1, Pages 102-116
Publisher
WILEY
DOI: 10.1111/nyas.13615
Keywords
audiovisual; speech; music; prediction error; Bayesian causal inference
Funding
- ERC
Abstract
To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism to arbitrate between integration and segregation of signals from different senses.
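The arbitration between integration and segregation described above is typically formalized as Bayesian causal inference: the brain infers the posterior probability that two cues (say, an auditory and a visual signal) arose from one common cause rather than two independent ones. The sketch below is an illustrative toy model, not the authors' implementation; all parameter names (`sigma_a`, `sigma_v`, `sigma_p`, `p_common`) and the numerical-integration approach are assumptions chosen for clarity.

```python
import numpy as np

def posterior_common_cause(x_a, x_v, sigma_a=1.0, sigma_v=1.0,
                           sigma_p=10.0, p_common=0.5):
    """Posterior probability that auditory (x_a) and visual (x_v) cues
    share a single cause, under a toy Gaussian generative model.
    Illustrative sketch only; parameters are hypothetical."""
    # Grid over candidate source positions for numerical integration
    s = np.linspace(-60.0, 60.0, 4001)
    ds = s[1] - s[0]
    # Gaussian prior over source position, and cue likelihoods
    prior = np.exp(-0.5 * (s / sigma_p) ** 2) / (sigma_p * np.sqrt(2 * np.pi))
    lik_a = np.exp(-0.5 * ((x_a - s) / sigma_a) ** 2) / (sigma_a * np.sqrt(2 * np.pi))
    lik_v = np.exp(-0.5 * ((x_v - s) / sigma_v) ** 2) / (sigma_v * np.sqrt(2 * np.pi))
    # C = 1: one latent source generates both cues
    evid_c1 = np.sum(lik_a * lik_v * prior) * ds
    # C = 2: each cue has its own independent source
    evid_c2 = (np.sum(lik_a * prior) * ds) * (np.sum(lik_v * prior) * ds)
    # Bayes' rule over the two causal structures
    return evid_c1 * p_common / (evid_c1 * p_common + evid_c2 * (1 - p_common))
```

With these assumed parameters, cues that nearly coincide yield a high common-cause posterior (favoring integration), while a large discrepancy drives the posterior toward zero (favoring segregation), mirroring the arbitration mechanism the abstract describes.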