Article

Structured random receptive fields enable informative sensory encodings

Journal

PLOS COMPUTATIONAL BIOLOGY
Volume 18, Issue 10

Publisher

PUBLIC LIBRARY SCIENCE
DOI: 10.1371/journal.pcbi.1010484

This study models the receptive fields of sensory neurons in a way that incorporates randomness and connects to the theory of artificial neural networks. The models enhance signal and remove noise, enabling more efficient learning in artificial tasks. This research has significance for both neuroscience and machine learning communities.
Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parameterized distributions and demonstrate this model in two sensory modalities using data from insect mechanosensors and mammalian primary visual cortex. Our approach leads to a significant theoretical connection between the foundational concepts of receptive fields and random features, a leading theory for understanding artificial neural networks. The modeled neurons perform a randomized wavelet transform on inputs, which removes high frequency noise and boosts the signal. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains. Author summary Evolution has ensured that animal brains are dedicated to extracting useful information from raw sensory stimuli while discarding everything else. Models of sensory neurons are a key part of our theories of how the brain represents the world. In this work, we model the tuning properties of sensory neurons in a way that incorporates randomness and builds a bridge to a leading mathematical theory for understanding how artificial neural networks learn. Our models capture important properties of large populations of real neurons presented with varying stimuli. Moreover, we give a precise mathematical formula for how sensory neurons in two distinct areas, one involving a gyroscopic organ in insects and the other visual processing center in mammals, transform their inputs. 
We also find that artificial models imbued with properties from real neurons learn more efficiently, with shorter training time and fewer examples, and our mathematical theory explains some of these findings. This work expands our understanding of sensory representation in large networks with benefits for both the neuroscience and machine learning communities.
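To make the random-feature idea concrete, the sketch below illustrates the general approach described in the abstract: hidden units receive random filters drawn from a distribution with built-in structure, here a Gaussian-process covariance that yields smooth, localized (wavelet-like) receptive fields. This is our own minimal construction for illustration, not the authors' code; the covariance form, length scale, and rectifying nonlinearity are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each of n_neurons hidden units gets a random receptive field over a
# d-dimensional stimulus. Structure comes from sampling the filters from a
# Gaussian process whose squared-exponential covariance (length scale 4.0,
# an illustrative choice) favors smooth, localized profiles.
d, n_neurons = 64, 256
x = np.arange(d)
cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2)
L = np.linalg.cholesky(cov + 1e-6 * np.eye(d))  # jitter for stability
W = (L @ rng.standard_normal((d, n_neurons))).T  # structured random filters

# Unstructured baseline for comparison: white-noise filters.
W_white = rng.standard_normal((n_neurons, d))

def encode(W, stimuli):
    """Random-feature encoding: rectified projection onto random filters."""
    return np.maximum(W @ stimuli.T, 0.0)

# Encode a smooth stimulus; a linear readout trained on such features
# plays the role of downstream learning in the paper's artificial tasks.
stimulus = np.sin(2 * np.pi * x / d)[None, :]
h = encode(W, stimulus)
```

Because the structured filters concentrate their power at low frequencies, projections through them attenuate high-frequency noise in the stimulus, which is the intuition behind the randomized wavelet transform described above.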
