Article

Sparse low-order interaction network underlies a highly correlated and learnable neural population code

Publisher

National Academy of Sciences (Proceedings of the National Academy of Sciences)
DOI: 10.1073/pnas.1019641108

Keywords

high-order; correlations; maximum entropy; neural networks; sparseness

Funding

  1. Israel Science Foundation
  2. The Center for Complexity Science
  3. Minerva Foundation
  4. ERASysBio+ program
  5. The Clore Center for Biological Physics
  6. The Peter and Patricia Gruber Foundation

Information is carried in the brain by the joint activity patterns of large groups of neurons. Understanding the structure and function of population neural codes is challenging because of the exponential number of possible activity patterns and dependencies among neurons. We report here that for groups of ∼100 retinal neurons responding to natural stimuli, pairwise-based models, which were highly accurate for small networks, are no longer sufficient. We show that because of the sparse nature of the neural code, the higher-order interactions can be easily learned using a novel model and that a very sparse low-order interaction network underlies the code of large populations of neurons. Additionally, we show that the interaction network is organized in a hierarchical and modular manner, which hints at scalability. Our results suggest that learnability may be a key feature of the neural code.
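The pairwise-based models the abstract refers to are maximum-entropy (Ising-like) models that match each neuron's firing rate and each pair's correlation while assuming nothing else. The sketch below is an illustrative toy fit on synthetic binary data, not the paper's model: it enumerates all patterns of a 5-neuron group, which is exactly the brute-force approach that becomes infeasible for the ∼100-neuron populations studied here. All variable names and parameter choices are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spike train": 5 binary neurons, 2000 time bins, sparse firing.
n, T = 5, 2000
spikes = (rng.random((n, T)) < 0.1).astype(float)

# Empirical statistics the model must reproduce:
# firing rates <x_i> and pairwise moments <x_i x_j>.
mean_emp = spikes.mean(axis=1)
corr_emp = (spikes @ spikes.T) / T

# All 2^n binary patterns (exhaustive enumeration; feasible only for small n,
# which is why scaling to ~100 neurons is hard).
patterns = np.array(
    [[(k >> i) & 1 for i in range(n)] for k in range(2 ** n)], dtype=float
)

def model_stats(h, J):
    """First and second moments of the Ising-like model P(x) ~ exp(h.x + x'Jx/2)."""
    energy = patterns @ h + 0.5 * np.einsum("ki,ij,kj->k", patterns, J, patterns)
    p = np.exp(energy - energy.max())
    p /= p.sum()
    return p @ patterns, (patterns * p[:, None]).T @ patterns

# Fit biases h and symmetric couplings J by gradient ascent on the
# log-likelihood: the gradient is simply (empirical - model) moments.
h = np.zeros(n)
J = np.zeros((n, n))
for _ in range(3000):
    mean_m, corr_m = model_stats(h, J)
    h += 0.5 * (mean_emp - mean_m)
    dJ = 0.5 * (corr_emp - corr_m)
    np.fill_diagonal(dJ, 0.0)  # x_i^2 = x_i, so the diagonal is absorbed by h
    J += dJ

mean_fit, corr_fit = model_stats(h, J)
print("max firing-rate error:", np.abs(mean_fit - mean_emp).max())
```

Because the log-likelihood of a maximum-entropy model is concave in (h, J), this moment-matching gradient ascent converges to the unique model reproducing the measured statistics; the paper's point is that for large, sparsely firing populations such pairwise constraints alone no longer capture the code.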
