Journal
NEUROIMAGE
Volume 155, Pages 549-564
Publisher
ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.neuroimage.2017.04.061
Keywords
Systems biology; Epistemology; Hypothesis testing; High-dimensional statistics; Machine learning; Sample complexity
Funding
- Singapore MOE Tier 2 [MOE2014-T2-2-016]
- NUS Strategic Research [DPRT/944/09/14]
- NUS SOM Aspiration Fund [R185000271720]
- Singapore NMRC [CBRG/0088/2015]
- NUS YIA
- NRF Fellowship [NRF-NRFF2017-06]
- Deutsche Forschungsgemeinschaft (DFG) [BZ2/2-1, BZ2/3-1, BZ2/4-1]
- Amazon AWS Research Grant
- German National Academic Foundation
- START-Program of the Faculty of Medicine, RWTH Aachen
- Deutsche Forschungsgemeinschaft (DFG, International Research Training Group IRTG2150)
Neuroscience is undergoing faster changes than ever before. For over 100 years, our field qualitatively described and invasively manipulated single or few organisms to gain anatomical, physiological, and pharmacological insights. In the last 10 years, neuroscience has spawned quantitative datasets of unprecedented breadth (e.g., microanatomy, synaptic connections, and optogenetic brain-behavior assays) and size (e.g., cognition, brain imaging, and genetics). While growing data availability and information granularity have been amply discussed, we direct attention to a less explored question: How will the unprecedented data richness shape data-analysis practices? Statistical reasoning is becoming more important for distilling neurobiological knowledge from healthy and pathological brain measurements. We argue that large-scale data analysis will use more statistical models that are non-parametric, generative, and mixing frequentist and Bayesian aspects, while supplementing classical hypothesis testing with out-of-sample predictions.
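The contrast the abstract draws between classical hypothesis testing and out-of-sample prediction can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' method: it uses hypothetical synthetic data for two groups, computes an in-sample two-sample t statistic, and then scores the same data by leave-one-out prediction accuracy of a nearest-class-mean classifier (all names and data here are assumptions made for illustration).

```python
import random
import statistics

random.seed(0)

# Hypothetical toy data: one scalar "brain measure" per subject, two groups.
group_a = [random.gauss(0.0, 1.0) for _ in range(40)]
group_b = [random.gauss(0.8, 1.0) for _ in range(40)]

def t_statistic(x, y):
    """Classical in-sample view: equal-variance two-sample t statistic."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x) +
                  (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    se = (pooled_var * (1 / nx + 1 / ny)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / se

def loo_accuracy(x, y):
    """Predictive view: leave-one-out accuracy of a nearest-class-mean rule."""
    data = [(v, 0) for v in x] + [(v, 1) for v in y]
    hits = 0
    for i, (v, label) in enumerate(data):
        rest = data[:i] + data[i + 1:]          # hold out one observation
        mean0 = statistics.mean(v2 for v2, l in rest if l == 0)
        mean1 = statistics.mean(v2 for v2, l in rest if l == 1)
        pred = 0 if abs(v - mean0) < abs(v - mean1) else 1
        hits += pred == label
    return hits / len(data)

print(f"t statistic: {t_statistic(group_a, group_b):.2f}")
print(f"leave-one-out accuracy: {loo_accuracy(group_a, group_b):.2f}")
```

The two numbers answer different questions: the t statistic asks whether the group means differ in this sample, while the leave-one-out accuracy estimates how well group membership can be predicted for unseen subjects.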