Article

Assessing and tuning brain decoders: Cross-validation, caveats, and guidelines

Journal

NEUROIMAGE
Volume 145, Pages 166-179

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.neuroimage.2016.10.038

Keywords

Cross-validation; Decoding; FMRI; Model selection; Sparse; Bagging; MVPA

Funding

  1. EU [604102]
  2. NiConnect project [ANR-11-BINF-0004_NiConnect]

Abstract

Decoding, i.e. prediction from brain images or signals, calls for empirical evaluation of its predictive power. Such evaluation is achieved via cross-validation, a method also used to tune decoders' hyper-parameters. This paper reviews cross-validation procedures for decoding in neuroimaging. It includes a didactic overview of the relevant theoretical considerations. Practical aspects are highlighted with an extensive empirical study of the common decoders in within- and across-subject predictions, on multiple datasets (anatomical and functional MRI, and MEG) and on simulations. Theory and experiments show that the popular leave-one-out strategy leads to unstable and biased estimates, and that a repeated random-splits method should be preferred. Experiments also outline the large error bars of cross-validation in neuroimaging settings: typical confidence intervals of about 10%. Nested cross-validation can tune decoders' parameters while avoiding circularity bias. However, we find that it can be favorable to use sane defaults, in particular for non-sparse decoders.
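As an illustration of the two recommendations in the abstract (repeated random splits rather than leave-one-out, and nested cross-validation for hyper-parameter tuning), the following is a minimal sketch assuming scikit-learn; the synthetic data, parameter grid, and split counts are placeholder choices for illustration, not the authors' experimental setup.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import ShuffleSplit, GridSearchCV, cross_val_score
    from sklearn.svm import LinearSVC

    # Placeholder data standing in for decoding features (e.g. voxel values).
    X, y = make_classification(n_samples=200, n_features=500, random_state=0)

    # Repeated random splits: 50 splits, each leaving out 20% of the samples,
    # preferred over leave-one-out for a more stable accuracy estimate.
    outer_cv = ShuffleSplit(n_splits=50, test_size=0.2, random_state=0)

    # Nested cross-validation: the inner GridSearchCV tunes the regularization C,
    # the outer loop measures predictive accuracy without circularity bias.
    inner_cv = GridSearchCV(LinearSVC(), param_grid={"C": [0.01, 0.1, 1, 10]}, cv=5)
    scores = cross_val_score(inner_cv, X, y, cv=outer_cv)

    print("Accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

For across-subject prediction, the splits would instead leave out whole subjects (e.g. scikit-learn's GroupShuffleSplit with subject labels), so that train and test sets never share a subject.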
