Article

The quantitative evaluation of functional neuroimaging experiments: The NPAIRS data analysis framework

Journal

NEUROIMAGE
Volume 15, Issue 4, Pages 747-771

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1006/nimg.2001.1034

Keywords

multisubject PET and fMRI studies; data analysis; univariate; multivariate; prediction error; reproducibility; cross-validation; resampling

Funding

  1. NINDS NIH HHS [NS33179] Funding Source: Medline
  2. OMHHE CDC HHS [P20 MN57180] Funding Source: Medline


We introduce a data-analysis framework and performance metrics for evaluating and optimizing the interaction between activation tasks, experimental designs, and the methodological choices and tools for data acquisition, preprocessing, data analysis, and extraction of statistical parametric maps (SPMs). Our NPAIRS (nonparametric prediction, activation, influence, and reproducibility resampling) framework provides an alternative to simulations and ROC curves by using real PET and fMRI data sets to examine the relationship between prediction accuracy and the signal-to-noise ratios (SNRs) associated with reproducible SPMs. Using cross-validation resampling we plot training-test set predictions of the experimental design variables (e.g., brain-state labels) versus reproducibility SNR metrics for the associated SPMs. We demonstrate the utility of this framework across the wide range of performance metrics obtained from [O-15]water PET studies of 12 age- and sex-matched data sets (8 subjects/set) performing different motor tasks. For the 12 data sets we apply NPAIRS with both univariate and multivariate data-analysis approaches to: (1) demonstrate that this framework may be used to obtain reproducible SPMs from any data-analysis approach on a common Z-score scale (rSPM{Z}); (2) demonstrate that the histogram of an rSPM{Z} image may be modeled as the sum of a data-analysis-dependent noise distribution and a task-dependent, Gaussian signal distribution that scales monotonically with our reproducibility performance metric; (3) explore the relation between prediction and reproducibility performance metrics, with an emphasis on bias-variance tradeoffs for flexible, multivariate models; and (4) measure the broad range of reproducibility SNRs and the significant influence of individual subjects.
A companion paper reports learning curves for four of these 12 data sets, plotting an alternative mutual-information prediction metric and NPAIRS reproducibility as functions of training-set sizes from 2 to 18 subjects. We propose the NPAIRS framework as a validation tool for testing and optimizing methodological choices and tools in functional neuroimaging. (C) 2002 Elsevier Science (USA).
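The split-half resampling idea at the core of the abstract can be illustrated with a short sketch. This is not the authors' implementation: it assumes two independently derived, mean-zero split-half SPMs over the same voxels, and the function and variable names (`npairs_split_half`, `spm1`, `spm2`) are illustrative. Reproducibility is taken as the Pearson correlation of the two maps, and a common-scale rSPM{Z} is formed by projecting each voxel's pair of values onto the signal (major) axis and scaling by the noise standard deviation estimated from the difference (minor) axis.

```python
import numpy as np

def npairs_split_half(spm1, spm2):
    """Hedged sketch of split-half reproducibility and rSPM{Z}.

    spm1, spm2: 1-D arrays of voxel values from two SPMs computed on
    independent halves of the subjects (assumed mean-zero).

    Returns (r, z):
      r -- Pearson correlation across voxels (reproducibility metric)
      z -- per-voxel Z-scores: major-axis projection of each
           (spm1, spm2) pair divided by the noise SD estimated
           from the minor-axis projection
    """
    signal = (spm1 + spm2) / np.sqrt(2.0)  # projection onto the 45-degree signal axis
    noise = (spm1 - spm2) / np.sqrt(2.0)   # projection onto the orthogonal noise axis
    r = np.corrcoef(spm1, spm2)[0, 1]
    z = signal / noise.std(ddof=1)
    return r, z

# Synthetic demonstration: a shared "true" map plus independent noise
rng = np.random.default_rng(0)
true_map = rng.normal(size=5000)
half1 = true_map + 0.5 * rng.normal(size=5000)
half2 = true_map + 0.5 * rng.normal(size=5000)
r, z = npairs_split_half(half1, half2)
```

In this toy setup the correlation `r` rises toward 1 as the shared signal dominates the split-specific noise, matching the abstract's point that the reproducibility metric scales with the SNR of the underlying signal distribution.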

