Article

Tracking Cross-Validated Estimates of Prediction Error as Studies Accumulate

Journal

JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
Volume 110, Issue 511, Pages 1239-1247

Publisher

AMER STATISTICAL ASSOC
DOI: 10.1080/01621459.2014.1002926

Keywords

Bayes' rule; Classification; Mixture model

Funding

  1. Defense Advanced Research Projects Agency [FA8650-11-1-7151]
  2. National Science Council [100-2115-M-009-007-MY2]
  3. Center of Mathematical Modeling & Scientific Computing
  4. National Center for Theoretical Science, Hsinchu, Taiwan

Abstract

In recent years, reproducibility has emerged as a key factor in evaluating applications of statistics to the biomedical sciences, for example, learning predictors of disease phenotypes from high-throughput omics data. In particular, validation is undermined when error rates on newly acquired data are sharply higher than those originally reported. More precisely, when data are collected from m studies representing possibly different subphenotypes, or more generally different mixtures of subphenotypes, the error rates in cross-study validation (CSV) are observed to be larger than those obtained in ordinary randomized cross-validation (RCV), although the gap seems to close as m increases. Whereas these findings are hardly surprising for a heterogeneous underlying population, the discrepancy is nonetheless seen as a barrier to translational research. We provide a statistical formulation in the large-sample limit: studies themselves are modeled as components of a mixture, and all error rates are optimal (Bayes) for a two-class problem. Our results cohere with the trends observed in practice and suggest what is likely to be observed with large samples and consistent density estimators, namely, that the CSV error rate exceeds the RCV error rate for any m, that the latter (appropriately averaged) increases with m, and that both converge to the optimal rate for the whole population.
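As a rough illustration of the comparison described in the abstract, the sketch below simulates m studies whose two classes are mixtures of two subphenotypes with study-specific mixing weights, and contrasts leave-one-study-out (CSV) error with pooled randomized cross-validation (RCV) error. The Gaussian subphenotypes, the random study weights, and the nearest-class-mean classifier are illustrative assumptions chosen to keep the example self-contained; they are not the model or the estimators used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def sample_study(n, w):
    # One study: labels y in {0, 1}; each class is a two-component Gaussian
    # mixture, and w is this study's weight on the first subphenotype.
    y = rng.integers(0, 2, size=n)
    comp = rng.random(n) < w
    mu = np.where(comp, 0.0, 3.0) + 1.5 * y   # subphenotype shift + class shift
    x = mu + rng.normal(size=n)
    return x[:, None], y

def fit_means(X, y):
    # Plug-in nearest-class-mean rule (a simple stand-in classifier).
    return X[y == 0].mean(), X[y == 1].mean()

def err(X, y, m0, m1):
    pred = (np.abs(X[:, 0] - m1) < np.abs(X[:, 0] - m0)).astype(int)
    return np.mean(pred != y)

m, n = 5, 2000
studies = [sample_study(n, w) for w in rng.random(m)]   # study-specific mixtures

# CSV: train on m-1 studies, test on the held-out study.
csv = []
for k in range(m):
    Xtr = np.vstack([X for i, (X, _) in enumerate(studies) if i != k])
    ytr = np.concatenate([y for i, (_, y) in enumerate(studies) if i != k])
    Xte, yte = studies[k]
    csv.append(err(Xte, yte, *fit_means(Xtr, ytr)))

# RCV: pool all studies, then randomized 5-fold cross-validation.
Xall = np.vstack([X for X, _ in studies])
yall = np.concatenate([y for _, y in studies])
rcv = []
for fold in np.array_split(rng.permutation(len(yall)), 5):
    mask = np.ones(len(yall), dtype=bool)
    mask[fold] = False
    m0, m1 = fit_means(Xall[mask], yall[mask])
    rcv.append(err(Xall[fold], yall[fold], m0, m1))

print("CSV error:", round(float(np.mean(csv)), 3))
print("RCV error:", round(float(np.mean(rcv)), 3))

Because the held-out study's mixture generally differs from the pooled training mixture, the CSV estimate in this toy setup is typically the larger of the two, qualitatively matching the trend the abstract describes; it does not reproduce the paper's large-sample results.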
