Journal
Statistical Science, Volume 23, Issue 1, Pages 1-22
Publisher
Institute of Mathematical Statistics
DOI: 10.1214/07-STS236
Keywords
simultaneous tests; empirical null; false discovery rates
Funding
- Division of Mathematical Sciences
- Directorate for Mathematical & Physical Sciences [0804324], Funding Source: National Science Foundation
Abstract
The classic frequentist theory of hypothesis testing developed by Neyman, Pearson and Fisher has a claim to being the twentieth century's most influential piece of applied mathematics. Something new is happening in the twenty-first century: high-throughput devices, such as microarrays, routinely require simultaneous hypothesis tests for thousands of individual cases, not at all what the classical theory had in mind. In these situations empirical Bayes information begins to force itself upon frequentists and Bayesians alike. The two-groups model is a simple Bayesian construction that facilitates empirical Bayes analysis. This article concerns the interplay of Bayesian and frequentist ideas in the two-groups setting, with particular attention focused on Benjamini and Hochberg's False Discovery Rate method. Topics include the choice and meaning of the null hypothesis in large-scale testing situations, power considerations, the limitations of permutation methods, significance testing for groups of cases (such as pathways in microarray studies), correlation effects, multiple confidence intervals and Bayesian competitors to the two-groups model.
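The Benjamini-Hochberg False Discovery Rate method discussed in the abstract can be sketched as follows. This is a minimal illustrative implementation of the standard step-up procedure, not code from the article; the function and variable names (`benjamini_hochberg`, `pvalues`, `q`) are my own.

```python
def benjamini_hochberg(pvalues, q=0.1):
    """Return indices of hypotheses rejected at FDR level q
    by the Benjamini-Hochberg step-up procedure."""
    m = len(pvalues)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * q ...
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            k = rank
    # ... and reject the k smallest p-values.
    return sorted(order[:k])
```

Note the step-up character of the rule: a p-value can be rejected even if it fails its own threshold, provided some larger p-value passes its threshold further down the sorted list.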