
Estimating the Difference Between Published and Unpublished Effect Sizes: A Meta-Review

Journal

Review of Educational Research
Volume 86, Issue 1, Pages 207-236

Publisher

SAGE Publications Inc
DOI: 10.3102/0034654315582067

Keywords

publication bias; meta-review; effect size; meta-analysis

Funding

  1. Loyola University Chicago
  2. Institute of Education Sciences [R305B100016]


Practitioners and policymakers rely on meta-analyses to inform decision making around the allocation of resources to individuals and organizations. It is therefore paramount to consider the validity of these results. A well-documented threat to the validity of research synthesis results is the presence of publication bias, a phenomenon where studies with large and/or statistically significant effects, relative to studies with small or null effects, are more likely to be published. We investigated this phenomenon empirically by reviewing meta-analyses published in top-tier journals between 1986 and 2013 that quantified the difference between effect sizes from published and unpublished research. We reviewed 383 meta-analyses, of which 81 had sufficient information to calculate an effect size. Results indicated that published studies yielded larger effect sizes than those from unpublished studies (d̄ = 0.18, 95% confidence interval [0.10, 0.25]). Moderator analyses revealed that the difference was larger in meta-analyses that included a wide range of unpublished literature. We conclude that intervention researchers require continued support to publish null findings and that meta-analyses should include unpublished studies to mitigate the potential bias from publication status.
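The pooled difference reported in the abstract (d̄ with a 95% confidence interval) is the kind of quantity produced by inverse-variance weighting of standardized mean differences. Below is a minimal fixed-effect sketch of that computation; the input numbers are illustrative placeholders, not the paper's data, and the paper's actual estimation method may differ (e.g., a random-effects model).

```python
import math

def pooled_effect(d_values, variances):
    """Fixed-effect inverse-variance pooling of standardized mean
    differences (Cohen's d). Returns the weighted mean effect and
    its 95% confidence interval (mean +/- 1.96 * SE)."""
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    mean = sum(w * d for w, d in zip(weights, d_values)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))              # SE of the pooled mean
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical published-minus-unpublished differences from three
# meta-analyses (made-up numbers for illustration only)
d_vals = [0.25, 0.12, 0.18]
variances = [0.010, 0.020, 0.015]
mean, (lo, hi) = pooled_effect(d_vals, variances)
print(f"d-bar = {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A confidence interval that excludes zero, as in the paper's [0.10, 0.25], is what licenses the conclusion that published effects are systematically larger.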

