4.6 Article

Publication bias in ecology and evolution: an empirical assessment using the 'trim and fill' method

Journal

BIOLOGICAL REVIEWS
Volume 77, Issue 2, Pages 211-222

Publisher

WILEY
DOI: 10.1017/S1464793101005875

Keywords

effect size; fail-safe number; fluctuating asymmetry; funnel plots; meta-analysis; publication bias; trim and fill


Recent reviews of specific topics, such as the relationship between male attractiveness to females and fluctuating asymmetry, or attractiveness and the expression of secondary sexual characters, suggest that publication bias might be a problem in ecology and evolution. In these cases, there is a significant negative correlation between the sample size of published studies and the magnitude or strength of the research findings (formally, the 'effect size'). If all studies that are conducted are equally likely to be published, irrespective of their findings, there should not be a directional relationship between effect size and sample size: only a decrease in the variance in effect size as sample size increases, due to a reduction in sampling error. One interpretation of these reports of negative correlations is that studies with small sample sizes and weaker findings (smaller effect sizes) are less likely to be published. If the biological literature is systematically biased, this could undermine the attempts of reviewers to summarise actual biological relationships by inflating estimates of average effect sizes. But how common is this problem? And does it really affect the general conclusions of literature reviews? Here, we examine data sets of effect sizes extracted from 40 peer-reviewed, published meta-analyses. We estimate how many studies are missing using the newly developed 'trim and fill' method. This method uses asymmetry in plots of effect size against sample size ('funnel plots') to detect 'missing' studies. For random-effects models of meta-analysis, 38% (15/40) of data sets had a significant number of 'missing' studies. After correcting for potential publication bias, 21% (8/38) of weighted mean effects were no longer significantly greater than zero, and 15% (5/34) were no longer statistically robust when we used random-effects models in a weighted meta-analysis.
The mean correlation between sample size and the magnitude of standardised effect size was also significantly negative (r_s = -0.20, P < 0.0001). Individual correlations were significantly negative (P < 0.10) in 35% (14/40) of cases. Publication bias may therefore affect the main conclusions of at least 15-21% of meta-analyses. We suggest that future literature reviews assess the robustness of their main conclusions by correcting for potential publication bias using the 'trim and fill' method.
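The 'trim and fill' procedure the abstract describes can be sketched in a few lines. The version below is a deliberately simplified, unweighted form of Duval and Tweedie's L0 estimator: it iteratively estimates the number of 'missing' left-side studies (k0) from funnel-plot asymmetry, trims the k0 most extreme right-side effects, re-estimates the centre, and finally 'fills' by mirroring the trimmed studies about the corrected centre. The published method additionally weights studies by their precision and supports fixed- and random-effects models; the toy effect sizes in the usage note are hypothetical, not data from the paper.

```python
def _rank_abs(values):
    """1-based average ranks of |values| (ties share their mean rank)."""
    order = sorted(range(len(values)), key=lambda i: abs(values[i]))
    ranks = [0.0] * len(values)
    j = 0
    while j < len(order):
        k = j
        while k + 1 < len(order) and abs(values[order[k + 1]]) == abs(values[order[j]]):
            k += 1
        avg = (j + k + 2) / 2           # positions j..k carry ranks j+1..k+1
        for m in range(j, k + 1):
            ranks[order[m]] = avg
        j = k + 1
    return ranks


def trim_and_fill(effects, max_iter=50):
    """Estimate k0 'missing' left-side studies; return (k0, filled effects)."""
    es = sorted(effects)
    n = len(es)
    k0 = 0
    center = sum(es) / n
    for _ in range(max_iter):
        trimmed = es[: n - k0] if k0 else es       # trim the k0 rightmost studies
        center = sum(trimmed) / len(trimmed)       # re-estimate the centre
        d = [e - center for e in es]
        ranks = _rank_abs(d)
        T = sum(r for r, di in zip(ranks, d) if di > 0)   # Wilcoxon rank sum
        L0 = (4 * T - n * (n + 1)) / (2 * n - 1)          # L0 estimator of k0
        new_k0 = max(0, round(L0))
        if new_k0 == k0:                           # converged
            break
        k0 = new_k0
    # 'Fill': mirror the k0 trimmed studies about the corrected centre.
    filled = [2 * center - e for e in es[n - k0:]] if k0 else []
    return k0, es + filled
```

For example, with the hypothetical observed effects `[0.0]*5 + [1.0, 1.3, 1.6]` (a cluster of null results plus an asymmetric right tail), the sketch estimates one missing study, and the mean of the filled data set falls below the naive mean of the observed studies, illustrating how correction can shrink an inflated average effect size.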
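The sample-size test reported above (a mean r_s of -0.20 across data sets) is a Spearman rank correlation between each study's sample size and the magnitude of its standardised effect size. A minimal from-scratch sketch, assuming no tied values (real data would need tie-corrected ranks, as the trim-and-fill analyses in the paper presumably used); the numbers in the example are hypothetical:

```python
def spearman(x, y):
    """Spearman's r_s = 1 - 6*sum(d^2) / (n*(n^2 - 1)), valid for untied data."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))


# Hypothetical funnel pattern: effect magnitude shrinks as sample size grows,
# the signature of publication bias discussed in the abstract.
sample_sizes = [10, 20, 40, 80, 160]
effect_sizes = [0.90, 0.70, 0.50, 0.30, 0.20]
rs = spearman(sample_sizes, effect_sizes)   # perfectly monotone decrease: -1.0
```

A significantly negative r_s flags the small-study asymmetry that motivates applying trim and fill in the first place.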

