Article

Publication and related bias in meta-analysis: Power of statistical tests and prevalence in the literature

Journal

JOURNAL OF CLINICAL EPIDEMIOLOGY
Volume 53, Issue 11, Pages 1119-1129

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/S0895-4356(00)00242-0

Keywords

meta-analysis; publication bias; funnel plot; simulation study; correlation; regression

Abstract

Publication and selection biases in meta-analysis are more likely to affect small studies, which also tend to be of lower methodological quality. This may lead to small-study effects, where the smaller studies in a meta-analysis show larger treatment effects. Small-study effects may also arise because of between-trial heterogeneity. Statistical tests for small-study effects have been proposed, but their validity has been questioned. A set of typical meta-analyses containing 5, 10, 20, and 30 trials was defined based on the characteristics of 78 published meta-analyses identified in a hand search of eight journals from 1993 to 1997. Simulations were performed to assess the power of a weighted regression method and a rank correlation test in the presence of no bias, moderate bias, or severe bias. We based evidence of small-study effects on P < 0.1. The power to detect bias increased with increasing numbers of trials. The rank correlation test was less powerful than the regression method. For example, assuming a control group event rate of 20% and no treatment effect, moderate bias was detected with the regression test in 13.7%, 23.5%, 40.1%, and 51.6% of meta-analyses with 5, 10, 20, and 30 trials. The corresponding figures for the correlation test were 8.5%, 14.7%, 20.4%, and 26.0%, respectively. Severe bias was detected with the regression method in 23.5%, 56.1%, 88.3%, and 95.9% of meta-analyses with 5, 10, 20, and 30 trials, compared with 11.9%, 31.1%, 45.3%, and 65.4% with the correlation test. Similar results were obtained in simulations incorporating moderate treatment effects. However, the regression method gave false-positive rates that were too high in some situations (large treatment effects, few events per trial, or all trials of similar sizes). Using the regression method, evidence of small-study effects was present in 21 (26.9%) of the 78 published meta-analyses. Tests for small-study effects should routinely be performed in meta-analysis. Their power is, however, limited, particularly for moderate amounts of bias or meta-analyses based on a small number of small studies. When evidence of small-study effects is found, careful consideration should be given to possible explanations for these in the reporting of the meta-analysis. (C) 2000 Elsevier Science Inc. All rights reserved.
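
The two tests referred to in the abstract are a regression method for funnel-plot asymmetry (regressing standardized effects on precision) and a rank correlation test (Kendall's tau between standardized effects and their variances). The following Python sketch is an illustration only, assuming log odds ratios with known standard errors; the function names, the NumPy/SciPy implementation, and the example data are assumptions and do not reproduce the authors' simulation code.

import numpy as np
from scipy import stats


def egger_test(theta, se):
    # Regress standardized effects (theta / se) on precision (1 / se);
    # a non-zero intercept suggests small-study effects.
    theta = np.asarray(theta, dtype=float)
    se = np.asarray(se, dtype=float)
    z = theta / se
    precision = 1.0 / se
    X = np.column_stack([np.ones_like(precision), precision])
    beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    k = len(theta)
    sigma2 = resid @ resid / (k - 2)            # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)       # covariance of the coefficients
    t_stat = beta[0] / np.sqrt(cov[0, 0])       # test H0: intercept = 0
    p_value = 2.0 * stats.t.sf(abs(t_stat), df=k - 2)
    return beta[0], p_value


def begg_test(theta, var):
    # Kendall's tau between standardized effects and their variances,
    # after centring on the fixed-effect pooled estimate.
    theta = np.asarray(theta, dtype=float)
    var = np.asarray(var, dtype=float)
    w = 1.0 / var
    pooled = np.sum(w * theta) / np.sum(w)
    v_star = var - 1.0 / np.sum(w)              # variance of (theta_i - pooled)
    t_star = (theta - pooled) / np.sqrt(v_star)
    tau, p_value = stats.kendalltau(t_star, var)
    return tau, p_value


# Hypothetical log odds ratios and standard errors from five trials.
theta = [0.52, 0.35, 0.21, 0.10, 0.05]
se = [0.41, 0.30, 0.22, 0.15, 0.10]
intercept, p_egger = egger_test(theta, se)
tau, p_begg = begg_test(theta, [s ** 2 for s in se])
print(f"Regression intercept = {intercept:.2f} (P = {p_egger:.3f})")
print(f"Rank correlation tau = {tau:.2f} (P = {p_begg:.3f})")

In the study, evidence of small-study effects was based on P < 0.1; as the abstract notes, the regression approach can give inflated false-positive rates when treatment effects are large, events per trial are few, or all trials are of similar size.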
