Article

Evaluating Meta-Analytic Methods to Detect Selective Reporting in the Presence of Dependent Effect Sizes

Journal

PSYCHOLOGICAL METHODS
Volume 26, Issue 2, Pages 141-160

Publisher

AMER PSYCHOLOGICAL ASSOC
DOI: 10.1037/met0000300

Keywords

meta-analysis; selective reporting; small-study effects; publication bias; robust variance estimation


Selective reporting of statistically significant results in primary studies can distort meta-analytic findings. Existing methods for detecting this problem do not account for statistically dependent effect size estimates from primary studies, highlighting the need for further investigation. Tests that incorporate techniques to handle dependent effect sizes show promise in controlling false positive rates, but have limited power to detect selective reporting biases unless a majority of effect sizes are statistically significant. Future work is needed to enhance and expand these methods.
Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and routinely used in research syntheses. Most such techniques are univariate, in that they assume each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as for multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting (the trim-and-fill test, Egger's regression test, and a likelihood ratio test from a three-parameter selection model, or 3PSM) when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger's regression test that incorporate robust variance estimation (RVE) or multilevel meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. The variants of Egger's regression maintain Type I error rates when a single effect size is sampled per study or when dependence is handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. With the exception of the 3PSM, all methods have limited power to detect selection bias except under strong selection for statistically significant effects.

Translational Abstract

When researchers or journals prefer to publish mostly or solely primary studies with statistically significant results, the resulting selective reporting biases can distort the findings of meta-analyses. A variety of statistical methods for detecting this problem are available and routinely used in meta-analysis, but existing methods do not account for the fact that primary studies often contribute multiple, statistically dependent effect size estimates to a meta-analytic dataset. There is thus a need to investigate tests for detecting selective reporting that also account for statistically dependent effect size estimates. We evaluated the performance of several such tests in an extensive computer simulation. The results indicated that ignoring the problem of dependent effect sizes produces tests with incorrect false positive rates. Regression tests for small-study effects have controlled false positive rates when paired with techniques that account for statistically dependent effect size estimates, specifically by modeling the dependency using multivariate techniques. However, they have limited power to detect selective reporting except when nearly all included effect sizes are statistically significant. Tests based on a three-parameter selection model do not fully control false positive rates, but show more promising power. Future methodological work is needed to improve and extend these methods.
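To make the regression-based approach concrete, below is a minimal Python sketch of an Egger-type test in its weighted-least-squares form, fit both naively and with cluster-robust (RVE-style) standard errors. This is an illustration under stated assumptions, not the authors' implementation: the data arrays are invented, and statsmodels' cluster-robust covariance is a coarser correction than the small-sample-adjusted RVE estimators typically recommended for meta-analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: effect size estimates, their standard errors, and the
# study each estimate belongs to. Several estimates share a study, so the
# estimates within a study are statistically dependent.
yi = np.array([0.31, 0.48, 0.12, 0.55, 0.07, 0.40, 0.22, 0.61])
sei = np.array([0.10, 0.15, 0.12, 0.20, 0.09, 0.18, 0.11, 0.22])
study = np.array([1, 1, 2, 2, 3, 4, 4, 5])

# Egger-type regression in weighted-least-squares form: regress each
# estimate on its standard error with inverse-variance weights. A nonzero
# coefficient on the standard error signals small-study effects.
X = sm.add_constant(sei)
model = sm.WLS(yi, X, weights=1.0 / sei**2)

# Univariate fit: treats every estimate as independent, the practice the
# simulations show inflates Type I error rates.
naive = model.fit()

# RVE-style fit: identical point estimates, but standard errors are made
# robust to clustering by study, so dependent estimates from the same
# study no longer count as independent pieces of evidence.
robust = model.fit(cov_type="cluster", cov_kwds={"groups": study})

print(f"naive slope test:  b = {naive.params[1]:.3f}, p = {naive.pvalues[1]:.3f}")
print(f"robust slope test: b = {robust.params[1]:.3f}, p = {robust.pvalues[1]:.3f}")
```

In applied work, dedicated meta-analysis tooling (for example, the R packages metafor and clubSandwich) provides these tests with the small-sample corrections that matter when the number of studies is modest.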

