Article

Low statistical power and overestimated anthropogenic impacts, exacerbated by publication bias, dominate field studies in global change biology

Journal

GLOBAL CHANGE BIOLOGY
Volume 28, Issue 3, Pages 969-989

Publisher

WILEY
DOI: 10.1111/gcb.15972

Keywords

climate change; exaggerated effect size; experimentation; meta-research; meta-science; reproducibility; second-order meta-analysis; selective reporting bias; small-study effect; transparency

Funding

  1. National Natural Science Foundation of China [32102597]
  2. China Agriculture Research System [CARS-40]
  3. Deutsche Forschungsgemeinschaft [DFG HI 848/26-1]
  4. Alfred-Wegener-Institute, Helmholtz-Center for Polar and Marine Research
  5. Ministry for Science and Culture of Lower Saxony [ZN3285]
  6. Volkswagen Foundation [ZN3285]
  7. Australian Research Council (ARC) Discovery Grant [DP210100812]


Field studies are essential for quantifying ecological responses to global change, but may have limited statistical power and inflate effect size estimates. Meta-analyses can mitigate this issue, and surprisingly, manipulative experiments are not necessarily more powerful than non-manipulative observations, challenging previous assumptions. Future research calls for high-powered field studies and transparent reporting to ensure reproducibility and reliability in empirical work and evidence synthesis.
Field studies are essential for reliably quantifying ecological responses to global change because they expose ecosystems to realistic environmental conditions. Yet such studies often have few replicates, resulting in low statistical power and therefore potentially unreliable effect estimates. Furthermore, while manipulative field experiments are assumed to be more powerful than non-manipulative observations, this assumption has rarely been scrutinized with extensive data. Here, using 3847 field experiments designed to estimate the effects of environmental stressors on ecosystems, we systematically quantified their statistical power and their magnitude (Type M) and sign (Type S) errors. Our investigation focused on the reliability of field experiments in assessing the effects of stressors on both the magnitude and the variability of ecosystem responses. When controlling for publication bias, single experiments were underpowered to detect response magnitude (median power: 18%-38%, depending on effect size). Single experiments had even lower power to detect response variability (6%-12%, depending on effect size). Such underpowered studies could exaggerate estimates of response magnitude by a factor of 2-3 (Type M errors) and of response variability by a factor of 4-10. Type S errors were comparatively rare. These observations indicate that low power, coupled with publication bias, inflates estimates of anthropogenic impacts. Importantly, we found that meta-analyses largely mitigated the problems of low power and exaggerated effect size estimates. Rather surprisingly, manipulative experiments and non-manipulative observations performed very similarly in terms of power and Type M and S errors; the presumed superiority of manipulative experiments in statistical power is therefore overstated. These results call for highly powered field studies, achieved through greater collaboration, team science, and large-scale ecosystem facilities, to reliably inform theory building and policymaking. Future studies also require transparent reporting and open science practices to make empirical work and evidence synthesis reproducible and reliable.
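The Type M (magnitude exaggeration) and Type S (sign) errors discussed in the abstract can be illustrated with a short simulation. The sketch below is not taken from the paper; it assumes an illustrative normal sampling model for a single underpowered study (true effect 0.2, standard error 0.15) and measures what happens when only statistically significant results are considered, mimicking publication bias:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative numbers (assumptions, not from the paper): a study estimates
# a true effect of 0.2 with standard error 0.15 -- an underpowered design.
true_effect, se, n_sims = 0.2, 0.15, 200_000
estimates = rng.normal(true_effect, se, n_sims)

# Two-sided z-test at alpha = 0.05: significant when |estimate| > 1.96 * se.
significant = np.abs(estimates) > 1.96 * se

# Power: fraction of simulated studies that reach significance.
power = significant.mean()
# Type M error: average factor by which significant estimates
# overstate the true effect magnitude.
type_m = np.abs(estimates[significant]).mean() / true_effect
# Type S error: fraction of significant estimates with the wrong sign.
type_s = (estimates[significant] < 0).mean()

print(f"power  ~ {power:.2f}")
print(f"Type M ~ {type_m:.2f}x exaggeration")
print(f"Type S ~ {type_s:.4f}")
```

With these illustrative numbers, power comes out around 27%, and the significant estimates overstate the true effect by roughly a factor of two, while sign errors stay rare, mirroring the pattern of inflated effect sizes under low power described above.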

