Editorial Material

The reproducibility crisis meets stock assessment science: Sources of inadvertent bias in the stock assessment prioritization and review process

Journal

FISHERIES RESEARCH
Volume 266, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.fishres.2023.106763

Keywords

Reproducibility; Bias; P-hacking; Unintended consequences


The broader scientific community is struggling with a reproducibility crisis brought on by numerous factors, including p-hacking and selective reporting, which may increase the rate of false positives or generate misleading effect size estimates from meta-analyses. This arises when multiple modeling approaches or statistical tests can be brought to bear on the same problem and there are pressures or rewards for finding significant results. Fisheries science is unlikely to be immune to this problem, with numerous opportunities for bias to inadvertently enter the process: through the prioritization of stocks for assessment, decisions among competing model approaches or data treatments within complex assessment models, and decisions about whether to adopt assessments for management after they are reviewed. I present a simple simulation model of a system in which many assessments are performed each management cycle for a multi-stock fishery, and show how asymmetric selection of assessments for extra scrutiny or re-assessment within a cycle can turn a process generating unbiased advice on fishing limits into one that is biased high. I show similar results when sequential assessments receive extra scrutiny if they show large proportional decreases in catch limits relative to a prior assessment for the same stock, especially when true stock size or status changes little over the interval between assessments. The level of bias introduced by a plausible level of asymmetric scrutiny is unlikely to fundamentally undermine scientific advice, but it may be sufficient to compromise the nominal overfishing probabilities used in a common framework for accommodating uncertainty, and to introduce bias comparable to the difference between buffers corresponding to commonly applied levels of risk tolerance.
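The paper's simulation is not reproduced here, but the core mechanism it describes can be sketched in a few lines. The sketch below assumes, purely for illustration, normally distributed assessment error (CV of 0.3), a true catch limit of 100 for every stock, a "large proportional decrease" trigger of 20%, and a rule under which a triggered re-assessment simply replaces the original estimate; none of these values come from the paper.

```python
import random
import statistics

random.seed(1)

TRUE_LIMIT = 100.0   # assumed true sustainable catch limit (same for all stocks)
CV = 0.3             # assumed coefficient of variation of assessment error
N_STOCKS = 100_000   # many assessment cycles, to show the bias clearly

def assess():
    # An individually unbiased assessment: mean equals the true limit.
    return random.gauss(TRUE_LIMIT, CV * TRUE_LIMIT)

symmetric = []   # every assessment accepted as-is
asymmetric = []  # low assessments get extra scrutiny and are redone
for _ in range(N_STOCKS):
    est = assess()
    symmetric.append(est)
    # Asymmetric scrutiny: only estimates far *below* prior advice trigger
    # a re-assessment, and the re-assessment replaces the original draw.
    # High estimates are never re-examined.
    if est < 0.8 * TRUE_LIMIT:  # assumed 20% decrease trigger
        est = assess()
    asymmetric.append(est)

# Symmetric treatment stays unbiased; asymmetric scrutiny shifts advice high,
# because only the low tail of the error distribution gets a second draw.
print("symmetric mean: ", round(statistics.mean(symmetric), 1))
print("asymmetric mean:", round(statistics.mean(asymmetric), 1))
```

With these assumed values the asymmetric rule inflates the mean advised limit by several percent even though every individual assessment is unbiased, which is the qualitative pattern the abstract describes.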

