4.5 Article

Validity of data extraction in evidence synthesis practice of adverse events: reproducibility study

Journal

BMJ-BRITISH MEDICAL JOURNAL
Volume 377, Issue -, Pages -

Publisher

BMJ PUBLISHING GROUP
DOI: 10.1136/bmj-2021-069155

Keywords

-

Funding

  1. Australian National Health and Medical Research Council Fellowship [APP1158469]
  2. US National Institutes of Health/National Library of Medicine [R01 LM012982]
  3. National Institutes of Health/National Institute of Mental Health [R03 MH128727]


This study aims to investigate the validity of data extraction in systematic reviews of adverse events, examine the impact of data extraction errors on the results, and develop a classification framework for data extraction errors to support further methodological research.
OBJECTIVES To investigate the validity of data extraction in systematic reviews of adverse events, the effect of data extraction errors on the results, and to develop a classification framework for data extraction errors to support further methodological research.

DESIGN Reproducibility study.

DATA SOURCES PubMed was searched for eligible systematic reviews published between 1 January 2015 and 1 January 2020. Metadata from the randomised controlled trials were extracted from the systematic reviews by four authors. The original data sources (eg, full text and ClinicalTrials.gov) were then referred to by the same authors to reproduce the data used in these meta-analyses.

ELIGIBILITY CRITERIA FOR SELECTING STUDIES Systematic reviews were included when based on randomised controlled trials for healthcare interventions that reported safety as the exclusive outcome, with at least one pairwise meta-analysis that included five or more randomised controlled trials and with a 2x2 table of data for event counts and sample sizes in intervention and control arms available for each trial in the meta-analysis.

MAIN OUTCOME MEASURES The primary outcome was data extraction errors summarised at three levels: study level, meta-analysis level, and systematic review level. The potential effect of such errors on the results was further investigated.

RESULTS 201 systematic reviews and 829 pairwise meta-analyses involving 10 386 randomised controlled trials were included. Data extraction could not be reproduced in 1762 (17.0%) of 10 386 trials. In 554 (66.8%) of 829 meta-analyses, at least one randomised controlled trial had data extraction errors; 171 (85.1%) of 201 systematic reviews had at least one meta-analysis with data extraction errors. The most common types of data extraction errors were numerical errors (49.2%, 867/1762) and ambiguous errors (29.9%, 526/1762), mainly caused by ambiguous definitions of the outcomes. These categories were followed by three others: zero assumption errors, misidentification errors, and mismatching errors. The impact of these errors was analysed on 288 meta-analyses. Data extraction errors led to 10 (3.5%) of 288 meta-analyses changing the direction of the effect and 19 (6.6%) of 288 meta-analyses changing the significance of the P value. Meta-analyses that had two or more different types of errors were more susceptible to these changes than those with only one type of error (for moderate changes, 11 (28.2%) of 39 v 26 (10.4%) of 249, P=0.002; for large changes, 5 (12.8%) of 39 v 8 (3.2%) of 249, P=0.01).

CONCLUSION Systematic reviews of adverse events potentially have serious issues in terms of the reproducibility of the data extraction, and these errors can mislead the conclusions. Implementation guidelines are urgently required to help authors of future systematic reviews improve the validity of data extraction.

Authors

