4.6 Review

Crowdsourcing and automation facilitated the identification and classification of randomized controlled trials in a living review

Journal

JOURNAL OF CLINICAL EPIDEMIOLOGY
Volume 164, Pages 1-8

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.jclinepi.2023.10.007

Keywords

Randomized controlled trials (RCTs); Crowdsourcing; Machine learning; Rheumatoid arthritis; Systematic reviews; Living systematic reviews; Automation

Abstract

This study evaluated an approach combining automation and crowdsourcing to identify and classify randomized controlled trials (RCTs) for rheumatoid arthritis (RA) in a living systematic review (LSR). The approach substantially reduced the screening workload for expert reviewers while remaining highly sensitive.
Objectives: To evaluate an approach using automation and crowdsourcing to identify and classify randomized controlled trials (RCTs) for rheumatoid arthritis (RA) in a living systematic review (LSR).

Methods: Records from a database search for RCTs in RA were screened first by machine learning and Cochrane Crowd to exclude non-RCTs, then by trainee reviewers using a Population, Intervention, Comparison, and Outcome (PICO) annotator platform to assess eligibility and classify each trial to the appropriate review. Disagreements were resolved by experts using a custom online tool. We evaluated efficiency gains, sensitivity, accuracy, and interrater agreement (kappa scores) between reviewers.

Results: From 42,452 records, machine learning and Cochrane Crowd excluded 28,777 (68%), trainee reviewers excluded 4,529 (11%), and experts excluded 7,200 (17%). The 1,946 records eligible for our LSR represented 220 RCTs and included 148/149 (99.3%) of known eligible trials from prior reviews. Although excluded from our LSRs, 6,420 records were classified as other RCTs in RA to inform future reviews. False negative rates among trainees were highest for the RCT domain (12%), although only 1.1% of these concerned the primary record. Kappa scores for two reviewers ranged from moderate to substantial agreement (0.40-0.69).

Conclusion: A screening approach combining machine learning, crowdsourcing, and trainee participation substantially reduced the screening burden for expert reviewers and was highly sensitive.

© 2023 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
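As an illustration only (this is not the authors' code), the Python sketch below reproduces the screening-cascade arithmetic reported in the Results and shows how Cohen's kappa, the interrater-agreement statistic cited above, is computed for two reviewers making binary include/exclude calls. The two-reviewer agreement counts are hypothetical, chosen merely to fall within the reported 0.40-0.69 range.

```python
# Sketch of the screening-cascade arithmetic and Cohen's kappa.
# Record counts are taken from the abstract; the kappa confusion
# counts are HYPOTHETICAL, for illustration of the formula only.

def cohens_kappa(both_yes, both_no, only_a, only_b):
    """Cohen's kappa for two raters making binary include/exclude calls."""
    n = both_yes + both_no + only_a + only_b
    p_observed = (both_yes + both_no) / n
    # Chance agreement, from each rater's marginal "include" rate.
    a_yes = (both_yes + only_a) / n
    b_yes = (both_yes + only_b) / n
    p_expected = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)
    return (p_observed - p_expected) / (1 - p_expected)

total = 42_452
excluded_ml_crowd = 28_777   # machine learning + Cochrane Crowd (68%)
excluded_trainees = 4_529    # trainee reviewers (11%)

reaching_experts = total - excluded_ml_crowd - excluded_trainees
print(f"Records screened out before expert review: "
      f"{1 - reaching_experts / total:.0%}")           # ~78%
print(f"Sensitivity vs. known eligible trials: {148 / 149:.1%}")  # 99.3%

# Hypothetical pair of reviewers: 60 agree-include, 100 agree-exclude,
# and 20 disagreements in each direction.
print(f"kappa = {cohens_kappa(60, 100, 20, 20):.2f}")   # 0.58, moderate
```

Run as written, the sketch prints a roughly 78% reduction in records reaching experts, the 99.3% sensitivity figure, and a kappa of 0.58, consistent with the moderate-to-substantial agreement range the study reports.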
