
Machine learning in systematic reviews: Comparing automated text clustering with Lingo3G and human researcher categorization in a rapid review

Journal

RESEARCH SYNTHESIS METHODS
Volume 13, Issue 2, Pages 229-241

Publisher

WILEY
DOI: 10.1002/jrsm.1541

Keywords

clustering; Lingo3G; machine learning; scoping reviews; systematic review


The study evaluated the utility of an automated clustering method in categorizing studies, finding that automated clustering had precision similar to manual categorization but higher recall, while manual categorization required 49% more time. The clustering algorithm was sensitive enough to group studies by linguistic differences that often corresponded to the manual categories.
Systematic reviews are resource-intensive. The machine learning tools being developed mostly focus on the study identification process, but tools to assist in analysis and categorization are also needed. One possibility is to use unsupervised automatic text clustering, in which each study is automatically assigned to one or more meaningful clusters. Our main aim was to assess the usefulness of an automated clustering method, Lingo3G, in categorizing studies in a simplified rapid review, and to compare the performance (precision and recall) of this method with manual categorization. We randomly assigned all 128 studies in a review to be coded by a human researcher blinded to cluster assignment (mimicking two independent researchers) or by a human researcher non-blinded to cluster assignment (mimicking one researcher checking another's work). We compared time use, precision, and recall of manual categorization versus automated clustering. Automated clustering and manual categorization both organized studies by population and intervention/context. Automated clustering failed to identify two manually identified categories but identified one additional category not identified by the human researcher. We estimate that automated clustering has precision similar to both blinded and non-blinded researchers (e.g., 88% vs. 89%), but higher recall (e.g., 89% vs. 84%). Manual categorization required 49% more time than automated clustering. Using a specific clustering algorithm, automated clustering can help with categorizing studies and identifying patterns across them in simpler systematic reviews. We found that the clustering was sensitive enough to group studies according to linguistic differences that often corresponded to the manual categories.

