Article

Explainable Machine Learning Model for Predicting GI Bleed Mortality in the Intensive Care Unit

Journal

AMERICAN JOURNAL OF GASTROENTEROLOGY
Volume 115, Issue 10, Pages 1657-1668

Publisher

LIPPINCOTT WILLIAMS & WILKINS
DOI: 10.14309/ajg.0000000000000632

Abstract

INTRODUCTION: Acute gastrointestinal (GI) bleed is a common reason for hospitalization, with a 2%-10% risk of mortality. In this study, we developed a machine learning (ML) model to calculate the risk of mortality in intensive care unit patients admitted for GI bleed and compared it with the APACHE IVa risk score. We used explainable ML methods to provide insight into the model's predictions and outcomes.

METHODS: We analyzed patient data in the Electronic Intensive Care Unit Collaborative Research Database and extracted data for 5,691 patients (mean age = 67.4 years; 61% men) admitted with GI bleed. The data were used to train an ML model to identify patients who died in the intensive care unit. We compared the predictive performance of the ML model with that of the APACHE IVa risk score. Performance was measured by area under the receiver operating characteristic curve (AUC) analysis. This study also used explainable ML methods to provide insight into the model's predictions using the SHAP (SHapley Additive exPlanations) method.

RESULTS: The ML model performed better than the APACHE IVa risk score in correctly classifying low-risk patients. The ML model had a specificity of 27% (95% confidence interval [CI]: 25-36) at a sensitivity of 100%, compared with the APACHE IVa score, which had a specificity of 4% (95% CI: 3-31) at a sensitivity of 100%. The model identified patients who died with an AUC of 0.85 (95% CI: 0.80-0.90) in the internal validation set, whereas the APACHE IVa clinical scoring system identified patients who died with an AUC of 0.80 (95% CI: 0.73-0.86), with P value

DISCUSSION: We developed an ML model that predicts mortality in patients with GI bleed with greater accuracy than the current scoring system. By making the ML model explainable, clinicians can better understand the reasoning behind its predictions.
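The abstract describes a three-part workflow: train a classifier on eICU-derived features, benchmark it against APACHE IVa by AUC and by specificity at 100% sensitivity, and attribute each prediction to input features with SHAP. The sketch below is a minimal illustration of that workflow, not the authors' implementation: the features are synthetic stand-ins for the eICU variables, the model family (gradient boosting) is an assumption rather than the paper's stated choice, and it relies on the open-source shap package; all variable names are hypothetical.

    # Minimal sketch of the abstract's workflow: train a classifier, measure AUC,
    # read off specificity at 100% sensitivity, and explain predictions with SHAP.
    # Synthetic data stands in for the eICU features; the model family is assumed.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score, roc_curve
    from sklearn.model_selection import train_test_split
    import shap  # pip install shap

    # Hypothetical stand-in for extracted ICU variables (labs, vitals, etc.).
    X, y = make_classification(
        n_samples=5691, n_features=20, n_informative=8,
        weights=[0.95, 0.05],  # ICU death is the rare class, as in the cohort
        random_state=0,
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]  # predicted risk of ICU death

    # Discrimination: area under the ROC curve on the held-out split.
    print(f"AUC: {roc_auc_score(y_test, proba):.2f}")

    # Specificity at 100% sensitivity: the lowest false-positive rate among
    # thresholds that still capture every death -- the low-risk triage metric
    # the abstract uses to compare the model against APACHE IVa.
    fpr, tpr, _ = roc_curve(y_test, proba)
    spec_at_full_sens = 1.0 - fpr[tpr >= 1.0].min()
    print(f"Specificity at 100% sensitivity: {spec_at_full_sens:.0%}")

    # SHAP attributions: per-patient, per-feature contributions to the
    # predicted risk, which is what makes the model's output explainable.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    print("SHAP matrix shape:", np.asarray(shap_values).shape)

Because the data here are synthetic, the printed numbers will not match the paper's reported 0.85 AUC or 27% specificity; the sketch only shows where each reported quantity comes from in such a pipeline.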
