Journal
AMERICAN JOURNAL OF MEDICINE
Volume 135, Issue 6, Pages 769-774
Publisher
ELSEVIER SCIENCE INC
DOI: 10.1016/j.amjmed.2021.12.020
Keywords
Critical care patients; External validation; Laboratory prediction; Machine learning; Predictive analytics
Funding
- National Center for Advancing Translational Sciences (NCATS) [U01TR002062, UL1TR000371, U01TR002393]
- National Institute of Aging (NIA) [R01AG066749]
- Cancer Prevention and Research Institute of Texas (CPRIT) [RP170668, RR180012]
- Reynolds and Reynolds Professorship in Clinical Informatics
This study externally validated a machine learning algorithm for identifying unnecessary laboratory tests, demonstrating similar performance at a different institution. The model showed good accuracy in predicting abnormality and transitions, but its accuracy in predicting actual laboratory values was too low for most clinical applications.
BACKGROUND: Unnecessary laboratory tests contribute to iatrogenic harm and are a major source of waste in the health care system. We previously developed a machine learning algorithm to help clinicians identify unnecessary laboratory tests, but it had not been externally validated. In this study, we externally validate that algorithm.

METHODS: To externally validate the machine learning algorithm, originally trained on the Medical Information Mart for Intensive Care (MIMIC) III database, we tested it at a separate institution. We identified and abstracted data for all patients older than 18 years admitted to the intensive care unit at Memorial Hermann Hospital (MHH) in Houston, Texas, from January 1, 2020 to November 13, 2020. Using a transfer-learning approach, we performed external validation of the algorithm.

RESULTS: A total of 651 MHH patients were included. The model performed well in predicting abnormality (area under the curve [AUC] 0.98 for MIMIC III and 0.89 for MHH). The model performed similarly in predicting transitions from the normal laboratory range to abnormal (AUC 0.71 for MIMIC III and 0.70 for MHH). Performance in predicting the actual laboratory value was also similar between MIMIC III (accuracy 0.41) and MHH (accuracy 0.45).

CONCLUSIONS: We externally validated the machine learning model and showed that it performed similarly, supporting its generalizability to other settings. While the model demonstrated good performance for predicting abnormal labs and transitions, it does not perform well enough to predict laboratory values in most clinical applications.

(C) 2022 Elsevier Inc. All rights reserved.
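The external-validation comparison above rests on computing the same discrimination metric (AUC) on both the development cohort (MIMIC III) and the external cohort (MHH). As a loose illustration only, not the authors' code, the sketch below computes AUC from predicted scores via the rank-sum (Mann-Whitney U) formula; the labels and scores are hypothetical stand-ins for cohort data.

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formula.

    labels: 1 = abnormal lab result, 0 = normal.
    scores: model-predicted probabilities of abnormality.
    AUC is the probability a random positive outranks a random negative.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both classes to compute AUC")
    # Count concordant (positive-over-negative) pairs; ties count as half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Hypothetical internal (development) vs. external (validation) cohorts:
internal_auc = auc([1, 1, 0, 0, 1, 0], [0.9, 0.8, 0.2, 0.1, 0.7, 0.3])
external_auc = auc([1, 0, 1, 0, 0, 1], [0.8, 0.4, 0.6, 0.7, 0.3, 0.2])
print(internal_auc, external_auc)
```

A drop from the internal to the external AUC (as from 0.98 to 0.89 in the abstract) quantifies how much discrimination is lost when the model moves to a new institution.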