4.4 Article

Training and Interpreting Machine Learning Algorithms to Evaluate Fall Risk After Emergency Department Visits

Journal

MEDICAL CARE
Volume 57, Issue 7, Pages 560-566

Publisher

LIPPINCOTT WILLIAMS & WILKINS
DOI: 10.1097/MLR.0000000000001140

Keywords

falls; screening; electronic health record; machine learning; emergency medicine

Funding

  1. Agency for Healthcare Research and Quality (AHRQ) [K08HS024558, K08HS024342]
  2. National Institutes of Health (NIH) [K08DK111234, K24AG054560]
  3. Clinical and Translational Science Award (CTSA) program, through the NIH National Center for Advancing Translational Sciences (NCATS) [UL1TR000427]

Abstract

Background: Machine learning is increasingly used for risk stratification in health care. Accurate predictive models do not improve outcomes if they cannot be translated into efficacious interventions. Here we examine the potential utility of an automated risk stratification and referral intervention to screen older adults for fall risk after emergency department (ED) visits.

Objective: This study evaluated several machine learning methodologies for the creation of a risk stratification algorithm using electronic health record data and estimated the effects of a resultant intervention based on algorithm performance in test data.

Methods: Data available at the time of ED discharge were retrospectively collected and separated into training and test datasets. Algorithms were developed to predict the outcome of a return visit for a fall within 6 months of an ED index visit. Models included random forests, AdaBoost, and regression-based methods. We evaluated models both by the area under the receiver operating characteristic (ROC) curve, also referred to as the area under the curve (AUC), and by projected clinical impact, estimating the number needed to treat (NNT) and referrals per week for a fall risk intervention.

Results: The random forest model achieved an AUC of 0.78, with slightly lower performance in the regression-based models. Algorithms with similar performance, as evaluated by AUC, differed when placed in a clinical context with the defined task of estimating NNT in a real-world scenario.

Conclusion: The ability to translate the results of our analysis into the potential tradeoff between the number of referrals and the NNT offers decision makers the ability to envision the effects of a proposed intervention before implementation.
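The workflow described above can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the authors' implementation: the synthetic data, the top-decile referral threshold, and the 20% intervention-efficacy figure are all assumptions made here for demonstration, chosen only to show how an AUC and a projected NNT can be derived from the same test-set scores.

```python
# Hypothetical sketch: train a random forest fall-risk classifier on
# synthetic EHR-like data, report its AUC, and translate a referral
# threshold into an estimated number needed to treat (NNT).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features available at ED discharge;
# ~10% of visits are followed by a return fall visit.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, scores)

# Referral policy: refer the top 10% of patients by predicted risk.
threshold = np.quantile(scores, 0.9)
referred = scores >= threshold
ppv = y_test[referred].mean()  # fraction of referrals who would fall

# Assumed intervention efficacy: the referral prevents 20% of falls
# among correctly identified patients (illustrative value only).
efficacy = 0.20
arr = ppv * efficacy           # absolute risk reduction per referral
nnt = 1.0 / arr if arr > 0 else float("inf")

print(f"AUC: {auc:.2f}, referrals: {int(referred.sum())}, "
      f"PPV: {ppv:.2f}, estimated NNT: {nnt:.1f}")
```

Sweeping the referral threshold in this sketch reproduces the tradeoff the abstract describes: a lower threshold yields more referrals per week but a higher NNT, while a stricter threshold concentrates referrals on higher-risk patients at the cost of missed cases.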

