Article

Assessing the calibration of mortality benchmarks in critical care: The Hosmer-Lemeshow test revisited

Journal

CRITICAL CARE MEDICINE
Volume 35, Issue 9, Pages 2052-2056

Publisher

LIPPINCOTT WILLIAMS & WILKINS
DOI: 10.1097/01.CCM.0000275267.64078.B0

Keywords

intensive care; patient outcome assessment; predictive models; hospital mortality; Hosmer-Lemeshow statistic; logistic regression

Objective: To examine the sensitivity of the Hosmer-Lemeshow test in evaluating the calibration of models predicting hospital mortality in large critical care populations.

Design: Simulation study.

Setting: Intensive care unit databases used for predictive modeling.

Patients: Simulated data sets representing the approximate numbers of patients used in earlier critical care predictive models (n = 5,000 and n = 10,000) and in more recent models (n = 50,000). Each simulated patient was assigned a hospital mortality probability generated as a function of 23 risk variables.

Interventions: None.

Measurements and Main Results: Data sets of 5,000, 10,000, and 50,000 patients were each replicated 1,000 times, and a logistic regression model was evaluated for each simulated data set. This process was first carried out under conditions of perfect fit (observed mortality = predicted mortality; standardized mortality ratio = 1.000) and then repeated with an observed mortality that deviated slightly (by 0.4%) from predicted mortality. Under perfect fit, the Hosmer-Lemeshow test was not influenced by the number of patients in the data set. With the slight deviation from perfect fit, however, the test was sensitive to sample size: for 5,000 patients, 10% of Hosmer-Lemeshow tests were significant at p < .05; for 10,000 patients, 34% were significant; and when the number of patients matched contemporary studies (50,000), the test was statistically significant in 100% of the models.

Conclusions: Caution should be used when interpreting the calibration of predictive models that were developed on smaller data sets and are then applied to larger numbers of patients. A statistically significant Hosmer-Lemeshow test does not necessarily mean that a predictive model is flawed or without value. Although decisions about a mortality model's suitability should include the Hosmer-Lemeshow test, additional information should be taken into account: the overall number of patients, the observed and predicted probabilities within each decile, and adjunct measures of model calibration.
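The sample-size sensitivity described above can be illustrated with a small simulation. The sketch below is not the authors' code: it replaces the 23-variable risk model with a single logistic risk score, applies the 0.4% deviation as a constant shift to the true mortality probability, and uses far fewer replications than the paper's 1,000, so the exact rejection rates will differ. The function names (`hosmer_lemeshow`, `significant_fraction`) are hypothetical. The Hosmer-Lemeshow statistic is computed over 10 deciles of predicted risk and compared with a chi-square distribution on groups − 2 degrees of freedom (the usual convention for a model fitted on the same data; groups degrees of freedom is sometimes used for external validation), using the closed-form chi-square survival function available for even degrees of freedom.

```python
import math
import random

def chi2_sf_even_df(x, df):
    """Chi-square survival function P(X > x), closed form for even df."""
    m = df // 2
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i) for i in range(m))

def hosmer_lemeshow(p_pred, y, groups=10):
    """Hosmer-Lemeshow statistic over deciles of predicted risk.

    Returns (statistic, p-value) with groups - 2 degrees of freedom.
    """
    pairs = sorted(zip(p_pred, y))
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        ng = len(chunk)
        expected = sum(p for p, _ in chunk)            # sum of predicted risks
        observed = sum(out for _, out in chunk)        # observed deaths
        pbar = expected / ng                           # mean predicted risk in decile
        stat += (observed - expected) ** 2 / (ng * pbar * (1 - pbar))
    return stat, chi2_sf_even_df(stat, groups - 2)

def significant_fraction(n_patients, miscal=0.004, reps=40, seed=0):
    """Fraction of replications with a Hosmer-Lemeshow test significant at p < .05.

    Toy stand-in for the paper's design: predicted risks come from one
    logistic risk score, and `miscal` shifts the true mortality probability.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        # Predicted risk: logistic transform of a Gaussian risk score.
        p_pred = [1 / (1 + math.exp(-rng.gauss(-2.0, 1.0))) for _ in range(n_patients)]
        # Outcomes drawn from a slightly miscalibrated true probability.
        y = [1 if rng.random() < min(1.0, p + miscal) else 0 for p in p_pred]
        _, pval = hosmer_lemeshow(p_pred, y)
        if pval < 0.05:
            hits += 1
    return hits / reps
```

With this toy generator the exact rejection rates will not match the paper's 10%/34%/100%, but the qualitative pattern reproduces: for a fixed small miscalibration, `significant_fraction(50000)` is well above `significant_fraction(5000)`, since the Hosmer-Lemeshow statistic grows roughly linearly with the number of patients for a fixed deviation.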
