4.4 Review

Utility of prediction model score: a proposed tool to standardize the performance and generalizability of clinical predictive models based on systematic review

Journal

JOURNAL OF NEUROSURGERY-SPINE
Volume 34, Issue 5, Pages 779-787

Publisher

AMER ASSOC NEUROLOGICAL SURGEONS
DOI: 10.3171/2020.8.SPINE20963

Keywords

prediction model; predictive analytics; survival; degenerative; prognostic; spine metastasis; generalizability; performance; oncology; diagnostic technique

This study evaluated current prediction models in spine metastasis and degenerative spine disease to create a scoring system. Only a few models were rated as excellent; most fell under good, fair, or poor. The performance and characteristics of prediction models affect their reliability and usability in clinical settings.
OBJECTIVE: The objective of this study was to evaluate the characteristics and performance of current prediction models in the fields of spine metastasis and degenerative spine disease and to create a scoring system that allows direct comparison of the prediction models.

METHODS: A systematic search of PubMed and Embase was performed to identify relevant studies that included either the proposal of a prediction model or an external validation of a previously proposed prediction model with 1-year outcomes. Characteristics of the original study and discriminative performance of external validations were then assigned points based on thresholds from the overall cohort.

RESULTS: Nine prediction models were included in the spine metastasis category, and 6 prediction models were included in the degenerative spine category. After the proposed utility of prediction model score was assigned to the spine metastasis prediction models, only 1 reached the grade of excellent, while 2 were graded as good, 3 as fair, and 3 as poor. Of the 6 included degenerative spine models, 1 reached the excellent grade, while 3 were graded as good, 1 as fair, and 1 as poor.

CONCLUSIONS: As interest in utilizing predictive analytics in spine surgery increases, there is a concomitant increase in the number of published prediction models that differ in methodology and performance. Before these models are applied to patient care, they must be evaluated. To begin addressing this issue, the authors proposed a grading system that compares these models on various metrics related to their original design as well as their internal and external validation. Ultimately, this may aid clinicians in determining the relative validity and usability of a given model.
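The abstract does not reproduce the paper's actual criteria or point thresholds, so the sketch below is only a hypothetical illustration of how a point-based "utility of prediction model" score might map onto the excellent/good/fair/poor grades. The criterion names, point caps, and grade cut-offs are assumptions for illustration, not the authors' published rubric.

```python
# Hypothetical sketch of a point-based model-utility score.
# Criteria and cut-offs below are illustrative placeholders, NOT the
# rubric published in the paper, which assigns points to study
# characteristics and external-validation discrimination against
# thresholds derived from the overall cohort.

from typing import Dict

# Placeholder criteria -> maximum points (assumed values)
MAX_POINTS: Dict[str, int] = {
    "internal_validation": 2,   # e.g., cross-validation or bootstrapping reported
    "external_validation": 3,   # e.g., validated in an independent cohort
    "discrimination": 3,        # e.g., points for discrimination above cohort thresholds
    "cohort_size": 2,           # e.g., points for a larger development cohort
}

def grade(points: Dict[str, int]) -> str:
    """Map awarded points to a grade (excellent/good/fair/poor).
    Grade cut-offs are assumed for illustration only."""
    total = sum(min(points.get(k, 0), cap) for k, cap in MAX_POINTS.items())
    fraction = total / sum(MAX_POINTS.values())
    if fraction >= 0.9:
        return "excellent"
    if fraction >= 0.7:
        return "good"
    if fraction >= 0.5:
        return "fair"
    return "poor"

# Example: a model with external validation and moderate discrimination
print(grade({"internal_validation": 2, "external_validation": 3,
             "discrimination": 2, "cohort_size": 1}))  # -> "good"
```

Under this kind of scheme, two models can be compared directly by their total points or resulting grade, which is the comparability the proposed score is intended to provide.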
