Article

A multiple testing framework for diagnostic accuracy studies with co-primary endpoints

Journal

STATISTICS IN MEDICINE
Volume 41, Issue 5, Pages 891-909

Publisher

WILEY
DOI: 10.1002/sim.9308

Keywords

machine learning; medical device; medical testing; model selection; predictive modeling

Funding

  1. Deutsche Forschungsgemeinschaft [281474342/GRK2224/1]

Abstract

The study presents a multiple testing framework for disease diagnostic accuracy studies with sensitivity and specificity as co-primary endpoints. It challenges the common recommendation of strict separation between model selection and evaluation, and demonstrates that evaluating multiple promising diagnostic models simultaneously can lead to better final models.
Major advances have been made in the use of machine learning techniques for disease diagnosis and prognosis based on complex and high-dimensional data. Despite all justified enthusiasm, overoptimistic assessments of predictive performance are still common in this area. Predictive models and medical devices based on such models should therefore undergo a thorough evaluation before being implemented into clinical practice. In this work, we propose a multiple testing framework for (comparative) phase III diagnostic accuracy studies with sensitivity and specificity as co-primary endpoints. Our approach challenges the frequent recommendation to strictly separate model selection and evaluation, that is, to assess only a single diagnostic model in the evaluation study. We show that our parametric simultaneous test procedure asymptotically allows strong control of the family-wise error rate. A multiplicity correction is also available for point and interval estimates. Moreover, we demonstrate in an extensive simulation study that our multiple testing strategy on average leads to a better final diagnostic model and increased statistical power. To plan such studies, we propose a Bayesian approach to determine the optimal number of models to evaluate simultaneously. For this purpose, our algorithm optimizes the expected final model performance given previous (hold-out) data from the model development phase. We conclude that assessing multiple promising diagnostic models in the same evaluation study has several advantages when suitable adjustments for multiple comparisons are employed.
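
To make the testing problem concrete, one common formulation of co-primary endpoints (consistent with the abstract, though the paper's exact hypotheses may differ in detail) compares each candidate model m = 1, ..., M against prespecified benchmarks Se_0 and Sp_0 for sensitivity and specificity:

H_0^{(m)}: \mathrm{Se}_m \le \mathrm{Se}_0 \ \text{or}\ \mathrm{Sp}_m \le \mathrm{Sp}_0
\quad \text{vs.} \quad
H_1^{(m)}: \mathrm{Se}_m > \mathrm{Se}_0 \ \text{and}\ \mathrm{Sp}_m > \mathrm{Sp}_0.

Because each H_0^{(m)} is a union null, rejecting it requires both endpoints to clear their benchmarks (an intersection-union test); across the M models, the family-wise error rate \mathrm{FWER} = \Pr(\text{reject at least one true } H_0^{(m)}) is to be controlled at level \alpha.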
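The sketch below illustrates this kind of evaluation in Python; it is not the authors' procedure. It uses a simple Bonferroni adjustment across models with one-sided Wilson lower confidence bounds, whereas the paper proposes a sharper parametric simultaneous test. All counts and benchmark values are hypothetical.

import numpy as np
from scipy.stats import norm

def wilson_lower(successes, n, alpha):
    """One-sided Wilson score lower confidence bound for a binomial proportion."""
    z = norm.ppf(1 - alpha)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half

def evaluate_models(results, se0, sp0, alpha=0.025):
    """results: list of dicts with tp/fn/tn/fp counts per candidate model.
    Bonferroni-adjust across the M models; within a model, each co-primary
    endpoint is tested at the same adjusted level (intersection-union test).
    A model passes if the lower bounds for both sensitivity and specificity
    exceed their benchmarks."""
    alpha_adj = alpha / len(results)  # Bonferroni; the paper's parametric test is sharper
    passing = []
    for i, r in enumerate(results):
        se_lo = wilson_lower(r["tp"], r["tp"] + r["fn"], alpha_adj)
        sp_lo = wilson_lower(r["tn"], r["tn"] + r["fp"], alpha_adj)
        if se_lo > se0 and sp_lo > sp0:
            passing.append((i, round(se_lo, 3), round(sp_lo, 3)))
    return passing

# Example: three candidate models evaluated on the same study data (hypothetical counts)
models = [
    {"tp": 86, "fn": 14, "tn": 178, "fp": 22},
    {"tp": 90, "fn": 10, "tn": 170, "fp": 30},
    {"tp": 82, "fn": 18, "tn": 186, "fp": 14},
]
print(evaluate_models(models, se0=0.70, sp0=0.75))

Among the models whose hypotheses are rejected, the best-performing one can then be selected as the final model, which is the sense in which evaluating several candidates simultaneously can improve the final choice.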
