Journal
NATURE COMMUNICATIONS
Volume 13, Issue 1
Publisher
NATURE PORTFOLIO
DOI: 10.1038/s41467-022-34025-x
Funding
- National Institutes of Health/National Cancer Institute (NIH/NCI) [U01-CA243075]
- National Institutes of Health/National Institute of Dental and Craniofacial Research (NIH/NIDCR) [R56-DE030958]
- NIH/NCI [R21-CA251923]
- Department of Defense [W81XWH-22-1-0021]
- Cancer Research Foundation
- Stand Up to Cancer (SU2C) Fanconi Anemia Research Fund - Farrah Fawcett Foundation Head and Neck Cancer Research Team Grant
- Horizon 2021-SC1-BHC I3LUNG grant
- ECOG Research and Education Foundation
- Mark Foundation ASPIRE Award
Abstract
A model's ability to express its own predictive uncertainty is an essential attribute for maintaining clinical user confidence as computational biomarkers are deployed into real-world medical settings. In the domain of cancer digital histopathology, we describe a clinically-oriented approach to uncertainty quantification for whole-slide images, estimating uncertainty using dropout and calculating thresholds on training data to establish cutoffs for low- and high-confidence predictions. We train models to identify lung adenocarcinoma vs. squamous cell carcinoma and show that high-confidence predictions outperform predictions without uncertainty, in both cross-validation and testing on two large external datasets spanning multiple institutions. Our testing strategy closely approximates real-world application, with predictions generated on unsupervised, unannotated slides using predetermined thresholds. Furthermore, we show that uncertainty thresholding remains reliable in the setting of domain shift, with accurate high-confidence predictions of adenocarcinoma vs. squamous cell carcinoma for out-of-distribution, non-lung cancer cohorts.

Safe clinical deployment of deep learning models for digital pathology requires reliable estimates of predictive uncertainty. Here the authors describe an algorithm for quantifying whole-slide image uncertainty, demonstrating their approach with models trained to distinguish lung cancer subtypes.
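The abstract's core technique, dropout-based uncertainty estimation with a precomputed confidence threshold, can be illustrated with a minimal Monte Carlo dropout sketch. This is not the paper's implementation: the toy single-layer model, the weights, the dropout rate, and the `UNCERTAINTY_THRESHOLD` value are all illustrative assumptions. The key idea shown is keeping dropout active at inference, averaging predictions over many stochastic forward passes, and flagging a prediction as high-confidence only when its spread falls below a cutoff that would, in practice, be calibrated on training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a toy two-class, single-layer model.
W = rng.normal(size=(16, 2))

def mc_dropout_predict(x, n_samples=100, p_drop=0.5):
    """Monte Carlo dropout: keep dropout active at inference time,
    then summarize softmax outputs over n_samples stochastic passes."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) > p_drop      # random dropout mask
        h = (x * mask) / (1.0 - p_drop)          # inverted-dropout scaling
        logits = h @ W
        e = np.exp(logits - logits.max())        # numerically stable softmax
        probs.append(e / e.sum())
    probs = np.stack(probs)
    mean = probs.mean(axis=0)   # predictive mean (class probabilities)
    std = probs.std(axis=0)     # per-class spread, used as uncertainty
    return mean, std

x = rng.normal(size=16)
mean, std = mc_dropout_predict(x)

# In the paper's scheme, the cutoff is computed from training data;
# the value below is purely illustrative.
UNCERTAINTY_THRESHOLD = 0.15
is_high_confidence = std[mean.argmax()] < UNCERTAINTY_THRESHOLD
```

Predictions failing the threshold would be reported as low-confidence rather than silently returned, which is what allows high-confidence predictions to remain reliable under domain shift.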