Article

Predictive uncertainty estimation for out-of-distribution detection in digital pathology

Journal

MEDICAL IMAGE ANALYSIS
Volume 83

Publisher

ELSEVIER
DOI: 10.1016/j.media.2022.102655

Keywords

Deep learning; Histopathology; Out-of-distribution detection; Uncertainty estimation; Ensemble diversity; Multi-heads

Abstract

Machine learning model deployment in clinical practice demands real-time risk assessment to identify situations in which the model is uncertain. Once deployed, models should be accurate for classes seen during training while providing informative estimates of uncertainty to flag abnormalities and unseen classes for further analysis. Although recent developments in uncertainty estimation have resulted in an increasing number of methods, a rigorous empirical evaluation of their performance on large-scale digital pathology datasets is lacking. This work provides a benchmark for evaluating prevalent methods on multiple datasets by comparing the uncertainty estimates on both in-distribution and realistic near and far out-of-distribution (OOD) data on a whole-slide level. To this end, we aggregate uncertainty values from patch-based classifiers to whole-slide level uncertainty scores. We show that results found in classical computer vision benchmarks do not always translate to the medical imaging setting. Specifically, we demonstrate that deep ensembles perform best at detecting far-OOD data but can be outperformed on a more challenging near-OOD detection task by multi-head ensembles trained for optimal ensemble diversity. Furthermore, we demonstrate the harmful impact OOD data can have on the performance of deployed machine learning models. Overall, we show that uncertainty estimates can be used to discriminate in-distribution from OOD data with high AUC scores. Still, model deployment might require careful tuning based on prior knowledge of prospective OOD data.
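The abstract's pipeline of aggregating patch-level uncertainty from an ensemble into a whole-slide score, then discriminating in-distribution from OOD slides by AUC, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes predictive entropy of the mean ensemble prediction as the patch-level uncertainty, mean pooling as the slide-level aggregation, and a rank-based AUROC; the paper may use different uncertainty measures and aggregations.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the averaged ensemble prediction, per patch.

    probs: array of shape (n_members, n_patches, n_classes) holding
    softmax outputs from each ensemble member.
    """
    mean_probs = probs.mean(axis=0)  # average over ensemble members
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)

def slide_uncertainty(probs, agg=np.mean):
    """Aggregate patch-level uncertainties into one slide-level score.

    Mean pooling is an illustrative choice; other aggregations
    (max, quantiles) are equally plausible.
    """
    return agg(predictive_entropy(probs))

def auroc(id_scores, ood_scores):
    """AUROC for separating OOD slides (treated as positives, expected
    to score higher) from in-distribution slides, computed via the
    Mann-Whitney U statistic rather than an explicit ROC curve."""
    id_scores = np.asarray(id_scores, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    # fraction of (ood, id) pairs ranked correctly; ties count as 0.5
    greater = (ood_scores[:, None] > id_scores[None, :]).mean()
    ties = (ood_scores[:, None] == id_scores[None, :]).mean()
    return greater + 0.5 * ties

# Toy example: 5 ensemble members, 100 patches, 4 classes for one slide.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print("slide-level uncertainty:", round(float(slide_uncertainty(probs)), 3))
```

With slide-level scores computed for a set of in-distribution and OOD slides, `auroc(id_scores, ood_scores)` gives the separability figure the abstract reports; 1.0 means the uncertainty perfectly flags OOD slides.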

