Journal
IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 39, Issue 12, Pages 3868-3878
Publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TMI.2020.3006437
Keywords
Uncertainty; Image segmentation; Calibration; Estimation; Biomedical imaging; Artificial neural networks; Bayes methods; Uncertainty estimation; confidence calibration; out-of-distribution detection; semantic segmentation; fully convolutional neural networks
Funding
- U.S. National Institutes of Health [P41EB015898]
- Natural Sciences and Engineering Research Council (NSERC) of Canada
- Canadian Institutes of Health Research (CIHR)
Abstract
Fully convolutional neural networks (FCNs), and in particular U-Nets, have achieved state-of-the-art results in semantic segmentation for numerous medical imaging applications. Moreover, batch normalization and Dice loss have been used successfully to stabilize and accelerate training. However, these networks are poorly calibrated, i.e., they tend to produce overconfident predictions for both correct and erroneous classifications, making them unreliable and hard to interpret. In this paper, we study predictive uncertainty estimation in FCNs for medical image segmentation. We make the following contributions: 1) we systematically compare cross-entropy loss with Dice loss in terms of segmentation quality and uncertainty estimation of FCNs; 2) we propose model ensembling for confidence calibration of FCNs trained with batch normalization and Dice loss; 3) we assess the ability of calibrated FCNs to predict segmentation quality of structures and to detect out-of-distribution test examples. We conduct extensive experiments across three medical image segmentation applications of the brain, the heart, and the prostate to evaluate our contributions. The results of this study offer considerable insight into predictive uncertainty estimation and out-of-distribution detection in medical image segmentation and provide practical recipes for confidence calibration. Moreover, we consistently demonstrate that model ensembling improves confidence calibration.
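The ensembling recipe described in the abstract can be sketched as follows: average the per-voxel probability maps produced by several independently trained FCNs, then measure calibration of the averaged prediction. The snippet below is a minimal illustration with NumPy only; the function names (`ensemble_probs`, `expected_calibration_error`) and the binary expected calibration error metric are this sketch's own choices, not code from the paper.

```python
import numpy as np

def ensemble_probs(prob_maps):
    """Average per-model foreground probability maps (deep-ensemble style).

    prob_maps: list of arrays of identical shape, values in [0, 1],
    one per trained model. Returns the mean probability map.
    """
    return np.mean(np.stack(prob_maps, axis=0), axis=0)

def expected_calibration_error(probs, labels, n_bins=10):
    """Binary ECE: bin voxel-wise confidences and compare the mean
    confidence in each bin with the empirical accuracy in that bin."""
    probs = np.asarray(probs, dtype=float).ravel()
    labels = np.asarray(labels).ravel()
    conf = np.maximum(probs, 1.0 - probs)        # confidence in [0.5, 1]
    correct = ((probs >= 0.5).astype(int) == labels).astype(float)
    # Map confidences in [0.5, 1] onto n_bins equal-width bins.
    bin_ids = np.clip(((conf - 0.5) * 2 * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece
```

For example, averaging two models' maps `[0.9, 0.2]` and `[0.7, 0.4]` gives `[0.8, 0.3]`; a single overconfident model would typically show a larger gap between confidence and accuracy, which is what the ECE-style metric exposes.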