Article

Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning

Journal

IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 37, Issue 7, Pages 1562-1573

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TMI.2018.2791721

Keywords

Interactive image segmentation; convolutional neural network; fine-tuning; fetal MRI; brain tumor

Funding

  1. Wellcome Trust [WT101957, WT97914, HICF-T4-275]
  2. EPSRC [NS/A000027/1, EP/H046410/1, EP/J020990/1, EP/K005278, NS/A000050/1]
  3. Wellcome/EPSRC [203145Z/16/Z]
  4. Royal Society [RG160569]
  5. National Institute for Health Research University College London (UCL) Hospitals Biomedical Research Centre
  6. Great Ormond Street Hospital Charity
  7. UCL ORS
  8. GRS
  9. NVIDIA
  10. Emerald
  11. Engineering and Physical Sciences Research Council [EP/J020990/1, 1585723, EP/H046410/1] Funding Source: researchfish
  12. EPSRC [EP/H046410/1, EP/J020990/1] Funding Source: UKRI

Abstract

Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine tuning. We applied this framework to two applications: 2-D segmentation of multiple organs from fetal magnetic resonance (MR) slices, where only two types of these organs were annotated for training; and 3-D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training. Experimental results show that: 1) our model is more robust to segment previously unseen objects than state-of-the-art CNNs; 2) image-specific fine tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.
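A minimal numerical sketch of the weighted-loss idea described in the abstract: scribbled pixels receive a fixed high weight (user interaction is trusted), while unscribbled pixels are weighted by the network's own confidence so that uncertain predictions contribute less to the fine-tuning objective. The function names, the confidence measure, and the weighting scheme here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_loss(probs, labels, scribble_mask, w_scribble=2.0):
    """Illustrative weighted cross-entropy for image-specific fine tuning.

    probs         -- per-pixel foreground probabilities from the CNN
    labels        -- per-pixel (pseudo-)labels used as fine-tuning targets
    scribble_mask -- boolean mask of user-scribbled pixels
    w_scribble    -- fixed weight for scribbled pixels (assumed value)
    """
    eps = 1e-12
    # per-pixel binary cross-entropy against the current targets
    ce = -(labels * np.log(probs + eps)
           + (1.0 - labels) * np.log(1.0 - probs + eps))
    # confidence-based weight for unscribbled pixels: near 1 where the
    # network is confident (prob close to 0 or 1), near 0 around 0.5
    confidence = np.abs(probs - 0.5) * 2.0
    weights = np.where(scribble_mask, w_scribble, confidence)
    # normalized weighted average so the loss scale is comparable
    return float((weights * ce).sum() / (weights.sum() + eps))
```

For example, confidently correct unscribbled pixels keep the loss low, while a scribbled pixel that contradicts the prediction dominates the sum, pulling the fine-tuned model toward the user's correction.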

