3.8 Proceedings Paper

Multi-Modal Learning from Unpaired Images: Application to Multi-Organ Segmentation in CT and MRI

Publisher

IEEE
DOI: 10.1109/WACV.2018.00066

Funding

  1. Indonesia Endowment for Education (LPDP) - Indonesia Presidential PhD Scholarship programme
  2. Microsoft Research PhD Scholarship
  3. EPSRC Centre for Doctoral Training in High Performance Embedded and Distributed Systems (HiPEDS) [EP/L016796/1]
  4. Imperial College Research Fellowship
  5. Efficacy and Mechanism Evaluation (EME) Programme, an MRC and NIHR partnership [13/122/01]

Abstract

Convolutional neural networks have been widely used in medical image segmentation, and the amount of training data strongly determines overall performance. Most approaches are applied to a single imaging modality, e.g., brain MRI. In practice, it is often difficult to acquire sufficient training data for a given imaging modality. The same anatomical structures, however, may be visible in different modalities, such as the major organs on abdominal CT and MRI. In this work, we investigate the effectiveness of learning from multiple modalities to improve segmentation accuracy on each individual modality. We study the feasibility of using a dual-stream encoder-decoder architecture to learn modality-independent, and thus generalisable and robust, features. All of our MRI and CT data are unpaired: they are obtained from different subjects and are not registered to each other. Experiments show that multi-modal learning can improve overall accuracy over modality-specific training. Results demonstrate that information shared across modalities can in particular improve performance on highly variable structures such as the spleen.
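
The abstract only sketches the architecture. One plausible reading of "dual-stream encoder-decoder" is a modality-specific encoder per stream feeding a shared decoder; the PyTorch snippet below is a minimal illustration under that assumption. The class name `DualStreamSegNet`, the layer sizes, and the depth are invented for this sketch and are not taken from the paper.

```python
import torch
import torch.nn as nn

class DualStreamSegNet(nn.Module):
    """Illustrative dual-stream encoder-decoder: one encoder per
    modality (CT, MRI) and a shared decoder producing organ labels.
    All layer sizes are placeholders, not the authors' configuration."""

    def __init__(self, n_classes: int = 5):
        super().__init__()

        def encoder() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )

        # Separate streams can absorb modality-specific appearance...
        self.enc_ct = encoder()
        self.enc_mri = encoder()
        # ...while the shared decoder is pushed towards
        # modality-independent features.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),
        )

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        feats = self.enc_ct(x) if modality == "ct" else self.enc_mri(x)
        return self.dec(feats)

net = DualStreamSegNet(n_classes=5)
logits = net(torch.randn(2, 1, 64, 64), modality="mri")
print(logits.shape)  # torch.Size([2, 5, 64, 64])
```

Because the CT and MRI volumes are unpaired, each training batch would contain images from a single modality, routed through the matching encoder while gradients from both modalities update the shared decoder.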
