Article

RSegNet: A Joint Learning Framework for Deformable Registration and Segmentation

Journal

IEEE Transactions on Automation Science and Engineering

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TASE.2021.3087868

Keywords

Image segmentation; Medical diagnostic imaging; Task analysis; Deep learning; Strain; Image registration; Deformable models; deformable registration; joint learning framework; segmentation

Funding

  1. Shun Hing Institute of Advanced Engineering (SHIAE), The Chinese University of Hong Kong (CUHK) [BME-p1-21, 4720276]
  2. Singapore Academic Research Fund [R397000353114]


This article presents RSegNet, a joint learning framework for concurrent deformable registration and segmentation that improves the accuracy of both tasks. By minimizing an integrated loss function and exploiting data augmentation together with dual-consistency supervision, the method achieves better anatomical consistency and deformation regularity, yielding higher segmentation and registration accuracy.
Medical image segmentation and registration are two fundamental tasks for analyzing anatomical structures in clinical research, yet deep-learning solutions that exploit the connections between segmentation and registration remain underexplored. This article designs a joint learning framework, RSegNet, that realizes concurrent deformable registration and segmentation by minimizing an integrated loss function with three parts: a diffeomorphic registration loss, a segmentation similarity loss, and a dual-consistency supervision loss. The probabilistic diffeomorphic registration branch benefits from the auxiliary segmentations available from the segmentation branch, achieving anatomical consistency and better deformation regularity through dual-consistency supervision. Simultaneously, segmentation performance is also improved by data augmentation based on registration with well-behaved diffeomorphic guarantees. Experiments on 3-D magnetic resonance images of the human brain demonstrate the effectiveness of the approach. We trained and validated RSegNet on 1000 images and tested its performance on four public datasets, showing that our method yields concurrent improvements in both segmentation and registration compared with separately trained networks. Specifically, our method increases segmentation and registration accuracy by 7.0% and 1.4%, respectively, in terms of Dice score.
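The abstract describes an integrated objective that sums three terms, and reports results via Dice overlap. The sketch below illustrates that structure only; the weights `w_seg` and `w_dc` are hypothetical trade-off hyperparameters not specified in the abstract, and the actual RSegNet losses operate on network outputs rather than scalars.

```python
import numpy as np

def integrated_loss(l_reg, l_seg, l_dc, w_seg=1.0, w_dc=1.0):
    """Weighted sum of the three loss components named in the abstract:
    diffeomorphic registration loss, segmentation similarity loss, and
    dual-consistency supervision loss. The weights are assumed, not
    taken from the paper."""
    return l_reg + w_seg * l_seg + w_dc * l_dc

def dice_score(pred, target):
    """Dice overlap between two binary masks, the metric used to report
    segmentation and registration accuracy."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

In practice each term would be computed from the registration and segmentation branch outputs per batch, with the weights tuned on validation data.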

