Article

Bridging Synthetic and Real Images: A Transferable and Multiple Consistency Aided Fundus Image Enhancement Framework

Journal

IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 42, Issue 8, Pages 2189-2199

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMI.2023.3247783

Keywords

Fundus image; teacher-student model; image enhancement

Deep learning based image enhancement models have largely improved the readability of fundus images, decreasing the uncertainty of clinical observations and the risk of misdiagnosis. However, because paired real fundus images at different quality levels are difficult to acquire, most existing methods must rely on synthetic image pairs as training data. The domain shift between synthetic and real images inevitably hinders the generalization of such models to clinical data. In this work, we propose an end-to-end optimized teacher-student framework that simultaneously performs image enhancement and domain adaptation. The student network learns supervised enhancement from synthetic pairs, while teacher-student prediction consistency enforced on real fundus images, without relying on enhanced ground truth, regularizes the enhancement model to reduce domain shift. Moreover, we propose a novel multi-stage multi-attention guided enhancement network (MAGE-Net) as the backbone of both the teacher and student networks. MAGE-Net uses a multi-stage enhancement module and a retinal structure preservation module to progressively integrate multi-scale features while preserving retinal structures for better fundus image quality enhancement. Comprehensive experiments on both real and synthetic datasets demonstrate that our framework outperforms the baseline approaches. Moreover, our method also benefits downstream clinical tasks.

