Article

End-to-End Adversarial Retinal Image Synthesis

Journal

IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 37, Issue 3, Pages 781-791

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TMI.2017.2759102

Keywords

Retinal image synthesis; retinal image analysis; generative adversarial networks; adversarial autoencoders

Funding

  1. European Regional Development Fund (ERDF), through the Operational Programme for Competitiveness and Internationalisation (COMPETE Programme)
  2. Fundação para a Ciência e a Tecnologia (FCT), the Portuguese Foundation for Science and Technology [CMUP-ERI/TIC/0028/2014]
  3. North Portugal Regional Operational Programme (NORTE), under PORTUGAL [NORTE-01-0145-FEDER-000016]

Abstract

In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model that attempts to classify its output as real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost-everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, together with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while also being anatomically consistent and displaying reasonable visual quality.
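As a rough illustration of the two-stage sampling pipeline the abstract describes, the sketch below wires an adversarial-autoencoder-style decoder (latent code to vessel-tree map) into a GAN-style generator (vessel map to color retinal image), and shows both sampling from the imposed latent prior and linear interpolation between two latent codes. This is a minimal PyTorch sketch, not the authors' implementation: the network architectures, the 32-dimensional Gaussian prior, the 64x64 working resolution, and the helper names (VesselDecoder, RetinaGenerator, sample_pairs, interpolate) are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of the pipeline described in the abstract:
# z ~ N(0, I)  ->  vessel-tree map (adversarial-autoencoder decoder)
#              ->  color retinal image (GAN generator).
import torch
import torch.nn as nn

LATENT_DIM = 32   # assumed size of the imposed latent distribution
IMG_SIZE = 64     # assumed working resolution for this sketch


class VesselDecoder(nn.Module):
    """Decoder half of the adversarial autoencoder: latent code -> vessel-tree map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, IMG_SIZE * IMG_SIZE),
            nn.Sigmoid(),  # vessel probability map in [0, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, IMG_SIZE, IMG_SIZE)


class RetinaGenerator(nn.Module):
    """GAN generator: vessel-tree map -> 3-channel color retinal image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # color image in [-1, 1]
        )

    def forward(self, vessels):
        return self.net(vessels)


def sample_pairs(decoder, generator, n):
    """Draw n latent codes from the imposed prior and return (vessel maps, images)."""
    z = torch.randn(n, LATENT_DIM)   # simple prior imposed on the latent space
    vessels = decoder(z)
    images = generator(vessels)
    return vessels, images


def interpolate(decoder, generator, z_a, z_b, steps=8):
    """Walk linearly between two latent codes, illustrating the semantic
    structure of the latent space mentioned in the abstract."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    z = (1 - alphas) * z_a + alphas * z_b
    vessels = decoder(z)
    return generator(vessels)


if __name__ == "__main__":
    decoder, generator = VesselDecoder(), RetinaGenerator()
    vessels, images = sample_pairs(decoder, generator, n=4)
    print(vessels.shape, images.shape)   # (4, 1, 64, 64) and (4, 3, 64, 64)
```

Because both stages are differentiable end to end, a single forward pass from the latent code yields a paired vessel network and color image, which is what lets the trained system produce arbitrarily many annotated samples from the prior alone.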
