Article

VAE-CoGAN: Unpaired image-to-image translation for low-level vision

Journal

Signal, Image and Video Processing
Volume 17, Issue 4, Pages 1019-1026

Publisher

Springer London Ltd
DOI: 10.1007/s11760-022-02307-y

Keywords

Dehaze; Derain; Generative adversarial networks; Variational autoencoder

Abstract

Low-level vision problems, such as single-image haze removal and single-image rain removal, are usually solved by restoring a clear image from an input image using a paired dataset. For many problems, however, a paired training dataset is not available. In this paper, we propose an unpaired image-to-image translation method based on coupled generative adversarial networks (CoGAN), called VAE-CoGAN, to solve this problem. Unlike the basic CoGAN, we introduce a shared-latent space and a variational autoencoder (VAE) into the framework. We evaluate our method on synthetic datasets and real-world images. Extensive evaluation and comparison results show that the proposed method can be effectively applied to numerous low-level vision tasks with favorable performance against state-of-the-art methods.
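The abstract's key idea (coupled GANs whose encoders and decoders meet in a shared-latent space, with a VAE supplying the latent distribution) can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the 64x64 input resolution, the layer sizes, and all names (Encoder, Decoder, shared_enc, E1, G2, and so on) are hypothetical choices made for brevity.

```python
# Minimal VAE-CoGAN-style sketch (illustrative only; not the paper's code).
# Two domains (e.g. hazy / clear) each get their own front-end and output
# layers, while the latent-facing layers are shared module instances, so
# their weights are tied across domains -- the shared-latent assumption.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Domain-specific front-end followed by a shared VAE latent head."""
    def __init__(self, shared):
        super().__init__()
        self.front = nn.Sequential(                    # domain-specific layers
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.shared = shared                           # weights tied across domains
    def forward(self, x):
        h = self.shared(self.front(x))
        mu, logvar = h.chunk(2, dim=1)                 # VAE posterior parameters
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return z, mu, logvar

class Decoder(nn.Module):
    """Shared latent back-end followed by domain-specific output layers."""
    def __init__(self, shared):
        super().__init__()
        self.shared = shared                           # weights tied across domains
        self.out = nn.Sequential(                      # domain-specific layers
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, z):
        return self.out(self.shared(z))

# Single shared module instances implement the shared-latent space.
shared_enc = nn.Sequential(nn.Conv2d(128, 512, 3, 1, 1))               # -> mu || logvar
shared_dec = nn.Sequential(nn.ConvTranspose2d(256, 128, 3, 1, 1), nn.ReLU())

E1, E2 = Encoder(shared_enc), Encoder(shared_enc)      # hazy / clear encoders
G1, G2 = Decoder(shared_dec), Decoder(shared_dec)      # hazy / clear decoders

# Translation at test time: encode in one domain, decode in the other.
x_hazy = torch.randn(1, 3, 64, 64)                     # stand-in input
z, mu, logvar = E1(x_hazy)
x_clear = G2(z)                                        # dehazed output, 1x3x64x64

# VAE prior term used during training (adversarial losses from per-domain
# discriminators D1/D2, omitted here, would be added alongside it).
kl = 0.5 * torch.mean(mu.pow(2) + logvar.exp() - 1 - logvar)
```

As in the CoGAN/UNIT family, the weight tying is what enforces the shared-latent assumption: since both domains are encoded by, and decoded from, the same shared layers, unpaired translation reduces to encoding an image in one domain and decoding it in the other.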
