Journal
SIGNAL IMAGE AND VIDEO PROCESSING
Volume 17, Issue 4, Pages 1019-1026
Publisher
SPRINGER LONDON LTD
DOI: 10.1007/s11760-022-02307-y
Keywords
Dehaze; Derain; Generative adversarial networks; Variational autoencoder
Abstract
Low-level vision problems, such as single-image haze removal and single-image rain removal, typically restore a clear image from an input image using a paired dataset. For many problems, however, a paired training dataset is not available. In this paper, we propose an unpaired image-to-image translation method based on coupled generative adversarial networks (CoGAN), called VAE-CoGAN, to solve this problem. Unlike the basic CoGAN, we introduce a shared-latent space and a variational autoencoder (VAE) into the framework. We use synthetic datasets and real-world images to evaluate our method. Extensive evaluations and comparisons show that the proposed method can be effectively applied to numerous low-level vision tasks and performs favorably against state-of-the-art methods.
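The abstract describes the key architectural idea: two domains (e.g. hazy and clear images) are encoded into a single shared latent space via a VAE, so an image from one domain can be decoded into the other without paired supervision. The sketch below illustrates that idea with toy dense layers in NumPy; the layer sizes, layer counts, and weight-sharing split are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(dim_in, dim_out):
    # One dense layer: (weights, bias) with He-style initialization.
    return rng.normal(0, np.sqrt(2.0 / dim_in), (dim_in, dim_out)), np.zeros(dim_out)

def relu_layer(x, layer):
    W, b = layer
    return np.maximum(x @ W + b, 0.0)

IMG, HID, LAT = 64, 32, 16  # toy dimensions (assumed, not from the paper)

# Domain-specific encoder fronts: one for domain A (hazy), one for domain B (clear) ...
enc_a, enc_b = linear(IMG, HID), linear(IMG, HID)
# ... both feeding a *shared* latent head (mean and log-variance): this weight
# sharing is the shared-latent-space assumption of VAE-CoGAN.
shared_mu, shared_logvar = linear(HID, LAT), linear(HID, LAT)
# Shared decoder front, then domain-specific output layers.
shared_dec = linear(LAT, HID)
dec_a, dec_b = linear(HID, IMG), linear(HID, IMG)

def encode(x, enc):
    h = relu_layer(x, enc)
    mu = h @ shared_mu[0] + shared_mu[1]
    logvar = h @ shared_logvar[0] + shared_logvar[1]
    return mu, logvar

def reparameterize(mu, logvar):
    # VAE reparameterization trick: z = mu + sigma * eps, so sampling
    # stays differentiable with respect to mu and logvar.
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z, dec):
    return relu_layer(relu_layer(z, shared_dec), dec)

# Unpaired translation A -> B: encode a hazy image with the domain-A
# encoder, then decode the shared latent code with the domain-B decoder.
x_a = rng.normal(size=(4, IMG))         # batch of 4 flattened "hazy" inputs
mu, logvar = encode(x_a, enc_a)
z = reparameterize(mu, logvar)
x_b = decode(z, dec_b)                  # translated "clear" outputs
print(x_b.shape)                        # (4, 64)
```

In the full method, adversarial discriminators in each domain would push the decoded outputs toward realistic images, and the VAE terms (reconstruction plus a KL penalty on `mu`/`logvar`) would regularize the shared latent space; this sketch only shows the forward translation path.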