Article

VAE-CoGAN: Unpaired image-to-image translation for low-level vision

Journal

SIGNAL IMAGE AND VIDEO PROCESSING
Volume 17, Issue 4, Pages 1019-1026

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s11760-022-02307-y

Keywords

Dehaze; Derain; Generative adversarial networks; Variational autoencoder

Abstract

Low-level vision problems, such as single-image haze removal and single-image rain removal, usually restore a clear image from an input image using a paired dataset. For many problems, however, a paired training dataset is not available. In this paper, we propose an unpaired image-to-image translation method based on coupled generative adversarial networks (CoGAN), called VAE-CoGAN, to solve this problem. Unlike the basic CoGAN, our framework introduces a shared-latent space and a variational autoencoder (VAE). We use synthetic datasets and real-world images to evaluate our method. Extensive evaluation and comparison results show that the proposed method can be effectively applied to numerous low-level vision tasks, with favorable performance against state-of-the-art methods.
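To make the shared-latent-space idea concrete, the sketch below shows how such an architecture is commonly wired up in PyTorch. It is an illustrative assumption, not the authors' implementation: the module names, layer sizes, and the choice of sharing the last encoder stage and the first generator stage across domains are all hypothetical.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Domain-specific front end, then a trunk shared across domains that
    produces the mean and log-variance of the shared latent code (VAE-style)."""
    def __init__(self, shared_trunk: nn.Module):
        super().__init__()
        self.front = nn.Sequential(                      # domain-specific layers
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.shared = shared_trunk                       # weights tied across domains
        self.to_mu = nn.Conv2d(256, 256, 1)
        self.to_logvar = nn.Conv2d(256, 256, 1)

    def forward(self, x):
        h = self.shared(self.front(x))
        return self.to_mu(h), self.to_logvar(h)

class Generator(nn.Module):
    """Trunk shared across domains, then a domain-specific decoder."""
    def __init__(self, shared_trunk: nn.Module):
        super().__init__()
        self.shared = shared_trunk                       # weights tied across domains
        self.back = nn.Sequential(                       # domain-specific layers
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.back(self.shared(z))

def reparameterize(mu, logvar):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

# Hypothetical wiring for two domains (e.g. hazy images A, clear images B).
enc_shared = nn.Sequential(
    nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2))
gen_shared = nn.Sequential(
    nn.ConvTranspose2d(256, 256, 4, stride=2, padding=1), nn.ReLU())
enc_a, enc_b = Encoder(enc_shared), Encoder(enc_shared)
gen_a, gen_b = Generator(gen_shared), Generator(gen_shared)

x_a = torch.randn(1, 3, 64, 64)           # stand-in for a hazy input image
mu, logvar = enc_a(x_a)
z = reparameterize(mu, logvar)            # sample from the shared latent space
x_ab = gen_b(z)                           # translation A -> B (dehazed image)
```

Passing the same trunk instance to both domain networks ties their weights, which is the usual way to express CoGAN-style weight sharing in PyTorch; the adversarial discriminators and the KL and reconstruction losses of the full method are omitted from this sketch.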
