4.6 Review

Unsupervised Image-to-Image Translation: A Review

Journal

SENSORS
Volume 22, Issue 21, Pages -

Publisher

MDPI
DOI: 10.3390/s22218540

Keywords

unsupervised image-to-image translation; machine learning; computer vision; deep learning; generative adversarial networks; review

Funding

  1. Fonds National de la Recherche (FNR), Luxembourg grant AFR-PPP [15411817]

Abstract

This article introduces supervised and unsupervised methods of image-to-image translation together with their advantages and disadvantages. It also classifies and reviews the current state-of-the-art methods and conducts a quantitative evaluation of them.
Supervised image-to-image translation has been shown to generate realistic images with sharp details and to achieve good quantitative performance. Such methods are trained on a paired dataset, where each image from the source domain already has a corresponding translated image in the target domain. However, this paired-dataset requirement imposes a huge practical constraint: it requires domain knowledge, and in certain cases paired data are impossible to obtain. To address these problems, unsupervised image-to-image translation has been proposed, which does not require domain expertise and can take advantage of large unlabeled datasets. Although such models perform well, they are hard to train because of the strong constraints introduced in their loss functions, which make training unstable. Since the release of CycleGAN, numerous methods have been proposed that try to address these problems from different perspectives. In this review, we first describe the general image-to-image translation framework and discuss the datasets and metrics involved in the topic. Furthermore, we review the current state of the art with a classification of existing works. This part is followed by a small quantitative evaluation, with results taken from the original papers.
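The loss constraint the abstract alludes to is, in CycleGAN and many of its successors, cycle consistency: with two generators G: X→Y and F: Y→X, an image translated forward and then back should reconstruct the original. A minimal NumPy sketch of this term is given below; the identity "generators" are placeholders for illustration only, not real networks, and the weight `lam` mirrors the λ=10 commonly used in the CycleGAN paper.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """L_cyc = lam * (||F(G(x)) - x||_1 + ||G(F(y)) - y||_1),
    with the L1 norms averaged per element (CycleGAN-style)."""
    forward = np.mean(np.abs(F(G(x)) - x))   # X -> Y -> X reconstruction error
    backward = np.mean(np.abs(G(F(y)) - y))  # Y -> X -> Y reconstruction error
    return lam * (forward + backward)

# Toy check: identity mappings reconstruct perfectly, so the loss is zero.
x = np.random.rand(1, 3, 8, 8)  # a fake batch from domain X
y = np.random.rand(1, 3, 8, 8)  # a fake batch from domain Y
G = lambda img: img  # placeholder generator X -> Y
F = lambda img: img  # placeholder generator Y -> X
loss = cycle_consistency_loss(x, y, G, F)
```

In a full model this term is added to the adversarial losses of both domains; it is this coupling of several competing objectives that makes unsupervised training delicate.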

