Article

Some Theoretical Insights into Wasserstein GANs

Journal

JOURNAL OF MACHINE LEARNING RESEARCH
Volume 22, Issue -, Pages 1-45

Publisher

MICROTOME PUBL

Keywords

Generative Adversarial Networks; Wasserstein distances; deep learning theory; Lipschitz functions; trade-off properties

Abstract

This paper presents theoretical advances on WGANs, covering their architectural definition, basic mathematical features, and optimization properties. These properties are verified experimentally, illustrating the trade-off between the generator and the discriminator.
Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation. Building on these successes, a large number of empirical studies have validated the benefits of the cousin approach called Wasserstein GANs (WGANs), which brings stabilization in the training process. In the present paper, we add a new stone to the edifice by proposing some theoretical advances in the properties of WGANs. First, we properly define the architecture of WGANs in the context of integral probability metrics parameterized by neural networks and highlight some of their basic mathematical features. We stress in particular interesting optimization properties arising from the use of a parametric 1-Lipschitz discriminator. Then, in a statistically-driven approach, we study the convergence of empirical WGANs as the sample size tends to infinity, and clarify the adversarial effects of the generator and the discriminator by underlining some trade-off properties. These features are finally illustrated with experiments using both synthetic and real-world datasets.

