4.6 Article

The theoretical research of generative adversarial networks: an overview

Journal

NEUROCOMPUTING
Volume 435, Pages 26-41

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2020.12.114

Keywords

Generative adversarial networks (GANs); Image generation; Gradient penalty

Funding

  1. Natural Science Foundation of China [62032020]
  2. Hunan Provincial Natural Science Foundation of China for Distinguished Young Scholars [2018JJ1025]
  3. Hunan Science and Technology Planning Project [2019RS3019]
  4. National Key Research and Development Program of China [2018YFB1003702]
  5. Hunan General project of Education Department [19C1758]
  6. PhD research startup foundation of Xiangtan University [19QDZ57]

Abstract

This paper focuses on the theoretical achievements of Generative Adversarial Networks (GAN) and categorizes the improved methods into GAN variants and hybrid GANs. It discusses theoretical results, training dynamics, improved methods, and the advantages of GAN over other deep generative models. It also addresses future research directions and open issues to be further explored by the community.
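
For orientation, the two-player objective that these theoretical results build on is the standard minimax formulation from the GAN literature (not a new contribution of this paper):

  \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

With the discriminator D held at its optimum, the generator's objective reduces to 2\,\mathrm{JSD}(p_{\mathrm{data}} \,\|\, p_g) - \log 4; this is the sense in which the original GAN minimizes the Jensen-Shannon divergence between the data distribution and the generated distribution, and much of the divergence-based theory the survey reviews starts from this identity.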
Generative adversarial networks (GAN) have received great attention and made great progress since their emergence in 2014. In this paper, we focus on the theoretical achievements of GAN and discuss them in detail for readers who wish to know more about GAN. Based on the number of implemented network architectures, we categorize the improved methods into two groups: GAN variants, which are composed of two networks and improve performance by adding regularization terms to the loss function; and hybrid GANs, which are usually combined with other generative models to improve training stability. For GAN variants, we discuss the theoretical results on distribution divergences, training dynamics, and various improved methods. For hybrid GANs, we introduce the improved methods that combine an encoder, an autoencoder, or a VAE. We also cover some other important issues, such as the metrics used to quantify the quality of generated samples and the basic network construction. In addition, we discuss the advantages of GAN over other deep generative models, the future directions worthy of study, and the open issues that the community should further address. (c) 2021 Elsevier B.V. All rights reserved.
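
Since gradient penalty appears among the keywords, the sketch below illustrates in PyTorch the kind of regularization term that such GAN variants add to the discriminator (critic) loss. It is a minimal illustration under assumed names (gradient_penalty, D, lambda_gp), not code from the paper:

    import torch

    def gradient_penalty(D, real, fake, lambda_gp=10.0):
        """Sketch of a WGAN-GP style penalty; D scores a batch of samples."""
        # Assumes 4-D image batches of shape (B, C, H, W).
        eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
        # Random interpolates between real and generated samples; fake is
        # detached so the penalty does not backpropagate into the generator.
        x_hat = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)
        scores = D(x_hat)
        # Gradient of the critic scores with respect to the interpolates.
        grads = torch.autograd.grad(outputs=scores, inputs=x_hat,
                                    grad_outputs=torch.ones_like(scores),
                                    create_graph=True)[0]
        grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
        # Penalize deviation of the gradient norm from 1 (soft Lipschitz constraint).
        return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

The returned term is added to the critic loss at each discriminator update; WGAN-GP is the best-known variant that regularizes the loss in this way.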
