Article

Solving the reconstruction-generation trade-off: Generative model with implicit embedding learning

Journal

NEUROCOMPUTING
Volume 549, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2023.126428

Keywords

Autoencoder; Generative model; Embedding learning; Latent mapping; Adversarial


Abstract

Variational autoencoders (VAE) and generative adversarial networks (GAN) are two classic generative models that generate realistic data from a predefined prior distribution, such as a Gaussian distribution. One advantage of VAE over GAN is its ability to simultaneously generate high-dimensional data and learn latent representations that are useful for data manipulation. However, a trade-off has been observed between reconstruction and generation in VAE: matching the prior distribution for the latent representations may destroy the geometric structure of the data manifold. To address this issue, we propose an autoencoder-based generative model that allows the prior to learn the embedding distribution, rather than forcing the latent variables to fit the prior. To preserve the geometric structure of the data manifold as much as possible, the embedding distribution is trained using a simple regularized autoencoder architecture. An adversarial strategy is then employed to learn a latent mapping. We provide both theoretical and experimental support for the effectiveness of our method, which eliminates the contradiction between preserving the geometric structure of the data manifold and matching the distribution in latent space. The code is available at https://github.com/gengcong940126/GMIEL. © 2023 Elsevier B.V. All rights reserved.
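The two-stage idea described in the abstract — first train a regularized autoencoder so the embeddings keep the data manifold's geometry, then learn a mapping from the prior onto the embedding distribution — can be illustrated with a toy NumPy sketch. This is not the authors' implementation (their code is at the GitHub link above): the data, network sizes, and regularization weight are hypothetical, the autoencoder is linear for brevity, and the paper's adversarial latent mapping is replaced by a closed-form affine (whitening-colouring) map that matches the first two moments of the embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a 1-D manifold embedded in 5-D.
t = rng.uniform(-1, 1, size=(512, 1))
X = np.hstack([t, t**2, np.sin(3 * t), 0.1 * t, np.cos(t)])

# Stage 1: regularized linear autoencoder trained by gradient descent.
d, k = X.shape[1], 2
We = 0.1 * rng.standard_normal((d, k))   # encoder weights
Wd = 0.1 * rng.standard_normal((k, d))   # decoder weights
lam, lr = 1e-3, 0.05                     # L2 penalty on codes, step size
loss_init = np.mean((X @ We @ Wd - X) ** 2)
for _ in range(2000):
    Z = X @ We                           # embeddings
    err = Z @ Wd - X                     # reconstruction error
    # Gradients of mean squared error + lam * mean squared code norm.
    gWd = Z.T @ err * (2 / len(X))
    gWe = X.T @ (err @ Wd.T + lam * Z) * (2 / len(X))
    We -= lr * gWe
    Wd -= lr * gWd
loss_final = np.mean((X @ We @ Wd - X) ** 2)

# Stage 2: map Gaussian prior samples onto the embedding distribution.
# The paper trains this mapping adversarially; as a simple stand-in we
# use an affine map matching the embeddings' mean and covariance.
Z = X @ We
mu = Z.mean(axis=0)
C = np.cov(Z, rowvar=False)
L = np.linalg.cholesky(C + 1e-8 * np.eye(k))
eps = rng.standard_normal((512, k))      # samples from the prior
Z_gen = eps @ L.T + mu                   # mapped latent codes
X_gen = Z_gen @ Wd                       # generated samples
```

Because the prior is mapped onto the learned embedding distribution (rather than the embeddings being forced toward the prior), the reconstruction objective never competes with the distribution-matching objective — which is the trade-off the paper aims to eliminate.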
