Article

DR-GAN: Distribution Regularization for Text-to-Image Generation

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TNNLS.2022.3165573

Keywords

Semantics; Generators; Task analysis; Image synthesis; Training; Visualization; Stability analysis; Distribution normalization; generative adversarial network; semantic disentanglement mechanism; text-to-image (T2I) generation

Abstract

This article presents a new text-to-image (T2I) generation model, named distribution regularization generative adversarial network (DR-GAN), which generates images from text descriptions via improved distribution learning. In DR-GAN, we introduce two novel modules: a semantic disentangling module (SDM) and a distribution normalization module (DNM). The SDM combines a spatial self-attention mechanism (SSAM) with a new semantic disentangling loss (SDL) to help the generator distill key semantic information for image generation. The DNM uses a variational auto-encoder (VAE) to normalize and denoise the image latent distribution, which helps the discriminator better distinguish synthesized images from real images. The DNM also adopts a distribution adversarial loss (DAL) to guide the generator to align with the normalized real-image distribution in the latent space. Extensive experiments on two public datasets demonstrate that DR-GAN achieves competitive performance on the T2I task. Code: https://github.com/Tan-H-C/DR-GAN-Distribution-Regularization-for-Text-to-Image-Generation
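To make the SDM's spatial self-attention concrete, the following is a minimal NumPy sketch of generic dot-product spatial self-attention over a convolutional feature map: each spatial position attends to every other position using channel-vector similarity. This is an illustrative, simplified version only; the function name and the omission of learned query/key/value projections are assumptions, not the authors' actual SSAM implementation.

```python
import numpy as np

def spatial_self_attention(features):
    """Toy spatial self-attention over a (C, H, W) feature map.

    Each of the N = H * W spatial positions attends to all positions;
    attention weights come from scaled dot-product similarity of the
    C-dimensional channel vectors (no learned projections, for brevity).
    """
    C, H, W = features.shape
    x = features.reshape(C, H * W)            # (C, N): one column per position
    # Pairwise similarity between spatial positions, scaled by sqrt(C).
    scores = x.T @ x / np.sqrt(C)             # (N, N)
    # Row-wise softmax: each row is a distribution over attended positions.
    scores -= scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    # Each output position is an attention-weighted mix of all positions.
    out = x @ attn.T                          # (C, N)
    return out.reshape(C, H, W)
```

In the full model, such an attended feature map would feed the semantic disentangling loss so the generator can separate key semantics from background content; a real implementation would add learned 1x1-convolution projections and a residual connection.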

