Proceedings Paper

Text to Image Generation with Semantic-Spatial Aware GAN

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.01765

Keywords

-

Funding

  1. Federal Ministry of Education and Research (BMBF), Germany, under the project LeibnizKILabor [01DD20003]
  2. Center for Digital Innovations (ZDIN)
  3. Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD [EXC 2122]

Abstract

Text-to-image synthesis (T2I) aims to generate photo-realistic images which are semantically consistent with the text descriptions. Existing methods are usually built upon conditional generative adversarial networks (GANs): they initialize an image from noise with a sentence embedding, and then refine the features with fine-grained word embeddings iteratively. A close inspection of their generated images reveals a major limitation: even though the generated image holistically matches the description, individual image regions or parts of objects are often not recognizable or consistent with words in the sentence, e.g., a white crown. To address this problem, we propose a novel framework, Semantic-Spatial Aware GAN, for synthesizing images from input text. Concretely, we introduce a simple and effective Semantic-Spatial Aware block, which (1) learns a semantic-adaptive transformation conditioned on text to effectively fuse text features and image features, and (2) learns a semantic mask in a weakly-supervised way, depending on the current text-image fusion process, in order to guide the transformation spatially. Experiments on the challenging COCO and CUB bird datasets demonstrate the advantage of our method over recent state-of-the-art approaches, regarding both visual fidelity and alignment with the input text description. Code available at https://github.com/wtliao/text2image.
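The two ingredients the abstract names, a text-conditioned affine transformation plus a spatial mask that gates where it applies, can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the weight matrices `W_gamma`, `W_beta` and the 1x1 mask projection `w_mask` are hypothetical stand-ins for the learned predictors in the paper (where the mask is trained weakly-supervised from the ongoing text-image fusion).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ssa_block(h, text, W_gamma, W_beta, w_mask):
    """One Semantic-Spatial Aware conditioning step (simplified sketch).

    h:    image feature map, shape (C, H, W)
    text: sentence embedding, shape (D,)
    """
    C, H, W = h.shape
    # (1) semantic-adaptive affine parameters predicted from the text
    gamma = (W_gamma @ text).reshape(C, 1, 1)  # per-channel scale
    beta = (W_beta @ text).reshape(C, 1, 1)    # per-channel shift
    # (2) spatial mask predicted from the current image features;
    #     a fixed 1x1 projection stands in for the learned predictor
    mask = sigmoid(np.tensordot(w_mask, h, axes=([0], [0])))  # (H, W)
    # modulate the features only where the mask is active
    return h * (1.0 + mask * gamma) + mask * beta

C, H, W, D = 8, 16, 16, 32
h = rng.standard_normal((C, H, W))
text = rng.standard_normal(D)
out = ssa_block(h, text,
                rng.standard_normal((C, D)) * 0.1,
                rng.standard_normal((C, D)) * 0.1,
                rng.standard_normal(C) * 0.1)
print(out.shape)  # (8, 16, 16)
```

Because the mask lies in (0, 1), regions where it is near zero pass through almost unchanged, which is how the block confines the text-driven modulation to the image parts the words actually describe.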

