Proceedings Paper

SEGMENTATION-AWARE TEXT-GUIDED IMAGE MANIPULATION

Publisher

IEEE
DOI: 10.1109/ICIP42928.2021.9506601

Keywords

Text-guided image manipulation; generative adversarial network; semantic segmentation

Funding

  1. JSPS KAKENHI [JP17H01744]

Abstract

This paper presents a novel approach that improves text-guided image manipulation by introducing foreground-aware and background-aware biases, addressing the problem of modifying undesired image regions that arises from the difference in representation ability between text descriptions and images. The method integrates an image segmentation network into the generative adversarial network used for manipulation, and its effectiveness is demonstrated through comparative experiments with three state-of-the-art methods.
In this paper, we propose a novel approach that improves text-guided image manipulation performance. Text-guided image manipulation aims to modify some parts of an input image in accordance with the user's text description by semantically associating regions of the image with the description. We tackle a problem of conventional methods, namely that they modify undesired parts because of the difference in representation ability between text descriptions and images. Humans tend to pay attention primarily to objects corresponding to the foreground of an image, and text descriptions written by humans therefore mostly describe the foreground. It is thus necessary to introduce not only a foreground-aware bias based on the text description but also a background-aware bias for the regions that the text description does not cover. To solve this problem, we introduce an image segmentation network into the generative adversarial network used for image manipulation. Comparative experiments with three state-of-the-art methods demonstrate the effectiveness of our method both quantitatively and qualitatively.
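To make the foreground- and background-aware biases concrete, the sketch below shows one way a segmentation mask predicted for the input image could be used to separate the two kinds of terms, for example by penalizing changes in regions the text does not describe. This is a minimal illustration under assumed names, shapes, and loss forms (segmentation_aware_losses, a soft mask seg_mask, an L1 background-preservation term); it is not the loss formulation used in the paper.

import torch
import torch.nn.functional as F

def segmentation_aware_losses(original, manipulated, seg_mask):
    # original, manipulated: image tensors of shape (B, 3, H, W) in [0, 1].
    # seg_mask: soft foreground mask of shape (B, 1, H, W) in [0, 1],
    # e.g. predicted by a segmentation network applied to the input image.
    # (All names and loss forms here are illustrative assumptions.)
    fg = seg_mask            # regions the text description is likely to cover
    bg = 1.0 - seg_mask      # regions the text description does not cover

    # Background-aware bias: keep regions not described by the text unchanged.
    bg_preserve = F.l1_loss(manipulated * bg, original * bg)

    # Foreground-aware bias: in a full model, foreground regions would be
    # driven by text-image matching losses; here we only measure how much
    # of the edit falls inside the foreground.
    fg_change = (torch.abs(manipulated - original) * fg).mean()

    return bg_preserve, fg_change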

