Article

GAGCN: Generative adversarial graph convolutional network for non-homogeneous texture extension synthesis

Journal

IET IMAGE PROCESSING
Volume 17, Issue 5, Pages 1603-1614

Publisher

WILEY
DOI: 10.1049/ipr2.12741

Keywords

attention mechanism; generative adversarial networks; graph convolutional networks; non-homogeneous texture synthesis

Abstract

In the non-homogeneous texture synthesis task, the overall visual characteristics should remain consistent when the local patterns of the exemplar are extended. Existing methods mainly focus on the local visual features of patterns but ignore the relative position features that are important for non-homogeneous texture synthesis. Although these methods have achieved success on homogeneous textures, they cannot perform well on non-homogeneous ones. It is therefore desirable to model the dependence between pixels to improve synthesis performance. To constrain the synthesis result in both the local detail structure and the overall structure, this paper proposes a non-homogeneous texture extension synthesis model (GAGCN) that combines a generative adversarial network (GAN) with a graph convolutional network (GCN). The GAN learns the internal distribution of image patches, which gives the synthesized image rich local details. The GCN learns the latent dependence between pixels according to the statistical characteristics of the image. Based on this, a novel graph similarity loss is proposed. This loss describes the latent spatial differences between the exemplar image and the generated image, which helps the model better capture global features. Experiments show that our method outperforms existing methods on non-homogeneous textures.
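The record does not specify how the graph similarity loss is computed. As a rough illustration of the idea of penalising latent spatial differences between the exemplar and the generated image, the following PyTorch sketch builds a patch-affinity graph for each image and compares the two adjacency matrices. Everything here is an assumption for illustration, not the paper's implementation: the cosine-similarity edge construction, the non-overlapping patching, and the names patch_graph, graph_similarity_loss, and patch_size are all hypothetical.

import torch
import torch.nn.functional as F


def patch_graph(image: torch.Tensor, patch_size: int = 8) -> torch.Tensor:
    """Build a dense affinity graph over non-overlapping patches.

    image: (C, H, W) tensor. Returns an (N, N) adjacency matrix with
    N = (H // patch_size) * (W // patch_size), where each entry is the
    cosine similarity between two flattened patches. This construction
    is an assumption; the paper may build its graph differently.
    """
    c, h, w = image.shape
    # Cut the image into non-overlapping patches: (C, nH, nW, p, p).
    patches = image.unfold(1, patch_size, patch_size) \
                   .unfold(2, patch_size, patch_size)
    # One row per patch, flattened across channels.
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, c * patch_size ** 2)
    patches = F.normalize(patches, dim=1)   # unit-norm rows
    return patches @ patches.t()            # cosine affinities


def graph_similarity_loss(exemplar: torch.Tensor,
                          generated: torch.Tensor,
                          patch_size: int = 8) -> torch.Tensor:
    """Penalise differences between the two patch-affinity graphs."""
    a_ex = patch_graph(exemplar, patch_size)
    a_gen = patch_graph(generated, patch_size)
    return F.mse_loss(a_gen, a_ex)


if __name__ == "__main__":
    ex = torch.rand(3, 64, 64)
    gen = torch.rand(3, 64, 64, requires_grad=True)
    loss = graph_similarity_loss(ex, gen)
    loss.backward()   # gradients flow back to the generated image
    print(loss.item())

Comparing patch affinities rather than raw pixels is what lets a loss of this kind see relative position structure. Note that the demo uses same-sized images for simplicity; in the actual extension setting the generated image is larger than the exemplar, so the two graphs would differ in size and would have to be compared through some size-invariant statistic.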
