Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 45, Issue 3, Pages 3768-3782
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2022.3181587
Keywords
Layout; Semantics; Visualization; Task analysis; Image synthesis; Computational modeling; Generators; Image manipulation and editing; image synthesis; correspondence learning; inpainting
This paper addresses semantic image layout manipulation by proposing a high-resolution sparse attention module and a coarse-to-fine generator architecture, transferring visual details from the input image to new layouts while maintaining visual realism.
We tackle the problem of semantic image layout manipulation, which aims to manipulate an input image by editing its semantic label map. A core problem of this task is how to transfer visual details from the input images to the new semantic layout while making the resulting image visually realistic. Recent work on learning cross-domain correspondence has shown promising results for global layout transfer with dense attention-based warping. However, this method tends to lose texture details due to the resolution limitation and the lack of smoothness constraint on correspondence. To adapt this paradigm for the layout manipulation task, we propose a high-resolution sparse attention module that effectively transfers visual details to new layouts at a resolution up to 512x512. To further improve visual quality, we introduce a novel generator architecture consisting of a semantic encoder and a two-stage decoder for coarse-to-fine synthesis. Experiments on the ADE20k and Places365 datasets demonstrate that our proposed approach achieves substantial improvements over the existing inpainting and layout manipulation methods.
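The sparse attention idea described in the abstract can be illustrated with a small sketch: for each position in the target layout, only the top-k strongest correspondences in the source features are kept before softmax normalization, rather than attending densely over all positions. This is not the paper's implementation (which operates on high-resolution feature maps); the function name, shapes, and the cosine-similarity scoring below are illustrative assumptions.

```python
import numpy as np

def sparse_attention_warp(query, key, value, k=4):
    """Warp `value` features toward the query layout with top-k sparse attention.

    Illustrative sketch only: for each query position, keep the k strongest
    matches in `key`, which avoids the blurring that dense (full-softmax)
    attention warping tends to introduce.
    Shapes: query (Nq, C), key (Nk, C), value (Nk, Cv). Returns (Nq, Cv).
    """
    # Cosine-similarity scores between every query and key position.
    q = query / (np.linalg.norm(query, axis=1, keepdims=True) + 1e-8)
    km = key / (np.linalg.norm(key, axis=1, keepdims=True) + 1e-8)
    scores = q @ km.T                       # (Nq, Nk)

    # Keep only the top-k scores per query row; mask the rest to -inf.
    idx = np.argpartition(scores, -k, axis=1)[:, -k:]
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx,
                      np.take_along_axis(scores, idx, axis=1), axis=1)

    # Softmax over the surviving entries, then a weighted sum of values.
    w = np.exp(masked - masked.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ value
```

With `k` equal to the number of key positions this reduces to ordinary dense attention warping; the sparsity constraint is what lets the correspondence stay sharp at higher resolutions.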