Article

Generative Memory-Guided Semantic Reasoning Model for Image Inpainting

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCSVT.2022.3188169

Keywords

Semantics; Cognition; Decoding; Training; Image restoration; Image edge detection; Visualization; Image inpainting; generative memory; image synthesis; semantic reasoning

This paper addresses the critical challenge of single image inpainting: accurately inferring semantic content from limited information. The proposed GM-SRM leverages a generative memory and inter-image reasoning priors to handle large corrupted areas effectively, and it outperforms state-of-the-art methods in both visual quality and quantitative metrics.
The critical challenge of single image inpainting stems from accurate semantic inference via limited information while maintaining image quality. Typical methods for semantic image inpainting train an encoder-decoder network to learn a one-to-one mapping from the corrupted image to the inpainted version. While such methods perform well on images with small corrupted regions, they struggle with large corrupted areas due to two potential limitations: 1) such a one-to-one mapping paradigm tends to overfit each single training pair of images; 2) the inter-image prior knowledge about the general distribution patterns of visual semantics, which can be transferred across images sharing similar semantics, is not explicitly exploited. In this paper, we propose the Generative Memory-guided Semantic Reasoning Model (GM-SRM), which infers the content of corrupted regions based not only on the known regions of the corrupted image, but also on learned inter-image reasoning priors characterizing the generalizable semantic distribution patterns between similar images. In particular, the proposed GM-SRM first pre-learns a generative memory from the whole training set to explicitly model the distribution of different semantic patterns. The learned memory is then leveraged to retrieve the matching semantics for the current corrupted image and perform semantic reasoning during image inpainting. While the encoder-decoder network guarantees pixel-level content consistency, our generative priors are favorable for high-level semantic reasoning, which is particularly effective for inferring semantic content in large corrupted areas. Extensive experiments on the Paris Street View, CelebA-HQ, and Places2 benchmarks demonstrate that GM-SRM outperforms state-of-the-art methods for image inpainting in terms of both visual quality and quantitative metrics.
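To make the memory-retrieval idea concrete, the following is a minimal illustrative sketch of looking up a semantic prior from a pre-learned memory bank by feature similarity. All names, shapes, and the softmax fusion rule are assumptions for illustration; the abstract does not specify the authors' actual retrieval mechanism.

```python
import numpy as np

# Hypothetical memory bank of learned semantic patterns:
# 256 memory slots, each a 64-dim semantic code (shapes are assumptions).
rng = np.random.default_rng(0)
memory = rng.standard_normal((256, 64))

def retrieve(query, memory, top_k=4):
    """Return a soft combination of the top-k memory slots most similar to `query`."""
    # Cosine similarity between the query feature and every memory slot.
    q = query / np.linalg.norm(query)
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    sims = m @ q
    idx = np.argsort(sims)[-top_k:]      # indices of the top-k matches
    weights = np.exp(sims[idx])
    weights /= weights.sum()             # softmax over the retrieved slots
    return weights @ memory[idx]         # weighted fusion of matched semantics

# Stand-in for an encoder feature extracted from a corrupted region.
query = rng.standard_normal(64)
prior = retrieve(query, memory)
print(prior.shape)                       # (64,)
```

In a full model, a retrieved prior like this would be fused with the encoder-decoder features so that large missing regions are completed using semantics shared across similar training images, rather than pixel evidence alone.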
