4.7 Article

Multistage attention network for image inpainting

Journal

PATTERN RECOGNITION
Volume 106, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2020.107448

Keywords

Image inpainting; Irregular mask; Deep learning; Attention mechanism; Unet-like network

Funding

  1. National Natural Science Foundation of China [61771349]
  2. Science and Technology Major Project of Hubei Province (Next-Generation AI Technologies) [2019AEA170]
  3. Natural Science Foundation of Hubei Province [2018CFB432]

Abstract

Image inpainting refers to the process of restoring the masked regions of damaged images. Existing inpainting algorithms exhibit outstanding performance on tasks focused on recovering small or square masks, but performance on tasks that reconstruct a large proportion of a damaged image can still be improved. Although many attention-based algorithms have been proposed for image inpainting, most of them ignore the need to balance detail and style. In this paper, we propose a novel image inpainting method for large-scale irregular masks. We introduce a multistage attention module that accounts for both structural consistency and fine detail. The module operates in a coarse-to-fine manner: the early stage swaps large feature patches to ensure global consistency across the image, and the next stage swaps small patches to refine texture. We then adopt a partial convolution strategy to avoid the misuse of invalid data during convolution. Several losses are combined into the training objective to generate results with global consistency and exquisite detail. Qualitative and quantitative experiments on the Paris StreetView, CelebA, and Places2 datasets demonstrate the superior performance of the proposed approach compared with state-of-the-art models. (C) 2020 Elsevier Ltd. All rights reserved.
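The abstract describes the coarse-to-fine patch swapping only at a high level. The sketch below is a minimal illustration of the idea, not the authors' module: it attends over non-overlapping patches only, the two patch sizes (8, then 2) are assumptions chosen for illustration, and it assumes the feature map dimensions are divisible by the patch size and that at least one fully valid patch exists.

```python
import torch
import torch.nn.functional as F

def patch_swap(feat, mask, patch):
    """One attention stage: each hole patch is replaced by a softmax-weighted
    sum of valid patches, weighted by cosine similarity between patches
    (contextual-attention style). feat: (B, C, H, W); mask: (B, 1, H, W),
    1 = valid pixel. Assumes H and W are divisible by `patch`."""
    B, C, H, W = feat.shape
    cols = F.unfold(feat, patch, stride=patch)            # (B, C*p*p, N) patch columns
    m = F.unfold(mask, patch, stride=patch).mean(dim=1)   # (B, N) valid fraction per patch
    q = F.normalize(cols, dim=1)                          # unit norm -> dot = cosine sim
    sim = torch.bmm(q.transpose(1, 2), q)                 # (B, N, N) patch similarities
    # Only fully valid patches may serve as sources.
    sim = sim.masked_fill((m < 1.0).unsqueeze(1), float("-inf"))
    attn = sim.softmax(dim=-1)
    swapped = torch.bmm(cols, attn.transpose(1, 2))       # attended reconstruction
    keep = (m == 1.0).unsqueeze(1).float()
    out = cols * keep + swapped * (1.0 - keep)            # swap only the hole patches
    return F.fold(out, (H, W), patch, stride=patch)

def multistage_attention(feat, mask):
    # Stage 1: large patches enforce global structural consistency.
    feat = patch_swap(feat, mask, patch=8)
    # Stage 2: small patches refine local texture.
    return patch_swap(feat, mask, patch=2)
```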
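The partial convolution strategy restricts each convolution window to valid pixels, renormalizes the output by the fraction of valid inputs, and marks a location as valid once any valid pixel falls inside its window. A minimal PyTorch sketch, assuming the standard formulation of partial convolution (Liu et al., 2018) rather than this paper's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Convolves only over valid (unmasked) pixels, renormalizes by the
    valid-pixel count, and returns an updated mask."""

    def __init__(self, in_ch, out_ch, k=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, stride, padding, bias=True)
        # Fixed all-ones kernel that counts valid pixels under each window.
        self.register_buffer("ones", torch.ones(1, 1, k, k))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # x: (B, C, H, W); mask: (B, 1, H, W), 1 = valid.
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones,
                             stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)                       # zero out invalid inputs
        scale = self.ones.numel() / valid.clamp(min=1.0)
        bias = self.conv.bias.view(1, -1, 1, 1)
        out = (out - bias) * scale + bias               # renormalize, keep bias
        hole = valid == 0                               # windows with no valid pixel
        return out.masked_fill(hole, 0.0), (~hole).float()
```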
