Article

Blind inpainting using the fully convolutional neural network

Journal

The Visual Computer
Volume 33, Issue 2, Pages 249-261

Publisher

Springer
DOI: 10.1007/s00371-015-1190-z

Keywords

Image processing; Blind inpainting; Deep learning; Convolutional neural network

Funding

  1. National Natural Science Foundation of China [61001179, 61372173, 61471132, 61201393]
  2. Guangdong Higher Education Engineering Technology Research Center [501130144]

Abstract

Most existing inpainting techniques require knowing beforehand where the damaged pixels are, i.e., they are non-blind inpainting methods. However, in many applications, such information may not be readily available. In this paper, we propose a novel blind inpainting method based on a fully convolutional neural network, which we term the blind inpainting convolutional neural network (BICNN). It cascades three convolutional layers to directly learn an end-to-end mapping from corrupted subimages to their ground-truth counterparts, using a pre-acquired dataset of corrupted/ground-truth subimage pairs. Stochastic gradient descent with standard backpropagation is used to train the BICNN. Once trained, the BICNN can automatically identify and remove corrupting patterns from a corrupted image without knowing their specific locations. The learned BICNN takes a corrupted image of any size as input and directly produces a clean output in a single forward pass. Experimental results indicate that the proposed method achieves better inpainting performance than existing inpainting methods for various corrupting patterns.
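The architecture described in the abstract is simple enough to sketch: three cascaded convolutional layers mapping a corrupted image directly to a clean one, trained by stochastic gradient descent with standard backpropagation on corrupted/ground-truth subimage pairs. The snippet below is a minimal sketch assuming PyTorch; the kernel sizes, channel counts, MSE loss, learning rate, and names such as BICNNSketch and train_step are illustrative assumptions, not the authors' implementation or hyperparameters.

```python
# Minimal sketch, assuming PyTorch, of a three-layer fully convolutional
# blind-inpainting network in the spirit of the BICNN described above.
# Kernel sizes, channel counts, the MSE loss, and the learning rate are
# illustrative assumptions; the abstract does not specify them.
import torch
import torch.nn as nn


class BICNNSketch(nn.Module):  # hypothetical name, not the authors' code
    def __init__(self):
        super().__init__()
        # Three cascaded convolutional layers; padding preserves the spatial
        # size, so an input image of any size yields an output of the same
        # size in one forward pass.
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),              # nonlinear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),    # reconstruction
        )

    def forward(self, x):
        return self.net(x)


def train_step(model, optimizer, corrupted, clean):
    """One SGD step with standard backpropagation on a batch of
    corrupted/ground-truth subimage pairs (pixel-wise MSE loss assumed)."""
    optimizer.zero_grad()
    restored = model(corrupted)
    loss = nn.functional.mse_loss(restored, clean)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = BICNNSketch()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    # Random 33x33 grayscale subimages stand in for the pre-acquired dataset.
    corrupted = torch.rand(16, 1, 33, 33)
    clean = torch.rand(16, 1, 33, 33)
    print(train_step(model, optimizer, corrupted, clean))
```

Because the network is fully convolutional, with no fully connected layers, the same learned weights apply to a corrupted image of any resolution at test time, which is what allows the model to produce a clean output in a single forward pass.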
