Article

Pixel convolutional neural network for multi-focus image fusion

Journal

INFORMATION SCIENCES
Volume 433, Pages 125-141

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2017.12.043

Keywords

Multi-focus image fusion; Convolutional neural network; Deep learning; Focus measure

Funding

  1. National Natural Science Foundation of China [61572092]
  2. Key Programme of NSFC-Guangdong Union Foundation [U1401252]
  3. Scientific & Technological Research Program of Chongqing Municipal Education Commission [KJ1400429]

This paper proposes a pixel-wise convolutional neural network (p-CNN) that recognizes focused and defocused pixels in source images from their neighbourhood information, for use in multi-focus image fusion. The proposed p-CNN can be regarded as a learned focus measure (FM) and is more effective than conventional handcrafted FMs. To give the p-CNN a strong capability to discriminate focused from defocused pixels, a comprehensive training image set is created from a public image database. Furthermore, by assigning precise labels according to different focus levels and adding various defocus masks, the p-CNN can accurately measure the focus level of each pixel in the source images, so that artefacts in the fused image are largely avoided. We also propose a method to implement the p-CNN with a conventional image-wise convolutional neural network, which is almost 25 times faster than applying the p-CNN directly in multi-focus image fusion. Experimental results demonstrate that the proposed method is competitive with, or even outperforms, state-of-the-art methods in terms of both subjective visual perception and objective evaluation metrics. (C) 2017 Elsevier Inc. All rights reserved.
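The fusion scheme the abstract describes boils down to scoring the focus level of every pixel in each source image and copying each pixel from the better-focused source. The sketch below illustrates that per-pixel decision rule with a simple handcrafted focus measure (local variance) standing in for the paper's learned p-CNN score; the function names and the choice of local variance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_variance(img, radius=3):
    # Per-pixel focus score: intensity variance in a (2r+1)x(2r+1)
    # neighbourhood. A handcrafted stand-in for the learned p-CNN score.
    k = 2 * radius + 1
    pad = np.pad(img.astype(float), radius, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + k, j:j + k].var()
    return out

def fuse(img_a, img_b, radius=3):
    # Pixel-wise fusion: take each pixel from whichever source image
    # has the higher focus score at that location.
    mask = local_variance(img_a, radius) >= local_variance(img_b, radius)
    return np.where(mask, img_a, img_b)
```

In the paper this decision map is produced by the p-CNN and is what allows artefacts at focused/defocused boundaries to be suppressed; the variance measure here only mimics the interface, not the accuracy, of the learned model.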

