Article

SESF-Fuse: an unsupervised deep model for multi-focus image fusion

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 33, Issue 11, Pages 5793-5804

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-020-05358-9

Keywords

Multi-focus image fusion; Unsupervised deep learning; Spatial frequency

Funding

  1. National Key Research and Development Program of China [2016YFB0700500]
  2. National Natural Science Foundation of China [6170203, 61873299]
  3. Key Research Plan of Hainan Province [ZDYF2019009]
  4. Guangdong Province Key Area R&D Program [2019B010940001]
  5. Scientific and Technological Innovation Foundation of Shunde Graduate School, USTB [BK19BE030]
  6. Fundamental Research Funds for the University of Science and Technology Beijing [FRF-BD-19-012A, FRF-TP-19-043A2]
  7. USTB MatCom of Beijing Advanced Innovation Center for Materials Genome Engineering

Abstract

Multi-focus image fusion is the extraction of focused regions from different images to create one all-in-focus fused image. The key observation is that only objects within the depth of field appear sharp in a photograph, while other objects are likely to be blurred. We propose an unsupervised deep learning model for multi-focus image fusion. We train an encoder-decoder network in an unsupervised manner to acquire deep features of the input images. We then use spatial frequency, a gradient-based measure of sharp variation, on these deep features to reflect activity levels. Finally, we apply consistency verification to refine the decision map and obtain the fused result. Our method analyzes sharp appearances in deep features rather than in the original images, which can be seen as another success story of unsupervised learning in image processing. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion performance compared with 16 fusion methods in both objective and subjective assessments, especially on gradient-based fusion metrics.
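The fusion rule described in the abstract can be made concrete with a short sketch. The following is a minimal PyTorch illustration, not the authors' released implementation: the function name spatial_frequency, the kernel_radius parameter, and the simple channel-sum aggregation are assumptions made here for exposition.

```python
import torch
import torch.nn.functional as F

def spatial_frequency(feat: torch.Tensor, kernel_radius: int = 5) -> torch.Tensor:
    """Per-pixel activity level from deep features of shape (B, C, H, W).

    Hypothetical sketch: the exact window size and aggregation used in
    SESF-Fuse may differ from what is shown here.
    """
    # Row frequency: squared first differences along the width axis.
    rf = torch.zeros_like(feat)
    rf[..., 1:] = (feat[..., 1:] - feat[..., :-1]) ** 2
    # Column frequency: squared first differences along the height axis.
    cf = torch.zeros_like(feat)
    cf[..., 1:, :] = (feat[..., 1:, :] - feat[..., :-1, :]) ** 2
    # Sum over channels, then average over a local window for a per-pixel score.
    act = (rf + cf).sum(dim=1, keepdim=True)
    k = 2 * kernel_radius + 1
    act = F.avg_pool2d(act, kernel_size=k, stride=1, padding=kernel_radius)
    return act.sqrt()

# Usage sketch: img_a, img_b are the two source images; feat_a, feat_b are the
# encoder's deep features for each (assumed here to match the image resolution).
# The initial decision map keeps, at every pixel, the source whose features show
# higher activity; consistency verification (e.g., small-region removal) would
# then refine this map before blending:
# decision = (spatial_frequency(feat_a) >= spatial_frequency(feat_b)).float()
# fused = decision * img_a + (1.0 - decision) * img_b
```

Measuring spatial frequency on deep features rather than on raw pixels is what distinguishes this approach from classical spatial-frequency fusion rules, which operate directly on image intensities.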
