Article

MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion

Journal

INFORMATION FUSION
Volume 66, Pages 40-53

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2020.08.022

Keywords

Image fusion; Multi-focus; Unsupervised learning; Generative adversarial network

Funding

  1. National Natural Science Foundation of China [61773295, 41890820]
  2. Natural Science Foundation of Hubei Province, China [2019CFA037]

Summary

This paper introduces a new method for multi-focus image fusion: a generative adversarial network with adaptive and gradient joint constraints that addresses the detail loss common in existing methods. The proposed method outperforms the state-of-the-art in both subjective visual quality and quantitative metrics, while running approximately an order of magnitude faster.

Abstract

Multi-focus image fusion is an enhancement method that generates all-in-focus images, addressing the depth-of-field limitation of optical lenses. Most existing methods generate a decision map to realize multi-focus image fusion, which usually leads to detail loss due to misclassification, especially near the boundary between the focused and defocused regions. To overcome this challenge, this paper presents a new generative adversarial network with adaptive and gradient joint constraints for fusing multi-focus images. In our model, an adaptive decision block is introduced to determine whether source pixels are focused, based on the difference produced by repeated blurring. Under its guidance, a specifically designed content loss dynamically steers the optimization, forcing the generator to produce a fused result with the same distribution as the focused regions of the source images. To further enhance texture details, we establish an adversarial game so that the gradient map of the fused result approximates a joint gradient map constructed from the source images. Our model is unsupervised and does not require ground-truth fused images for training. In addition, we release a new dataset containing 120 high-quality multi-focus image pairs for benchmark evaluation. Experimental results demonstrate the superiority of our method over the state-of-the-art in terms of both subjective visual effect and quantitative metrics. Moreover, our method is about one order of magnitude faster than the state-of-the-art.
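To make the three ingredients of the abstract concrete, below is a minimal NumPy/SciPy sketch of the underlying ideas: a repeated-blur focus measure for the decision map, a decision-guided content loss, and a joint gradient map built from the two sources. The function names, the Gaussian blur as the "repeated blur", the weighted-MSE form of the content loss, and the elementwise-max gradient combination are all illustrative assumptions for exposition, not the paper's exact network or loss definitions.

```python
# Sketch of the decision map, content loss, and joint gradient map described
# in the abstract. Expects float grayscale arrays of equal shape in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def focus_decision_map(img_a, img_b, sigma=2.0):
    """Per-pixel map: 1.0 where img_a appears focused, 0.0 where img_b does."""
    # Re-blurring an already-defocused region changes it little, while
    # re-blurring an in-focus region destroys detail, so the absolute
    # difference between an image and its blurred copy acts as a focus measure.
    diff_a = np.abs(img_a - gaussian_filter(img_a, sigma))
    diff_b = np.abs(img_b - gaussian_filter(img_b, sigma))
    # Smooth the raw differences for local consistency before comparing.
    score_a = gaussian_filter(diff_a, sigma)
    score_b = gaussian_filter(diff_b, sigma)
    return (score_a >= score_b).astype(np.float32)

def adaptive_content_loss(fused, img_a, img_b, decision):
    """Decision-guided MSE: pull each fused pixel toward whichever source is focused."""
    return np.mean(decision * (fused - img_a) ** 2
                   + (1.0 - decision) * (fused - img_b) ** 2)

def joint_gradient_map(img_a, img_b):
    """One plausible joint gradient map: elementwise max of gradient magnitudes."""
    def grad_mag(img):
        gy, gx = np.gradient(img)   # finite-difference gradients along rows/cols
        return np.hypot(gx, gy)     # gradient magnitude
    return np.maximum(grad_mag(img_a), grad_mag(img_b))
```

In the full model these pieces would sit inside a GAN training loop: the generator would minimize a content loss of this flavor, while the adversarial game would push the gradient map of the fused output toward the joint gradient map of the sources.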
