Article

MEF-GAN: Multi-Exposure Image Fusion via Generative Adversarial Networks

Journal

IEEE Transactions on Image Processing
Volume 29, Pages 7203-7216

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TIP.2020.2999855

Keywords

Image fusion; Generative adversarial networks; Generators; Dynamic range; Feature extraction; Training; Multi-exposure; Self-attention

Funding

  1. National Natural Science Foundation of China [61773295]
  2. Natural Science Foundation of Hubei Province [2019CFA037]
  3. Natural Sciences and Engineering Research Council of Canada (NSERC) [RGPIN-2020-04661]

Abstract

In this paper, we present an end-to-end architecture for multi-exposure image fusion based on generative adversarial networks, termed MEF-GAN. In our architecture, a generator network and a discriminator network are trained simultaneously to form an adversarial relationship. The generator is trained to produce a realistic fused image from the given source images that is expected to fool the discriminator, while the discriminator is trained to distinguish the generated fused images from the ground truth. This adversarial relationship frees the fused image from the restriction of the content loss alone, so the fused images are closer to the ground truth in terms of probability distribution, compensating for the insufficiency of a single content loss. Moreover, because the luminance of multi-exposure images varies greatly with spatial location, a self-attention mechanism is employed in our architecture to allow for attention-driven, long-range dependency modeling, so that local distortions, confusing results, and inappropriate representations can be corrected in the fused image. Qualitative and quantitative experiments are performed on publicly available datasets, and the results demonstrate that MEF-GAN outperforms the state of the art in terms of both visual effect and objective evaluation metrics. Our code is publicly available at https://github.com/jiayi-ma/MEF-GAN.
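To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch, not the authors' released implementation (see the GitHub link above): a SAGAN-style self-attention block standing in for the paper's attention mechanism, and a single training step that combines the adversarial loss with an assumed L1 content loss. The module names, channel reduction, two-input concatenation, and content_weight are illustrative assumptions, not values from the paper.

    # Sketch only: illustrative stand-in for MEF-GAN's training scheme.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelfAttention(nn.Module):
        """Self-attention over spatial positions (SAGAN-style, assumed form)."""
        def __init__(self, channels):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.value = nn.Conv2d(channels, channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
            k = self.key(x).flatten(2)                    # (b, c//8, h*w)
            attn = F.softmax(torch.bmm(q, k), dim=-1)     # all-pairs weights
            v = self.value(x).flatten(2)                  # (b, c, h*w)
            out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
            return self.gamma * out + x  # attention-driven, long-range residual

    def train_step(G, D, opt_G, opt_D, over, under, gt, content_weight=100.0):
        """One adversarial update: D separates fused images from the ground
        truth; G tries to fool D while staying close to gt in content."""
        fused = G(torch.cat([over, under], dim=1))  # assumed input stacking

        # Discriminator: real (ground truth) vs. generated (fused).
        d_real, d_fake = D(gt), D(fused.detach())
        loss_D = F.binary_cross_entropy_with_logits(
            d_real, torch.ones_like(d_real)
        ) + F.binary_cross_entropy_with_logits(
            d_fake, torch.zeros_like(d_fake)
        )
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

        # Generator: adversarial term plus an assumed L1 content term.
        d_fake = D(fused)
        loss_adv = F.binary_cross_entropy_with_logits(
            d_fake, torch.ones_like(d_fake)
        )
        loss_G = loss_adv + content_weight * F.l1_loss(fused, gt)
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()
        return loss_G.item(), loss_D.item()

The zero-initialized gamma lets the network start from purely local convolutional features and gradually learn how much long-range, attention-driven context to mix in, which matches the abstract's motivation that luminance varies strongly with spatial location across exposures.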

Authors

Han Xu, Jiayi Ma, and Xiao-Ping Zhang
