Article

PIAFusion: A progressive infrared and visible image fusion network based on illumination aware

Journal

INFORMATION FUSION
Volume 83, Pages 79-92

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2022.03.007

Keywords

Image fusion; Illumination aware; Cross-modality differential aware fusion; Deep learning

Funding

  1. Natural Science Foundation of Hubei Province [2019CFA037, 2020BAB113]

Abstract

Infrared and visible image fusion aims to synthesize a single fused image containing salient targets and abundant texture details even under extreme illumination conditions. However, existing image fusion algorithms fail to take the illumination factor into account in the modeling process. In this paper, we propose an illumination-aware progressive image fusion network, termed PIAFusion, which adaptively maintains the intensity distribution of salient targets and preserves texture information in the background. Specifically, we design an illumination-aware sub-network to estimate the illumination distribution and calculate the illumination probability. Moreover, we utilize the illumination probability to construct an illumination-aware loss that guides the training of the fusion network. The cross-modality differential aware fusion module and the halfway fusion strategy fully integrate common and complementary information under the constraint of the illumination-aware loss. In addition, a new benchmark dataset for infrared and visible image fusion, i.e., Multi-Spectral Road Scenarios (available at https://github.com/Linfeng-Tang/MSRS), is released to support network training and comprehensive evaluation. Extensive experiments demonstrate the superiority of our method over state-of-the-art alternatives in terms of target maintenance and texture preservation. In particular, our progressive fusion framework can integrate meaningful information from the source images round the clock, according to the illumination conditions. Furthermore, the application to semantic segmentation demonstrates the potential of PIAFusion for high-level vision tasks. Our code is available at https://github.com/Linfeng-Tang/PIAFusion.
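As a rough illustration of the mechanisms named in the abstract, the PyTorch sketch below shows how an illumination classifier, a cross-modality differential aware fusion (CMDAF) step, and an illumination-aware intensity loss could fit together. The layer sizes, function names, and exact weighting scheme are assumptions made for illustration, not the authors' implementation; the reference code is at https://github.com/Linfeng-Tang/PIAFusion.

import torch
import torch.nn as nn
import torch.nn.functional as F

class IlluminationClassifier(nn.Module):
    """Tiny CNN predicting [P_day, P_night] from the visible image (assumed architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, vis):
        z = self.features(vis).flatten(1)
        return torch.softmax(self.head(z), dim=1)  # illumination probabilities

def cmdaf(feat_ir, feat_vi):
    """Cross-modality differential aware fusion: each branch is compensated
    with the channel-attended feature difference of the other branch."""
    diff_vi = feat_vi - feat_ir  # information the visible branch has and infrared lacks
    diff_ir = feat_ir - feat_vi
    w_vi = torch.sigmoid(F.adaptive_avg_pool2d(diff_vi, 1))  # per-channel weights
    w_ir = torch.sigmoid(F.adaptive_avg_pool2d(diff_ir, 1))
    return feat_ir + w_vi * diff_vi, feat_vi + w_ir * diff_ir

def illumination_aware_loss(fused, ir, vi_y, p_day, p_night):
    """Intensity loss whose weights follow the illumination probabilities:
    at night lean on the infrared image, in daytime on the visible image."""
    w_ir = p_night / (p_day + p_night)
    w_vi = p_day / (p_day + p_night)
    l_ir = F.l1_loss(fused, ir, reduction="none").mean(dim=(1, 2, 3))
    l_vi = F.l1_loss(fused, vi_y, reduction="none").mean(dim=(1, 2, 3))
    return (w_ir * l_ir + w_vi * l_vi).mean()

The weighting is the intuitively expected one: as the predicted night probability rises, the loss pulls the fused image toward the infrared intensity distribution, and vice versa in daytime, which is one plausible reading of how the illumination probability lets the network fuse meaningful information round the clock.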

Authors

Linfeng Tang, Jiteng Yuan, Hao Zhang, Xingyu Jiang, Jiayi Ma
