Article

MAGAN: Multiattention Generative Adversarial Network for Infrared and Visible Image Fusion

Journal

IEEE Transactions on Instrumentation and Measurement

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TIM.2023.3282300

Keywords

Image fusion; intensity attention; multiattention generative adversarial network (MAGAN); texture attention

Abstract
Deep learning has been widely used in infrared and visible image fusion owing to its strong feature extraction and generalization capabilities. However, it is difficult to directly extract modality-specific features from images of different modalities. Therefore, guided by the characteristics of infrared and visible images, this article proposes a multiattention generative adversarial network (MAGAN) for infrared and visible image fusion, composed of a multiattention generator and two multiattention discriminators. The multiattention generator gradually extracts and fuses image features through two modules: a triple-path feature prefusion module (TFPM) and a feature emphasis fusion module (FEFM). The two multiattention discriminators ensure that the fused images retain both the salient targets and the texture information of the source images. In MAGAN, an intensity attention and a texture attention are designed to extract modality-specific features so that the fused image retains more intensity and texture information. In addition, a saliency target intensity loss is defined to ensure that the fused images acquire accurate salient information from the infrared images. Experimental results on two public datasets show that MAGAN outperforms several state-of-the-art models in terms of both visual effects and quantitative metrics.
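To make the abstract's description concrete, the following is a minimal PyTorch sketch of the two attention mechanisms and the saliency target intensity loss. Everything here is an illustrative assumption rather than the paper's actual implementation: the intensity attention is modeled as channel reweighting driven by global average pooling, the texture attention as spatial reweighting driven by Sobel gradient magnitude, and the loss as a masked L1 distance between the fused and infrared images; the module names, shapes, and the salient-target mask are hypothetical.

# Hypothetical PyTorch sketch; names and formulations are illustrative
# assumptions, not the authors' actual MAGAN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IntensityAttention(nn.Module):
    """Channel attention intended to emphasize high-intensity (infrared) features."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # global average pool -> (B, C)
        return x * w[:, :, None, None]          # reweight channels


class TextureAttention(nn.Module):
    """Spatial attention driven by local gradient magnitude (a texture cue)."""

    def __init__(self):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", sobel_x.view(1, 1, 3, 3))
        self.register_buffer("ky", sobel_x.t().contiguous().view(1, 1, 3, 3))

    def forward(self, x):                       # x: (B, C, H, W)
        g = x.mean(dim=1, keepdim=True)         # collapse channels to one map
        gx = F.conv2d(g, self.kx, padding=1)
        gy = F.conv2d(g, self.ky, padding=1)
        att = torch.sigmoid(torch.sqrt(gx ** 2 + gy ** 2 + 1e-6))
        return x * att                          # reweight spatial positions


def saliency_target_intensity_loss(fused, infrared, mask):
    """Masked L1 distance pulling fused pixels toward infrared intensities.

    `mask` (values in [0, 1], marking salient targets) is assumed to be given;
    how MAGAN actually derives saliency information is defined in the paper.
    """
    return (mask * (fused - infrared).abs()).sum() / (mask.sum() + 1e-6)

In a full training loop, this loss term would be combined with the adversarial losses from the two discriminators; the exact saliency extraction and the weighting between the terms are paper-specific details not reproduced here.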


