Article

Hyperspectral and multispectral remote sensing image fusion using SwinGAN with joint adaptive spatial-spectral gradient loss function

Journal

INTERNATIONAL JOURNAL OF DIGITAL EARTH
Volume 16, Issue 1, Pages 3580-3600

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/17538947.2023.2253206

Keywords

SwinGAN; HSI; MSI; image fusion; spatial gradient loss; spectral gradient loss


Abstract
Fusing hyperspectral remote sensing images (HSI) with multispectral remote sensing images (MSI) improves data resolution. However, current fusion algorithms focus on local information and overlook long-range dependencies, and network parameter tuning prioritizes global optimization while neglecting spatial and spectral constraints, which limits spatial and spectral reconstruction capability. This study introduces SwinGAN, a fusion network combining Swin Transformer, CNN, and GAN architectures. SwinGAN's generator employs a detail injection framework that extracts HSI and MSI features separately and fuses them to generate spatial residuals. These residuals are injected into the upsampled HSI to produce the final image, while a pure CNN architecture acts as the discriminator, enhancing fusion quality. Additionally, we introduce a new adaptive loss function that improves image fusion accuracy: it uses L1 loss as the content loss and adds spatial and spectral gradient loss terms to improve the spatial representation and spectral fidelity of the fused images. Experimental results on several datasets demonstrate that SwinGAN outperforms current popular algorithms in both spatial and spectral reconstruction capability, and ablation experiments confirm the rationale for each component of the proposed loss function.
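The joint loss described in the abstract (L1 content loss plus spatial and spectral gradient terms) can be sketched in NumPy as below. This is an illustrative reconstruction, not the paper's implementation: the weighting parameters `lam_spat` and `lam_spec`, the finite-difference gradient operators, and the function name are all assumptions for the sake of the example.

```python
import numpy as np

def joint_loss(fused, reference, lam_spat=0.1, lam_spec=0.1):
    """Illustrative joint loss for HSI/MSI fusion (not the paper's code).

    `fused` and `reference` are image cubes of shape (bands, H, W).
    `lam_spat` and `lam_spec` are assumed trade-off weights.
    """
    # Content loss: L1 (mean absolute error) between the two cubes.
    content = np.mean(np.abs(fused - reference))

    # Spatial gradient loss: match vertical/horizontal finite differences
    # so edges in the fused image align with those in the reference.
    def spatial_grads(x):
        return np.diff(x, axis=1), np.diff(x, axis=2)

    gy_f, gx_f = spatial_grads(fused)
    gy_r, gx_r = spatial_grads(reference)
    spatial = np.mean(np.abs(gy_f - gy_r)) + np.mean(np.abs(gx_f - gx_r))

    # Spectral gradient loss: match differences along the band axis
    # to preserve the shape of each pixel's spectral signature.
    spectral = np.mean(np.abs(np.diff(fused, axis=0) - np.diff(reference, axis=0)))

    return content + lam_spat * spatial + lam_spec * spectral
```

A constant brightness offset leaves both gradient terms at zero and is penalized only by the content term, whereas blurring or spectral distortion raises the gradient terms even when the L1 error is small, which is the intuition behind combining the three terms.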

