Article

Patch-Aware Deep Hyperspectral and Multispectral Image Fusion by Unfolding Subspace-Based Optimization Model

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JSTARS.2022.3140211

Keywords

Image fusion; Optimization; Tensors; Spatial resolution; Location awareness; Computational modeling; Pansharpening; Alternating direction method of multipliers (ADMM); Deep learning; Hyperspectral image (HSI); Subspace; Unfolding

Funding

  1. National Natural Science Foundation of China [62071204, 61871226, 61772274]
  2. Natural Science Foundation of Jiangsu Province [BK20201338, BK20180018]
  3. Jiangsu Provincial Social Developing Project [BE2018727]
  4. China Postdoctoral Science Foundation [2021M691275]
  5. Jiangsu Postdoctoral Research Funding Program [2021K148B]
  6. 111 project [B12018]
  7. Hong Kong Innovation and Technology Commission
  8. Hong Kong Research Grants Council [CityU 11204821]

Abstract

This paper proposes a patch-aware deep fusion approach for hyperspectral and multispectral image fusion that exploits patch information under a subspace representation. A subspace-based fusion model is built and solved by an optimization algorithm whose steps are unfolded into a structured deep fusion network, and an aggregation fusion technique further improves the result.
Hyperspectral and multispectral image fusion aims to fuse a low-spatial-resolution hyperspectral image (HSI) and a high-spatial-resolution multispectral image into a high-spatial-resolution HSI. Motivated by the success of model-based and deep learning-based approaches, we propose a novel patch-aware deep fusion approach for HSIs by unfolding a subspace-based optimization model, where moderate-sized patches are used in both the training and test phases. The goal of this approach is to make full use of patch information under a subspace representation and to restrict the scale and enhance the interpretability of the deep network, thereby improving the fusion performance. First, a subspace-based fusion model is built with two regularization terms that localize pixels and extract texture. Then, the model is solved by the alternating direction method of multipliers (ADMM), which splits it into one fidelity-based subproblem and two regularization-based subproblems. Finally, a structured deep fusion network is obtained by unfolding all steps of the algorithm as network layers. Specifically, the fidelity-based subproblem is solved by a gradient descent step implemented as a network layer, while the two regularization-based subproblems are described by proximal operators and learned by two U-shaped architectures. Moreover, an aggregation fusion technique is proposed to improve performance by averaging the fused images over all iterations and aggregating the overlapping patches in the test phase. Experimental results on both synthetic and real datasets demonstrate the effectiveness of the proposed approach.
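
As a rough illustration of the unfolding described above, the sketch below (in PyTorch) implements one ADMM iteration as a network layer under a subspace model X ≈ EA: a gradient-descent step on the data-fidelity term, two small U-shaped networks acting as learned proximal operators, and the dual-variable updates. All names (ProxUNet, UnfoldedStage, blur_down), shapes, and the use of automatic differentiation for the gradient step are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ProxUNet(nn.Module):
    """Tiny U-shaped residual network standing in for a learned proximal operator."""

    def __init__(self, channels):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),          # downsample
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),  # upsample
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        y = self.dec(self.enc(x))
        # match the input size for odd patch sizes, then apply a residual update
        y = F.interpolate(y, size=x.shape[-2:], mode='bilinear', align_corners=False)
        return x + y


class UnfoldedStage(nn.Module):
    """One ADMM iteration unfolded as a network layer (illustrative sketch)."""

    def __init__(self, n_sub):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable gradient step size
        self.rho = nn.Parameter(torch.tensor(0.1))   # learnable penalty weight
        self.prox1 = ProxUNet(n_sub)                 # learned proximal operator, regularizer 1
        self.prox2 = ProxUNet(n_sub)                 # learned proximal operator, regularizer 2

    def forward(self, A, E, Y_h, Y_m, R, blur_down, aux=None):
        # A         : (B, S, h, w)  subspace coefficient maps (current estimate)
        # E         : (L, S)        spectral subspace basis
        # Y_h, Y_m  : observed low-res HSI patch / high-res MSI patch
        # R         : (M, L)        spectral response of the MSI sensor
        # blur_down : callable modelling spatial blurring + downsampling
        if aux is None:
            aux = (A, A, torch.zeros_like(A), torch.zeros_like(A))
        V1, V2, U1, U2 = aux

        # fidelity subproblem: one gradient-descent step on the augmented data term
        A_in = A.detach().requires_grad_(True)
        X = torch.einsum('ls,bshw->blhw', E, A_in)   # back to the full spectral bands
        data_term = (
            F.mse_loss(blur_down(X), Y_h, reduction='sum')
            + F.mse_loss(torch.einsum('ml,blhw->bmhw', R, X), Y_m, reduction='sum')
            + self.rho * ((A_in - V1 + U1) ** 2).sum()
            + self.rho * ((A_in - V2 + U2) ** 2).sum()
        )
        grad, = torch.autograd.grad(data_term, A_in, create_graph=self.training)
        A = A - self.step * grad

        # regularization subproblems handled by the two U-shaped proximal networks
        V1 = self.prox1(A + U1)
        V2 = self.prox2(A + U2)

        # dual (multiplier) updates
        U1 = U1 + A - V1
        U2 = U2 + A - V2
        return A, (V1, V2, U1, U2)

A crude stand-in for the degradation operator, such as blur_down = lambda x: F.avg_pool2d(x, 4), is enough to exercise the layer; in practice this operator typically encodes the sensor's blur kernel and spatial downsampling, several UnfoldedStage layers are chained, and the subspace basis E is typically estimated from the observed low-resolution HSI.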
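
The test-time aggregation fusion can be pictured with a second short sketch: the fused patches produced by all unfolded iterations are averaged, and overlapping patches are blended into the full image using a per-pixel overlap count. The function name, argument layout, and uniform (unweighted) averaging are assumptions made for illustration.

import torch


def aggregate_patches(patch_estimates, coords, image_shape, patch_size):
    """Average per-iteration fused patches and blend overlapping patches (sketch)."""
    # patch_estimates : list over patches; each entry is a list over iterations
    #                   of (bands, patch_size, patch_size) fused patch tensors
    # coords          : list of (row, col) top-left corners of each patch
    # image_shape     : (bands, H, W) of the full high-resolution HSI
    fused = torch.zeros(image_shape)        # accumulated fused image
    weight = torch.zeros(image_shape[1:])   # per-pixel overlap count
    for per_iter, (r, c) in zip(patch_estimates, coords):
        # average the fused estimates produced by all unfolded iterations
        patch = torch.stack(per_iter, dim=0).mean(dim=0)
        fused[:, r:r + patch_size, c:c + patch_size] += patch
        weight[r:r + patch_size, c:c + patch_size] += 1.0
    # normalize each pixel by how many overlapping patches covered it
    return fused / weight.clamp(min=1.0)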

