Article

Variable Subpixel Convolution Based Arbitrary-Resolution Hyperspectral Pansharpening

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TGRS.2022.3189624

Keywords

Pansharpening; Spatial resolution; Training; Standards; Task analysis; Convolution; Optimized production technology; Arbitrary resolution; convolution neural network (CNN); hyperspectral (HS) pansharpening; spatial pattern; variable subpixel convolution (VSPC)

Funding

  1. National Natural Science Foundation of China [62071184, 61571195, 42030111, 61836003]
  2. Guangdong Basic and Applied Basic Research Foundation [2022A1515011615]
  3. Guangzhou Science and Technology Program [202002030395]

In this paper, a VSPC-CNN method is proposed for arbitrary-resolution HS pansharpening. The method follows a two-stage elevating pipeline that first raises the spatial resolution of the input HS image and then adjusts it to any user-specified resolution. Experimental results show the superiority of the proposed method on both simulated and real datasets.

Standard hyperspectral (HS) pansharpening relies on fusion to enhance low-resolution HS (LRHS) images to the resolution of their matching panchromatic (PAN) images, and its practical implementation normally stipulates that the model's scale is invariant across the training and pansharpening phases. By contrast, arbitrary-resolution HS (ARHS) pansharpening seeks to pansharpen LRHS images to any user-customized resolution. For such a new HS pansharpening task, it is not feasible to train and store convolutional neural network (CNN) models for all possible candidate scales, which implies that the single model obtained in the training phase should generalize to yield HS images of any resolution in the pansharpening phase. To address this challenge, a novel variable subpixel convolution (VSPC)-based CNN (VSPC-CNN) method, following our arbitrary upsampling CNN (AU-CNN) framework, is developed for ARHS pansharpening. The VSPC-CNN method comprises a two-stage elevating thread. The first stage improves the spatial resolution of the input HS image to that of the PAN image through a pre-pansharpening module; a VSPC-encapsulated arbitrary-scale attention upsampling (ASAU) module is then cascaded for arbitrary resolution adjustment. After training with given scales, the model can be generalized to pansharpen HS images to arbitrary scales, provided the spatial patterns remain invariant across the training and pansharpening phases. Experimental results from several specific VSPC-CNNs on both simulated and real HS datasets show the superiority of the proposed method.
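
For illustration, the sketch below shows one common way to obtain the kind of arbitrary (non-integer) upsampling described in the abstract from a sub-pixel convolution: pixel-shuffle to the next trained integer scale, then resample to the exact user-requested size. This is a minimal sketch under stated assumptions, not the paper's VSPC or ASAU design; the module name ArbitraryUpsample, the max_integer_scale parameter, and the bilinear resampling step are illustrative choices not taken from the paper.

    # Illustrative sketch only: approximates arbitrary-scale upsampling with
    # standard PyTorch ops; the paper's VSPC/ASAU layers are not reproduced here.
    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ArbitraryUpsample(nn.Module):
        """Sub-pixel convolution to an integer scale, then exact-size resampling."""

        def __init__(self, channels: int, max_integer_scale: int = 4):
            super().__init__()
            self.max_scale = max_integer_scale
            # One projection conv per supported integer scale so pixel_shuffle is valid.
            self.convs = nn.ModuleDict({
                str(s): nn.Conv2d(channels, channels * s * s, kernel_size=3, padding=1)
                for s in range(2, max_integer_scale + 1)
            })

        def forward(self, x: torch.Tensor, scale: float) -> torch.Tensor:
            # Pick the smallest trained integer scale that covers the requested factor.
            s = min(max(2, math.ceil(scale)), self.max_scale)
            y = F.pixel_shuffle(self.convs[str(s)](x), s)   # integer sub-pixel upsampling
            out_h = round(x.shape[-2] * scale)               # exact target height
            out_w = round(x.shape[-1] * scale)               # exact target width
            return F.interpolate(y, size=(out_h, out_w),
                                 mode="bilinear", align_corners=False)

    # Usage: upsample a 16-band feature map by a non-integer factor of 2.5.
    feat = torch.randn(1, 16, 32, 32)
    up = ArbitraryUpsample(channels=16)
    print(up(feat, 2.5).shape)  # torch.Size([1, 16, 80, 80])

In the actual VSPC-CNN, the scale-dependent rearrangement and the attention weighting are learned, whereas this sketch falls back to fixed bilinear interpolation for the fractional part of the scale.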
