Article

Remote Sensing Image Super-Resolution via Saliency-Guided Feedback GANs

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TGRS.2020.3042515

Keywords

Visualization; Image reconstruction; Generative adversarial networks; Distortion; Sensors; Optimization; Deep learning (DL); generative adversarial network (GAN); remote sensing; saliency detection; super-resolution (SR)

Funding

  1. Beijing Natural Science Foundation [L182029]
  2. National Natural Science Foundation of China [61571050, 41771407]
  3. Beijing Normal University (BNU) Interdisciplinary Research Foundation for the First-Year Doctoral Candidates [BNUXKJC1926]

Summary

In this article, a saliency-guided feedback GAN (SG-FBGAN) is proposed to address the challenges posed by the highly diverse visual characteristics of different regions in remote sensing images (RSIs). The SG-FBGAN applies different reconstruction principles based on the saliency level of each region and uses feedback connections to improve expressivity while reducing parameters. A saliency-guided multidiscriminator measures the visual perception quality of different areas and suppresses pseudotextures. Comprehensive evaluations and ablation studies validate the effectiveness of the proposed SG-FBGAN.

Abstract

In remote sensing images (RSIs), the visual characteristics of different regions vary widely, which poses a considerable challenge to single image super-resolution (SISR). Most existing SISR methods for RSIs ignore the diverse reconstruction needs of different regions and thus face a serious trade-off between high perceptual quality and low spatial distortion: mean square error (MSE) optimization-based methods produce results of unsatisfactory visual quality, while generative adversarial networks (GANs) can produce photo-realistic but severely distorted results caused by pseudotextures. In addition, increasingly deep networks, although they provide powerful feature representations, are prone to overfitting and occupy excessive storage space. In this article, we propose a new saliency-guided feedback GAN (SG-FBGAN) to address these problems. The proposed SG-FBGAN applies different reconstruction principles to areas with different levels of saliency and uses feedback (FB) connections to improve the expressivity of the network while reducing the number of parameters. First, we propose a saliency-guided FB generator built on a carefully designed paired-feedback block (PFBB). The PFBB uses two branches, a salient and a nonsalient branch, to handle the FB information and generate powerful high-level representations for salient and nonsalient areas, respectively. Then, we measure the visual perception quality of salient areas, nonsalient areas, and the global image with a saliency-guided multidiscriminator, which dramatically reduces pseudotextures. Finally, we introduce a curriculum learning strategy that enables the SG-FBGAN to handle complex degradation models. Comprehensive evaluations and ablation studies validate the effectiveness of the proposed method.
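The abstract describes the FB generator and the PFBB only at a high level. As a rough illustration, the following is a minimal PyTorch sketch of a paired-feedback generator under stated assumptions: the number of unrolled steps, layer choices, channel widths, zero-initialized feedback state, and the saliency-mask fusion of the two branches are all illustrative guesses, not the paper's architecture.

```python
# Minimal sketch of a paired-feedback generator (illustrative only; the
# actual SG-FBGAN architecture is not specified in the abstract).
import torch
import torch.nn as nn

class PairedFeedbackBlock(nn.Module):
    """Two branches refine salient/nonsalient features; each branch receives
    the shallow features concatenated with its own output from the previous
    unrolled step -- the feedback (FB) connection."""
    def __init__(self, ch=64):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(2 * ch, ch, 3, padding=1), nn.PReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU())
        self.salient, self.nonsalient = branch(), branch()

    def forward(self, feat, fb_sal, fb_non):
        fb_sal = self.salient(torch.cat([feat, fb_sal], dim=1))
        fb_non = self.nonsalient(torch.cat([feat, fb_non], dim=1))
        return fb_sal, fb_non

class FeedbackGenerator(nn.Module):
    """Unrolls the PFBB for a few steps, sharing its weights across steps,
    and fuses the two branches with a low-resolution saliency map."""
    def __init__(self, ch=64, steps=4, scale=4):
        super().__init__()
        self.steps = steps
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.pfbb = PairedFeedbackBlock(ch)
        self.tail = nn.Sequential(
            nn.Conv2d(ch, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))  # sub-pixel upsampling

    def forward(self, lr, sal_lr):  # sal_lr: (B, 1, h, w) saliency in [0, 1]
        feat = self.head(lr)
        fb_sal = torch.zeros_like(feat)  # feedback state starts empty
        fb_non = torch.zeros_like(feat)
        outputs = []
        for _ in range(self.steps):
            fb_sal, fb_non = self.pfbb(feat, fb_sal, fb_non)
            fused = sal_lr * fb_sal + (1.0 - sal_lr) * fb_non
            outputs.append(self.tail(fused))  # one SR estimate per step
        return outputs
```

Likewise, a hedged sketch of the saliency-guided multidiscriminator idea, reusing the imports above: three discriminators score the salient region, the nonsalient region, and the global image. Splitting regions by simple mask multiplication and summing three vanilla GAN losses are assumptions for illustration; the paper's exact discriminator design and loss are not given in the abstract.

```python
class PatchDiscriminator(nn.Module):
    """Small PatchGAN-style critic, a stand-in for each discriminator."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1))  # per-patch real/fake logits

    def forward(self, x):
        return self.net(x)

class SaliencyGuidedMultiDiscriminator(nn.Module):
    """Scores salient areas, nonsalient areas, and the global image."""
    def __init__(self):
        super().__init__()
        self.d_sal, self.d_non, self.d_glob = (
            PatchDiscriminator(), PatchDiscriminator(), PatchDiscriminator())

    def forward(self, img, sal):  # sal: (B, 1, H, W) saliency map in [0, 1]
        return (self.d_sal(img * sal),
                self.d_non(img * (1.0 - sal)),
                self.d_glob(img))

def discriminator_loss(multi_d, real, fake, sal):
    """Sum of the three region-wise losses (vanilla GAN form, an assumption)."""
    bce = nn.BCEWithLogitsLoss()
    loss = 0.0
    for logit_r, logit_f in zip(multi_d(real, sal), multi_d(fake.detach(), sal)):
        loss = loss + bce(logit_r, torch.ones_like(logit_r)) \
                    + bce(logit_f, torch.zeros_like(logit_f))
    return loss
```

For example, `discriminator_loss(SaliencyGuidedMultiDiscriminator(), hr, FeedbackGenerator()(lr, sal_lr)[-1], sal_hr)` would train the three critics on the last unrolled SR estimate (with `sal_hr` the saliency map at the high resolution; both names are hypothetical). Scoring masked copies of the same image keeps all three critics fully convolutional and lets each specialize in the texture statistics of its own region, which is one plausible way to realize the region-wise perception measurement the abstract describes.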
