Article

Remote Sensing Image Super-Resolution via Saliency-Guided Feedback GANs

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2020.3042515

Keywords

Visualization; Image reconstruction; Generative adversarial networks; Distortion; Gallium nitride; Sensors; Optimization; Deep learning (DL); generative adversarial network (GAN); remote sensing; saliency detection; super-resolution (SR)

Funding

  1. Beijing Natural Science Foundation [L182029]
  2. National Natural Science Foundation of China [61571050, 41771407]
  3. Beijing Normal University (BNU) Interdisciplinary Research Foundation for the First-Year Doctoral Candidates [BNUXKJC1926]

Abstract

In this article, a saliency-guided feedback GAN (SG-FBGAN) is proposed to address the challenges posed by the versatile visual characteristics of different regions in remote sensing images (RSIs). The SG-FBGAN applies different reconstruction principles based on the saliency level of each region and uses feedback connections to improve expressivity while reducing parameters. A saliency-guided multidiscriminator is introduced to measure the visual perception quality of different areas and eliminate pseudotextures. Comprehensive evaluations and ablation studies validate the effectiveness of the proposed SG-FBGAN.
In remote sensing images (RSIs), the visual characteristics of different regions are versatile, which poses a considerable challenge to single image super-resolution (SISR). Most existing SISR methods for RSIs ignore the diverse reconstruction needs of different regions and thus face a serious contradiction between high perception quality and less spatial distortion. The mean square error (MSE) optimization-based methods produce results of unsatisfactory visual quality, while generative adversarial networks (GANs) can produce photo-realistic but severely distorted results caused by pseudotextures. In addition, increasingly deeper networks, although providing powerful feature representations, also face problems of overfitting and occupying too much storage space. In this article, we propose a new saliency-guided feedback GAN (SG-FBGAN) to address these problems. The proposed SG-FBGAN applies different reconstruction principles for areas with varying levels of saliency and uses feedback (FB) connections to improve the expressivity of the network while reducing parameters. First, we propose a saliency-guided FB generator with our carefully designed paired-feedback block (PFBB). The PFBB uses two branches, a salient and a nonsalient branch, to handle the FB information and generate powerful high-level representations for salient and nonsalient areas, respectively. Then, we measure the visual perception quality of salient areas, nonsalient areas, and the global image with a saliency-guided multidiscriminator, which can dramatically eliminate pseudotextures. Finally, we introduce a curriculum learning strategy to enable the proposed SG-FBGAN to handle complex degradation models. Comprehensive evaluations and ablation studies validate the effectiveness of our proposal.
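The core idea of applying different reconstruction principles to salient and nonsalient regions can be illustrated with a toy sketch (this is not the authors' implementation; the two branch functions and the per-pixel blend below are hypothetical stand-ins for the salient and nonsalient branches of the PFBB):

```python
import numpy as np

def salient_branch(x):
    # Hypothetical texture-oriented branch: sharpen via unsharp masking.
    blur = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
    return np.clip(x + 0.5 * (x - blur), 0.0, 1.0)

def nonsalient_branch(x):
    # Hypothetical fidelity-oriented branch: mild smoothing (MSE-friendly).
    blur = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
    return 0.5 * x + 0.5 * blur

def saliency_guided_fuse(x, saliency):
    """Blend the two branch outputs per pixel according to a saliency map in [0, 1]."""
    return saliency * salient_branch(x) + (1.0 - saliency) * nonsalient_branch(x)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
sal = np.zeros((8, 8))
sal[2:6, 2:6] = 1.0  # a square "salient" region
out = saliency_guided_fuse(img, sal)
```

In the actual SG-FBGAN, the branches are learned feedback blocks and the saliency map comes from a saliency detector, but the per-region gating shown here captures why salient areas can receive perception-oriented reconstruction while nonsalient areas favor low distortion.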
