Article

Contextual Transformation Network for Lightweight Remote-Sensing Image Super-Resolution

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TGRS.2021.3132093

Keywords

Feature extraction; Superresolution; Convolution; Remote sensing; Task analysis; Image reconstruction; Benchmark testing; Contextual feature learning; image super-resolution; lightweight neural network; remote sensing

Funding

  1. National Natural Science Foundation of China [61273273]
  2. National Key Research and Development Plan [2017YFC0112001]
  3. China Central Television [JG2018-0247]


This paper proposes a lightweight super-resolution network that reduces the network burden by replacing standard convolution layers with lightweight ones while maintaining performance. The authors introduce a lightweight convolution layer, the contextual transformation layer (CTL), for remote-sensing image super-resolution. Experimental results demonstrate the effectiveness of the proposed method on remote-sensing image super-resolution, natural image super-resolution, and denoising tasks.
Current super-resolution networks typically reduce network parameters and multi-add operations by designing lightweight structures, but lightening the convolution layer itself is often ignored. In this work, we observe that convolutions account for a high percentage of network parameters in most lightweight super-resolution networks. This motivates us to lighten super-resolution networks by replacing standard convolutions with lightweight ones while maintaining performance. To achieve this, we propose a lightweight convolution layer named the contextual transformation layer (CTL). It yields efficient contextual features through a context feature extraction module and enriches the extracted contextual features through a context feature transformation module. Based on CTLs, we build a lightweight super-resolution network called the contextual transformation network (CTN) for remote-sensing image super-resolution. Specifically, we use two CTLs to construct a contextual transformation block (CTB) for hierarchical feature learning. Interleaved with each CTB, a context enhancement module (CEM) is employed to enhance the extracted feature representations. All extracted features are processed by a contextual feature aggregation module for the final remote-sensing image super-resolution. Extensive experiments are performed on a remote-sensing image super-resolution benchmark, UC Merced. Our method achieves results superior to other state-of-the-art methods. To demonstrate the generalization ability of our CTL, we extend our CTN to two related tasks: natural image super-resolution and natural image denoising. Experimental results on natural image super-resolution benchmarks (i.e., Set5, Set14, B100, Urban100, and Manga109) and natural image denoising benchmarks (i.e., SIDD and DND) further confirm the superiority of our method. Our code is publicly available.
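The abstract does not spell out the internals of the CTL, but its core motivation is that standard convolutions dominate the parameter budget of lightweight super-resolution networks. As a rough, hypothetical illustration of why substituting a lightweight convolution helps, the sketch below compares the parameter count of a standard 3x3 convolution with a depthwise-separable substitute (a common lightweight replacement; the paper's CTL may be constructed differently).

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# Typical channel width for a lightweight SR network (illustrative value).
c_in = c_out = 64
standard = conv_params(c_in, c_out, 3)                     # 36,864 parameters
lightweight = depthwise_separable_params(c_in, c_out, 3)   # 4,672 parameters
print(f"standard: {standard}, lightweight: {lightweight}, "
      f"reduction: {standard / lightweight:.1f}x")
```

For a 64-channel 3x3 layer this substitution cuts parameters by roughly 8x, which is the kind of saving that makes replacing convolution layers (rather than only restructuring the network) worthwhile.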
