Article

Remote Sensing Image Super-Resolution Based on Dense Channel Attention Network

Journal

REMOTE SENSING
Volume 13, Issue 15, Pages -

Publisher

MDPI
DOI: 10.3390/rs13152966

Keywords

remote sensing images; super resolution; dense network; attention mechanism

Funding

  1. National Natural Science Foundation of China [42001307, 62061038]
  2. Ningxia Key R&D Program [2020BFG02013]
  3. Natural Science Foundation of Ningxia [2020AAC02006]

Abstract

The DCAN method utilizes dense channel attention and spatial attention blocks to reconstruct remote sensing images, effectively capturing high-frequency details and improving the network's discriminative ability.
In recent years, convolutional neural network (CNN)-based super-resolution (SR) methods have been widely used in the field of remote sensing. However, complicated remote sensing images contain abundant high-frequency details, which are difficult to capture and reconstruct effectively. To address this problem, we propose a dense channel attention network (DCAN) to reconstruct high-resolution (HR) remote sensing images. The proposed method learns multi-level feature information and pays more attention to the important and useful regions in order to better reconstruct the final image. Specifically, we construct a dense channel attention mechanism (DCAM), which densely reuses the feature maps from the channel attention blocks via skip connections. This mechanism makes better use of multi-level feature maps, which contain abundant high-frequency information. Further, we add a spatial attention block, which gives the network more flexible discriminative ability. Experimental results demonstrate that the proposed DCAN method outperforms several state-of-the-art methods in both quantitative evaluation and visual quality.
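The abstract gives no implementation details, so the PyTorch sketch below is only illustrative of the design it describes: a chain of channel-attention residual blocks whose outputs are densely reused via concatenation skip connections, fused by a 1x1 convolution, and refined by a spatial attention block. It assumes a squeeze-and-excitation-style channel attention and a CBAM-style spatial attention; all module names, the reduction ratio, kernel sizes, and block counts are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (a common design,
    assumed here; not necessarily the authors' exact block)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Reweight each channel by a learned global descriptor.
        return x * self.fc(self.pool(x))


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: pool over channels, then a conv
    produces a per-pixel attention map."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return x * self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class DenseChannelAttentionGroup(nn.Module):
    """Channel-attention residual blocks whose outputs are densely reused
    via concatenation skip connections, fused by a 1x1 conv, then refined
    by spatial attention."""

    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                ChannelAttention(channels),
            )
            for _ in range(num_blocks)
        )
        # 1x1 conv fuses the densely concatenated multi-level features.
        self.fuse = nn.Conv2d(channels * (num_blocks + 1), channels, 1)
        self.spatial = SpatialAttention()

    def forward(self, x):
        features = [x]
        out = x
        for block in self.blocks:
            out = block(out) + out  # residual channel-attention block
            features.append(out)    # keep every level for dense reuse
        fused = self.fuse(torch.cat(features, dim=1))
        return self.spatial(fused) + x  # long skip preserves low frequencies


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)           # a batch of shallow feature maps
    y = DenseChannelAttentionGroup(64)(x)
    print(y.shape)                            # torch.Size([1, 64, 48, 48])
```

In a full SR network, one or more such groups would typically sit between a shallow feature extractor and an upsampling tail (e.g., sub-pixel convolution) that maps the fused features to the HR image; those surrounding components are omitted from this sketch.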
