4.7 Article

LANet: Local Attention Embedding to Improve the Semantic Segmentation of Remote Sensing Images

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TGRS.2020.2994150

Keywords

Semantics; Image segmentation; Feature extraction; Decoding; Remote sensing; Correlation; Convolutional neural network (CNN); Deep learning; Semantic segmentation

Funding

  1. China Scholarship Council [201703170123]

Two proposed modules, Patch Attention Module (PAM) and Attention Embedding Module (AEM), enhance feature representation in remote sensing images by bridging the gap between high-level and low-level features. Experimental results show that integrating these modules into a baseline fully convolutional network greatly improves performance and outperforms other attention-based methods.
The trade-off between feature representation power and spatial localization accuracy is crucial for the dense classification/semantic segmentation of remote sensing images (RSIs). High-level features extracted from the late layers of a neural network are rich in semantic information, yet have blurred spatial details; low-level features extracted from the early layers of a network contain more pixel-level information but are isolated and noisy. It is therefore difficult to bridge the gap between high- and low-level features due to their differences in physical information content and spatial distribution. In this article, we contribute to solving this problem by enhancing the feature representation in two ways. On the one hand, a patch attention module (PAM) is proposed to enhance the embedding of context information based on a patchwise calculation of local attention. On the other hand, an attention embedding module (AEM) is proposed to enrich the semantic information of low-level features by embedding local focus from high-level features. Both proposed modules are lightweight and can be applied to process the extracted features of convolutional neural networks (CNNs). Experiments show that, by integrating the proposed modules into a baseline fully convolutional network (FCN), the resulting local attention network (LANet) greatly improves performance over the baseline and outperforms other attention-based methods on two RSI data sets.
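To make the described design more concrete, the following is a minimal PyTorch sketch of how a patchwise local attention module (PAM) and an attention embedding module (AEM) along the lines of the abstract could be implemented. The patch size, channel reduction, pooling operation, and layer choices are illustrative assumptions based only on this description, not the authors' released implementation.

```python
# Minimal sketch of PAM/AEM-style modules as described in the abstract.
# Patch size, channel reduction, and layer choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchAttentionModule(nn.Module):
    """Patchwise local attention: pool features over local patches, predict a
    per-patch attention map, and re-weight the input features with it."""

    def __init__(self, channels: int, patch_size: int = 4, reduction: int = 8):
        super().__init__()
        self.patch_size = patch_size
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Patchwise context: average-pool each local patch.
        pooled = F.avg_pool2d(x, kernel_size=self.patch_size)
        # Attention is computed per patch, then upsampled to the input size.
        attn = F.interpolate(self.attn(pooled), size=x.shape[-2:], mode="nearest")
        return x * attn


class AttentionEmbeddingModule(nn.Module):
    """Embed local focus from high-level features into low-level features."""

    def __init__(self, high_channels: int, low_channels: int,
                 patch_size: int = 4, reduction: int = 8):
        super().__init__()
        self.patch_size = patch_size
        self.attn = nn.Sequential(
            nn.Conv2d(high_channels, high_channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(high_channels // reduction, low_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Patchwise attention derived from the semantically rich high-level features...
        pooled = F.avg_pool2d(high, kernel_size=self.patch_size)
        attn = F.interpolate(self.attn(pooled), size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        # ...re-weights the spatially detailed low-level features.
        return low * attn


if __name__ == "__main__":
    low = torch.randn(1, 64, 128, 128)   # early-layer (low-level) features
    high = torch.randn(1, 512, 16, 16)   # late-layer (high-level) features
    print(PatchAttentionModule(512)(high).shape)               # (1, 512, 16, 16)
    print(AttentionEmbeddingModule(512, 64)(low, high).shape)  # (1, 64, 128, 128)
```

Because the attention in this sketch is predicted per patch rather than per pixel and uses only 1x1 convolutions, both modules add little computation, which is consistent with the abstract's claim that the proposed modules are lightweight.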
