Article

LPASS-Net: Lightweight Progressive Attention Semantic Segmentation Network for Automatic Segmentation of Remote Sensing Images

Journal

REMOTE SENSING
Volume 14, Issue 23

Publisher

MDPI
DOI: 10.3390/rs14236057

Keywords

lightweight network; attention mechanism; very high resolution; deep learning

Funding

  1. Basic Science Research Program through the National Research Foundation of Korea (NRF) - Ministry of Education [2016R1D1A1B02011625]

Abstract

Semantic segmentation of remote sensing images is crucial in urban planning and development. This paper proposes a lightweight progressive attention semantic segmentation network (LPASS-Net) that reduces computational cost without sacrificing accuracy.
Semantic segmentation of remote sensing images plays a crucial role in urban planning and development. Performing automatic, fast, and effective semantic segmentation of very large, high-resolution remote sensing images has become a key research problem. However, existing segmentation methods based on deep learning are complex and often difficult to apply in practice because of the high computational cost of their excessive parameters. In this paper, we propose an end-to-end lightweight progressive attention semantic segmentation network (LPASS-Net), which aims to reduce computational cost without losing accuracy. Firstly, its backbone is built on a lightweight network, MobileNetv3, combined with a feature fusion network based on reverse progressive attentional feature fusion. Additionally, a lightweight non-local convolutional attention network (LNCA-Net) is proposed to effectively integrate the global information of the attention mechanism in the spatial dimension. Secondly, an edge padding cut prediction (EPCP) method is proposed to eliminate splicing traces in the prediction results. Finally, evaluated on the public BDCI 2017 and ISPRS Potsdam datasets, LPASS-Net reaches an mIoU of 83.17% and 88.86%, respectively, with an inference time of 0.0271 s.
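
The abstract does not spell out how the edge padding cut prediction (EPCP) step removes splicing traces, but the overlap-tile strategy it points to (pad the image, predict overlapping tiles, keep only each tile's central crop) can be sketched as below. This is a minimal NumPy illustration written under that assumption, not the paper's actual EPCP implementation: the function name `predict_with_edge_padding`, the callable `predict_tile`, and the `tile`/`margin` values are hypothetical, and the input is assumed to be an H×W×C array larger than one tile.

```python
import numpy as np

def predict_with_edge_padding(image, predict_tile, tile=512, margin=64):
    """Tile a large image with overlapping margins, run the model on each
    padded tile, and keep only the central crop of every prediction so that
    no splicing seams appear at tile borders.

    `predict_tile` is any callable mapping a (tile, tile, C) array to a
    (tile, tile) label map, e.g. a wrapped segmentation-network forward pass.
    Assumes the image is larger than one tile in both spatial dimensions.
    """
    h, w = image.shape[:2]
    stride = tile - 2 * margin
    # Reflect-pad the whole image so every tile, including the border ones,
    # has a full `margin` of plausible context on all sides.
    padded = np.pad(
        image,
        ((margin, margin + stride), (margin, margin + stride), (0, 0)),
        mode="reflect",
    )
    out = np.zeros((h + stride, w + stride), dtype=np.int64)

    for top in range(0, h, stride):
        for left in range(0, w, stride):
            window = padded[top:top + tile, left:left + tile]
            pred = predict_tile(window)
            # Discard the outer `margin` pixels of the prediction, which are
            # the least reliable, and keep only the central crop.
            out[top:top + stride, left:left + stride] = \
                pred[margin:margin + stride, margin:margin + stride]
    return out[:h, :w]
```

Keeping only the central `stride x stride` region of each prediction means every output pixel is computed with at least `margin` pixels of context on all sides, which is what suppresses visible seams where tiles meet.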
