Article

GRSNet: Gated Residual Supervision Network for pixel-wise building segmentation in remote sensing imagery

Journal

INTERNATIONAL JOURNAL OF REMOTE SENSING
Volume 43, Issue 13, Pages 4872-4887

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/01431161.2022.2122892

Keywords

building extraction; deep learning; semantic segmentation; high-resolution imagery; remote sensing


The rapid development of imaging technology has made aerial image analysis one of the most widely used fields in image processing. Building extraction is a basic step in analysing urban structures, detecting construction violations, updating urban geographical divisions, and forecasting natural disasters. This study aims to automatically segment buildings in high-resolution satellite images using a new hybrid deep learning model, the Gated Residual Supervision Network (GRSNet). GRSNet extends the UNet framework with three key components, i.e. attention gates (AG), residual units, and a deep supervision branch, with the main focus on transferring and reusing features. First, the AG mechanism merges fine channel and spatial features effectively, while deep supervision propagates feature details from the deeper layers of the network. Then, residual units retrieve information at different levels, speeding up model training. Finally, a fully connected classifier recovers features from the input image. GRSNet is evaluated on the public Massachusetts Buildings dataset and the Inria Aerial Image Labeling benchmark. The results show the superiority of the proposed method over other deep learning-based state-of-the-art building segmentation methods, with an intersection over union (IoU) of 89.86% and an F1-score of 94.53%.
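The abstract's attention-gate (AG) component can be illustrated with a minimal NumPy sketch of the additive attention-gate mechanism popularized by Attention U-Net, which gates skip-connection features with a coarser decoder signal. This is an illustrative sketch only, not the paper's implementation; the names `attention_gate`, `W_x`, `W_g`, and `psi` are hypothetical, and real implementations use learned 1x1 convolutions rather than fixed matrices.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate (Attention U-Net style), illustrative only.

    x   : skip-connection features from the encoder, shape (H, W, C)
    g   : gating signal from a coarser decoder level, already
          upsampled to the same spatial size, shape (H, W, C)
    W_x : 1x1 projection of x, shape (C, F)  (stand-in for a learned conv)
    W_g : 1x1 projection of g, shape (C, F)
    psi : attention projection to a scalar per pixel, shape (F, 1)

    Returns x scaled per pixel by attention coefficients in (0, 1),
    so irrelevant regions of the skip features are suppressed.
    """
    q = relu(x @ W_x + g @ W_g)   # joint feature map, shape (H, W, F)
    alpha = sigmoid(q @ psi)      # attention coefficients, shape (H, W, 1)
    return x * alpha              # gated skip features, shape (H, W, C)

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 8))
g = rng.standard_normal((4, 4, 8))
out = attention_gate(x, g,
                     W_x=rng.standard_normal((8, 4)),
                     W_g=rng.standard_normal((8, 4)),
                     psi=rng.standard_normal((4, 1)))
print(out.shape)  # same shape as x: (4, 4, 8)
```

Because the coefficients pass through a sigmoid, each output value is the corresponding skip feature attenuated toward zero, which is the sense in which the gate "merges" channel and spatial information before the decoder reuses it.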

