Article

Semantic Segmentation of Large-Size VHR Remote Sensing Images Using a Two-Stage Multiscale Training Architecture

Journal

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
Volume 58, Issue 8, Pages 5367-5376

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2020.2964675

Keywords

Convolutional neural network; deep learning; remote sensing; semantic segmentation

Funding

  1. China Scholarship Council [201703170123]


Very-high-resolution (VHR) remote sensing images (RSIs) have a significantly larger spatial size than the typical natural images used in computer vision applications. Training and testing classifiers on these images at full size is therefore computationally prohibitive. Commonly used methodologies for semantic segmentation of RSIs instead perform training and prediction on cropped image patches, and consequently fail to incorporate sufficient context information. To better exploit the correlations between ground objects, we propose a deep architecture with a two-stage multiscale training strategy tailored to the semantic segmentation of large-size VHR RSIs. In the first training stage, a semantic embedding network learns high-level features from downscaled images covering a large area. In the second training stage, a local feature extraction network introduces low-level information from cropped image patches. The resulting training strategy fuses complementary information learned at multiple levels to make predictions. Experimental results on two data sets show that it outperforms local-patch-based training models in terms of both accuracy and stability.
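The two-stage idea described in the abstract can be illustrated with a minimal sketch, assuming average-pooling for downscaling and simple concatenation for fusion (the function names, pooling method, and fusion step are illustrative assumptions, not the authors' actual implementation):

```python
import numpy as np

def downscale(image, factor):
    """Average-pool a (H, W) image by an integer factor.
    Stage 1: the coarse view covers a large area, providing global context."""
    h, w = image.shape
    return image[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def crop_patch(image, top, left, size):
    """Stage 2: a full-resolution local patch supplies low-level detail."""
    return image[top:top + size, left:left + size]

def fuse(global_feat, local_feat):
    """Fuse complementary global and local features (here: concatenation)."""
    return np.concatenate([global_feat.ravel(), local_feat.ravel()])

image = np.arange(64, dtype=float).reshape(8, 8)   # toy stand-in for a VHR image
g = downscale(image, 4)         # stage-1 input: 2x2 coarse summary
p = crop_patch(image, 0, 0, 4)  # stage-2 input: 4x4 local patch
fused = fuse(g, p)              # features passed to the final classifier
```

In the actual architecture, the two stages are learned networks rather than fixed pooling and cropping; the sketch only shows how a coarse large-area view and a fine local patch provide complementary information.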

