Article

Split Depth-Wise Separable Graph-Convolution Network for Road Extraction in Complex Environments From High-Resolution Remote-Sensing Images

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2021.3128033

Keywords

Depth-wise (DW) separable convolution; gradient operator; graph convolution; remote sensing; road extraction

Funding

  1. Fundamental Research Funds for the Natural Science Foundation of China [U1803117, 41925007, 42071430]

Abstract

The article introduces a method to improve the accuracy of road extraction from high-resolution remote-sensing images using a split depth-wise separable graph convolutional network. The results of the experiment show that this method performs better in extracting covered and tiny roads.
Road information extracted from high-resolution remote-sensing images is widely used in many fields, and deep-learning-based methods have achieved high road-extraction performance. However, for roads sealed with tarmac or covered by trees in high-resolution remote-sensing images, several challenges still limit extraction accuracy: 1) large intraclass differences among roads and unclear interclass differences between urban objects, especially between roads and buildings; 2) roads occluded by trees, shadows, and buildings are difficult to extract; and 3) a lack of high-precision remote-sensing road datasets. To increase the accuracy of road extraction from high-resolution remote-sensing images, we propose a split depth-wise (DW) separable graph convolutional network (SGCN). First, we split the DW-separable convolution to obtain channel and spatial features separately, enhancing the expressive ability of road features. We then present a graph convolutional network that captures global contextual road information in the channel and spatial features, using the Sobel gradient operator to construct the adjacency matrix of the feature graph. For comparison with the proposed SGCN, 13 deep-learning networks were evaluated on the Massachusetts roads dataset and nine on our self-constructed mountain road dataset. Our model achieved a mean intersection over union (mIOU) of 81.65% with an F1-score of 78.99% on the Massachusetts roads dataset, and an mIOU of 62.45% with an F1-score of 45.06% on our proposed dataset. The visualization results show that the SGCN performs better in extracting covered and tiny roads and can effectively extract roads from high-resolution remote-sensing images.
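The abstract describes splitting a DW-separable convolution into its two stages: a depth-wise (per-channel spatial) filter and a point-wise (1×1 cross-channel) filter. The following NumPy sketch illustrates that split in a minimal, hypothetical form; the function name, array shapes, and the use of a Sobel kernel as the depth-wise filter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Illustrative split of a depth-wise separable convolution.

    x          : (C, H, W) input feature map
    dw_kernels : (C, k, k) one spatial kernel per input channel (depth-wise stage)
    pw_weights : (C_out, C) 1x1 point-wise mixing weights (channel stage)

    Returns the intermediate spatial features and the final channel-mixed
    features, mirroring the idea of obtaining spatial and channel features
    from the two stages separately.
    """
    C, H, W = x.shape
    k = dw_kernels.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))  # zero padding, "same" output size

    # Depth-wise stage: each channel is filtered only by its own k x k kernel.
    spatial = np.empty_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                spatial[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])

    # Point-wise stage: a 1x1 convolution mixes information across channels.
    channel = np.tensordot(pw_weights, spatial, axes=([1], [0]))  # (C_out, H, W)
    return spatial, channel
```

Using a Sobel kernel as the depth-wise filter (as the paper uses Sobel gradients when building the feature-graph adjacency matrix) yields per-channel gradient maps in the spatial stage, which the point-wise stage then combines across channels.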
