Journal
Publisher
ELSEVIER
DOI: 10.1016/j.jag.2022.102970
Keywords
Building extraction; Structural-cue-guided feature alignment; Convolutional neural network (convNet)
Funding
- Natural Science Foundation of China Projects (NSFC) [42192583, 41471354]
In surveying, mapping and geographic information systems, building extraction from remote sensing imagery is a common task. However, automatic building extraction still faces several challenges. First, single-scale deep features alone cannot account for the variability of building attributes such as hue and texture in images, so the results are prone to missed detections. Moreover, extracted high-level features often lose structural information and exhibit scale differences with low-level features, which leads to less accurate boundary extraction. To address these problems simultaneously, we propose pyramid feature extraction (PFE) to construct multi-scale representations of buildings, inspired by the feature extraction of the scale-invariant feature transform (SIFT). We also apply attention modules along the channel and spatial dimensions to the PFE and low-level feature maps. Furthermore, we use a structural-cue-guided feature alignment module to learn the correlation between feature maps at different levels, obtaining high-resolution features with strong semantic representation and ensuring the integrity of high-level features in both the structural and semantic dimensions. An edge loss is applied to obtain highly accurate building boundaries. On the WHU Building Dataset, our method achieves an F1 score of 95.3% and an Intersection over Union (IoU) of 90.9%; on the Massachusetts Buildings Dataset, it achieves an F1 score of 85.0% and an IoU of 74.1%.
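The abstract's pipeline (multi-scale pyramid features, channel attention, and cross-level feature alignment) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the pyramid here uses simple 2x2 average pooling, the channel attention is a plain sigmoid gate on globally pooled activations, and the "alignment" is nearest-neighbour upsampling plus addition standing in for the learned structural-cue-guided module. All function names and shapes are illustrative assumptions.

```python
import numpy as np

def downsample(feat):
    """Halve spatial resolution by 2x2 average pooling; feat is (C, H, W)."""
    c, h, w = feat.shape
    return feat[:, :h // 2 * 2, :w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def pyramid_features(feat, levels=3):
    """SIFT-style multi-scale pyramid: list of (C, H/2^k, W/2^k) maps."""
    pyr = [feat]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def channel_attention(feat):
    """Reweight channels by a sigmoid gate on their global average activation."""
    pooled = feat.mean(axis=(1, 2))             # (C,) channel descriptors
    weights = 1.0 / (1.0 + np.exp(-pooled))     # sigmoid gate per channel
    return feat * weights[:, None, None]

def align_and_fuse(low, high):
    """Upsample the high-level map to the low-level grid (nearest neighbour)
    and fuse by addition -- a stand-in for the learned alignment module."""
    c, h, w = low.shape
    ys = np.arange(h) * high.shape[1] // h      # row indices into the coarse map
    xs = np.arange(w) * high.shape[2] // w      # column indices into the coarse map
    return low + high[:, ys][:, :, xs]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 64, 64))         # toy (C, H, W) feature map
pyr = pyramid_features(feat, levels=3)
fused = align_and_fuse(channel_attention(pyr[0]), channel_attention(pyr[-1]))
print([p.shape for p in pyr], fused.shape)
```

The fused map keeps the fine 64x64 resolution of the lowest pyramid level while mixing in gated coarse-scale context, which is the intuition behind combining high-resolution detail with strong semantics described above.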
Authors