Article

Building Extraction in Very High Resolution Imagery by Dense-Attention Networks

Journal

REMOTE SENSING
Volume 10, Issue 11, Pages -

Publisher

MDPI
DOI: 10.3390/rs10111768

Keywords

building extraction; deep learning; attention mechanism; very high resolution; imagery

Funding

  1. National Natural Science Foundation of China [41501376, 41571400]
  2. Natural Science Foundation of Anhui Province [1608085MD83]
  3. Key Laboratory of Earth Observation and Geospatial Information Science of NASG [201805]
  4. Science Research Project of Anhui Education Department [KJ2018A0007]
  5. Open Fund for Discipline Construction, Institute of Physical Science and Information Technology, Anhui University

Abstract

Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, geographic database updating, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance on this task by exploiting both high-level and low-level feature maps. However, current deep learning networks struggle to combine features from different levels rationally. To tackle this problem, a novel network based on DenseNets and the attention mechanism was proposed, called the dense-attention network (DAN). The DAN consists of an encoder and a decoder, composed of lightweight DenseNets and a spatial attention fusion module, respectively. The proposed encoder-decoder architecture strengthens feature propagation and effectively uses higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red-green-blue (RGB) images, demonstrated that the proposed DAN achieved higher scores (96.16% overall accuracy (OA), 92.56% F1 score, and 90.56% mean intersection over union (MIoU)), less training and response time, and a higher quality value when compared with other deep learning methods.
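The core idea of the spatial attention fusion described above can be sketched numerically: a per-pixel attention map derived from the high-level features gates (suppresses) the low-level features before the two are fused. This is a minimal NumPy illustration, not the authors' implementation; the 1x1-convolution analogue (`w`, `b`), the sigmoid gating, and the additive fusion are all assumptions made for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fusion(low_feat, high_feat, w, b=0.0):
    """Hypothetical sketch of a spatial attention fusion step.

    low_feat, high_feat: arrays of shape (H, W, C);
    w: weight vector of shape (C,) standing in for a 1x1 convolution;
    b: scalar bias.
    """
    # Per-pixel attention score in (0, 1), computed from high-level features.
    attn = sigmoid(high_feat @ w + b)           # shape (H, W)
    # Reweight (suppress) the low-level features with the attention map.
    gated = low_feat * attn[..., None]          # shape (H, W, C)
    # Fuse the gated low-level path with the high-level path.
    return gated + high_feat
```

Pixels where the high-level features produce a small attention score contribute little low-level detail to the fused map, which is one simple way higher-level semantics can suppress low-level noise.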

