Article

Understanding urban landuse from the above and ground perspectives: A deep learning, multimodal solution

Journal

REMOTE SENSING OF ENVIRONMENT
Volume 228, Pages 129-143

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.rse.2019.04.014

Keywords

Landuse characterization; Convolutional neural networks; Overhead imagery; Ground-based pictures; Volunteered geographic information; Urban areas; Multi-modal; Canonical correlation analysis; Missing modality

Funding

  1. Swiss National Science Foundation [PZ00P2-136827]
  2. FAPESP [2016/14760-5, 2017/10086-0]

Abstract

Landuse characterization is important for urban planning. It is traditionally performed with field surveys or manual photo interpretation, two practices that are time-consuming and labor-intensive. Therefore, we aim to automate landuse mapping at the urban-object level with a deep learning approach based on data from multiple sources (or modalities). We consider two image modalities: overhead imagery from Google Maps and ensembles of ground-based pictures (side-views) per urban-object from Google Street View (GSV). These modalities bring complementary visual information pertaining to the urban-objects. We propose an end-to-end trainable model, which uses OpenStreetMap annotations as labels. The model can accommodate a variable number of GSV pictures for the ground-based branch and can also function in the absence of ground pictures at prediction time. We test the effectiveness of our model over the area of Île-de-France, France, and test its generalization abilities on a set of urban-objects from the city of Nantes, France. Our proposed multimodal Convolutional Neural Network achieves considerably higher accuracies than methods that use a single image modality, making it suitable for automatic landuse map updates. Additionally, our approach could easily be scaled to multiple cities, because it is based on data sources available for many cities worldwide.
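The abstract sketches the key architectural ideas: a two-branch CNN in which one branch encodes the overhead tile and the other aggregates a variable-size set of Street View pictures, with the model remaining usable when the ground modality is missing. Below is a minimal PyTorch-style sketch of that scheme. It is an illustrative assumption rather than the authors' published model: the class name `MultimodalLanduseCNN`, the tiny convolutional encoders, the average pooling over per-picture features, and the learned placeholder vector for missing ground imagery are all stand-ins (the keywords suggest the published approach also involves canonical correlation analysis, which this sketch omits).

```python
import torch
import torch.nn as nn


class MultimodalLanduseCNN(nn.Module):
    """Illustrative two-branch model: one encoder for the overhead image,
    one shared encoder applied to each ground-level (GSV) picture.
    Per-picture features are average-pooled, so any number of pictures
    (including zero) is accepted per urban-object."""

    def __init__(self, num_classes: int, feat_dim: int = 64):
        super().__init__()

        def make_encoder() -> nn.Sequential:
            # Deliberately small; a real system would likely use a
            # pretrained backbone per branch.
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )

        self.overhead_enc = make_encoder()
        self.ground_enc = make_encoder()
        # Learned placeholder used when the ground modality is absent.
        self.missing_ground = nn.Parameter(torch.zeros(feat_dim))
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, overhead: torch.Tensor, ground_list=None) -> torch.Tensor:
        # overhead: (B, 3, H, W); ground_list: list of B tensors, the i-th
        # of shape (N_i, 3, h, w), where N_i may differ per object or be 0.
        f_over = self.overhead_enc(overhead)
        pooled = []
        for i in range(overhead.size(0)):
            imgs = None if ground_list is None else ground_list[i]
            if imgs is None or imgs.size(0) == 0:
                pooled.append(self.missing_ground)  # no GSV pictures available
            else:
                pooled.append(self.ground_enc(imgs).mean(dim=0))
        f_ground = torch.stack(pooled)
        return self.classifier(torch.cat([f_over, f_ground], dim=1))


model = MultimodalLanduseCNN(num_classes=10)
tiles = torch.randn(2, 3, 128, 128)                           # two overhead tiles
gsv = [torch.randn(4, 3, 96, 96), torch.randn(0, 3, 96, 96)]  # 4 pictures, then none
print(model(tiles, gsv).shape)                                # torch.Size([2, 10])
```

Average pooling is one simple way to make the ground branch invariant to the number and order of pictures, and the placeholder parameter lets the same classifier run for urban-objects with no GSV coverage; both are design choices of this sketch, not claims about the paper.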
