Article

Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks

Journal

Publisher

ELSEVIER
DOI: 10.1016/j.isprsjprs.2017.11.011

Keywords

Deep learning; Remote sensing; Semantic mapping; Data fusion

Funding

  1. Total-ONERA research project NAOMI


In this work, we investigate various methods for the semantic labeling of very high resolution multimodal remote sensing data. In particular, we study how deep fully convolutional networks can be adapted to handle multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: (a) we present an efficient multi-scale approach that leverages both a large spatial context and the high resolution data, (b) we investigate early and late fusion of Lidar and multispectral data, and (c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover from errors stemming from ambiguous data, while early fusion allows for better joint-feature learning, but at the cost of higher sensitivity to missing data. (C) 2017 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
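The early-vs-late fusion distinction described in the abstract can be illustrated schematically. The sketch below is not the paper's actual network: it substitutes a fixed random 1x1 projection for a trained fully convolutional network, and the array shapes, seeds, and helper name `segment` are assumptions for illustration. Early fusion stacks the Lidar and multispectral channels before feature extraction; late fusion runs one stream per modality and averages their per-class score maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: a 3-channel multispectral patch and a 1-channel Lidar height map.
H, W = 8, 8
ms = rng.random((3, H, W))      # multispectral channels
lidar = rng.random((1, H, W))   # Lidar-derived digital surface model

def segment(x, n_classes=6, seed=1):
    """Stand-in for a fully convolutional network: a fixed random 1x1
    'convolution' mapping input channels to per-class score maps."""
    w = np.random.default_rng(seed).standard_normal((n_classes, x.shape[0]))
    return np.einsum('kc,chw->khw', w, x)

# Early fusion: concatenate modalities channel-wise and learn joint features
# in a single network (better joint-feature learning, but the whole input
# degrades if one modality is missing).
early_scores = segment(np.concatenate([ms, lidar], axis=0))

# Late fusion: one network per modality, then average the class scores,
# letting one stream recover from errors stemming from ambiguity in the other.
late_scores = (segment(ms, seed=2) + segment(lidar, seed=3)) / 2

# Per-pixel class maps, shape (H, W).
early_labels = early_scores.argmax(axis=0)
late_labels = late_scores.argmax(axis=0)
```

In the paper's setting the random projection would be replaced by trained FCN streams, and the late-fusion average by a learned combination of the two sets of score maps.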

