Article

Automated LoD-2 model reconstruction from very-high-resolution satellite-derived digital surface model and orthophoto

Journal

ISPRS Journal of Photogrammetry and Remote Sensing

Publisher

Elsevier
DOI: 10.1016/j.isprsjprs.2021.08.025

Keywords

LoD-2 Building Modeling; Data-driven; Decomposition and merging; Multi-stereo satellite images

Funding

  1. Office of Naval Research [N000141712928]
  2. U.S. Department of Defense (DOD) [N000141712928]

Abstract

Digital surface models (DSMs) generated from multi-stereo satellite images are improving in quality owing to higher data resolution and better photogrammetric reconstruction algorithms. Very-high-resolution (VHR, sub-meter level) satellite images act as a unique data source for 3D building modeling, because they provide much wider data coverage at lower cost than the traditionally used LiDAR and airborne photogrammetry data. Although 3D building modeling from point clouds has been intensively investigated, most methods are still tailored to specific types of buildings and require high-quality, high-resolution data as input. When applied to satellite-based point clouds or DSMs, these approaches are therefore not readily applicable, and more adaptive and robust methods are needed. As a result, most existing work on building modeling from satellite DSMs achieves only LoD-1 generation. In this paper, we propose a model-driven method that reconstructs LoD-2 building models following a decomposition-optimization-fitting paradigm. The proposed method starts with building detection results from a deep learning-based detector and vectorizes individual segments into polygons using a three-step polygon extraction method, followed by a novel grid-based decomposition method that decomposes the complex and irregularly shaped building polygons into tightly combined elementary building rectangles ready to fit elementary building models. We optionally introduce OpenStreetMap (OSM) data and Graph-Cut (GC) labeling to further refine the orientation of the 2D building rectangles. The 3D modeling step takes building-specific parameters such as hip lines, as well as non-rigid and regularized transformations, to optimize the flexibility of using a minimal set of elementary models. Finally, the roof type of each building model is refined, and adjacent building models within one building segment are merged into a complex polygonal model.
Our proposed method addresses several technical caveats of existing methods and produces high-quality results in practice, as shown by our evaluation and comparative study on a diverse set of experimental datasets covering cities with different urban patterns.
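To illustrate the grid-based decomposition step described in the abstract, the sketch below greedily covers a rasterized building footprint with elementary axis-aligned rectangles. This is a minimal illustration of the general idea, not the authors' implementation: the function name, the greedy covering strategy, and the toy L-shaped footprint are all assumptions introduced here.

```python
# Hedged sketch of grid-based footprint decomposition (illustrative only,
# not the paper's algorithm): occupied grid cells are greedily covered
# with maximal axis-aligned rectangles, the "elementary building
# rectangles" that elementary roof models would later be fitted to.

def decompose_into_rectangles(cells):
    """Cover a set of occupied (row, col) cells with axis-aligned
    rectangles, each returned as (r0, c0, r1, c1), inclusive bounds."""
    remaining = set(cells)
    rects = []
    while remaining:
        r0, c0 = min(remaining)  # top-left-most remaining cell
        # Grow rightward while cells in the seed row stay occupied.
        c1 = c0
        while (r0, c1 + 1) in remaining:
            c1 += 1
        # Grow downward while the entire column span stays occupied.
        r1 = r0
        while all((r1 + 1, c) in remaining for c in range(c0, c1 + 1)):
            r1 += 1
        rects.append((r0, c0, r1, c1))
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                remaining.discard((r, c))
    return rects

# Toy L-shaped footprint: a 2x4 bar plus a 2x2 foot.
footprint = {(r, c) for r in range(2) for c in range(4)} | \
            {(r, c) for r in range(2, 4) for c in range(2)}
print(decompose_into_rectangles(footprint))  # → [(0, 0, 1, 3), (2, 0, 3, 1)]
```

The L-shape splits into two tightly combined rectangles, each of which could then be matched against a small library of elementary roof models (flat, gable, hip) in the subsequent fitting step.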


Reviews

Primary Rating

4.7
Not enough ratings

