Article

3D Urban Buildings Extraction Based on Airborne LiDAR and Photogrammetric Point Cloud Fusion According to U-Net Deep Learning Model Segmentation

Journal

IEEE ACCESS
Volume 10, Pages 20889-20897

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2022.3152744

Keywords

Buildings; Point cloud compression; Three-dimensional displays; Laser radar; Image segmentation; Deep learning; Data mining; Building extraction; point clouds; U-Net; deep learning; segmentation; difference of normals

Funding

  1. Key Area Research and Development Program of Guangdong Province [2020B0101130009]
  2. Guangdong Enterprise Key Laboratory for Urban Sensing, Monitoring and Early Warning [2020B121202019]
  3. Smart Guangzhou Spatiotemporal Information Cloud Platform Construction [GZIT2016-A5-147]
  4. Gao Fen Project of China [30-Y20A34-9010-15/17]
  5. Construction of Public Service Platform: Building Information Modeling (BIM) [TC19083WA]
  6. Construction of Public Service Platform: City Information Modeling (CIM)-Based Integrated Perspective [TC19083WA]


This article presents and tests a fusion process for LiDAR and photogrammetric point clouds based on U-Net deep learning model segmentation. The results show that the U-Net method is effective for high-resolution image segmentation and that the fused point clouds, with their high point density and RGB color information, improve building extraction.
A LiDAR and photogrammetric point cloud fusion procedure for building extraction based on U-Net deep learning model segmentation is presented and tested. First, an initial geo-localization is performed for the photogrammetric point clouds generated with structure-from-motion and dense-matching methods. Then, point cloud segmentation is carried out using the U-Net deep learning model. The precision of the U-Net model for building extraction reaches 87%, with an F-score of 0.89 and an IoU of 0.80, showing that the U-Net method is effective for high-resolution image segmentation: detailed features, such as vegetation located between buildings and roads, can be accurately identified and extracted. After segmentation, each chunk of the LiDAR and photogrammetric point clouds is finely registered and merged using the iterative closest point (ICP) algorithm to obtain the fused point clouds. The structure and shape of the buildings can be delineated from the fused point clouds when enough ground points and a sufficiently high point density are available; in addition, the color information improves both visualization and property identification. Experiments were conducted to extract individual buildings from the three types of point clouds in three plots, using a Difference of Normals (DoN) approach to isolate 3D buildings from other objects in densely built-up areas. Most building extraction results achieve a Precision above 0.9 together with favorable Recall and F-score values. Although the LiDAR extraction results have some advantage in Precision over the photogrammetric and fused ones, the Recall and F-score values are best for the fused point clouds, indicating that the high point density and RGB color information of the fused data improve building extraction.
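The DoN step described above can be illustrated with a brute-force NumPy sketch. This is not the authors' implementation (the paper does not publish code), and the search radii, threshold, and toy scene below are illustrative assumptions: a surface normal is estimated by PCA at a small and a large support radius, and the magnitude of their scaled difference flags points whose local geometry changes between the two scales (building edges, facades), while smooth surfaces such as ground and flat roofs score near zero.

```python
import numpy as np

def estimate_normal(points, center, radius):
    """PCA surface normal of the neighborhood of `center` within `radius`."""
    nbrs = points[np.linalg.norm(points - center, axis=1) <= radius]
    if len(nbrs) < 3:
        return np.zeros(3)  # not enough support to fit a plane
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    _, eigvecs = np.linalg.eigh(cov)
    n = eigvecs[:, 0]              # eigenvector of the smallest eigenvalue
    return n if n[2] >= 0 else -n  # orient all normals into the +Z half-space

def difference_of_normals(points, r_small, r_large):
    """Per-point DoN magnitude ||(n_small - n_large) / 2||.

    Near zero on surfaces that look flat at both scales; large where the
    geometry differs between the small and the large support radius."""
    return np.array([
        np.linalg.norm(0.5 * (estimate_normal(points, p, r_small)
                              - estimate_normal(points, p, r_large)))
        for p in points
    ])

# Toy scene: a flat ground plane sampled on a regular grid. Every point
# has the same normal at both scales, so the DoN magnitude is ~0 and the
# whole plane survives a smoothness filter.
xs, ys = np.meshgrid(np.linspace(0.0, 10.0, 21), np.linspace(0.0, 10.0, 21))
plane = np.c_[xs.ravel(), ys.ravel(), np.zeros(xs.size)]
don = difference_of_normals(plane, r_small=0.8, r_large=2.5)
smooth_mask = don < 0.25  # threshold is illustrative, not from the paper
```

Production implementations (e.g., the DoN filter in the Point Cloud Library) use KD-tree neighbor searches instead of the O(n^2) brute-force query above, but the scale-comparison logic is the same.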

