Article

Tightly-coupled camera/LiDAR integration for point cloud generation from GNSS/INS-assisted UAV mapping systems

Journal

ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING
Volume 180, Pages 336-356

Publisher

ELSEVIER
DOI: 10.1016/j.isprsjprs.2021.08.020

Keywords

Camera/LiDAR integration; Bundle adjustment; Unmanned aerial vehicles; Structure from motion; GNSS/INS-assisted mapping; System calibration

Funding

  1. Civil Engineering Center for Applications of UAS for a Sustainable Environment (CE-CAUSE)
  2. Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy [DE-AR0001135]

Abstract

Unmanned aerial vehicles (UAVs) equipped with integrated global navigation satellite systems/inertial navigation systems (GNSS/INS) together with cameras and/or LiDAR sensors are widely used for topographic mapping in a variety of applications such as precision agriculture, coastal monitoring, and archaeological documentation. Integration of image-based and LiDAR point clouds can provide a comprehensive 3D model of the area of interest. For such integration, ensuring good alignment between data from the different sources is critical. Although much work has been conducted on this topic, there is still a need for a rigorous integration approach that minimizes the discrepancy between camera and LiDAR data caused by inaccurate system calibration parameters and/or trajectory artifacts. This study proposes an automated tightly-coupled camera/LiDAR integration workflow for GNSS/INS-assisted UAV systems. The proposed strategy is conducted in three main steps. First, an image-based point cloud is generated using a LiDAR/GNSS/INS-assisted structure from motion (SfM) strategy. Then, feature correspondences between the image-based and LiDAR point clouds are automatically identified. Finally, an integrated bundle adjustment procedure including image points, raw LiDAR measurements, and GNSS/INS information is conducted to minimize the discrepancy between point clouds from the different sensors while estimating system calibration parameters and refining the trajectory information. The proposed SfM strategy and integration framework are evaluated using five datasets. The SfM results show that using LiDAR data can facilitate feature matching and increase the number of reconstructed 3D points. The experimental results also illustrate that the developed automated camera/LiDAR integration strategy is capable of accurately estimating system calibration parameters to achieve good alignment among camera/LiDAR data from single or multiple systems. An absolute accuracy in the range of 3-5 cm is achieved for the image/LiDAR point clouds after the integration process.
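
For readers unfamiliar with the system calibration parameters the abstract refers to, the sketch below (not the authors' implementation) shows the standard GNSS/INS-assisted LiDAR georeferencing model, whose lever-arm and boresight terms are typical of the parameters such an integrated bundle adjustment estimates, together with a point-to-plane discrepancy of the kind the adjustment would minimize. The function names, the residual form, and all numeric values are illustrative assumptions.

```python
import numpy as np

def rotation_from_euler(roll, pitch, yaw):
    """Rotation matrix from Euler angles (radians): R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ Ry @ Rx

def georeference_lidar_point(p_lu, r_b_m, R_b_m, lever_arm, boresight_R):
    """Map a raw LiDAR measurement into the mapping frame:

        X_m = r_b_m(t) + R_b_m(t) @ (lever_arm + boresight_R @ p_lu)

    p_lu        : point in the LiDAR unit frame (derived from range/scan angles)
    r_b_m, R_b_m: GNSS/INS body position/attitude in the mapping frame at firing time t
    lever_arm   : LiDAR-to-body lever arm (system calibration parameter)
    boresight_R : LiDAR-to-body boresight rotation (system calibration parameter)
    """
    return r_b_m + R_b_m @ (lever_arm + boresight_R @ p_lu)

def point_to_plane_residual(x_img, plane_point, plane_normal):
    """Signed distance of an image-based 3D point from a LiDAR planar patch;
    an integrated bundle adjustment drives such discrepancies toward zero."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return float(n @ (x_img - plane_point))

if __name__ == "__main__":
    # Illustrative values only: a 50 m return, near-nadir unit, small boresight angles.
    boresight = rotation_from_euler(np.radians(0.10), np.radians(-0.05), np.radians(0.20))
    X_m = georeference_lidar_point(
        p_lu=np.array([0.0, 0.0, 50.0]),
        r_b_m=np.array([500000.0, 4500000.0, 120.0]),
        R_b_m=rotation_from_euler(0.0, 0.0, np.radians(90.0)),
        lever_arm=np.array([0.05, -0.02, 0.10]),
        boresight_R=boresight,
    )
    print("Mapping-frame coordinates:", X_m)
```

In a tightly-coupled adjustment of this kind, residuals between conjugate image-based points and LiDAR features are minimized jointly with the image observations and GNSS/INS trajectory information, with the mounting parameters (and, where modeled, trajectory corrections) among the unknowns.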
