Article

Improvements to Target-Based 3D LiDAR to Camera Calibration

Journal

IEEE ACCESS
Volume 8, Issue -, Pages 134101-134110

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2020.3010734

Keywords

Laser radar; Three-dimensional displays; Cameras; Calibration; Semantics; Robot vision systems; Quantization (signal); camera; camera-LiDAR calibration; computer vision; extrinsic calibration; LiDAR; mapping; robotics; sensor calibration; sensor fusion; simultaneous localization and mapping

Funding

  1. Toyota Research Institute (TRI)
  2. NSF [1808051]
  3. TRI [N021515]

Abstract

The rigid-body transformation between a LiDAR and monocular camera is required for sensor fusion tasks, such as SLAM. While determining such a transformation is not considered glamorous in any sense of the word, it is nonetheless crucial for many modern autonomous systems. Indeed, an error of a few degrees in rotation or a few percent in translation can lead to 20 cm reprojection errors at a distance of 5 m when overlaying a LiDAR image on a camera image. The biggest impediments to determining the transformation accurately are the relative sparsity of LiDAR point clouds and systematic errors in their distance measurements. This paper proposes (1) the use of targets of known dimension and geometry to ameliorate target pose estimation in the face of the quantization and systematic errors inherent in a LiDAR image of a target, (2) a fitting method for the LiDAR to monocular camera transformation that avoids the tedious task of target edge extraction from the point cloud, and (3) a cross-validation study based on projection of the 3D LiDAR target vertices to the corresponding corners in the camera image. The end result is a 50% reduction in projection error and a 70% reduction in its variance with respect to baseline.
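The abstract's sensitivity claim can be checked with a back-of-the-envelope calculation: a rotation error of roughly 2.3 degrees in the LiDAR-to-camera extrinsic displaces a point 5 m away by about 20 cm. The sketch below (not from the paper; the point, axis, and angle are illustrative assumptions) quantifies this with a simple rotation perturbation:

```python
import numpy as np

def rot_x(deg):
    """Rotation matrix about the x-axis (illustrative helper)."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

# A hypothetical LiDAR point 5 m in front of the sensor.
p = np.array([0.0, 0.0, 5.0])

# True extrinsic rotation vs. one perturbed by ~2.3 degrees.
R_true = np.eye(3)
R_bad = rot_x(2.3)

# Displacement of the transformed point caused by the rotation error;
# this lower-bounds the reprojection error seen in the camera overlay.
err = np.linalg.norm(R_bad @ p - R_true @ p)
print(f"3D displacement at 5 m: {err:.3f} m")  # ≈ 0.20 m
```

The displacement grows linearly with range (it equals 2·r·sin(θ/2) for range r and angular error θ), which is why even small extrinsic errors are visible when LiDAR returns are overlaid on distant image regions.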

