3.8 Proceedings Paper

Multi-Camera-LiDAR Auto-Calibration by Joint Structure-from-Motion

This paper proposes a novel calibration pipeline that can automatically calibrate multiple cameras and LiDARs in a Structure-from-Motion (SfM) process, eliminating the need for manual design of calibration objects.
Multiple sensors, especially cameras and LiDARs, are widely used in autonomous vehicles. To fuse data from different sensors accurately, precise calibration is required, including the camera intrinsic parameters and the relative poses between the cameras and LiDARs. However, most existing camera-LiDAR calibration methods require placing manually designed calibration objects at multiple locations and repeating the procedure multiple times, which is time-consuming and labor-intensive and makes them unsuitable for frequent use. To address this, we propose a novel calibration pipeline that automatically calibrates multiple cameras and multiple LiDARs within a Structure-from-Motion (SfM) process. In our pipeline, we first perform a global SfM on all images, aided by rough LiDAR data, to obtain initial poses for all sensors. Then, feature points lying on lines and planes are extracted from both the SfM point cloud and the LiDAR point clouds. With these features, a global Bundle Adjustment is performed that jointly minimizes point reprojection errors, point-to-line errors, and point-to-plane errors. During this minimization, the camera intrinsic parameters, the camera and LiDAR poses, and the SfM point cloud are refined together. The proposed method exploits the structure of natural scenes, requires no manually designed calibration objects, and incorporates all calibration parameters into a unified optimization framework. Experiments on autonomous vehicles with different sensor configurations demonstrate the effectiveness and robustness of the proposed method.
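
The sketch below illustrates, under stated assumptions, the three residual types that the abstract says are combined in the joint Bundle Adjustment: point reprojection error, point-to-line error, and point-to-plane error. It is not the authors' implementation; the function names, parameterization, and toy data are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code) of the three error terms combined in
# the paper's joint Bundle Adjustment. In the full pipeline these residuals
# would be parameterized by camera intrinsics, camera/LiDAR poses, and SfM
# point coordinates, stacked over all observations, and minimized with a
# non-linear least-squares solver. All names and toy data are assumptions.
import numpy as np
from scipy.spatial.transform import Rotation as R


def reprojection_residual(K, rvec, tvec, X_world, uv_observed):
    """Pinhole reprojection error (pixels) of an SfM point in one camera."""
    X_cam = R.from_rotvec(rvec).apply(X_world) + tvec
    uv_projected = (K @ X_cam)[:2] / X_cam[2]
    return uv_projected - uv_observed


def point_to_line_residual(X, line_point, line_dir):
    """Distance from an SfM point to a LiDAR line (line_dir is unit length)."""
    d = X - line_point
    return np.linalg.norm(d - np.dot(d, line_dir) * line_dir)


def point_to_plane_residual(X, plane_point, plane_normal):
    """Signed distance from an SfM point to a LiDAR plane (unit normal)."""
    return float(np.dot(X - plane_point, plane_normal))


if __name__ == "__main__":
    # Toy data: one SfM point observed by one camera, near a LiDAR line/plane.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])           # assumed camera intrinsics
    rvec = np.array([0.0, 0.0, 0.0])          # camera rotation (axis-angle)
    tvec = np.array([0.0, 0.0, 0.0])          # camera translation
    X = np.array([0.5, -0.2, 4.0])            # SfM point in the world frame

    print("reprojection:", reprojection_residual(
        K, rvec, tvec, X, np.array([420.0, 200.0])))
    print("point-to-line:", point_to_line_residual(
        X, np.array([0.0, -0.2, 4.0]), np.array([1.0, 0.0, 0.0])))
    print("point-to-plane:", point_to_plane_residual(
        X, np.array([0.0, -0.3, 0.0]), np.array([0.0, 1.0, 0.0])))
```

In practice, a unified optimization of this kind would stack all three residual families into one vector and hand them to a solver such as scipy.optimize.least_squares or Ceres, so that the intrinsics, the sensor poses, and the SfM points are refined in a single problem, as the abstract describes.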
