Article

Fusion of binocular vision, 2D lidar and IMU for outdoor localization and indoor planar mapping

Journal

MEASUREMENT SCIENCE AND TECHNOLOGY
Volume 34, Issue 2

Publisher

IOP Publishing Ltd
DOI: 10.1088/1361-6501/ac9ed0

Keywords

2D lidar; binocular vision; IMU; simultaneous localization and mapping; graph optimization


Emerging fields such as Internet of Things applications, driverless cars, and indoor mobile robots have created an increasing demand for simultaneous localization and mapping (SLAM) technology. In this study, we design a SLAM scheme called BVLI-SLAM based on binocular vision, 2D lidar, and an inertial measurement unit (IMU). The pose estimates provided by vision and the IMU supply better initial values for the 2D lidar mapping algorithm and improve the quality of the resulting map. Lidar in turn assists vision by providing better plane and yaw-angle constraints in weakly textured areas, yielding higher-precision 6-degree-of-freedom poses. BVLI-SLAM uses graph optimization to fuse the data from the IMU, the binocular camera, and the laser: IMU pre-integration is combined with the visual reprojection error and the laser matching error into a joint error equation, which a sliding-window bundle-adjustment optimization solves to estimate the pose in real time. Outdoor experiments on the KITTI datasets and indoor experiments on a trolley-based mobile measurement platform show that BVLI-SLAM improves on VINS-Fusion and Cartographer to varying degrees in mapping quality, positioning accuracy, and robustness, and can solve the problem of positioning and planar mapping in complex indoor scenes.
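The abstract describes a sliding-window optimization that jointly minimizes a visual reprojection error and a laser matching error over the estimated pose. As a loose, simplified illustration of that idea (not the paper's actual formulation), the sketch below fuses two weighted point-correspondence residual terms, one standing in for the visual term and one for the laser term, in a single Gauss-Newton solve over an SE(2) pose. All function names and the synthetic data are hypothetical.

```python
import numpy as np

def rot2(th):
    """2x2 rotation matrix for yaw angle th."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

def fuse_gauss_newton(pose0, sensors, iters=10):
    """Estimate an SE(2) pose (x, y, yaw) by jointly minimizing several
    weighted point-correspondence residual terms, one per sensor modality.
    sensors: list of (body_pts, world_pts, weight) tuples."""
    pose = np.asarray(pose0, dtype=float).copy()
    for _ in range(iters):
        x, y, th = pose
        R = rot2(th)
        dR = np.array([[-np.sin(th), -np.cos(th)],
                       [ np.cos(th), -np.sin(th)]])   # dR/d(yaw)
        H = np.zeros((3, 3))
        g = np.zeros(3)
        for body, world, w in sensors:
            for p, q in zip(body, world):
                r = R @ p + np.array([x, y]) - q      # 2D residual
                J = np.zeros((2, 3))
                J[:, :2] = np.eye(2)                  # d r / d (x, y)
                J[:, 2] = dR @ p                      # d r / d yaw
                H += w * J.T @ J                      # weighted normal equations
                g += w * J.T @ r
        pose -= np.linalg.solve(H, g)                 # Gauss-Newton step
    return pose

# Synthetic demo: recover a known pose from noiseless "visual" and "laser" points.
rng = np.random.default_rng(0)
true_pose = np.array([0.5, -0.3, 0.2])
vis_world = rng.uniform(-2.0, 2.0, size=(6, 2))
las_world = rng.uniform(-2.0, 2.0, size=(8, 2))

def world_to_body(pose, pts):
    """Inverse transform: world points into the body frame of the given pose."""
    x, y, th = pose
    return (pts - np.array([x, y])) @ rot2(th)

vis_body = world_to_body(true_pose, vis_world)
las_body = world_to_body(true_pose, las_world)

est_pose = fuse_gauss_newton(np.zeros(3),
                             [(vis_body, vis_world, 1.0),    # "visual" term
                              (las_body, las_world, 2.0)])   # "laser" term, higher weight
print(est_pose)
```

Weighting the laser term more heavily here mimics the abstract's point that lidar supplies stronger plane and yaw constraints in weakly textured areas; in the actual system the residuals would be reprojection and scan-matching errors inside a sliding-window bundle adjustment, not raw point differences.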
