Journal
IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 8, Issue 2, Pages 504-511
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2022.3226074
Keywords
Sensor fusion; SLAM
Abstract
We propose mVIL-Fusion, a three-level multisensor fusion system that achieves robust state estimation and globally consistent mapping in perceptually degraded environments. First, LiDAR depth-assisted visual-inertial odometry (VIO), with LiDAR odometry (LO) synchronous prediction and distortion correction functions, is proposed as the frontend of our system. Second, as the midend, a novel double-sliding-window-based optimization jointly exploits LiDAR scan-to-scan translation constraints (VIO status detection function) and scan-to-map rotation constraints (local mapping function) to enhance the accuracy and robustness of the state estimation. In the backend, loop closures of local-map-based keyframes are identified with altitude verification, and the global map is generated by incremental smoothing of a pose-only factor graph with an altitude prior. The performance of our system is verified on both a public dataset and several self-collected sequences in challenging environments. To benefit the robotics community, our implementation is available at https://github.com/Stan994265/mVIL-Fusion.
Authors