Journal
IEEE SENSORS JOURNAL
Volume 23, Issue 13, Pages 14650-14661
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSEN.2023.3260104
Keywords
Data fusion; light detection and ranging (LiDAR); loop closure; simultaneous localization and mapping (SLAM); visual feature extraction
In this study, a loop closure algorithm named FSLAM is proposed, which fuses visual and scan information and uses deep features for loop detection. By combining camera and LiDAR data for loop verification, FSLAM achieves successful mapping in scenes with similar geometric structures and significantly improves localization and mapping accuracy compared to other algorithms.
Simultaneous localization and mapping (SLAM) is a key technology for implementing robot intelligence. Compared with cameras, light detection and ranging (LiDAR) achieves higher accuracy and stability in indoor environments. However, LiDAR captures only the geometric structure of the environment, so LiDAR SLAM with loop detection is prone to failure in scenes where geometric structure information is missing or self-similar. We therefore propose a loop closure algorithm that fuses visual and scan information, uses deep features for loop detection, and then combines camera and LiDAR data for loop verification. We name it fusion SLAM (FSLAM); it tightly couples the two modalities for loop correction. We also compare visual feature extraction based on the deep hierarchical feature network (HF-Net) with the handcrafted ORB feature extraction algorithm. The proposed FSLAM method successfully maps scenes with similar geometric structures, and its localization and mapping accuracy is significantly improved compared to other algorithms.
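The abstract does not disclose FSLAM's implementation details, but the two-stage pattern it describes (deep-feature loop detection followed by geometric verification on LiDAR data) can be sketched roughly as below. All function names, thresholds, and the toy nearest-neighbor check are illustrative assumptions; a real system would use HF-Net global descriptors for retrieval and scan matching (e.g., ICP) for verification.

```python
import numpy as np

def detect_loop_candidates(query_desc, db_descs, sim_thresh=0.85):
    """Stage 1 (illustrative): retrieve past keyframes whose global
    image descriptors are similar to the query, using cosine similarity.
    In FSLAM this role is played by deep features (e.g., from HF-Net)."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q
    return [i for i, s in enumerate(sims) if s >= sim_thresh]

def verify_loop(scan_a, scan_b, max_mean_dist=0.3):
    """Stage 2 (illustrative): accept a candidate loop only if the two
    2-D LiDAR scans agree geometrically. Here a toy nearest-neighbor
    residual stands in for proper scan matching / ICP."""
    d = np.linalg.norm(scan_a[:, None, :] - scan_b[None, :, :], axis=2)
    return d.min(axis=1).mean() <= max_mean_dist

# Usage: a loop closure is declared only when both stages agree,
# which is what lets visual features rescue geometrically ambiguous scenes.
def is_loop(query_desc, db_descs, query_scan, db_scans):
    for i in detect_loop_candidates(query_desc, db_descs):
        if verify_loop(query_scan, db_scans[i]):
            return i
    return None
```

The point of the two stages is complementary failure modes: descriptor retrieval alone can confuse visually similar places, while scan matching alone fails in geometrically repetitive corridors; requiring both to agree filters out each kind of false loop.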