Journal
IEEE SENSORS JOURNAL
Volume 23, Issue 13, Pages 14650-14661
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSEN.2023.3260104
Keywords
Data fusion; light detection and ranging (LiDAR); loop closure; simultaneous localization and mapping (SLAM); visual feature extraction
In this study, a loop closure algorithm named FSLAM is proposed, which fuses visual and scan information and uses deep features for loop detection. By combining camera and LiDAR data for loop verification, FSLAM achieves successful mapping in scenes with similar geometric structures and significantly improves localization and mapping accuracy compared to other algorithms.
Simultaneous localization and mapping (SLAM) is a key technology for robot intelligence. Compared with a camera, light detection and ranging (LiDAR) achieves higher accuracy and stability in indoor environments. However, LiDAR acquires only the geometric structure of the environment, so LiDAR SLAM with loop detection is prone to failure in scenes where geometric structure information is missing or similar. Therefore, we propose a loop closure algorithm that fuses visual and scan information, uses deep features for loop detection, and then combines camera and LiDAR data for loop verification. We name it fusion SLAM (FSLAM); it tightly couples the two modalities for loop correction. We compare visual feature extraction based on the deep hierarchical feature network (HF-Net) with the handcrafted ORB feature extraction algorithm. The proposed FSLAM method maps successfully in scenes with similar geometric structures, and its localization and mapping accuracy is significantly improved compared with other algorithms.
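The abstract describes loop detection via deep visual features followed by a separate verification step. A common way to implement the detection stage is to compare a global descriptor of the current keyframe (e.g., the HF-Net global descriptor mentioned above) against descriptors of past keyframes by cosine similarity. The sketch below illustrates only that retrieval step; the function name, threshold, and data layout are illustrative assumptions, not the paper's implementation, and the camera/LiDAR verification stage is left as a placeholder.

```python
import numpy as np

def detect_loop(query_desc, db_descs, sim_threshold=0.8):
    """Return the index of the best-matching past keyframe, or None.

    query_desc : (D,) global descriptor of the current keyframe
                 (any L2-normalizable embedding works for this sketch).
    db_descs   : (N, D) descriptors of earlier keyframes.

    A real SLAM system would also exclude recent keyframes from the
    database and geometrically verify the candidate (e.g., with
    camera and LiDAR data, as FSLAM does) before accepting the loop.
    """
    if len(db_descs) == 0:
        return None
    # L2-normalize so the dot product equals cosine similarity.
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity to each keyframe
    best = int(np.argmax(sims))
    return best if sims[best] >= sim_threshold else None

# Toy usage: the first stored keyframe matches the query exactly.
db = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
print(detect_loop(np.array([1.0, 0.0, 0.0]), db))  # → 0
print(detect_loop(np.array([0.0, 0.0, 1.0]), db))  # → None (below threshold)
```

Only candidates that pass the similarity threshold would proceed to the verification stage; rejecting weak matches early is what lets appearance-based detection succeed where purely geometric matching is ambiguous.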