4.7 Review

Review of Visual Simultaneous Localization and Mapping Based on Deep Learning

Journal

Remote Sensing
Volume 15, Issue 11, Article 2740

Publisher

MDPI
DOI: 10.3390/rs15112740

Keywords

simultaneous localization and mapping; machine vision; deep learning; visual odometry; loop closure detection; mapping

Abstract

Due to the limitations of LiDAR, such as its high cost, short service life, and large size, visual sensors, with their light weight and low cost, are attracting more and more attention and have become a research hotspot. As hardware computing power and deep learning advance by leaps and bounds, new methods and ideas for dealing with visual simultaneous localization and mapping (VSLAM) problems have emerged. This paper systematically reviews VSLAM methods based on deep learning. We briefly review the development of VSLAM and introduce its fundamental principles and framework. Then, we focus on the integration of deep learning and VSLAM from three aspects: visual odometry (VO), loop closure detection, and mapping. We summarize and analyze the contribution and weaknesses of each algorithm in detail. In addition, we provide a summary of widely used datasets and evaluation metrics. Finally, we discuss the open problems and future directions of combining VSLAM with deep learning.
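
Among the evaluation metrics commonly summarized in such reviews, the absolute trajectory error (ATE) is the standard measure of global localization accuracy. The snippet below is a minimal sketch of the ATE RMSE computation over time-associated ground-truth and estimated positions; it is not taken from the paper, the function name and toy trajectory are illustrative assumptions, and the usual Umeyama alignment step is assumed to have been applied beforehand.

```python
import numpy as np

def ate_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """Root-mean-square absolute trajectory error between time-associated
    ground-truth and estimated positions, each given as an (N, 3) array.
    Assumes both trajectories are already expressed in the same frame
    (e.g., after Umeyama alignment)."""
    diff = gt_xyz - est_xyz                               # per-pose translation error
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Hypothetical 4-pose example: a constant 0.1 m lateral drift
gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
est = gt + np.array([0.0, 0.1, 0.0])
print(f"ATE RMSE: {ate_rmse(gt, est):.3f} m")             # -> 0.100 m
```

Publicly available evaluation tools (e.g., the evo package) implement this metric together with the trajectory alignment step.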

Reviews

Primary Rating: 4.7 (not enough ratings)
Secondary Ratings (Novelty, Significance, Scientific rigor): no data available
