3.8 Proceedings Paper

Review of vision-based Simultaneous Localization and Mapping

Publisher

IEEE
DOI: 10.1109/itnec.2019.8729285

Keywords

visual simultaneous localization and mapping; robot; visual odometry; graph optimization; loop closure detection

Funding

  1. General Program for Beijing Natural Science Foundation [4174083]
  2. National Natural Science Foundation of China [61773027]
  3. Key Project of S&T Plan of Beijing Municipal Commission of Education [KZ201610005010]

Abstract

Vision-based simultaneous localization and mapping (VSLAM) uses visual sensors to enable a robot to localize itself in an unknown environment while simultaneously constructing a map of that environment. With the continuous development of computer vision and robotics, VSLAM has become a supporting technology for popular fields such as unmanned aerial vehicles, virtual reality, and autonomous driving. This paper briefly introduces the classical framework of visual SLAM. On this basis, the key technologies and latest research progress of VSLAM are surveyed from the perspective of indirect and direct methods. The research progress of deep learning techniques applied to VSLAM is then reviewed. Finally, the development trend of VSLAM is discussed.
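
As background for the indirect (feature-based) methods the survey covers, the sketch below outlines a minimal frame-to-frame visual odometry front end: ORB features are matched between consecutive frames, and the relative camera motion is recovered from the essential matrix. This is an illustrative sketch assuming OpenCV and a known camera intrinsic matrix K; the function and variable names are hypothetical and not taken from the paper.

    # Minimal sketch of an indirect (feature-based) visual odometry front end.
    # Assumptions: OpenCV (cv2) and NumPy are available, the intrinsic matrix K
    # is known, and the two inputs are consecutive grayscale frames of one camera.
    import cv2
    import numpy as np

    def relative_pose(frame_prev, frame_curr, K):
        """Estimate the relative rotation R and unit-scale translation t
        between two consecutive grayscale frames."""
        orb = cv2.ORB_create(nfeatures=2000)          # feature detector + descriptor
        kp1, des1 = orb.detectAndCompute(frame_prev, None)
        kp2, des2 = orb.detectAndCompute(frame_curr, None)

        # Brute-force Hamming matching with cross-check: the data association
        # step characteristic of indirect methods
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Epipolar geometry with RANSAC rejects outlier matches
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t   # monocular translation is recovered only up to scale

In a complete VSLAM system these frame-to-frame estimates would be refined by back-end graph optimization and corrected by loop closure detection, the other components named in the keywords above.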

