Journal
PROCEEDINGS OF 2019 IEEE 3RD INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2019)
Volume -, Issue -, Pages 117-123
Publisher
IEEE
DOI: 10.1109/itnec.2019.8729285
Keywords
visual simultaneous localization and mapping; robot; visual odometry; graph optimization; loop closure detection
Funding
- General Program for Beijing Natural Science Foundation [4174083]
- National Natural Science Foundation of China [61773027]
- Key Project of S&T Plan of Beijing Municipal Commission of Education [KZ201610005010]
Vision-based simultaneous localization and mapping (VSLAM) uses visual sensors to let a robot localize itself in an unknown environment while simultaneously constructing a map of that environment. With the continuous development of computer vision and robotics, VSLAM has become a supporting technology for popular fields such as unmanned aerial vehicles, virtual reality, and autonomous driving. This paper briefly introduces the classical framework of visual SLAM. On this basis, the key technologies and latest research progress of VSLAM are surveyed from the perspectives of indirect and direct methods. The research progress of deep learning techniques applied to VSLAM is then reviewed. Finally, the development trends of VSLAM are discussed.
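The classical framework mentioned in the abstract (a visual-odometry front end, loop closure detection, and a graph-optimization back end) can be sketched as a toy pipeline. This is a minimal illustrative sketch, not the paper's method: poses are reduced to one dimension, all function names are hypothetical, and the "optimization" simply distributes the detected drift along the trajectory as a crude stand-in for real pose-graph optimization.

```python
def visual_odometry(prev_pose, motion):
    """Front end: estimate the new pose by composing relative motion
    (1-D toy pose instead of a full SE(3) transform)."""
    return prev_pose + motion

def detect_loop_closure(poses, threshold=0.5):
    """Loop closure: return (i, j) if the latest pose j revisits an
    earlier pose i within a distance threshold, else None."""
    j = len(poses) - 1
    for i in range(j - 1):
        if abs(poses[j] - poses[i]) < threshold:
            return i, j
    return None

def optimize_graph(poses, loop):
    """Back end: spread the accumulated drift linearly along the
    trajectory so the loop-closing poses coincide."""
    if loop is None:
        return poses
    i, j = loop
    drift = poses[j] - poses[i]
    n = j - i
    return [p - drift * max(0, min(k - i, n)) / n
            for k, p in enumerate(poses)]

# Toy run: the robot moves out, then returns near its start with drift.
motions = [1.0, 1.0, 1.0, -1.4, -1.4]
poses = [0.0]
for m in motions:
    poses.append(visual_odometry(poses[-1], m))
loop = detect_loop_closure(poses)   # final pose lands 0.2 from the start
poses = optimize_graph(poses, loop)
```

After correction the first and last poses coincide, mirroring how loop closure lets the back end cancel the drift accumulated by pure visual odometry.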