Article

DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes

Journal

IEEE Robotics and Automation Letters
Volume 3, Issue 4, Pages 4076-4083

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2018.2860039

Keywords

SLAM; visual-based navigation; localization

Funding

  1. NVIDIA
  2. Spanish Ministry of Economy and Competitiveness [DPI2015-68905-P, DPI2015-67275-P]
  3. FPI [BES-2016-077836]
  4. Aragon regional government (Grupo DGA T04-FSE)

Abstract

The assumption of scene rigidity is typical in SLAM algorithms. Such a strong assumption limits the use of most visual SLAM systems in populated real-world environments, which are the target of several relevant applications such as service robotics and autonomous vehicles. In this letter we present DynaSLAM, a visual SLAM system that, building on ORB-SLAM2, adds the capabilities of dynamic object detection and background inpainting. DynaSLAM is robust in dynamic scenarios for monocular, stereo, and RGB-D configurations. Moving objects are detected by multi-view geometry, by deep learning, or by both. Having a static map of the scene allows inpainting of the frame background that has been occluded by such dynamic objects. We evaluate our system on public monocular, stereo, and RGB-D datasets, and we study several accuracy/speed trade-offs to assess the limits of the proposed methodology. DynaSLAM outperforms the accuracy of standard visual SLAM baselines in highly dynamic scenarios. It also estimates a map of the static parts of the scene, which is essential for long-term applications in real-world environments.
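The multi-view geometry cue described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, helper structure, and the threshold value `TAU_Z` are assumptions made for the example. The idea is that a keypoint is flagged as dynamic when the depth measured in the current frame disagrees with the depth predicted by reprojecting that point from earlier keyframes, since a moving object occludes or vacates the static background.

```python
# Hedged sketch of a depth-consistency check for dynamic-point labelling.
# All names and the threshold value are illustrative assumptions, not
# DynaSLAM's actual code.

TAU_Z = 0.4  # depth-difference threshold in metres (illustrative value)

def label_dynamic(projected_depths, measured_depths, tau_z=TAU_Z):
    """Return a per-keypoint list: True if the point appears dynamic.

    projected_depths: depths predicted by reprojecting map points from
                      earlier keyframes into the current frame.
    measured_depths:  depths observed for the same pixels in the current
                      frame (e.g. from an RGB-D sensor).
    """
    labels = []
    for z_proj, z_meas in zip(projected_depths, measured_depths):
        # If the measured depth differs strongly from the reprojected one,
        # something has moved relative to the static background.
        labels.append(abs(z_proj - z_meas) > tau_z)
    return labels

# Example: three keypoints; the second sits on a moving object.
print(label_dynamic([2.0, 3.0, 1.5], [2.05, 1.2, 1.45]))
# -> [False, True, False]
```

Points labelled dynamic this way would be excluded from tracking and mapping, and their image regions become candidates for background inpainting from earlier views.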

