Journal: 2019 International Conference on Robotics and Automation (ICRA)
Pages: 7123-7129
Publisher: IEEE
DOI: 10.1109/icra.2019.8793728
Funding:
- ARC Laureate Fellowship [FL130100102]
- ARC Centre of Excellence for Robotic Vision [CE140100016]
Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While sparse point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state-of-the-art object detection methods provide rich information about the entities present in a scene from a single image. This work incorporates a real-time deep-learned object detector into a monocular SLAM framework, representing generic objects as quadrics so that detections can be integrated seamlessly while preserving real-time performance. A finer reconstruction of each object, learned by a CNN, is also incorporated and provides a shape prior for the quadric, leading to further refinement. To capture the dominant structure of the scene, additional planar landmarks are detected by a CNN-based plane detector and modelled as independent landmarks in the map. Extensive experiments support our proposed inclusion of semantic objects and planar structures directly in the bundle adjustment of SLAM (Semantic SLAM), which enriches the reconstructed map semantically while significantly improving camera localization.
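The quadric representation mentioned in the abstract is commonly handled in its dual form: a landmark is a symmetric 4x4 matrix Q*, and projecting it through a 3x4 camera matrix P yields a dual conic C* = P Q* P^T, whose outline can be compared against a detector's bounding box inside bundle adjustment. The following is a minimal illustrative sketch of that projection, not the paper's implementation; the function names and the axis-aligned (unrotated) ellipsoid construction are assumptions made for brevity.

```python
import numpy as np

def dual_ellipsoid(center, radii):
    """Dual form of an axis-aligned ellipsoid with the given center and semi-axes.
    At the origin the dual quadric is diag(a^2, b^2, c^2, -1); translating a
    dual quadric uses the point transform T as Q*' = T Q* T^T."""
    Q = np.diag(np.concatenate([np.asarray(radii, dtype=float) ** 2, [-1.0]]))
    T = np.eye(4)
    T[:3, 3] = center
    return T @ Q @ T.T

def project_quadric(P, Q_dual):
    """Project a dual quadric to the dual conic of its image outline: C* = P Q* P^T."""
    return P @ Q_dual @ P.T

# Example: a unit sphere 5 m in front of a canonical camera P = [I | 0].
Q = dual_ellipsoid(center=[0.0, 0.0, 5.0], radii=[1.0, 1.0, 1.0])
P = np.hstack([np.eye(3), np.zeros((3, 1))])
C = project_quadric(P, Q)  # symmetric 3x3 dual conic
```

In an object-SLAM back end, the error between this projected conic's bounding box and the detected box would form the residual that is jointly minimized with the usual reprojection errors.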