This paper describes BoWSLAM, a scheme for a robot to reliably navigate and map previously unknown environments, in real time, using monocular vision alone. BoWSLAM can navigate challenging dynamic and self-similar environments and can recover from gross errors. Key innovations allowing this include new uses for the bag-of-words image representation; this is used to select the best set of frames from which to reconstruct positions and to give efficient wide-baseline correspondences between many pairs of frames, providing multiple position hypotheses. A graph-based representation of these position hypotheses enables the modeling and optimization of errors in scale in a dual graph and the selection of only reliable position estimates in the presence of gross outliers. BoWSLAM is demonstrated mapping a 25-min, 2.5-km trajectory through a challenging and dynamic outdoor environment without any other sensor input, considerably farther than previous single-camera simultaneous localization and mapping (SLAM) schemes. (C) 2010 Wiley Periodicals, Inc.
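The bag-of-words representation at the heart of the scheme can be illustrated with a minimal sketch: local feature descriptors are quantized against a visual vocabulary, each frame becomes a normalized histogram of visual-word counts, and histogram similarity ranks candidate frames for relocalization or reconstruction. All names, the toy vocabulary, and the cosine-similarity scoring below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative bag-of-words (BoW) sketch for frame matching in SLAM.
# Hypothetical toy code -- not the paper's implementation.
import math
from collections import Counter

def quantize(descriptor, vocabulary):
    """Map a feature descriptor to the index of its nearest visual word."""
    best, best_d = 0, float("inf")
    for i, word in enumerate(vocabulary):
        d = sum((a - b) ** 2 for a, b in zip(descriptor, word))
        if d < best_d:
            best, best_d = i, d
    return best

def bow_histogram(descriptors, vocabulary):
    """Represent an image as a normalized histogram of visual-word counts."""
    counts = Counter(quantize(d, vocabulary) for d in descriptors)
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

def similarity(h1, h2):
    """Cosine similarity between two sparse BoW histograms."""
    dot = sum(v * h2.get(w, 0.0) for w, v in h1.items())
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Ranking every stored frame by `similarity` against the current frame is what makes wide-baseline candidate selection cheap: histograms are compact, so many pairs can be scored without attempting expensive geometric matching first. Real systems additionally apply tf-idf weighting and an inverted index over visual words.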