Article

Pose Interpolation for Laser-based Visual Odometry

Journal

JOURNAL OF FIELD ROBOTICS
Volume 31, Issue 5, Pages 787-813

Publisher

WILEY
DOI: 10.1002/rob.21537

Keywords

-

Category

Funding

  1. NSERC
  2. Canada Foundation for Innovation
  3. DRDC Suffield
  4. Canadian Space Agency
  5. MDA Space Missions


In this paper, we present two methods for obtaining visual odometry (VO) estimates using a scanning laser rangefinder. Although common VO implementations utilize stereo camera imagery, passive cameras are dependent on ambient light. In contrast, actively illuminated sensors such as laser rangefinders work in a variety of lighting conditions, including full darkness. We leverage previous successes by applying sparse appearance-based methods to laser intensity images, and we address the issue of motion distortion by considering the timestamps of the interest points detected in each image. To account for the unique timestamps, we introduce two estimator formulations. In the first method, we extend the conventional discrete-time batch estimation formulation by introducing a novel frame-to-frame linear interpolation scheme, and in the second method, we consider the estimation problem by starting with a continuous-time process model. This is facilitated by Gaussian process Gauss-Newton (GPGN), an algorithm for nonparametric, continuous-time, nonlinear, batch state estimation. Both laser-based VO methods are compared and validated using datasets obtained by two experimental configurations. These datasets consist of 11 km of field data gathered by a high-frame-rate scanning lidar and a 365 m traverse using a sweeping planar laser rangefinder. Statistical analysis shows a 5.3% average translation error as a percentage of distance traveled for linear interpolation and 4.4% for GPGN in the high-frame-rate scenario. (C) 2014 Wiley Periodicals, Inc.
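The abstract's first method assigns each interest point its own timestamp and interpolates the sensor pose between the two bracketing frame poses. As a minimal sketch of that idea (not the authors' implementation), the snippet below interpolates between two poses on SO(3) x R^3: rotation along the geodesic via the matrix exponential/logarithm, translation linearly, with `alpha` the measurement timestamp normalized to [0, 1] between the two frames. All function names are illustrative.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix from a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def so3_log(R):
    """Axis-angle vector from a rotation matrix (inverse of so3_exp)."""
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros(3)
    W = (R - R.T) / (2.0 * np.sin(theta))
    return theta * np.array([W[2, 1], W[0, 2], W[1, 0]])

def interp_pose(R0, t0, R1, t1, alpha):
    """Interpolate between poses (R0, t0) and (R1, t1) at fraction alpha:
    rotation along the SO(3) geodesic, translation linearly."""
    dw = so3_log(R1 @ R0.T)            # relative rotation as axis-angle
    R = so3_exp(alpha * dw) @ R0       # fractional rotation applied to R0
    t = (1.0 - alpha) * t0 + alpha * t1
    return R, t
```

Each laser return would then be expressed in the pose interpolated at its own timestamp before being fed to the batch estimator, rather than assuming all points in a scan were acquired simultaneously.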

