Article

Optimal Target Shape for LiDAR Pose Estimation

Journal

IEEE Robotics and Automation Letters
Volume 7, Issue 2, Pages 1238-1245

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/LRA.2021.3138779

Keywords

Computer vision for automation; range sensing; visual tracking

Funding

  1. NSF [1808051, 2118818]
  2. Toyota Research Institute
  3. NSF Directorate for Computer & Information Science & Engineering, Division of Computer and Network Systems [2118818]
  4. NSF Directorate for Engineering, Division of Electrical, Communications & Cyber Systems [1808051]

Abstract

This letter introduces the concept of optimizing target shape for LiDAR point clouds to eliminate pose ambiguity and proposes a method that uses the target's geometry to estimate the target's vertices. By using the optimal shape and the global solver, high localization accuracy is achieved even at a distance of 30 meters.
Targets are essential in problems such as object tracking in cluttered or textureless environments, camera (and multisensor) calibration tasks, and simultaneous localization and mapping (SLAM). Target shapes for these tasks are typically symmetric (square, rectangular, or circular) and work well for structured, dense sensor data such as pixel arrays (i.e., images). However, symmetric shapes lead to pose ambiguity when using sparse sensor data such as LiDAR point clouds and suffer from the quantization uncertainty of the LiDAR. This letter introduces the concept of optimizing target shape to remove pose ambiguity for LiDAR point clouds. A target is designed to induce large gradients at edge points under rotation and translation relative to the LiDAR, ameliorating the quantization uncertainty associated with point cloud sparseness. Moreover, given a target shape, we present a means of leveraging the target's geometry to estimate the target's vertices while globally estimating the pose. Both the simulation and the experimental results (verified by a motion capture system) confirm that, by using the optimal shape and the global solver, we achieve centimeter-level error in translation and a few degrees of error in rotation even when a partially illuminated target is placed 30 meters away. All the implementations and datasets are available at https://github.com/UMich-BipedLab/global_pose_estimation_for_optimal_shape.
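
For readers who want a concrete picture of the fitting problem the abstract describes, the following minimal Python sketch (not the authors' implementation) fits the pose of a known, non-symmetric planar target to sparse LiDAR edge returns by penalizing each point's out-of-plane offset and its distance to the target's polygon boundary. The target vertices, the function names (boundary_distance, estimate_pose), and the local Nelder-Mead refinement are illustrative assumptions; the letter's method instead solves the registration globally and also recovers the target's vertices.

# Illustrative sketch only: fit the pose of a known planar target polygon to
# sparse LiDAR edge points. Requires numpy and scipy.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

# Hypothetical non-symmetric target: polygon vertices in the target frame
# (z = 0 plane), in meters.
TARGET_VERTICES = np.array([
    [0.0, 0.0], [0.6, 0.1], [0.8, 0.5], [0.3, 0.7], [-0.1, 0.4]])

def point_to_segment(p, a, b):
    """Distance from 2D point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def boundary_distance(p2d):
    """Distance from a 2D point to the closest edge of the target polygon."""
    n = len(TARGET_VERTICES)
    return min(point_to_segment(p2d, TARGET_VERTICES[i],
                                TARGET_VERTICES[(i + 1) % n]) for i in range(n))

def cost(x, edge_points):
    """x = [rx, ry, rz, tx, ty, tz]: pose of the target in the LiDAR frame.
    Map LiDAR edge points into the target frame, then penalize their
    out-of-plane offset and their distance to the polygon boundary."""
    rot, t = R.from_rotvec(x[:3]), x[3:]
    pts_t = rot.inv().apply(edge_points - t)   # LiDAR frame -> target frame
    plane_err = pts_t[:, 2]                    # out-of-plane residual
    edge_err = np.array([boundary_distance(p[:2]) for p in pts_t])
    return np.sum(plane_err ** 2) + np.sum(edge_err ** 2)

def estimate_pose(edge_points, x0=np.zeros(6)):
    """Local refinement from an initial guess x0; a global approach would
    restart from many seeds or use a certifiably global solver."""
    res = minimize(cost, x0, args=(edge_points,), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-9})
    return res.x

# Usage (illustrative): pose_hat = estimate_pose(edge_points)
# where edge_points is an (N, 3) array of LiDAR returns on the target's edges.

Because this sketch only refines locally, its answer depends on the initial guess; the gap between such a local refinement and a global, initialization-free solution is exactly what the letter's global solver addresses.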
