Journal
IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 6, Issue 2, Pages 1495-1502
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2021.3058072
Keywords
Visual learning; sensor fusion
Funding
- ONR [N00014-19-1-2229]
- ARO [W911NF-17-1-0304]
Abstract
We present a method for inferring dense depth maps from images and sparse depth measurements by leveraging synthetic data to learn the association of sparse point clouds with dense natural shapes, and by using the image as evidence to validate the predicted depth map. Our learned prior for natural shapes takes only sparse depth as input, not images, so the method is unaffected by the covariate shift that arises when transferring models learned on synthetic data to real data. This allows us to use abundant synthetic data with ground truth to learn the most difficult component of the reconstruction process, topology estimation, and to use the image to refine the prediction based on photometric evidence. Our approach uses fewer parameters than previous methods, yet achieves the state of the art on both indoor and outdoor benchmark datasets.
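The abstract's pipeline has two stages: densify the sparse depth using a shape prior, then refine with the image. In the paper the prior is a network trained on synthetic point clouds; as a rough, purely illustrative stand-in (not the authors' method), the sketch below densifies a sparse depth map by copying each pixel's nearest valid measurement, showing the kind of sparse-to-dense mapping the learned prior replaces.

```python
import numpy as np

def densify_nearest(sparse_depth):
    """Naive nearest-neighbor densification of a sparse depth map.

    Illustrative stand-in for the paper's learned shape prior: the paper
    trains a network (on synthetic data) to associate sparse points with
    dense natural shapes; here each pixel simply copies its nearest valid
    measurement. `sparse_depth` is an (H, W) array, 0 = missing.
    """
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)       # locations of valid measurements
    vals = sparse_depth[ys, xs]             # their depth values
    gy, gx = np.mgrid[0:h, 0:w]             # pixel coordinate grids
    # Squared distance from every pixel to every valid measurement.
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    return vals[np.argmin(d2, axis=-1)]     # copy the nearest measurement

sparse = np.zeros((4, 4))
sparse[0, 0], sparse[3, 3] = 1.0, 2.0
dense = densify_nearest(sparse)
print(dense[0, 0], dense[3, 3])  # 1.0 2.0
```

The paper's second stage, absent here, would refine this coarse prediction using photometric evidence from the image.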