Proceedings Paper

Omnidirectional Depth Extension Networks

Publisher

IEEE
DOI: 10.1109/icra40945.2020.9197123

Keywords

-

Abstract

Omnidirectional 360-degree cameras are rapidly proliferating on autonomous robots, since they significantly enhance perception by widening the field of view (FoV). However, the corresponding 360-degree depth sensors, which are equally critical for the perception system, remain difficult or expensive to obtain. In this paper, we propose a low-cost 3D sensing system that combines an omnidirectional camera with a calibrated projective depth camera, in which the depth from the limited FoV is automatically extended to the rest of the recorded omnidirectional image. To accurately recover the missing depths, we design an omnidirectional depth extension convolutional neural network (ODE-CNN), in which a spherical feature transform layer (SFTL) is embedded at the end of the feature encoding layers and a deformable convolutional spatial propagation network (D-CSPN) is appended at the end of the feature decoding layers. The former re-samples the neighborhood of each pixel from omnidirectional coordinates to projective coordinates, which reduces the difficulty of feature learning; the latter automatically finds a proper context to align the structures in the estimated depths with the reference image, which significantly improves the visual quality. Finally, we demonstrate the effectiveness of the proposed ODE-CNN on the popular 360D dataset and show that it significantly outperforms other state-of-the-art (SoTA) methods, with a relative 33% reduction in depth error.
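The SFTL described in the abstract can be pictured as an inverse gnomonic re-sampling: for each pixel of the equirectangular feature map, its k x k neighborhood is gathered along the tangent (perspective) plane rather than along the distorted image grid. The sketch below is a minimal, self-contained PyTorch illustration of that idea; the function names, the fixed gnomonic formulation, and the per-pixel tap layout are assumptions of this sketch, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def gnomonic_neighborhood_grid(h, w, k=3, device="cpu"):
    """Sampling grid mapping each pixel's k x k tangent-plane neighborhood
    back into equirectangular coordinates (inverse gnomonic projection).

    Sketched from the abstract's description of the SFTL; the paper's exact
    formulation may differ.
    """
    # Spherical coordinates of every pixel centre in the equirectangular map.
    lat = (0.5 - (torch.arange(h, device=device) + 0.5) / h) * torch.pi      # +pi/2 (top) .. -pi/2
    lon = ((torch.arange(w, device=device) + 0.5) / w - 0.5) * 2 * torch.pi  # -pi .. +pi
    lat0, lon0 = torch.meshgrid(lat, lon, indexing="ij")                     # (h, w)

    # Tangent-plane offsets of the k x k kernel taps, one pixel apart.
    step = 2 * torch.pi / w
    r = torch.arange(k, device=device) - (k - 1) / 2
    dy, dx = torch.meshgrid(r * step, r * step, indexing="ij")
    dx, dy = dx.reshape(1, 1, -1), dy.reshape(1, 1, -1)                      # (1, 1, k*k)

    # Inverse gnomonic projection: tangent-plane point -> (lat, lon) on sphere.
    rho = torch.sqrt(dx ** 2 + dy ** 2).clamp(min=1e-8)
    c = torch.atan(rho)
    lat0, lon0 = lat0.unsqueeze(-1), lon0.unsqueeze(-1)                      # (h, w, 1)
    lat_s = torch.asin(torch.cos(c) * torch.sin(lat0)
                       + dy * torch.sin(c) * torch.cos(lat0) / rho)
    lon_s = lon0 + torch.atan2(dx * torch.sin(c),
                               rho * torch.cos(lat0) * torch.cos(c)
                               - dy * torch.sin(lat0) * torch.sin(c))
    lon_s = torch.remainder(lon_s + torch.pi, 2 * torch.pi) - torch.pi       # wrap the seam

    # Normalise to [-1, 1] for grid_sample (x = longitude, y = latitude).
    grid = torch.stack([lon_s / torch.pi, -2 * lat_s / torch.pi], dim=-1)
    return grid.reshape(1, h, w * k * k, 2)

def spherical_feature_transform(feat, k=3):
    """Re-sample each pixel's neighborhood into a tangent-plane layout.

    feat: (b, c, h, w) equirectangular feature map
    returns: (b, c, h, w, k*k) per-pixel undistorted neighborhood taps
    """
    b, c, h, w = feat.shape
    grid = gnomonic_neighborhood_grid(h, w, k, feat.device).expand(b, -1, -1, -1)
    taps = F.grid_sample(feat, grid, align_corners=False)  # (b, c, h, w*k*k)
    return taps.reshape(b, c, h, w, k * k)
```

A subsequent layer can then treat the k*k tap dimension as an undistorted kernel support, e.g. by folding the taps into channels and applying a 1x1 convolution, so that the same learned weights apply near the poles and at the equator.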
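Similarly, the D-CSPN refinement can be understood as anisotropic diffusion driven by learned affinities. Below is a minimal sketch of one plain CSPN propagation step, assuming the decoder predicts a k x k affinity map per pixel; the deformable part of the paper's D-CSPN (learning where the neighbors are sampled) is noted in the comments but not implemented here.

```python
import torch
import torch.nn.functional as F

def cspn_step(depth, affinity, k=3):
    """One plain CSPN propagation step: each pixel's depth becomes an
    affinity-weighted mix of its k x k neighborhood.

    depth:    (b, 1, h, w) current depth estimate
    affinity: (b, k*k, h, w) per-pixel kernel weights from the decoder
    (The paper's D-CSPN additionally learns *where* the neighbors are
    sampled via deformable offsets; this sketch uses a fixed grid.)
    """
    b, _, h, w = depth.shape
    # Normalise to unit L1 norm per pixel so the propagation stays a contraction.
    affinity = affinity / affinity.abs().sum(dim=1, keepdim=True).clamp(min=1e-8)
    # Gather every pixel's k x k neighborhood: (b, k*k, h, w).
    patches = F.unfold(depth, k, padding=k // 2).reshape(b, k * k, h, w)
    return (affinity * patches).sum(dim=1, keepdim=True)
```

In CSPN-style modules this step is iterated a handful of times (e.g. 8) over the network's initial depth estimate, which lets structure from the reference image propagate into the extended depth region.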
