3.8 Proceedings Paper

Self-supervised Visual-LiDAR Odometry with Flip Consistency

Publisher

IEEE
DOI: 10.1109/WACV48630.2021.00389

Keywords

-

Funding

  1. Natural Science Foundation of Zhejiang Province, China [LY17F010007, LY18F010004]

Summary

The paper proposes a self-supervised visual-lidar odometry (Self-VLO) framework that incorporates sparse but accurate depth measurements from lidars into visual methods for more precise ego-motion estimation. Experiments on the KITTI odometry benchmark show that the approach outperforms other self-supervised visual or lidar odometries and even surpasses fully supervised visual odometries.

Abstract

Most learning-based methods estimate ego-motion using visual sensors, which suffer under dramatic lighting variations and in textureless scenes. In this paper, we incorporate sparse but accurate depth measurements obtained from lidars to overcome this limitation of visual methods. To this end, we design a self-supervised visual-lidar odometry (Self-VLO) framework. It takes both monocular images and sparse depth maps projected from 3D lidar points as input, and produces pose and depth estimates in an end-to-end learning manner without using any ground-truth labels. To fuse the two modalities effectively, we design a two-pathway encoder that extracts features from visual and depth images and fuses the encoded features with those in the decoders at multiple scales via our fusion module. We also adopt a siamese architecture and design an adaptively weighted flip consistency loss to facilitate the self-supervised learning of our VLO. Experiments on the KITTI odometry benchmark show that the proposed approach outperforms all self-supervised visual or lidar odometries. It also performs better than fully supervised VOs, demonstrating the power of fusion.
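The abstract names two concrete mechanisms: sparse depth maps projected from 3D lidar points that are fed alongside the monocular images, and an adaptively weighted flip consistency loss applied across a siamese pair of original and horizontally flipped inputs. The two sketches below illustrate how such components are commonly implemented; they are reconstructions under stated assumptions, not the authors' released code, and the function names and the exact weighting scheme are hypothetical.

First, a minimal NumPy sketch of projecting lidar points (already transformed into the camera frame) into a sparse depth image of the kind the input pipeline describes:

```python
import numpy as np

def project_lidar_to_sparse_depth(points_cam, K, height, width):
    """Project lidar points (already in the camera frame) to a sparse depth map.

    points_cam: (N, 3) array in camera coordinates (x right, y down, z forward).
    K:          (3, 3) camera intrinsic matrix.
    Pixels with no lidar return stay 0, giving the sparse depth input.
    """
    depth = np.zeros((height, width), dtype=np.float32)

    # Keep only points in front of the camera.
    pts = points_cam[points_cam[:, 2] > 0]

    # Perspective projection to pixel coordinates.
    uvw = (K @ pts.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    z = pts[:, 2]

    # Discard projections that fall outside the image.
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[valid], v[valid], z[valid]

    # If several points hit the same pixel, keep the nearest one:
    # write far points first so nearer points overwrite them.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```

Second, a minimal PyTorch sketch of a flip consistency term: the network predicts depth for the original and the horizontally flipped image, the flipped prediction is flipped back, and the per-pixel discrepancy is penalized with an adaptive weight. The exponential down-weighting used here is an assumption for illustration; the paper defines its own adaptive weighting.

```python
import torch

def flip_consistency_loss(depth_orig, depth_flipped_input, alpha=1.0):
    """Hypothetical flip-consistency term for a siamese depth network.

    depth_orig:          (B, 1, H, W) depth predicted from the original image.
    depth_flipped_input: (B, 1, H, W) depth predicted from the horizontally
                         flipped image.
    """
    # Flip the second prediction back so both maps are pixel-aligned.
    depth_back = torch.flip(depth_flipped_input, dims=[3])

    # Per-pixel disagreement between the two branches.
    diff = torch.abs(depth_orig - depth_back)

    # Adaptive weight: trust pixels where the branches roughly agree and
    # down-weight large disagreements (e.g. occlusions). Assumed form only.
    with torch.no_grad():
        weight = torch.exp(-alpha * diff)

    return (weight * diff).mean()
```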

