Proceedings Paper

Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation

Publisher

IEEE
DOI: 10.1109/ICRA.2019.8793479


Funding

  1. Hong Kong RGC [T22-603/15N]
  2. Hong Kong ITC [PSKL12EG02]
  3. Google Cloud Platform

Abstract

Accurate relative pose estimation is one of the key components of visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, self-supervised learning frameworks that jointly optimize the relative pose and the target image depth have attracted the attention of the community. Previous works rely on the photometric error computed from depths and poses between adjacent frames, which contains large systematic errors in realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing a matching loss constrained by epipolar geometry into a self-supervised framework. Evaluated on the KITTI dataset, our method outperforms state-of-the-art unsupervised ego-motion estimation methods by a large margin.
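The epipolar constraint that the abstract refers to can be sketched in a few lines. A minimal illustration (not the paper's actual loss, which may use a different residual or weighting): given a predicted relative pose (R, t) and matched pixel pairs between two frames, the fundamental matrix F = K^-T [t]_x R K^-1 should satisfy x2^T F x1 = 0 for every correct match, so the magnitude of that product can serve as a geometric penalty. The function names here are hypothetical.

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residuals(x1, x2, R, t, K):
    """Algebraic epipolar residuals x2^T F x1 for matched pixels.

    x1, x2 : (N, 2) matched pixel coordinates in frames 1 and 2.
    R, t   : relative pose mapping frame-1 points to frame 2 (X2 = R X1 + t).
    K      : (3, 3) camera intrinsics.
    Returns an (N,) array; zero for matches consistent with (R, t).
    """
    K_inv = np.linalg.inv(K)
    F = K_inv.T @ skew(t) @ R @ K_inv          # fundamental matrix
    h1 = np.hstack([x1, np.ones((len(x1), 1))])  # homogeneous pixels, frame 1
    h2 = np.hstack([x2, np.ones((len(x2), 1))])  # homogeneous pixels, frame 2
    return np.einsum('ni,ij,nj->n', h2, F, h1)
```

For a pure x-translation (K = I, R = I, t = [1, 0, 0]) and a 3D point at depth 2 on the optical axis, the projections (0, 0) and (0.5, 0) give a zero residual, while a match inconsistent with the motion gives a nonzero one; a matching loss would sum a robust norm of these residuals over all correspondences.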

