Article

Don't Forget The Past: Recurrent Depth Estimation from Monocular Video

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 5, Issue 4, Pages 6813-6820

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2020.3017478

Keywords

Deep learning for visual perception; RGBD perception; sensor fusion; novel deep learning methods; autonomous vehicle navigation

Funding

  1. Toyota Motor Europe via the research project TRACE Zurich

Abstract

Autonomous cars need continuously updated depth information. Thus far, depth is mostly estimated independently for a single frame at a time, even if the method starts from video input. Our method produces a time series of depth maps, which makes it an ideal candidate for online learning approaches. In particular, we put three different types of depth estimation (supervised depth prediction, self-supervised depth prediction, and self-supervised depth completion) into a common framework. We integrate the corresponding networks with a ConvLSTM such that the spatiotemporal structures of depth across frames can be exploited to yield a more accurate depth estimation. Our method is flexible. It can be applied to monocular videos only or be combined with different types of sparse depth patterns. We carefully study the architecture of the recurrent network and its training strategy. We are the first to successfully exploit recurrent networks for real-time self-supervised monocular depth estimation and completion. Extensive experiments show that our recurrent method outperforms its image-based counterpart consistently and significantly in both self-supervised scenarios. It also outperforms previous depth estimation methods of the three popular groups. Please refer to our webpage(1) for details.
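
To make the architectural idea concrete, below is a minimal sketch of how a per-frame depth network can be made recurrent by inserting a ConvLSTM between the encoder and the depth head, so hidden state is carried from frame to frame. This is not the authors' released implementation; the class names, channel sizes, and the exact placement of the ConvLSTM are illustrative assumptions.

    # Minimal sketch (PyTorch): ConvLSTM state carried across video frames.
    # All names and shapes here are illustrative assumptions, not the paper's code.
    import torch
    import torch.nn as nn


    class ConvLSTMCell(nn.Module):
        """Single ConvLSTM cell: convolutional gates over spatial feature maps."""

        def __init__(self, in_ch, hid_ch, kernel_size=3):
            super().__init__()
            # One convolution produces all four gates (input, forget, output, cell).
            self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                                   kernel_size, padding=kernel_size // 2)
            self.hid_ch = hid_ch

        def forward(self, x, state):
            h, c = state
            i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
            i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
            c = f * c + i * torch.tanh(g)
            h = o * torch.tanh(c)
            return h, (h, c)

        def init_state(self, batch, height, width, device):
            zeros = torch.zeros(batch, self.hid_ch, height, width, device=device)
            return (zeros, zeros.clone())


    class RecurrentDepthHead(nn.Module):
        """Wraps per-frame encoder features with a ConvLSTM before predicting depth,
        so information from previous frames is exploited for the current estimate."""

        def __init__(self, feat_ch=256, hid_ch=256):
            super().__init__()
            self.convlstm = ConvLSTMCell(feat_ch, hid_ch)
            self.to_depth = nn.Sequential(
                nn.Conv2d(hid_ch, 1, 3, padding=1),
                nn.Sigmoid(),  # normalized inverse depth / disparity
            )

        def forward(self, features, state=None):
            # features: (B, feat_ch, H, W) encoder output for the current frame
            if state is None:
                b, _, h, w = features.shape
                state = self.convlstm.init_state(b, h, w, features.device)
            hidden, state = self.convlstm(features, state)
            return self.to_depth(hidden), state


    if __name__ == "__main__":
        head = RecurrentDepthHead(feat_ch=256, hid_ch=256)
        state = None
        for _ in range(5):                      # iterate over a short clip, frame by frame
            feat = torch.randn(1, 256, 24, 80)  # placeholder encoder features
            depth, state = head(feat, state)    # recurrent state carried to the next frame
        print(depth.shape)                      # torch.Size([1, 1, 24, 80])

The same recurrent head could, under these assumptions, be trained with a supervised depth loss, a self-supervised photometric loss, or a depth-completion setup that additionally feeds sparse depth into the encoder, which mirrors the common framework described in the abstract.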
