Proceedings Paper

A Benchmark and a Baseline for Robust Multi-view Depth Estimation

Journal

2022 International Conference on 3D Vision (3DV)

Publisher

IEEE
DOI: 10.1109/3DV57658.2022.00074

Keywords

-

Funding

  1. German Federal Ministry for Economic Affairs and Climate Action within the project KI Delta Learning [19A19013N]
  2. Deutsche Forschungsgemeinschaft (DFG) [417962828]

Abstract

Recent deep learning approaches for multi-view depth estimation are employed either in a depth-from-video or a multi-view stereo setting. Despite different settings, these approaches are technically similar: they correlate multiple source views with a keyview to estimate a depth map for the keyview. In this work, we introduce the Robust Multi-view Depth Benchmark that is built upon a set of public datasets and allows evaluation in both settings on data from different domains. We evaluate recent approaches and find imbalanced performance across domains. Further, we consider a third setting where camera poses are available and the objective is to estimate the corresponding depth maps with their correct scale. We show that recent approaches do not generalize across datasets in this setting, because their cost volume output runs out of distribution. To resolve this, we present the Robust MVD Baseline model for multi-view depth estimation, which is built upon existing components but employs a novel scale augmentation procedure. It can be applied for robust multi-view depth estimation, independent of the target data. We provide code for the proposed benchmark and baseline model at https://github.com/lmb-freiburg/robustmvd.
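
The abstract's claim that depth-from-video and multi-view stereo approaches share a technical core can be made concrete with a small sketch: source-view features are warped to the keyview at a set of depth hypotheses and correlated per pixel into a cost volume. The snippet below is a minimal PyTorch illustration of this general plane-sweep correlation for a single source view, not the authors' implementation; the function name `plane_sweep_cost_volume` and all shape conventions are assumptions, and with several source views the resulting cost volumes would simply be averaged.

```python
import torch
import torch.nn.functional as F

def plane_sweep_cost_volume(key_feat, src_feat, K, R, t, depths):
    """Correlate keyview features (B, C, H, W) with source-view features
    warped to the keyview at each depth hypothesis. K: intrinsics (B, 3, 3);
    R, t: pose mapping keyview to source coordinates, (B, 3, 3) and (B, 3);
    depths: 1D tensor of D hypotheses. Returns a (B, D, H, W) cost volume."""
    B, C, H, W = key_feat.shape
    # Homogeneous pixel grid of the keyview, shape (3, H*W).
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)]).reshape(3, -1).to(key_feat)
    rays = torch.linalg.inv(K) @ pix              # viewing rays, (B, 3, H*W)
    slices = []
    for d in depths:
        X = rays * d                              # backproject at hypothesis d
        Xs = R @ X + t.unsqueeze(-1)              # keyview -> source view
        xs = K @ Xs                               # project into source image
        xy = xs[:, :2] / xs[:, 2:].clamp(min=1e-6)
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        gx = 2 * xy[:, 0] / (W - 1) - 1
        gy = 2 * xy[:, 1] / (H - 1) - 1
        grid = torch.stack([gx, gy], dim=-1).reshape(B, H, W, 2)
        warped = F.grid_sample(src_feat, grid, align_corners=True)
        # Per-pixel dot-product correlation gives one cost slice.
        slices.append((key_feat * warped).mean(dim=1))
    return torch.stack(slices, dim=1)
```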

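The scale augmentation mentioned in the abstract addresses the out-of-distribution cost volume problem: a network trained on one dataset's absolute scene scale produces cost volumes it cannot interpret on a differently scaled test scene. The sketch below illustrates the general idea under stated assumptions; the function name `scale_augment`, the log-uniform sampling, and the scale range are hypothetical, not the paper's exact procedure.

```python
import math
import torch

def scale_augment(depth_gt, t_src, scale_range=(0.1, 10.0)):
    """Apply one random scale factor consistently to a ground-truth depth
    map (H, W) and the source-view translations (V, 3). Scaling both
    preserves the multi-view geometry while varying the absolute scene
    scale, so training covers the scales encountered across datasets."""
    lo, hi = scale_range
    # Log-uniform sampling spreads factors evenly across orders of magnitude.
    s = math.exp(torch.empty(1).uniform_(math.log(lo), math.log(hi)).item())
    return depth_gt * s, t_src * s
```

A training loop would apply this per sample before computing the loss, so the network learns to recover absolute scale from the (rescaled) camera translations rather than memorizing a dataset-specific depth range.
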
Authors

Philipp Schröppel, Jan Bechtold, Artemij Amiranashvili, Thomas Brox
