Proceedings Paper

De-rendering 3D Objects in the Wild

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.01794

Keywords

-

Funding

  1. Meta Research
  2. Innovate UK on behalf of UK Research and Innovation (UKRI) [71653]
  3. Department of Engineering Science at the University of Oxford


Abstract
With increasing focus on augmented and virtual reality (XR) applications comes the demand for algorithms that can lift objects from images into representations that are suitable for a wide variety of related 3D tasks. Large-scale deployment of XR devices and applications means that we cannot solely rely on supervised learning, as collecting and annotating data for the unlimited variety of objects in the real world is infeasible. We present a weakly supervised method that is able to decompose a single image of an object into shape (depth and normals), material (albedo, reflectivity and shininess) and global lighting parameters. For training, the method only relies on a rough initial shape estimate of the training objects to bootstrap the learning process. This shape supervision can come for example from a pretrained depth network or more generically from a traditional structure-from-motion pipeline. In our experiments, we show that the method can successfully de-render 2D images into a decomposed 3D representation and generalizes to unseen object categories. Since in-the-wild evaluation is difficult due to the lack of ground truth data, we also introduce a photo-realistic synthetic test set that allows for quantitative evaluation.
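The decomposition described in the abstract can be illustrated through its inverse, re-rendering: given per-pixel surface normals, albedo, and global lighting, a shading model recomposes the image, which is what makes the decomposition learnable from images alone. The sketch below is a minimal NumPy illustration using a standard Phong model (diffuse plus specular), not the paper's actual renderer; all function and parameter names here are assumptions for illustration.

```python
import numpy as np

def phong_rerender(normals, albedo, light_dir, view_dir,
                   ambient=0.1, k_s=0.5, shininess=32.0):
    """Recompose an image from a de-rendered decomposition using a
    simple Phong shading model (illustrative only, not the paper's renderer).

    normals:   (H, W, 3) unit surface normals
    albedo:    (H, W, 3) diffuse reflectance in [0, 1]
    light_dir: (3,) unit vector pointing toward the light
    view_dir:  (3,) unit vector pointing toward the camera
    """
    # Diffuse (Lambertian) term: n . l, clamped at zero for back-facing pixels
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)           # (H, W)
    diffuse = albedo * n_dot_l[..., None]

    # Specular term: reflect the light direction about the normal,
    # then compare the reflection with the view direction
    r = 2.0 * n_dot_l[..., None] * normals - light_dir          # (H, W, 3)
    r_dot_v = np.clip(r @ view_dir, 0.0, None)
    specular = k_s * (r_dot_v ** shininess)[..., None]

    return np.clip(ambient * albedo + diffuse + specular, 0.0, 1.0)

# Toy example: a flat plane facing the camera, lit head-on
H, W = 4, 4
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0
albedo = np.full((H, W, 3), 0.5)
img = phong_rerender(normals, albedo,
                     light_dir=np.array([0.0, 0.0, 1.0]),
                     view_dir=np.array([0.0, 0.0, 1.0]))
```

In a weakly supervised pipeline of this kind, a differentiable re-rendering step like the one above lets a reconstruction loss against the input photograph supervise the predicted shape, material, and lighting jointly.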
