3.8 Proceedings Paper

Differentiable Point-Based Radiance Fields for Efficient View Synthesis

Journal

Proceedings of SIGGRAPH Asia 2022
Volume -, Issue -, Pages -

Publisher

Association for Computing Machinery
DOI: 10.1145/3550469.3555413

Keywords

Neural Rendering; Image-based Rendering; Novel View Synthesis


The proposed differentiable rendering algorithm substantially improves memory and run-time efficiency over existing methods by using a learned point representation. A differentiable splat-based renderer trains the model to reproduce a set of input training images with given poses.

We propose a differentiable rendering algorithm for efficient novel view synthesis. By departing from volume-based representations in favor of a learned point representation, we improve on existing methods by more than an order of magnitude in memory and run-time, both in training and inference. The method begins with a uniformly sampled random point cloud and learns per-point position and view-dependent appearance, using a differentiable splat-based renderer to train the model to reproduce a set of input training images with the given poses. Our method is up to 300× faster than NeRF in both training and inference, with only a marginal sacrifice in quality, while using less than 10 MB of memory for a static scene. For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at a near-interactive rate, while maintaining high image quality and temporal coherence even without imposing any temporal-coherency regularizers.
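To make the splat-and-optimize idea in the abstract concrete, below is a minimal sketch in PyTorch: a randomly initialized point cloud with per-point colors is rendered through a simple differentiable Gaussian-splatting function and fit to target images by gradient descent. This is not the authors' implementation; the pinhole camera model, the isotropic Gaussian splat, the depth weighting, and all names and hyperparameters (render, focal, res, sigma, the learning rate) are illustrative assumptions, the target images are random stand-ins, and view-dependent appearance is omitted for brevity.

```python
# Minimal sketch (not the paper's implementation): fit point positions and colors
# to target views through a differentiable Gaussian-splat renderer.
import torch

def render(points, colors, cam_pos, focal=64.0, res=32, sigma=1.0):
    """Project points with a simple pinhole camera looking down -z from cam_pos,
    then splat each point onto the image as an isotropic Gaussian."""
    rel = points - cam_pos                                 # (N, 3) camera-space positions
    depth = (-rel[:, 2]).clamp(min=1e-3)                   # keep points in front of the camera
    uv = focal * rel[:, :2] / depth[:, None] + res / 2     # (N, 2) pixel coordinates

    ys, xs = torch.meshgrid(torch.arange(res), torch.arange(res), indexing="ij")
    pix = torch.stack([xs, ys], dim=-1).float().reshape(-1, 2)   # (res*res, 2) pixel grid

    d2 = ((pix[None, :, :] - uv[:, None, :]) ** 2).sum(-1)       # (N, res*res) squared distances
    w = torch.exp(-d2 / (2 * sigma ** 2)) / depth[:, None]       # depth-weighted Gaussian splats
    img = (w[..., None] * colors[:, None, :]).sum(0) / (w.sum(0)[..., None] + 1e-6)
    return img.reshape(res, res, 3)

# Start from a uniformly sampled random point cloud and optimize it to match the views.
torch.manual_seed(0)
points = torch.rand(256, 3, requires_grad=True)            # learned per-point positions
colors = torch.rand(256, 3, requires_grad=True)            # learned per-point colors (view-independent here)
cams = [torch.tensor([0.5, 0.5, 3.0]), torch.tensor([0.7, 0.5, 3.0])]   # assumed camera centers
targets = [torch.rand(32, 32, 3) for _ in cams]             # stand-ins for posed training images

opt = torch.optim.Adam([points, colors], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = sum(((render(points, colors, c) - t) ** 2).mean() for c, t in zip(cams, targets))
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(step, loss.item())
```

The sketch splats every point against every pixel, which is quadratic and only workable at toy scale; per the abstract, the actual method additionally learns view-dependent appearance and is engineered to be far faster and smaller than volume-based approaches.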

