3.8 Proceedings Paper

Block-NeRF: Scalable Large Scene Neural View Synthesis

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.00807

Keywords

-

Abstract

We present Block-NeRF, a variant of Neural Radiance Fields that can represent large-scale environments. Specifically, we demonstrate that when scaling NeRF to render city-scale scenes spanning multiple blocks, it is vital to decompose the scene into individually trained NeRFs. This decomposition decouples rendering time from scene size, enables rendering to scale to arbitrarily large environments, and allows per-block updates of the environment. We adopt several architectural changes to make NeRF robust to data captured over months under different environmental conditions. We add appearance embeddings, learned pose refinement, and controllable exposure to each individual NeRF, and introduce a procedure for aligning appearance between adjacent NeRFs so that they can be seamlessly combined. We build a grid of Block-NeRFs from 2.8 million images to create the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco.
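
To make the decomposition described above concrete, here is a minimal sketch (not the authors' code; the block list, the per-block coverage radii, and the render_block helper are all assumptions) of how renders from individually trained Block-NeRFs can be combined: every block whose coverage radius contains the target camera is rendered, and the results are blended with inverse-distance weights, the interpolation scheme the paper uses for seamless transitions between neighboring blocks.

    import numpy as np

    def composite_blocks(camera_pos, blocks, render_block, p=4):
        # blocks: list of (origin, radius) pairs, one per trained Block-NeRF.
        # render_block(i, camera_pos) -> H x W x 3 image rendered by block i.
        # p is a smoothing exponent for the inverse-distance weights (assumed value).
        images, weights = [], []
        for i, (origin, radius) in enumerate(blocks):
            dist = np.linalg.norm(np.asarray(camera_pos) - np.asarray(origin))
            if dist > radius:  # camera lies outside this block's coverage
                continue
            images.append(render_block(i, camera_pos))
            weights.append(1.0 / max(dist, 1e-6) ** p)  # closer blocks dominate
        if not images:
            raise ValueError("no Block-NeRF covers this camera position")
        w = np.asarray(weights) / np.sum(weights)
        return sum(wi * img for wi, img in zip(w, images))

A second sketch (again illustrative only, with made-up layer sizes) shows the conditioning the abstract mentions: the color branch of each block receives a per-image appearance embedding and the camera exposure in addition to position features and view direction. Aligning appearance between adjacent blocks then reduces to optimizing one block's appearance code so that its render matches its neighbor's in their region of overlap.

    import torch
    import torch.nn as nn

    class ConditionedColorHead(nn.Module):
        # Color branch of one Block-NeRF (layer sizes are assumptions, not the paper's).
        def __init__(self, feat_dim=256, app_dim=32, num_images=10000):
            super().__init__()
            self.appearance = nn.Embedding(num_images, app_dim)  # one code per training image
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim + 3 + app_dim + 1, 128), nn.ReLU(),
                nn.Linear(128, 3), nn.Sigmoid(),  # RGB in [0, 1]
            )

        def forward(self, features, view_dir, image_ids, exposure):
            # features: (..., feat_dim) from the density network; view_dir: (..., 3);
            # image_ids: (...,) long tensor; exposure: (..., 1) scalar per ray.
            app = self.appearance(image_ids)
            x = torch.cat([features, view_dir, app, exposure], dim=-1)
            return self.mlp(x)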

