4.6 Article

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Journal

COMMUNICATIONS OF THE ACM
Volume 65, Issue 1, Pages 99-106

Publisher

ASSOC COMPUTING MACHINERY

Keywords

-

Funding

  1. ONR [N000141712687, N000141912293, N000142012529]
  2. NSF
  3. Hertz Foundation Fellowship
  4. NSF Graduate Fellowship
  5. U.S. Department of Defense (DOD) [N000142012529, N000141912293]

Abstract

We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully connected (nonconvolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (theta, phi)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis.
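
To make the volume-rendering step in the abstract concrete, the following is a minimal sketch, not the authors' implementation: `radiance_field` is a hypothetical stand-in for the trained fully connected network, and the ray bounds and sample count are arbitrary placeholders. It queries the field at points along one camera ray and composites the predicted colors and densities with the classic quadrature C(r) ≈ Σ_i T_i (1 − exp(−σ_i δ_i)) c_i.

```python
import numpy as np

def radiance_field(xyz, view_dir):
    # Hypothetical stand-in for the trained MLP: maps 3D positions plus a
    # viewing direction to per-point RGB and volume density. A real model
    # would evaluate the network here; this returns a constant field.
    rgb = np.full((xyz.shape[0], 3), 0.5)   # view-dependent emitted color
    sigma = np.full(xyz.shape[0], 0.1)      # volume density
    return rgb, sigma

def render_ray(origin, direction, near=2.0, far=6.0, n_samples=64):
    # Sample depths t_i along the ray and query the field at those points.
    t = np.linspace(near, far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    rgb, sigma = radiance_field(pts, direction)

    # Classic volume-rendering quadrature:
    #   alpha_i = 1 - exp(-sigma_i * delta_i)
    #   T_i     = prod_{j < i} (1 - alpha_j)
    #   C(r)    = sum_i T_i * alpha_i * c_i
    delta = np.diff(t, append=1e10)          # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)   # expected ray color

if __name__ == "__main__":
    color = render_ray(np.zeros(3), np.array([0.0, 0.0, -1.0]))
    print(color)   # composited RGB for this ray
```

Every operation in the compositing step above is differentiable, so a photometric loss on rendered pixels can be backpropagated to the network parameters, which is the property the abstract relies on to optimize the representation from posed images alone.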


Reviews

Overall rating

4.6 (insufficient ratings)
