Proceedings Paper

Neural Radiance Flow for 4D View Synthesis and Video Processing

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.01406

Keywords

-

Funding

  1. NSF graduate fellowship
  2. ONR MURI [N00014-18-1-2846]
  3. IBM Thomas J. Watson Research Center [CW3031624]
  4. Samsung Global Research Outreach (GRO) program
  5. Amazon
  6. Autodesk
  7. Qualcomm

Abstract

We present a method, Neural Radiance Flow (NeRFlow), to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images. Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene. By enforcing consistency across different modalities, our representation enables multi-view rendering in diverse dynamic scenes, including water pouring, robotic interaction, and real images, outperforming state-of-the-art methods for spatial-temporal view synthesis. Our approach works even when provided with only a single monocular real video. We further demonstrate that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and de-noising without any additional supervision.
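To make the abstract's core idea concrete, below is a minimal illustrative sketch, not the authors' released implementation: a single MLP queried at 4D coordinates (x, y, z, t), with one head producing radiance and density and another producing 3D scene flow, plus a hypothetical consistency penalty that asks the field to agree with itself after advecting points along the predicted flow. All class and function names, layer sizes, and the exact loss form here are assumptions for illustration.

```python
# Illustrative sketch of a 4D spatio-temporal implicit field in the spirit
# of NeRFlow. NOT the authors' code; layer sizes and the loss are assumed.
import torch
import torch.nn as nn


class SpatioTemporalField(nn.Module):
    """Toy 4D implicit field: (x, y, z, t) -> radiance, density, scene flow."""

    def __init__(self, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.radiance_head = nn.Linear(hidden, 4)  # RGB color + density sigma
        self.flow_head = nn.Linear(hidden, 3)      # 3D scene flow dx/dt

    def forward(self, xyzt):
        h = self.trunk(xyzt)
        out = self.radiance_head(h)
        rgb = torch.sigmoid(out[..., :3])   # colors constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative volume density
        return rgb, sigma, self.flow_head(h)


def flow_consistency_loss(model, xyzt, dt=0.01):
    """Hypothetical penalty: appearance and density should persist along
    the predicted flow over a small time step dt."""
    rgb, sigma, flow = model(xyzt)
    # Advect the spatial coordinates by the predicted flow, step time by dt.
    advected = torch.cat(
        [xyzt[..., :3] + flow * dt, xyzt[..., 3:] + dt], dim=-1
    )
    rgb2, sigma2, _ = model(advected)
    return ((rgb - rgb2) ** 2).mean() + ((sigma - sigma2) ** 2).mean()


if __name__ == "__main__":
    model = SpatioTemporalField()
    points = torch.rand(1024, 4)  # random (x, y, z, t) samples in [0, 1]^4
    loss = flow_consistency_loss(model, points)
    loss.backward()               # gradients reach both heads via the loss
    print(float(loss))
```

In a full pipeline, the radiance and density outputs would be composited along camera rays by NeRF-style volume rendering against the input images; the flow head is what couples the representation across time and enables the cross-modality consistency the abstract describes.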

Reviews

Primary rating

3.8 (insufficient ratings)

Secondary ratings

Novelty: -
Significance: -
Scientific rigor: -

Recommendations

No data available