Proceedings Paper

GRAM: Generative Radiance Manifolds for 3D-Aware Image Generation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.01041


Abstract

3D-aware image generative modeling aims to generate 3D-consistent images with explicitly controllable camera poses. Recent works have shown promising results by training neural radiance field (NeRF) generators on unstructured 2D images, but they still cannot generate highly realistic images with fine details. A critical reason is that the high memory and computation cost of volumetric representation learning greatly restricts the number of point samples for radiance integration during training. Deficient sampling not only limits the expressive power of the generator to handle fine details but also impedes effective GAN training due to the noise caused by unstable Monte Carlo sampling. We propose a novel approach that regulates point sampling and radiance field learning on 2D manifolds, embodied as a set of learned implicit surfaces in the 3D volume. For each viewing ray, we calculate ray-surface intersections and accumulate their radiance generated by the network. By training and rendering such radiance manifolds, our generator can produce high-quality images with realistic fine details and strong visual 3D consistency.
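The abstract's core rendering idea can be sketched in code. The toy example below is not the authors' implementation: `scalar_field` and `radiance` are hypothetical stand-ins for GRAM's learned manifold predictor and radiance generator, and the level values, ray bounds, and alpha are illustrative. It shows the mechanism the abstract describes: instead of densely sampling points along each ray as in volumetric NeRF rendering, find the ray's intersections with a small set of implicit surfaces (level sets of a scalar field) and alpha-composite the radiance only at those intersections.

```python
import numpy as np

def scalar_field(x):
    # Hypothetical stand-in for the learned manifold predictor:
    # level sets of this field are concentric spheres around the origin.
    return np.linalg.norm(x)

def radiance(x):
    # Hypothetical stand-in for the radiance generator: returns an
    # RGB color in (0, 1) and an occupancy alpha for a 3D point.
    rgb = 0.5 + 0.5 * np.tanh(x)
    alpha = 0.3
    return rgb, alpha

def ray_level_intersections(o, d, level, t0=0.0, t1=6.0, n_coarse=64, iters=32):
    """Find all t in [t0, t1] with scalar_field(o + t*d) == level:
    coarse sampling brackets sign changes, bisection refines each one."""
    ts = np.linspace(t0, t1, n_coarse)
    fs = np.array([scalar_field(o + t * d) - level for t in ts])
    hits = []
    for i in range(n_coarse - 1):
        if fs[i] == 0.0:
            hits.append(ts[i])
        elif fs[i] * fs[i + 1] < 0.0:
            a, b, fa = ts[i], ts[i + 1], fs[i]
            for _ in range(iters):
                m = 0.5 * (a + b)
                fm = scalar_field(o + m * d) - level
                if fa * fm <= 0.0:
                    b = m
                else:
                    a, fa = m, fm
            hits.append(0.5 * (a + b))
    return hits

def render_ray(o, d, levels):
    """Accumulate radiance only at ray-manifold intersections
    (front-to-back alpha compositing), not at dense volume samples."""
    hits = sorted(t for level in levels
                  for t in ray_level_intersections(o, d, level))
    color = np.zeros(3)
    transmittance = 1.0
    for t in hits:
        rgb, alpha = radiance(o + t * d)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

o = np.array([0.0, 0.0, -3.0])   # camera origin
d = np.array([0.0, 0.0, 1.0])    # unit viewing direction
print(render_ray(o, d, levels=[0.5, 1.0, 1.5]))
```

Because each ray contributes only a handful of intersection points (here at most two per level set), the per-ray sample count is small and deterministic, which is the abstract's argument for why manifold-based rendering avoids the noise of unstable Monte Carlo sampling in volumetric generators.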

