Proceedings Paper

GRF: Learning a General Radiance Field for 3D Representation and Rendering

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.01490

Funding

  1. HK PolyU (UGC) [P0034792]

Summary

The paper introduces a neural network that represents and renders 3D objects and scenes from 2D observations. Key to the approach is learning local features for each pixel and projecting them to 3D points, resulting in high-quality and realistic novel views.
Abstract

We present a simple yet powerful neural network that implicitly represents and renders 3D objects and scenes only from 2D observations. The network models 3D geometries as a general radiance field, which takes a set of 2D images with camera poses and intrinsics as input, constructs an internal representation for each point of the 3D space, and then renders the corresponding appearance and geometry of that point viewed from an arbitrary position. The key to our approach is to learn local features for each pixel in 2D images and to then project these features to 3D points, thus yielding general and rich point representations. We additionally integrate an attention mechanism to aggregate pixel features from multiple 2D views, such that visual occlusions are implicitly taken into account. Extensive experiments demonstrate that our method can generate high-quality and realistic novel views for novel objects, unseen categories and challenging real-world scenes.
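The pipeline the abstract describes (extract per-pixel 2D features, project a 3D query point into each view to gather those features, fuse them across views with attention, then decode appearance and geometry) can be pictured with a short sketch. The PyTorch code below is a minimal illustration under assumed names and shapes (GRFSketch, feat_dim, the single-linear attention scorer, and the small MLP decoder are all hypothetical choices), not the authors' released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRFSketch(nn.Module):
    """Hypothetical sketch of a GRF-style general radiance field.

    Projects a 3D query point into every input view, samples local
    per-pixel CNN features there, fuses them across views with
    attention (so occluded or uninformative views can be
    down-weighted), and decodes the point's color and density.
    """

    def __init__(self, feat_dim=64):
        super().__init__()
        self.attn_score = nn.Linear(feat_dim, 1)   # per-view attention logits
        self.decoder = nn.Sequential(              # fused feature + view dir -> (RGB, sigma)
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    @staticmethod
    def project(pts, K, w2c):
        """Project world-space points (N, 3) to pixel coordinates in one view."""
        ones = torch.ones_like(pts[:, :1])
        cam = (w2c @ torch.cat([pts, ones], -1).T).T[:, :3]  # world -> camera
        uvw = (K @ cam.T).T                                  # camera -> image plane
        return uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)       # perspective divide

    def forward(self, pts, view_dirs, feat_maps, Ks, w2cs):
        """pts, view_dirs: (N, 3); feat_maps: (V, C, H, W) per-view 2D features;
        Ks: (V, 3, 3) intrinsics; w2cs: (V, 4, 4) world-to-camera poses."""
        V, C, H, W = feat_maps.shape
        per_view = []
        for v in range(V):
            uv = self.project(pts, Ks[v], w2cs[v])           # (N, 2) pixel coords
            # Normalize pixel coords to [-1, 1] and bilinearly sample features.
            grid = torch.stack([uv[:, 0] / (W - 1),
                                uv[:, 1] / (H - 1)], -1) * 2 - 1
            f = F.grid_sample(feat_maps[v:v + 1], grid.view(1, -1, 1, 2),
                              align_corners=True)            # (1, C, N, 1)
            per_view.append(f.view(C, -1).T)                 # (N, C)
        feats = torch.stack(per_view, 1)                     # (N, V, C)
        # Attention over views: softmax weights aggregate the pixel features.
        w = torch.softmax(self.attn_score(feats), dim=1)     # (N, V, 1)
        fused = (w * feats).sum(1)                           # (N, C)
        out = self.decoder(torch.cat([fused, view_dirs], -1))
        return torch.sigmoid(out[:, :3]), F.relu(out[:, 3])  # rgb (N, 3), sigma (N,)
```

In a full system, the per-view feature maps would come from a 2D CNN encoder over the input images, and the predicted (rgb, sigma) samples along each camera ray would be composited with standard NeRF-style volume rendering to produce the final pixel colors.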
