Proceedings Paper

Learning View Selection for 3D Scenes

Publisher

IEEE Computer Society
DOI: 10.1109/CVPR46437.2021.01423


Funding

  1. NSF [IIS-1934932]


This paper introduces a novel approach to efficiently represent a 3D object/scene by learning a view prediction network and a trainable aggregation module, avoiding dense view sampling and converting the multi-view representation optimization problem into a continuous optimization problem. Experimental results show that this approach achieves similar or better solutions with a 10x speedup in running time compared to standard methods.

Abstract

Efficient 3D space sampling to represent an underlying 3D object/scene is essential for 3D vision, robotics, and beyond. A standard approach is to explicitly sample a dense collection of views and formulate it as a view selection problem, or, more generally, a set cover problem. In this paper, we introduce a novel approach that avoids dense view sampling. The key idea is to learn a view prediction network and a trainable aggregation module that takes the predicted views as input and outputs an approximation of their generic scores (e.g., surface coverage, viewing angle from surface normals). This methodology allows us to turn the set cover problem (or multi-view representation optimization) into a continuous optimization problem. We then explain how to effectively solve the induced optimization problem using continuation, i.e., aggregating a hierarchy of smoothed scoring modules. Experimental results show that our approach arrives at similar or better solutions with about a 10x speedup in running time compared with standard methods.
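To make the continuation idea concrete, below is a minimal sketch (not the authors' code; the paper does not specify a framework, so PyTorch is assumed here). It parameterizes a small set of views continuously, scores them with a smoothed, differentiable coverage surrogate, and anneals the smoothing temperature while optimizing by gradient descent. The exponential visibility kernel, the sphere point cloud, and all hyperparameters are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of continuous view optimization with continuation.
# Assumptions (not from the paper): toy exp(-distance) visibility kernel,
# surface samples on a unit sphere, softmax smoothing with annealed tau.
import torch

def soft_coverage(views, points, tau):
    """Smoothed coverage score: each surface point is 'covered' in proportion
    to a softmax over views of a distance-based visibility kernel."""
    # views: (K, 3) camera positions, points: (N, 3) surface samples
    d = torch.cdist(views, points)            # (K, N) view-to-point distances
    vis = torch.exp(-d)                       # toy visibility kernel (assumption)
    # As tau -> 0 the softmax over views approaches a hard max, i.e. the
    # discrete set-cover objective; larger tau gives a smoother landscape.
    weights = torch.softmax(vis / tau, dim=0)
    return (weights * vis).sum(dim=0).mean()  # mean soft coverage over points

# Toy scene: surface samples on a unit sphere (stand-in for a real 3D scene).
torch.manual_seed(0)
points = torch.nn.functional.normalize(torch.randn(512, 3), dim=1)

# K candidate views, initialized randomly and optimized continuously.
views = torch.nn.Parameter(torch.randn(8, 3))
opt = torch.optim.Adam([views], lr=0.05)

# Continuation: solve a hierarchy of progressively less-smoothed problems,
# warm-starting each level from the previous solution.
for tau in [1.0, 0.3, 0.1, 0.03]:
    for _ in range(200):
        opt.zero_grad()
        loss = -soft_coverage(views, points, tau)  # maximize coverage
        loss.backward()
        opt.step()
    print(f"tau={tau:.2f}  soft coverage={-loss.item():.4f}")
```

The schedule over tau mirrors the paper's "hierarchy of smoothed scoring modules": early, heavily smoothed levels give informative gradients everywhere, and later, sharper levels drive the solution toward the near-discrete view selection that the original set cover formulation asks for. In the actual method, the hand-written kernel above would be replaced by the learned aggregation module operating on views produced by the view prediction network.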

