Proceedings Paper

Enhancing 2D Representation via Adjacent Views for 3D Shape Retrieval

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/ICCV.2019.00383

Funding

  1. NSFC
  2. Beijing Municipal Natural Science Foundation [L182014]
  3. Open Project Program of State Key Laboratory of Virtual Reality Technology and Systems, Beihang University [VRLAB2019C05]
  4. Foundation of Science and Technology on Parallel and Distributed Processing Laboratory (PDL)
  5. Fundamental Research Funds for the Central Universities


Multi-view shape descriptors obtained from various 2D images are commonly adopted in 3D shape retrieval. One major challenge is that significant shape information is discarded during 2D view rendering through projection. In this paper, we propose a convolutional neural network based method, the Neighbor-Center Enhanced Network, which enhances each 2D view using its neighboring ones. By exploiting cross-view correlations, the Neighbor-Center Enhanced Network learns how adjacent views can be maximally incorporated into an enhanced 2D representation that effectively describes shapes. We observe that a very small number of enhanced 2D views, e.g., six, is already sufficient for panoramic shape description. Thus, by simply aggregating features from six enhanced 2D views, we arrive at a highly compact yet discriminative shape descriptor. The proposed shape descriptor significantly outperforms state-of-the-art 3D shape retrieval methods on the ModelNet and ShapeNet-Core55 benchmarks, and also exhibits robustness against object occlusion.
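The pipeline the abstract describes can be sketched in a few lines: encode each rendered view to a feature vector, enhance every view with its ring-adjacent neighbors, then pool across views into one compact descriptor. The sketch below is illustrative only; it uses a simple average of a view with its two neighbors as a stand-in for the learned neighbor-center fusion in the paper, and element-wise max pooling as the aggregation step. The function names and the 128-d feature size are assumptions, not the authors' API.

```python
import numpy as np

def enhance_views(view_feats):
    """Enhance each view feature with its two ring-adjacent neighbors.

    A plain average replaces the learned cross-view fusion of the
    Neighbor-Center Enhanced Network (illustrative stand-in only).
    """
    n = len(view_feats)
    enhanced = []
    for i in range(n):
        left = view_feats[(i - 1) % n]   # previous view on the camera ring
        right = view_feats[(i + 1) % n]  # next view on the camera ring
        enhanced.append((view_feats[i] + left + right) / 3.0)
    return np.stack(enhanced)

def shape_descriptor(view_feats):
    """Aggregate enhanced per-view features into one compact descriptor
    via element-wise max pooling across views."""
    return enhance_views(view_feats).max(axis=0)

# Six rendered views, each already encoded to a 128-d feature by some
# CNN backbone (random placeholders here).
feats = np.random.rand(6, 128).astype(np.float32)
desc = shape_descriptor(feats)
print(desc.shape)  # (128,)
```

Max pooling is order-invariant across views, so the descriptor does not depend on which view is labeled first; the neighbor averaging, by contrast, depends on the ring ordering, which is what lets adjacent views compensate for information lost in any single projection.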

