Proceedings Paper

Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views

Publisher

IEEE
DOI: 10.1109/CVPR.2016.648

Keywords

-

Funding

  1. Agence Nationale de la Recherche (ANR) [ANR-13-CORD-0003]
  2. Intel

Abstract

This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we outperform Aubry et al. [3] for chair detection on a subset of the Pascal VOC dataset.
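
The abstract notes that the real-to-rendered adaptation is learned by compositing rendered views of textured object models on natural images. As a rough, minimal sketch of that compositing step (the function name, array shapes, and random patch placement below are illustrative assumptions, not details taken from the paper), one could alpha-blend an RGBA rendering over a crop of a natural image:

```python
import numpy as np

def composite_rendered_view(background_rgb, render_rgba):
    """Alpha-composite a rendered CAD view (RGBA) over a natural-image patch (RGB).

    Both arrays are float32 in [0, 1] with matching height and width.
    Hypothetical helper for illustration; not the paper's code.
    """
    alpha = render_rgba[..., 3:4]       # per-pixel opacity of the rendered object
    render_rgb = render_rgba[..., :3]
    # Standard "over" operator: rendered object in front, natural image behind.
    return alpha * render_rgb + (1.0 - alpha) * background_rgb

# Example: place a 64x64 rendered crop at a random location in a 256x256
# natural image to produce one synthetic training composite.
rng = np.random.default_rng(0)
background = rng.random((256, 256, 3), dtype=np.float32)  # stand-in for a natural image
render = rng.random((64, 64, 4), dtype=np.float32)        # stand-in for an RGBA rendering

y, x = rng.integers(0, 256 - 64, size=2)
background[y:y + 64, x:x + 64] = composite_rendered_view(
    background[y:y + 64, x:x + 64], render
)
```

In the pipeline the abstract describes, such composites would serve as training images whose object regions come from rendered views, so the detection CNN can learn features that transfer between the real and rendered domains; the sketch above only illustrates image-level compositing, not the network or its training loss.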
