Article

A comparison of 3D shape retrieval methods based on a large-scale benchmark supporting multimodal queries

Journal

COMPUTER VISION AND IMAGE UNDERSTANDING
Volume 131, Pages 1-27

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.cviu.2014.10.006

Keywords

3D shape retrieval; Large-scale benchmark; Multimodal queries; Unified; Performance evaluation; Query-by-Model; Query-by-Sketch; SHREC

Funding

  1. Texas State University Research Enhancement Program (REP); Army Research Office [W911NF-12-1-0057]
  2. NSF CRI, Directorate for Computer & Information Science & Engineering, Division of Computer and Network Systems [1305302]
  3. Fraunhofer IDM@NTU
  4. National Research Foundation (NRF)
  5. Grants-in-Aid for Scientific Research (KAKENHI) [26330133, 26280038, 26120517, 25880013]

Abstract

Large-scale 3D shape retrieval has become an important research direction in content-based 3D shape retrieval. To promote this area, we organized two Shape Retrieval Contest (SHREC) 2014 tracks: one on large-scale comprehensive 3D model retrieval and one on sketch-based 3D model retrieval. Both tracks were based on a unified large-scale benchmark that supports multimodal queries (3D models and sketches). The benchmark contains 13,680 sketches and 8,987 3D models divided into 171 distinct classes. It was compiled as a superset of existing benchmarks and poses a new challenge to retrieval methods because it comprises both generic and domain-specific model types. Twelve and six distinct 3D shape retrieval methods competed in the two tracks, respectively. To measure and compare the performance of the participating methods and of other promising Query-by-Model or Query-by-Sketch approaches, and to solicit state-of-the-art techniques, we perform a more comprehensive comparison of twenty-six retrieval methods (the eighteen original participants plus eight additional state-of-the-art or new methods) by evaluating them on the common benchmark. The benchmark, results, and evaluation tools are publicly available on our websites.
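
For context on how such Query-by-Model and Query-by-Sketch runs are typically scored, the sketch below computes the per-query measures commonly reported in SHREC-style evaluations: Nearest Neighbor (NN), First Tier (FT), Second Tier (ST), E-measure, and normalized Discounted Cumulative Gain (DCG). This is not the authors' evaluation tool; the function name retrieval_metrics and the top-32 E-measure cutoff are illustrative assumptions following Princeton Shape Benchmark conventions.

import math

def retrieval_metrics(ranked_labels, query_label, class_size):
    """Compute NN, FT, ST, E-measure, and normalized DCG for one query.

    ranked_labels: class labels of the database models, most similar first
                   (the query itself already removed from the list).
    query_label:   ground-truth class of the query.
    class_size:    number of relevant models in the database.
    """
    relevant = [1 if lbl == query_label else 0 for lbl in ranked_labels]

    nn = float(relevant[0])                           # top-1 hit or miss
    ft = sum(relevant[:class_size]) / class_size      # first tier
    st = sum(relevant[:2 * class_size]) / class_size  # second tier

    # E-measure over the top 32 results (a cutoff conventional in PSB/SHREC)
    k = min(32, len(relevant))
    retrieved_rel = sum(relevant[:k])
    precision = retrieved_rel / k
    recall = retrieved_rel / class_size
    e_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall > 0 else 0.0)

    # Normalized DCG: rank i >= 2 contributes relevance / log2(i);
    # the ideal ranking places all relevant models first.
    dcg = relevant[0] + sum(r / math.log2(i)
                            for i, r in enumerate(relevant[1:], start=2))
    ideal = 1.0 + sum(1.0 / math.log2(i) for i in range(2, class_size + 1))
    return {"NN": nn, "FT": ft, "ST": st, "E": e_measure, "DCG": dcg / ideal}

# Example: a query of class "chair" with 3 relevant models in the database
# scores = retrieval_metrics(["chair", "table", "chair", "chair", "sofa"], "chair", 3)

In a full evaluation, these per-query scores would be averaged over all queries in the benchmark and reported alongside precision-recall curves.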
