Journal
NEUROCOMPUTING
Volume 207, Pages 684-692
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2016.05.053
Keywords
Object retrieval; Multi-scale; Learning on graph; Multimodal data
Object retrieval has attracted much research attention in recent years. A key challenge in object retrieval is estimating the relevance among objects. In this paper, we focus on view-based object retrieval and propose a multi-scale object retrieval algorithm via learning on graph from multimodal data. In our work, shape features are extracted from each view of objects. The relevance among objects is formulated in a hypergraph structure, where the distance of different views in the feature space is employed to generate the connections in the hypergraph. To achieve better representation performance, we propose a multi-scale hypergraph structure to model object correlations. Learning on the graph is conducted to estimate the optimal relevance among these objects, which is used for object retrieval. To evaluate the performance of the proposed method, we conduct experiments on the National Taiwan University dataset and the ETH dataset. Experimental results and comparisons with the state-of-the-art methods demonstrate the effectiveness of the proposed method. (C) 2016 Elsevier B.V. All rights reserved.
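The pipeline the abstract outlines (view features → multi-scale hypergraph → learning on graph → relevance scores) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes Euclidean distances on view features, k-nearest-neighbour hyperedges at several scales, and the standard normalized hypergraph Laplacian ranking f = (1-α)(I - αΘ)⁻¹y; the function names, scale values, and toy data are all hypothetical.

```python
import numpy as np

def multiscale_hypergraph_incidence(features, scales=(3, 5, 8)):
    """Build an incidence matrix H: one hyperedge per object per scale,
    connecting the object to its k nearest neighbours in feature space."""
    n = features.shape[0]
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    edges = []
    for k in scales:                      # each scale contributes n hyperedges
        for i in range(n):
            # argsort puts i itself first (distance 0), then its k neighbours
            edges.append(np.argsort(dists[i])[:k + 1])
    H = np.zeros((n, len(edges)))
    for j, e in enumerate(edges):
        H[e, j] = 1.0
    return H

def hypergraph_relevance(H, query_idx, alpha=0.9):
    """Transductive ranking on the hypergraph:
    f = (1 - alpha) * (I - alpha * Theta)^{-1} y,
    with Theta the normalized hypergraph adjacency."""
    n, m = H.shape
    w = np.ones(m)                        # uniform hyperedge weights
    Dv = H @ w                            # vertex degrees
    De = H.sum(axis=0)                    # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w / De) @ H.T @ Dv_inv_sqrt
    y = np.zeros(n)
    y[query_idx] = 1.0                    # the query object is the only label
    return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * Theta, y)

# Toy run: 10 objects described by 4-D view features, ranked against object 0.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))
H = multiscale_hypergraph_incidence(feats)
scores = hypergraph_relevance(H, query_idx=0)
ranking = np.argsort(-scores)             # most relevant objects first
```

Using several k values at once is what makes the hypergraph multi-scale: small k captures tight local neighbourhoods, large k captures coarser groupings, and the learning step fuses both into one relevance vector.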