Article

Learning salient visual word for scalable mobile image retrieval

Journal

PATTERN RECOGNITION
Volume 48, Issue 10, Pages 3093-3101

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2014.12.017

Keywords

Mobile image retrieval; Scalable retrieval; Salient visual word (SVW); Multiple relevant photos; Spatial verification

Funding

  1. Program 973 [2012CB316400]
  2. NSFC [60903121, 61173109, 61332018]
  3. Microsoft Research Asia

Abstract

Owing to portable and capable phone cameras, people now prefer to take photos and share them with friends on social networks. If a user wants to obtain relevant information about an image, content-based image retrieval can be used. Taking the limited bandwidth and instability of the wireless channel into account, in this paper we propose an effective scalable mobile image retrieval approach that exploits an advantage of the mobile end: people usually take multiple photos of an object from different viewpoints and with different focuses. The proposed algorithm first determines the truly relevant photos according to visual similarity on the mobile end, then learns salient visual words by exploring saliency across these relevant images, and finally determines the contribution order of the salient visual words to carry out scalable retrieval. Moreover, to improve retrieval performance, soft spatial verification is proposed to re-rank the results. Compared with existing approaches to mobile image retrieval, our approach transmits less data and reduces the computational cost of spatial verification. Most importantly, when bandwidth is limited, we can transmit only a subset of the features according to their contributions to retrieval. Experimental results show the effectiveness of the proposed approach. (C) 2015 Elsevier Ltd. All rights reserved.
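The abstract describes a three-stage pipeline: relevance filtering among the user's photos on the phone, saliency-based ordering of visual words, and bandwidth-limited transmission of only the top-ranked words. The toy Python sketch below illustrates that general idea only; the cosine-similarity relevance test, the simple co-occurrence saliency score, and all function names are assumptions made for illustration and are not taken from the paper.

    # Minimal sketch of the scalable-transmission idea from the abstract,
    # NOT the authors' algorithm. The relevance test, saliency score, and
    # names below are illustrative assumptions.
    import numpy as np

    def relevant_photos(histograms, threshold=0.5):
        """Keep photos whose bag-of-visual-words histogram is similar
        (cosine similarity, assumed) to the query photo at index 0."""
        q = histograms[0]
        keep = [0]
        for i, h in enumerate(histograms[1:], start=1):
            sim = h @ q / (np.linalg.norm(h) * np.linalg.norm(q) + 1e-12)
            if sim >= threshold:
                keep.append(i)
        return histograms[keep]

    def salient_word_order(relevant_hists):
        """Score each visual word by how consistently and strongly it
        occurs across the relevant photos (a stand-in for the paper's
        saliency learning) and return indices in decreasing order."""
        occurrence = (relevant_hists > 0).mean(axis=0)  # fraction of photos containing the word
        strength = relevant_hists.mean(axis=0)          # average term frequency
        return np.argsort(-(occurrence * strength))

    def scalable_payload(order, budget):
        """Under a limited bandwidth budget, transmit only the top-ranked
        salient visual words; a larger budget sends more words."""
        return order[:budget]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        hists = rng.poisson(0.3, size=(5, 1000)).astype(float)  # 5 photos, 1000-word vocabulary
        rel = relevant_photos(hists)
        order = salient_word_order(rel)
        print("words sent at budget 50:", scalable_payload(order, 50)[:10], "...")

In this toy version the transmission budget directly controls how many visual-word entries leave the phone, which mirrors the scalability claim in the abstract: under tighter bandwidth, only the highest-contribution words are sent.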
