Article

Visual-Textual Joint Relevance Learning for Tag-Based Social Image Search

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 22, Issue 1, Pages 363-376

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIP.2012.2202676

Keywords

Hypergraph learning; social image search; tag; visual-textual

Funding

  1. National Basic Research Program of China (973 Program) [2012CB316400]
  2. National Natural Science Foundation of China [61125106]
  3. National 863 Program of China [2012AA011005]
  4. U.S. National Science Foundation (NSF) [CCF-0905337]

Abstract

Due to the popularity of social media websites, extensive research efforts have been dedicated to tag-based social image search. Both visual information and tags have been investigated in the research field. However, most existing methods use tags and visual characteristics either separately or sequentially in order to estimate the relevance of images. In this paper, we propose an approach that simultaneously utilizes both visual and textual information to estimate the relevance of user tagged images. The relevance estimation is determined with a hypergraph learning approach. In this method, a social image hypergraph is constructed, where vertices represent images and hyperedges represent visual or textual terms. Learning is achieved with use of a set of pseudo-positive images, where the weights of hyperedges are updated throughout the learning process. In this way, the impact of different tags and visual words can be automatically modulated. Comparative results of the experiments conducted on a dataset including 370+ images are presented, which demonstrate the effectiveness of the proposed approach.
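The relevance estimation described above can be sketched in code. The following is a minimal, hypothetical illustration (toy data, not the paper's implementation): images are vertices, each tag or visual word induces a hyperedge over the images containing it, and relevance scores are propagated from a pseudo-positive image using the standard normalized hypergraph Laplacian framework. The hyperedge weights are fixed at uniform here, whereas the paper updates them during learning; the matrix sizes, `alpha`, and label choices are all illustrative assumptions.

```python
import numpy as np

def hypergraph_relevance(H, w, y, alpha=0.9):
    """Propagate relevance on a hypergraph (toy sketch, not the paper's code).

    H: (n_images, n_hyperedges) 0/1 incidence matrix
    w: hyperedge weights (the paper learns these; uniform here)
    y: pseudo-positive label vector
    Solves f = (1 - alpha) * (I - alpha * Theta)^(-1) y, where
    Theta = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} is the normalized
    hypergraph adjacency of Zhou-style hypergraph learning.
    """
    W = np.diag(w)
    dv = H @ w                       # vertex degrees
    de = H.sum(axis=0)               # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    Theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    n = H.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * Theta, (1 - alpha) * y)

# 5 images, 3 hyperedges (e.g. two tags and one visual word; hypothetical)
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0],
              [0, 0, 1]], dtype=float)
w = np.ones(3)                       # uniform weights; the paper updates these
y = np.array([1.0, 0, 0, 0, 0])      # image 0 is the pseudo-positive seed

scores = hypergraph_relevance(H, w, y)
ranking = np.argsort(-scores)        # images ranked by estimated relevance
```

Images sharing many weighted hyperedges with the pseudo-positive set receive higher scores; modulating `w` during learning is what lets informative tags and visual words dominate the ranking.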
