Article

A Multi-View Embedding Space for Modeling Internet Images, Tags, and Their Semantics

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 106, Issue 2, Pages 210-233

Publisher

SPRINGER
DOI: 10.1007/s11263-013-0658-4

Keywords

Image search; Canonical correlation; Internet images; Tags

Funding

  1. NSF [IIS 1228082]
  2. DARPA Computer Science Study Group [D12AP00305]
  3. Microsoft Research Faculty Fellowship

Abstract

This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
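The abstract describes a general recipe: approximate nonlinear kernels with explicit feature maps so that (multi-view) CCA can be trained at scale, then retrieve with a similarity measure in the joint embedded space. The sketch below illustrates that recipe with standard library components rather than the authors' exact method: random Fourier features (scikit-learn's RBFSampler) stand in for the explicit kernel mappings, two-view CCA stands in for the paper's three-view objective, and plain cosine similarity stands in for the specially designed similarity function. All data, dimensions, and function names are hypothetical.

```python
# Minimal sketch of the pipeline outlined in the abstract (not the authors'
# exact method): explicit nonlinear feature maps + CCA + similarity retrieval.
# Data and dimensions are hypothetical placeholders.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.cross_decomposition import CCA
from sklearn.preprocessing import normalize

rng = np.random.RandomState(0)
n_images = 2000
visual = rng.randn(n_images, 512)   # stand-in visual features (e.g., pooled descriptors)
textual = rng.randn(n_images, 300)  # stand-in tag features (e.g., tf-idf vectors)

# Explicit nonlinear maps approximating an RBF kernel, so that linear CCA on the
# mapped features roughly emulates kernel CCA at much lower cost.
vis_map = RBFSampler(gamma=0.5, n_components=1024, random_state=0)
txt_map = RBFSampler(gamma=0.5, n_components=1024, random_state=1)
V = vis_map.fit_transform(visual)
T = txt_map.fit_transform(textual)

# Two-view CCA into a shared latent space (the paper adds a third, semantic view,
# which is not modeled here).
cca = CCA(n_components=32, max_iter=500)
V_emb, T_emb = cca.fit_transform(V, T)

# Retrieval by cosine similarity in the embedded space; the paper instead uses a
# specially designed similarity that it reports outperforms Euclidean distance.
V_emb = normalize(V_emb)
T_emb = normalize(T_emb)

def tag_to_image_search(query_text_emb, image_embs, k=5):
    """Return indices of the k images whose embeddings best match a text query."""
    scores = image_embs @ query_text_emb
    return np.argsort(-scores)[:k]

top = tag_to_image_search(T_emb[0], V_emb, k=5)
print("Top images for query tag vector 0:", top)
```

The same projected space supports the three retrieval tasks named in the abstract (image-to-image, tag-to-image, and image-to-tag) by swapping which view supplies the query and which supplies the database.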

