Article

Deep Supervised Hashing for Fast Image Retrieval

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 127, Issue 9, Pages 1217-1234

Publisher

SPRINGER
DOI: 10.1007/s11263-019-01174-4

Keywords

Image retrieval; Hashing; Convolutional network; Contrastive loss; Triplet ranking loss

Funding

  1. 973 Program [2015CB351802]
  2. Natural Science Foundation of China [61390511, 61772500]
  3. Frontier Science Key Research Project CAS [QYZDJ-SSW-JSC009]
  4. Youth Innovation Promotion Association CAS [2015085]

Abstract

In this paper, we present a new hashing method to learn compact binary codes for highly efficient image retrieval on large-scale datasets. While complex image appearance variations still pose a great challenge to reliable retrieval, in light of the recent progress of Convolutional Neural Networks (CNNs) in learning robust image representations for various vision tasks, we propose a novel Deep Supervised Hashing (DSH) method to learn compact similarity-preserving binary codes for large volumes of image data. Specifically, we devise a CNN architecture that takes pairs/triplets of images as training inputs and encourages the output of each image to approximate discrete values (e.g. +1/-1). To this end, the loss functions are elaborately designed to maximize the discriminability of the output space by encoding the supervised information from the input image pairs/triplets, while simultaneously imposing regularization on the real-valued outputs to approximate the desired discrete values. At retrieval time, new query images can be encoded simply by forward propagating them through the network and quantizing the network outputs to binary codes. Extensive experiments on three large-scale datasets, CIFAR-10, NUS-WIDE, and SVHN, show the promising performance of our method compared with state-of-the-art hashing methods.
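For concreteness, the pairwise form of such a training objective, together with the quantize-and-rank retrieval step, can be sketched in PyTorch as below. This is a minimal illustration under stated assumptions, not the authors' released implementation: the margin m, the regularization weight alpha, and the helper names (dsh_pairwise_loss, encode, hamming_rank) are hypothetical choices for the example.

```python
import torch
import torch.nn.functional as F

def dsh_pairwise_loss(b1, b2, y, m=24.0, alpha=0.01):
    """Pairwise hashing loss sketch.

    b1, b2: (batch, k) real-valued network outputs for the two images.
    y:      (batch,) float pair labels, 0 = similar, 1 = dissimilar.
    m:      margin for dissimilar pairs (assumed value for the example).
    alpha:  weight of the quantization regularizer (assumed value).
    """
    d2 = (b1 - b2).pow(2).sum(dim=1)          # squared Euclidean distance
    sim = 0.5 * (1.0 - y) * d2                # pull similar pairs together
    dis = 0.5 * y * F.relu(m - d2)            # push dissimilar pairs past the margin
    # Encourage each output coordinate toward +1/-1 so sign quantization loses little.
    reg = (b1.abs() - 1.0).abs().sum(dim=1) + (b2.abs() - 1.0).abs().sum(dim=1)
    return (sim + dis + alpha * reg).mean()

def encode(net, images):
    """Forward-propagate and quantize the network outputs to binary codes."""
    with torch.no_grad():
        return (net(images) > 0).to(torch.uint8)   # 1 bit per output unit

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to one query code."""
    dist = (query_code ^ db_codes).sum(dim=1)      # XOR counts differing bits
    return torch.argsort(dist)
```

A triplet variant would replace the two pair terms with a ranking margin between the query-positive and query-negative distances, keeping the same quantization regularizer.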
