3.8 Proceedings Paper

DistillHash: Unsupervised Deep Hashing by Distilling Data Pairs

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR.2019.00306

Keywords

-

Funding

  1. National Natural Science Foundation of China [61572388, 61703327]
  2. Key R&D Program-The Key Industry Innovation Chain of Shaanxi [2017ZDCXL-GY-05-04-02, 2017ZDCXLGY-05-02, 2018ZDXM-GY-176]
  3. National Key R&D Program of China [2017YFE0104100]
  4. Australian Research Council [DP-180103424, DE-1901014738, FL170100117]

Abstract

Due to its high storage and search efficiency, hashing has become prevalent for large-scale similarity search. In particular, deep hashing methods have greatly improved search performance in supervised scenarios. In contrast, unsupervised deep hashing models can hardly achieve satisfactory performance due to the lack of reliable supervisory similarity signals. To address this issue, we propose a novel deep unsupervised hashing model, dubbed DistillHash, which learns a distilled data set consisting of data pairs with confident similarity signals. Specifically, we investigate the relationship between the initial noisy similarity signals learned from local structures and the semantic similarity labels assigned by a Bayes optimal classifier. We show that, under a mild assumption, data pairs whose labels are consistent with those assigned by the Bayes optimal classifier can be potentially distilled. Inspired by this fact, we design a simple yet effective strategy to distill data pairs automatically and further adopt a Bayesian learning framework to learn hash functions from the distilled data set. Extensive experimental results on three widely used benchmark datasets show that the proposed DistillHash consistently achieves state-of-the-art search performance.
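
The abstract gives only a high-level description of the pipeline: noisy pairwise similarity signals are derived from local feature structure, a distilled subset of confident pairs is selected, and hash functions are learned from the distilled pairs with a pairwise likelihood objective. The minimal PyTorch sketch below mirrors that shape of computation only; the network layout, the cosine-similarity pseudo-labels, the fixed distillation thresholds, and all hyper-parameters are illustrative assumptions, not the paper's actual construction.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HashNet(nn.Module):
        """Toy hashing head: maps pre-extracted features to K relaxed binary bits."""
        def __init__(self, feat_dim=4096, n_bits=64):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(feat_dim, 1024), nn.ReLU(inplace=True),
                nn.Linear(1024, n_bits),
            )

        def forward(self, x):
            return torch.tanh(self.fc(x))  # relaxed codes in (-1, 1)

    def pseudo_similarity(features, pos_thresh=0.9, neg_thresh=0.2):
        """Build noisy pairwise pseudo-labels from local structure (here: cosine
        similarity of deep features) and 'distill' only the confident pairs by
        double thresholding. Returns (S, mask): S[i, j] = 1 for similar, 0 for
        dissimilar; mask marks the pairs kept in the distilled set."""
        f = F.normalize(features, dim=1)
        cos = f @ f.t()
        S = (cos >= pos_thresh).float()
        mask = (cos >= pos_thresh) | (cos <= neg_thresh)
        return S, mask

    def pairwise_likelihood_loss(codes, S, mask):
        """Negative log-likelihood of pairwise labels under a sigmoid model of the
        code inner products, evaluated only on the distilled pairs."""
        logits = codes @ codes.t() / 2.0
        loss = F.binary_cross_entropy_with_logits(logits, S, reduction='none')
        return (loss * mask.float()).sum() / mask.float().sum().clamp(min=1.0)

    # One hypothetical training step on a mini-batch of pre-extracted features.
    net = HashNet(feat_dim=4096, n_bits=64)
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    features = torch.randn(32, 4096)          # stand-in for CNN features of a batch
    codes = net(features)
    S, mask = pseudo_similarity(features.detach())
    loss = pairwise_likelihood_loss(codes, S, mask)
    opt.zero_grad(); loss.backward(); opt.step()

In the actual method, the distillation rule is derived from the relation between the noisy signals and a Bayes optimal classifier rather than from two hand-picked thresholds, and the hash functions are learned under a Bayesian framework end-to-end on images; the sketch only illustrates the overall structure of such a training step.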

Authors


Reviews

Primary Rating

3.8
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-

Recommendations

No data available