Article

Fast Class-Wise Updating for Online Hashing

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2020.3042193

Keywords

Training; Hash functions; Optimization; Binary codes; Training data; Complexity theory; Boosting; Image retrieval; similarity preserving; online hashing; binary codes

Funding

  1. National Natural Science Foundation of China [62025603, U1705262, 61772443, 61572410, 61802324, 61702136]
  2. CCF-Baidu Open Fund
  3. Australian Research Council [FL-170100117]

Abstract

In this paper, a novel supervised online hashing scheme called FCOH is proposed to address the adaptivity and efficiency issues in online image hashing. By introducing a novel and efficient inner product operation, FCOH achieves fast online adaptivity and efficiency through class-wise updating and semi-relaxation optimization.
Online image hashing, which processes large-scale data in a streaming fashion to update the hash functions on-the-fly, has received increasing research attention recently. Most existing works address this problem under a supervised setting, i.e., using class labels to boost the hashing performance, but suffer from defects in both adaptivity and efficiency: First, large amounts of training batches are required to learn up-to-date hash functions, which leads to poor online adaptivity. Second, the training is time-consuming, which contradicts the core need of online learning. In this paper, a novel supervised online hashing scheme, termed Fast Class-wise Updating for Online Hashing (FCOH), is proposed to address the above two challenges by introducing a novel and efficient inner product operation. To achieve fast online adaptivity, a class-wise updating method is developed to decompose the binary code learning and alternately renew the hash functions in a class-wise fashion, which removes the burden of requiring large amounts of training batches. Quantitatively, such a decomposition further leads to at least 75 percent storage savings. To further achieve online efficiency, we propose a semi-relaxation optimization, which accelerates the online training by treating different binary constraints independently. Without additional constraints and variables, the time complexity is significantly reduced. Such a scheme is also quantitatively shown to preserve past information well while updating the hash functions. We quantitatively demonstrate that the collective effort of class-wise updating and semi-relaxation optimization yields superior performance compared to various state-of-the-art methods, as verified through extensive experiments on three widely used datasets.
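The abstract only sketches the mechanism, so the following is a minimal, illustrative Python example of the general idea behind class-wise updating for supervised online hashing: per-class inner-product statistics are accumulated from the stream, and the hash projection is refreshed with a closed-form ridge-regression solve. The random class codewords, the ridge solver, and all names below are assumptions made for illustration; this is not the FCOH optimization or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, num_classes = 64, 32, 10          # feature dim, code length, #classes (illustrative)

# One target binary codeword per class (random here; FCOH derives its own targets).
class_codes = np.sign(rng.standard_normal((num_classes, r)))

# Inner-product statistics kept per class, so only the classes that appear in a
# new batch need to be refreshed when data streams in.
XtX = np.zeros((num_classes, d, d))      # sum of x x^T for each class
XtB = np.zeros((num_classes, d, r))      # sum of x b_c^T for each class
W = np.zeros((d, r))                     # hash projection: h(x) = sign(W^T x)

def update_batch(X, y, lam=1e-2):
    """Class-wise online update: touch only statistics of classes present in
    the batch, then re-solve W in closed form (ridge regression)."""
    global W
    for c in np.unique(y):
        Xc = X[y == c]
        XtX[c] += Xc.T @ Xc
        XtB[c] += Xc.T @ np.tile(class_codes[c], (len(Xc), 1))
    A = XtX.sum(axis=0) + lam * np.eye(d)
    W = np.linalg.solve(A, XtB.sum(axis=0))

def hash_codes(X):
    """Binarize features with the current hash functions."""
    return np.sign(X @ W)

# Simulated stream of training batches.
for _ in range(5):
    X = rng.standard_normal((100, d))
    y = rng.integers(0, num_classes, size=100)
    update_batch(X, y)

print(hash_codes(rng.standard_normal((3, d))))
```

In this sketch the cost of an update depends only on the batch and the fixed-size per-class statistics, not on how many batches have already been seen, which conveys the intuition behind the class-wise decomposition and its storage savings described above.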
