Article

Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 28, Issue 4, Pages 1993-2007

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2018.2882155

Keywords

Video hashing; balanced rotation; similarity retrieval; feature representation; deep learning

Funding

  1. Royal Society Newton Mobility Grant [IE150997]
  2. Shenzhen Government [GJHZ20180419190732022]
  3. National Natural Science Foundation of China [61773301, 61571269]
  4. EPSRC [EP/R00692X/1] Funding Source: UKRI

Abstract

This paper proposes a deep hashing framework, namely, unsupervised deep video hashing (UDVH), for large-scale video similarity search, with the aim of learning compact yet effective binary codes. UDVH produces the hash codes in a self-taught manner by jointly integrating discriminative video representation with optimal code learning, where an efficient alternating approach is adopted to optimize the objective function. The key differences from most existing video hashing methods are: 1) UDVH is an unsupervised hashing method that generates hash codes by cooperatively exploiting feature clustering and a specifically designed binarization, with the original neighborhood structure preserved in the binary space; and 2) a specific rotation is developed and applied to the video features so that the variance of each dimension is balanced, which facilitates the subsequent quantization step. Extensive experiments on three popular video datasets show that UDVH clearly outperforms state-of-the-art methods on various evaluation metrics, making it practical for real-world applications.
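The Python sketch below is not the authors' implementation; it is a minimal illustration of the "rotate the features so that per-dimension variance is balanced, then quantize" idea described in the abstract, using a plain random orthogonal rotation on PCA-projected features instead of the rotation learned jointly with the codes in UDVH. All names (rotate_and_binarize, n_bits, etc.) are hypothetical.

    import numpy as np

    def rotate_and_binarize(features, n_bits, seed=0):
        # Toy sketch: project, rotate to balance per-dimension variance, binarize.
        rng = np.random.default_rng(seed)

        # Center the features and project to n_bits dimensions with PCA;
        # after PCA the variance is concentrated in the leading dimensions.
        X = features - features.mean(axis=0, keepdims=True)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        X_proj = X @ Vt[:n_bits].T

        # A random orthogonal rotation tends to spread the variance evenly
        # across dimensions, so the sign quantization below loses less
        # information than binarizing the raw PCA projection directly.
        R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))
        X_rot = X_proj @ R

        # One bit per dimension.
        return (X_rot > 0).astype(np.uint8)

    # Example: hash 1000 hypothetical 512-D video features into 64-bit codes.
    codes = rotate_and_binarize(np.random.randn(1000, 512), n_bits=64)

In UDVH itself the rotation is optimized, together with the clustering-based code learning, rather than sampled at random, but the variance-balancing motivation for the quantization step is the same.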
