4.5 Article

Multi-label semantics preserving based deep cross-modal hashing

Journal

Publisher

ELSEVIER
DOI: 10.1016/j.image.2020.116131

Keywords

Multi-modal retrieval; Deep cross-modal hashing; Multi-label semantic learning

Funding

  1. National Natural Science Foundation of China [61806168]
  2. Fundamental Research Funds for the Central Universities, China [SWU117059]
  3. Venture & Innovation Support Program for Chongqing Overseas Returnees, China [CX2018075]


This paper introduces a deep cross-modal hashing method based on multi-label semantics preservation, aiming to improve the accuracy of hashing retrieval by leveraging multiple labels of training data. Experimental results demonstrate that the proposed method outperforms prominent baselines and achieves state-of-the-art performance in cross-modal hashing retrieval.
Owing to the storage and retrieval efficiency of hashing, as well as the highly discriminative features extracted by deep neural networks, deep cross-modal hashing retrieval has attracted increasing attention in recent years. However, most existing deep cross-modal hashing methods simply use single labels to measure semantic relevance across modalities, neglecting the potential contributions of multiple category labels. To improve the accuracy of cross-modal hashing retrieval by fully exploiting the semantic relevance encoded in the multiple labels of training data, this paper proposes a multi-label semantics preserving based deep cross-modal hashing (MLSPH) method. MLSPH first uses the multi-labels of instances to compute the semantic similarity of the original data. A memory bank mechanism is then introduced to preserve the multi-label semantic similarity constraints and enforce the distinctiveness of the learned hash representations over the whole training set rather than a single batch. Extensive experiments on several benchmark datasets show that MLSPH surpasses prominent baselines and reaches state-of-the-art performance in cross-modal hashing retrieval. Code is available at: https://github.com/SWUCS-MediaLab/MLSPH.
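The abstract does not give MLSPH's exact similarity formula, but a common way to turn binary multi-label annotations into a graded semantic similarity (an illustrative assumption here, not necessarily the paper's definition) is the cosine similarity between label vectors, which scores instance pairs by how many categories they share:

```python
from math import sqrt

def multilabel_similarity(labels):
    """Pairwise cosine similarity between binary multi-label vectors.

    labels[i][k] == 1 iff instance i carries category k. The result is
    graded in [0, 1]: pairs sharing more labels score higher, unlike
    the 0/1 relevance used by single-label supervision.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sqrt(sum(x * x for x in a))
        nb = sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    n = len(labels)
    return [[cos(labels[i], labels[j]) for j in range(n)] for i in range(n)]

# Instances sharing one of two labels get an intermediate score:
S = multilabel_similarity([[1, 1, 0], [0, 1, 1], [1, 0, 0]])
```

In a pipeline like the one described, such a matrix would supervise the Hamming distances between learned hash codes, with the memory bank retaining codes from earlier batches so the similarity constraints span the whole training set instead of only the current mini-batch.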
