Article

Multi-label enhancement based self-supervised deep cross-modal hashing

Journal

NEUROCOMPUTING
Volume 467, Pages 138-162

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.09.053

Keywords

Multi-modal retrieval; Deep cross-modal hashing; Multi-label semantic learning

Funding

  1. National Natural Science Foundation of China [61806168]
  2. Fundamental Research Funds for the Central Universities [SWU117059]
  3. Venture & Innovation Support Program for Chongqing Overseas Returnees [CX2018075]

Abstract

In this paper, a novel multi-label enhancement based self-supervised deep cross-modal hashing approach is proposed to capture semantic affinity more accurately and avoid noise in modalities, achieving state-of-the-art performance in cross-modal hashing retrieval applications.
Deep cross-modal hashing, which integrates deep learning and hashing into cross-modal retrieval, achieves better performance than traditional cross-modal retrieval methods. Nevertheless, most previous deep cross-modal hashing methods utilize only single-class labels to compute the semantic affinity across modalities and overlook the existence of multiple category labels, which can capture semantic affinity more accurately. Additionally, almost all existing cross-modal hashing methods straightforwardly employ all modalities to learn hash functions, neglecting the fact that the original instances in all modalities may contain noise. To address these weaknesses, a novel multi-label enhancement based self-supervised deep cross-modal hashing (MESDCH) approach is proposed in this paper. MESDCH first introduces a multi-label semantic affinity preserving module, which uses a ReLU transformation to unify the similarities of the learned hash representations with the corresponding multi-label semantic affinity of the original instances, and defines a positive-constraint Kullback-Leibler loss function to preserve their similarity. This module is then integrated into a self-supervised semantic generation module to further enhance the performance of deep cross-modal hashing. Extensive evaluation experiments on four well-known datasets demonstrate that the proposed MESDCH achieves state-of-the-art performance and outperforms several strong baseline methods in cross-modal hashing retrieval. Code is available at: https://github.com/SWU-CS-MediaLab/MESDCH. (c) 2021 Elsevier B.V. All rights reserved.
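The abstract only sketches the two key ingredients (multi-label semantic affinity and a positive-constraint Kullback-Leibler loss). Below is a minimal PyTorch illustration of that general idea; the function names, the cosine-based affinity, and all tensor shapes are assumptions made for this sketch and are not taken from the authors' released code (see the GitHub repository linked above for the actual implementation).

```python
import torch
import torch.nn.functional as F

def multilabel_affinity(labels):
    # Multi-label semantic affinity: cosine similarity between multi-hot
    # label vectors, so instances sharing more categories get higher affinity.
    l = F.normalize(labels.float(), dim=1)
    return l @ l.t()                       # values in [0, 1]

def hash_affinity(codes):
    # ReLU-transformed similarity of relaxed hash representations, mapped
    # onto the same [0, 1] range as the label affinity.
    h = F.normalize(codes, dim=1)
    return F.relu(h @ h.t())

def positive_kl_loss(hash_sim, label_sim, eps=1e-6):
    # Positive-constraint KL-style loss: both similarity matrices are kept
    # strictly positive via eps before the element-wise KL divergence.
    p = label_sim.clamp(min=eps)
    q = hash_sim.clamp(min=eps)
    return (p * (p / q).log()).mean()

# Toy usage with hypothetical sizes: batch of 4, 24 categories, 64-bit codes.
labels = torch.randint(0, 2, (4, 24))
codes = torch.tanh(torch.randn(4, 64))     # relaxed hash codes in (-1, 1)
loss = positive_kl_loss(hash_affinity(codes), multilabel_affinity(labels))
print(loss.item())
```

The design choice in this sketch is simply that both affinity matrices live on the same non-negative scale before the divergence is taken, which is what the ReLU transformation and the positivity constraint in the paper's loss are meant to ensure.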
