Article

Cross-modal image retrieval with deep mutual information maximization

Journal

NEUROCOMPUTING
Volume 496, Pages 166-177

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.01.078

Keywords

Cross-modal Image Retrieval; Mutual Information; Deep Metric Learning; Self-supervised Learning

Funding

  1. Alibaba-Zhejiang University Joint Institute of Frontier Technologies
  2. National Key R&D Program of China [2018YFC2002603, 2018YFB1403202]
  3. Zhejiang Provincial Natural Science Foundation of China [LZ13F020001]
  4. National Natural Science Foundation of China [61972349, 61173185, 61173186]
  5. National Key Technology R&D Program of China [2012BAI34B01, 2014BAK15B02]


This paper addresses cross-modal image retrieval with a new approach based on contrastive self-supervised learning to bridge the gap between modalities. Experiments show that the method achieves state-of-the-art retrieval performance on three large-scale benchmarks.
In this paper, we study cross-modal image retrieval, where the input consists of a source image and a piece of text describing modifications to that image, and the goal is to retrieve the desired (target) image. Prior work usually tackles this task with a three-stage strategy: 1) extracting features from the inputs; 2) fusing the features of the source image and the modification text to obtain a fusion feature; 3) learning a similarity metric between the desired image and the fused representation via deep metric learning. Since classical image/text encoders already learn useful representations and common pair-based distance metric learning losses suffice for cross-modal retrieval, most improvements in retrieval accuracy come from designing new fusion networks. However, these methods do not adequately handle the modality gap caused by the inconsistent feature distributions of the different modalities, which strongly affects both feature fusion and similarity learning. To alleviate this problem, we apply the contrastive self-supervised learning method Deep InfoMax (DIM) [1] to bridge this gap by enhancing the dependence between the text, the image, and their fusion. Specifically, our method narrows the gap between the text and image modalities by maximizing mutual information between their semantically inconsistent representations. Moreover, we seek an effective common subspace for the semantically consistent features of the fusion and the desired images by applying Deep InfoMax between the low-level layers of the image encoder and the high-level layers of the fusion network. Extensive experiments on three large-scale benchmarks show that our approach bridges the modality gap and achieves state-of-the-art retrieval performance. (c) 2022 Elsevier B.V. All rights reserved.
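To make the mutual-information objective in the abstract more concrete, the following PyTorch sketch shows a Deep InfoMax-style Jensen-Shannon lower bound on the mutual information between image and text features, using matched pairs as positives and shuffled pairs as negatives. This is a minimal illustration under stated assumptions: the MLP discriminator, feature dimensions, and function names are hypothetical and are not taken from the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIDiscriminator(nn.Module):
    """Hypothetical pairwise critic: scores (image, text) feature pairs.

    Matched pairs should receive higher scores than mismatched pairs.
    """
    def __init__(self, img_dim: int, txt_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([img_feat, txt_feat], dim=-1))

def dim_jsd_loss(critic: MIDiscriminator,
                 img_feat: torch.Tensor,
                 txt_feat: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon MI lower bound in the style of Deep InfoMax.

    Positives: aligned (image_i, text_i) pairs from the batch.
    Negatives: image_i paired with a shifted text_j (a simple shuffling scheme).
    Minimizing this loss maximizes the JSD-based MI estimate.
    """
    pos_scores = critic(img_feat, txt_feat)
    # Roll the text features by one position to form mismatched pairs.
    neg_scores = critic(img_feat, torch.roll(txt_feat, shifts=1, dims=0))
    # E_pos[-softplus(-T)] - E_neg[softplus(T)] is the JSD bound; negate for a loss.
    return F.softplus(-pos_scores).mean() + F.softplus(neg_scores).mean()

# Example usage (illustrative dimensions only):
# critic = MIDiscriminator(img_dim=512, txt_dim=512)
# loss = dim_jsd_loss(critic, image_features, text_features)
# loss.backward()
```

In the paper's framing, the same kind of objective is also applied between layers of the image encoder and the fusion network to align the fused representation with the desired image; the sketch above only illustrates the cross-modal (image-text) case.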
