Article

Adaptive Label-Aware Graph Convolutional Networks for Cross-Modal Retrieval

Journal

IEEE Transactions on Multimedia
Volume 24, Pages 3520-3532

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TMM.2021.3101642

Keywords

Correlation; Semantics; Task analysis; Adaptation models; Adaptive systems; Birds; Oceans; Cross-modal retrieval; Deep learning; Graph convolutional networks

Funding

  1. National Key Research and Development Program of China [2017YFB1002804]
  2. National Natural Science Foundation of China [62036012, 62072456, 61720106006, 61572503, 61802405, 61872424, 61702509, 61832002, 61936005, U1705262]
  3. Key Research Program of Frontier Sciences, CAS [QYZDJ-SSW-JSC039]
  4. Open Research Projects of Zhejiang Laboratory [2021KE0AB05]
  5. Tencent WeChat Rhino-Bird Focused Research Program


In this paper, a novel end-to-end adaptive label-aware graph convolutional network (ALGCN) is proposed for cross-modal retrieval; it learns modality-invariant and discriminative representations through an instance representation learning branch and a label representation learning branch. ALGCN outperforms state-of-the-art cross-modal retrieval methods on benchmark datasets including NUS-WIDE, MIRFlickr and MS-COCO.
The cross-modal retrieval task has attracted continuous attention in recent years with the increasing scale of multi-modal data, and it has broad application prospects including multimedia data management and intelligent search engines. Most existing methods project data of different modalities into a common representation space, where label information is often exploited to distinguish samples from different semantic categories. However, they typically treat each label as an independent individual and ignore the underlying semantic structure of the labels. In this paper, we propose an end-to-end adaptive label-aware graph convolutional network (ALGCN) consisting of an instance representation learning branch and a label representation learning branch, which obtains modality-invariant and discriminative representations for cross-modal retrieval. First, we construct an instance representation learning branch that transforms instances of different modalities into a common representation space. Second, we adopt a Graph Convolutional Network (GCN) to learn inter-dependent classifiers in the label representation learning branch. In addition, a novel adaptive correlation matrix is proposed to efficiently explore and preserve the semantic structure of labels in a data-driven manner. Together with a robust self-supervision loss, the GCN can be guided to learn an effective and robust correlation matrix for feature propagation. Comprehensive experimental results on three benchmark datasets, NUS-WIDE, MIRFlickr and MS-COCO, demonstrate the superiority of ALGCN over state-of-the-art cross-modal retrieval methods.
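
To make the two-branch design concrete, the PyTorch sketch below shows how a GCN-based label branch can turn label embeddings into per-label classifiers that score instance features from the common space. It is a minimal illustration of the idea described in the abstract, not the authors' released code: the class and variable names (GCNLayer, LabelGCN, corr, and so on) are assumptions, and the paper's adaptive, data-driven correlation matrix is simplified here to a freely learned parameter that is row-normalized before propagation.

# Minimal sketch of a label-aware GCN branch; names are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = A_hat @ H @ W."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, h, a_hat):
        return a_hat @ h @ self.weight

class LabelGCN(nn.Module):
    """Label branch: propagates label embeddings over a learned
    correlation matrix and outputs per-label classifiers."""
    def __init__(self, num_labels, emb_dim, hidden_dim, out_dim):
        super().__init__()
        # Correlation matrix learned end to end; the paper builds it
        # adaptively from data, here it is simplified to a parameter.
        self.corr = nn.Parameter(torch.eye(num_labels))
        self.gc1 = GCNLayer(emb_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, out_dim)

    def forward(self, label_emb):
        # Row-normalize so propagation is a weighted average over labels.
        a_hat = F.softmax(self.corr, dim=1)
        h = F.leaky_relu(self.gc1(label_emb, a_hat))
        return self.gc2(h, a_hat)  # (num_labels, out_dim) classifiers

# Usage: dot products between common-space instance features and the
# GCN-produced classifiers give per-label scores.
num_labels, emb_dim, feat_dim = 81, 300, 512
label_gcn = LabelGCN(num_labels, emb_dim, 256, feat_dim)
label_emb = torch.randn(num_labels, emb_dim)   # e.g. word vectors per label
instance_feat = torch.randn(8, feat_dim)       # batch of common-space features
classifiers = label_gcn(label_emb)
scores = instance_feat @ classifiers.t()       # (8, num_labels) logits

In the paper itself the correlation matrix is additionally constrained by a self-supervision loss so that feature propagation stays robust; the final dot-product scores correspond to applying the inter-dependent label classifiers to instance representations from either modality.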
