4.7 Article

Augmented Adversarial Training for Cross-Modal Retrieval

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 23, Issue -, Pages 559-571

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2020.2985540

Keywords

Cross-modal retrieval; data alignment; adversarial training

Funding

  1. National Key R&D Program of China [2018AAA0102003]
  2. National Natural Science Foundation of China [61672497, 61620106009, 61836002, 61931008, U1636214]
  3. Key Research Program of Frontier Sciences, CAS [QYZDJ-SSW-SYS013]
  4. China Postdoctoral Science Foundation [119103S291]


This paper proposes a cross-modal retrieval method that uses augmented adversarial training to align data from different modalities. By incorporating additional semantically relevant and irrelevant sample pairs, the method improves alignment effectiveness. Extensive experiments demonstrate that the approach outperforms state-of-the-art methods.
Cross-modal retrieval has received considerable attention in recent years. Its core problem is to find a representation space in which data from different modalities are aligned according to their semantics. In this paper, we propose a cross-modal retrieval method that aligns data from different modalities by transferring one source modality to another target modality with augmented adversarial training. To preserve semantic meaning during the modality transfer, we adopt and augment the idea of conditional GANs. The key idea is to incorporate semantic information from the label space into the adversarial training process by sampling additional semantically relevant and irrelevant source-target sample pairs. The augmented sample pairs improve the alignment in two ways. First, relevant source-target pairs provide more training samples, giving better guidance for aligning fake targets with true paired targets. Second, relevant and irrelevant source-target pairs teach the discriminator to better distinguish true relevant pairs from fake relevant pairs, which in turn guides the generator to better transfer from the source modality to the target modality. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods.
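The pair-augmentation step described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual implementation: the function name `augment_pairs` and its interface are assumptions, and a pair is taken to be relevant when the two items share a class label.

```python
import random

def augment_pairs(labels_s, labels_t, n_pairs, rng=random):
    """Sample index pairs (i, j) of semantically relevant and
    irrelevant source-target examples.

    A pair is relevant when the source and target items share a class
    label, and irrelevant otherwise. Relevant pairs give the
    discriminator extra "true pair" evidence beyond the originally
    co-occurring pairs, while irrelevant pairs supply extra negatives,
    mirroring the augmentation described in the abstract.
    """
    relevant, irrelevant = [], []
    n_s, n_t = len(labels_s), len(labels_t)
    while len(relevant) < n_pairs or len(irrelevant) < n_pairs:
        i, j = rng.randrange(n_s), rng.randrange(n_t)
        if labels_s[i] == labels_t[j]:
            if len(relevant) < n_pairs:
                relevant.append((i, j))
        elif len(irrelevant) < n_pairs:
            irrelevant.append((i, j))
    return relevant, irrelevant
```

In an actual training loop, the relevant pairs would be fed to the discriminator alongside the original co-occurring pairs as "real" examples, and the irrelevant pairs (together with generator outputs) as "fake" examples.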

