Journal
IEEE TRANSACTIONS ON MULTIMEDIA
Volume 23, Pages 559-571
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2020.2985540
Keywords
Cross-modal retrieval; data alignment; adversarial training
Funding
- National Key R&D Program of China [2018AAA0102003]
- National Natural Science Foundation of China [61672497, 61620106009, 61836002, 61931008, U1636214]
- Key Research Program of Frontier Sciences, CAS [QYZDJ-SSW-SYS013]
- China Postdoctoral Science Foundation [119103S291]
This paper proposes a cross-modal retrieval method that uses augmented adversarial training to align data from different modalities by incorporating additional semantically relevant and irrelevant sample pairs, which improves alignment effectiveness. Extensive experiments show that the approach compares favorably with state-of-the-art methods.
Cross-modal retrieval has received considerable attention in recent years. The core of cross-modal retrieval is to find a representation space that aligns data from different modalities according to their semantics. In this paper, we propose a cross-modal retrieval method that aligns data from different modalities by transferring one source modality to another target modality with augmented adversarial training. To preserve semantic meaning in the modality-transfer process, we adopt and augment the idea of conditional GANs. The key idea is to incorporate semantic information from the label space into the adversarial training process by sampling additional semantically relevant and irrelevant source-target sample pairs. The augmented sample pairs improve the alignment in two ways. First, relevant source-target pairs provide more training samples, giving better guidance for aligning fake targets with true paired targets. Second, relevant and irrelevant source-target pairs teach the discriminator to better distinguish true relevant pairs from fake relevant pairs, which in turn guides the generator to better transfer from the source modality to the target modality. Extensive experiments against state-of-the-art methods demonstrate the promise of our approach.
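The pair-augmentation step described in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical helper (not from the paper's released code): given class labels for the source and target modalities, it samples extra semantically relevant pairs (same label) and irrelevant pairs (different label) for each source item, which could then feed a conditional discriminator alongside the original paired samples.

```python
import random

def augment_pairs(source_labels, target_labels, n_per_anchor=2, seed=0):
    """Sample semantically relevant and irrelevant source-target index pairs.

    Beyond the originally paired (i, i) samples, each source item i is matched
    with extra targets sharing its label (relevant) and with targets carrying
    a different label (irrelevant), mirroring the augmented sampling idea.
    """
    rng = random.Random(seed)
    relevant, irrelevant = [], []
    for i, ls in enumerate(source_labels):
        same = [j for j, lt in enumerate(target_labels) if lt == ls]
        diff = [j for j, lt in enumerate(target_labels) if lt != ls]
        # Cap the number of sampled pairs by what is actually available.
        relevant += [(i, j) for j in rng.sample(same, min(n_per_anchor, len(same)))]
        irrelevant += [(i, j) for j in rng.sample(diff, min(n_per_anchor, len(diff)))]
    return relevant, irrelevant

# Example: image labels (source) and text labels (target) over 3 classes.
rel, irr = augment_pairs([0, 0, 1, 1, 2], [0, 1, 1, 2, 0])
```

In a training loop, the relevant pairs would be presented to the discriminator as additional "true" pairs and the irrelevant ones as "fake" pairs, implementing the label-conditioned real/fake split the abstract describes.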