Article

CAESAR: concept augmentation based semantic representation for cross-modal retrieval

Journal

MULTIMEDIA TOOLS AND APPLICATIONS
Volume 81, Issue 24, Pages 34213-34243

Publisher

SPRINGER
DOI: 10.1007/s11042-020-09983-3

Keywords

Cross-modal retrieval; Deep learning; Multi-modal representation learning; Concept augmentation

Funding

  1. National Natural Science Foundation of China [61702560, 61472450, 61972203]
  2. Key Research Program of Hunan Province [2016JC2018]
  3. Science and Technology Plan of Hunan Province [2018JJ3691]
  4. Research and Innovation Project of Central South University Graduate Students [2018zzts177]


The paper presents CAESAR, a concept augmentation based method for cross-modal retrieval that comprises cross-modal correlation learning and concept augmentation based semantic mapping learning. By developing a multi-modal CNN-based CCA model and a concept learning model, CaeNet, the approach captures semantic information and learns semantic relationships between multi-modal samples.
With the increasing amount of multimedia data, cross-modal retrieval has attracted growing attention in the areas of multimedia and computer vision. To bridge the semantic gap between multi-modal data and improve retrieval performance, we propose an effective concept augmentation based method, named CAESAR, an end-to-end framework comprising cross-modal correlation learning and concept augmentation based semantic mapping learning. To enhance representation and correlation learning, a novel multi-modal CNN-based CCA model is developed, which captures high-level semantic information during cross-modal feature learning and then captures maximal nonlinear correlation. In addition, to learn the semantic relationships between multi-modal samples, a concept learning model named CaeNet is proposed, realized with word2vec and LDA to capture the close relations between texts and abstract concepts. Reinforced by the abstract concept information, cross-modal semantic mappings are learned with a semantic alignment strategy. We conduct comprehensive experiments on four benchmark multimedia datasets; the results show that our method performs strongly for cross-modal retrieval.
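As a rough illustration of the correlation-learning step described above, the sketch below runs classical linear CCA on synthetic two-view data. This is not the paper's model: CAESAR learns the two views with deep multi-modal CNNs before maximizing nonlinear correlation, whereas this sketch uses fixed linear features, and the toy data (a shared latent signal standing in for paired image/text features) is purely illustrative.

```python
import numpy as np

def _inv_sqrt(S):
    """Symmetric inverse square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

def linear_cca(X, Y, k, reg=1e-4):
    """Classical linear CCA between two views X (n x dx) and Y (n x dy).

    Returns projection matrices Wx, Wy and the top-k canonical
    correlations. `reg` is a small ridge term for numerical stability.
    """
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten each view, then take the SVD of the cross-covariance:
    # singular values are the canonical correlations.
    T = _inv_sqrt(Sxx) @ Sxy @ _inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(T)
    Wx = _inv_sqrt(Sxx) @ U[:, :k]
    Wy = _inv_sqrt(Syy) @ Vt[:k].T
    return Wx, Wy, s[:k]

# Toy demo: two "views" share one latent signal z (hypothetical data,
# standing in for paired image and text features).
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 3))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 4))])
Wx, Wy, corrs = linear_cca(X, Y, k=2)
```

Because the shared signal sits in one coordinate of each view, the first canonical correlation comes out close to 1 while the second stays near 0; a deep variant replaces the fixed features with CNN outputs and optimizes the same correlation objective end to end.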

