Article

Multimodal Learning of Social Image Representation by Exploiting Social Relations

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 3, Pages 1506-1518

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TCYB.2019.2896100

Keywords

Learning systems; Correlation; Visualization; Social networking (online); Task analysis; Image edge detection; Cybernetics; Multimodal; social image; triplet network; variational autoencoder (VAE)

Funding

  1. National Natural Science Foundation of China [U1636211]
  2. Beijing Natural Science Foundation of China [4182037]
  3. National Key Research and Development Plan of China [2017YFB0802203]
  4. Guangdong Provincial Special Funds for Applied Technology Research and Development and Transformation of Important Scientific and Technological Achievements [2017B010124002]
  5. Guangdong Key Laboratory of Data Security and Privacy Preserving [2017B030301004]
  6. Natural Science Foundation of Guangdong Province, China [2017A030313334]

Abstract

In this paper, a novel correlational multimodal variational autoencoder (CMVAE) model is proposed for learning representations of social images via a triplet network. The method is effective on two tasks, multilabel classification and cross-modal retrieval, where it outperforms existing methods.
Learning representations for social images has recently achieved remarkable progress on many tasks, such as cross-modal retrieval and multilabel classification. However, since social images contain both multimodal content (e.g., visual images and textual descriptions) and social relations among images, modeling the content information alone may lead to suboptimal embeddings. In this paper, we propose a novel multimodal representation learning model for social images: a correlational multimodal variational autoencoder (CMVAE) combined with a triplet network. Specifically, to mine the highly nonlinear correlation between the visual content and the textual content, the CMVAE learns a unified representation for the multiple modalities of a social image, encoding both the common information shared by all modalities and the private information specific to each modality. To incorporate the social relations among images, we employ a triplet network to embed multiple types of social links into the representation. A joint embedding model then combines the social relations with the representation learning of the multimodal contents. Comprehensive experimental results on four datasets confirm the effectiveness of our method on two tasks, namely, multilabel classification and cross-modal retrieval, where it outperforms state-of-the-art multimodal representation learning methods by a significant margin.
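The abstract outlines three components: a multimodal VAE that separates common and private latent factors, a triplet network over social links, and a joint embedding objective. Below is a minimal PyTorch sketch of these ideas; the class names, feature dimensions, Gaussian priors, MSE reconstruction terms, and the weighting `alpha` are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Minimal sketch of a CMVAE-style model: each modality gets a private
# latent, a fused representation yields a common latent, and a triplet
# loss over social links shapes the joint embedding. All names and
# dimensions are hypothetical, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianHead(nn.Module):
    """Maps features to the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, h):
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps via the reparameterization trick."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian, summed over dims."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)

class CMVAESketch(nn.Module):
    """Hypothetical CMVAE-like model with common and private latents."""
    def __init__(self, img_dim=4096, txt_dim=300, h_dim=512, z_dim=64):
        super().__init__()
        self.enc_img = nn.Sequential(nn.Linear(img_dim, h_dim), nn.ReLU())
        self.enc_txt = nn.Sequential(nn.Linear(txt_dim, h_dim), nn.ReLU())
        self.common = GaussianHead(2 * h_dim, z_dim)   # shared across modalities
        self.priv_img = GaussianHead(h_dim, z_dim)     # image-only factors
        self.priv_txt = GaussianHead(h_dim, z_dim)     # text-only factors
        # Decoders reconstruct each modality from [common, private].
        self.dec_img = nn.Linear(2 * z_dim, img_dim)
        self.dec_txt = nn.Linear(2 * z_dim, txt_dim)

    def embed(self, img, txt):
        """Unified representation: the mean of the common latent."""
        hi, ht = self.enc_img(img), self.enc_txt(txt)
        mu_c, _ = self.common(torch.cat([hi, ht], dim=1))
        return mu_c

    def forward(self, img, txt):
        hi, ht = self.enc_img(img), self.enc_txt(txt)
        mu_c, lv_c = self.common(torch.cat([hi, ht], dim=1))
        mu_i, lv_i = self.priv_img(hi)
        mu_t, lv_t = self.priv_txt(ht)
        zc, zi, zt = (reparameterize(m, lv) for m, lv in
                      ((mu_c, lv_c), (mu_i, lv_i), (mu_t, lv_t)))
        rec_img = self.dec_img(torch.cat([zc, zi], dim=1))
        rec_txt = self.dec_txt(torch.cat([zc, zt], dim=1))
        # ELBO-style loss: reconstruct both modalities, regularize all latents.
        loss = (F.mse_loss(rec_img, img, reduction="none").sum(1)
                + F.mse_loss(rec_txt, txt, reduction="none").sum(1)
                + kl_divergence(mu_c, lv_c)
                + kl_divergence(mu_i, lv_i)
                + kl_divergence(mu_t, lv_t)).mean()
        return loss, mu_c

def joint_loss(model, anchor, positive, negative, margin=1.0, alpha=0.1):
    """Joint objective: VAE loss on the anchor plus a triplet loss pulling
    socially linked images together and pushing unlinked ones apart.
    Each of anchor/positive/negative is an (image, text) feature pair."""
    vae_loss, za = model(*anchor)
    zp = model.embed(*positive)   # image sharing a social link with the anchor
    zn = model.embed(*negative)   # image without a social link
    trip = F.triplet_margin_loss(za, zp, zn, margin=margin)
    return vae_loss + alpha * trip
```

In this sketch the unified embedding used for retrieval or classification is taken to be the mean of the common latent; the paper's exact fusion mechanism, the handling of multiple social link types, and the loss weighting may differ.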
