Article

Multimodal Learning of Social Image Representation by Exploiting Social Relations

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 3, Pages 1506-1518

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2019.2896100

Keywords

Learning systems; Correlation; Visualization; Social networking (online); Task analysis; Image edge detection; Cybernetics; Multimodal; social image; triplet network; variational autoencoder (VAE)

Funding

  1. National Natural Science Foundation of China [U1636211]
  2. Beijing Natural Science Foundation of China [4182037]
  3. National Key Research and Development Plan of China [2017YFB0802203]
  4. Guangdong Provincial Special Funds for Applied Technology Research and Development and Transformation of Important Scientific and Technological Achievements [2017B010124002]
  5. Guangdong Key Laboratory of Data Security and Privacy Preserving [2017B030301004]
  6. Natural Science Foundation of Guangdong Province, China [2017A030313334]


In this paper, a novel correlational multimodal variational autoencoder (CMVAE) model is proposed for learning representations of social images via a triplet network. The method is shown to be effective on two tasks, multilabel classification and cross-modal retrieval, outperforming existing methods.
Representation learning for social images has recently achieved remarkable results on many tasks, such as cross-modal retrieval and multilabel classification. However, since social images contain both multimodal content (e.g., visual images and textual descriptions) and social relations among images, modeling the content information alone may lead to suboptimal embeddings. In this paper, we propose a novel multimodal representation learning model for social images: a correlational multimodal variational autoencoder (CMVAE) via a triplet network. Specifically, to mine the highly nonlinear correlation between the visual content and the textual content, the CMVAE learns a unified representation for the multiple modalities of a social image, encoding both the common information shared across modalities and the private information specific to each modality. To incorporate the social relations among images, we employ a triplet network that embeds multiple types of social links into the representation learning. A joint embedding model then combines the social relations with the representation learning of the multimodal content. Comprehensive experimental results on four datasets confirm the effectiveness of our method on two tasks, namely, multilabel classification and cross-modal retrieval, where it outperforms state-of-the-art multimodal representation learning methods by a significant margin.
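To make the two ingredients described in the abstract concrete, the following is a minimal PyTorch sketch of (1) a multimodal VAE that encodes a shared "common" latent plus per-modality "private" latents and reconstructs both modalities, and (2) a triplet loss that pulls socially linked images together in the embedding space. All module names, dimensions, fusion choices, and loss weights below are illustrative assumptions, not the authors' published CMVAE implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Maps one modality (pre-extracted image or text features) to Gaussian
    parameters for a common latent and a private latent."""
    def __init__(self, in_dim, common_dim, private_dim, hidden=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.common_mu = nn.Linear(hidden, common_dim)
        self.common_logvar = nn.Linear(hidden, common_dim)
        self.private_mu = nn.Linear(hidden, private_dim)
        self.private_logvar = nn.Linear(hidden, private_dim)

    def forward(self, x):
        h = self.backbone(x)
        return (self.common_mu(h), self.common_logvar(h),
                self.private_mu(h), self.private_logvar(h))


def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)


def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)), summed over latent dims, averaged over the batch.
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()


class MultimodalVAE(nn.Module):
    """Encodes image and text features into a unified representation built from
    a fused common latent plus each modality's private latent, then reconstructs
    both modalities from that unified code."""
    def __init__(self, img_dim=2048, txt_dim=300, common_dim=128, private_dim=64):
        super().__init__()
        self.img_enc = ModalityEncoder(img_dim, common_dim, private_dim)
        self.txt_enc = ModalityEncoder(txt_dim, common_dim, private_dim)
        joint = common_dim + 2 * private_dim
        self.img_dec = nn.Sequential(nn.Linear(joint, 512), nn.ReLU(), nn.Linear(512, img_dim))
        self.txt_dec = nn.Sequential(nn.Linear(joint, 512), nn.ReLU(), nn.Linear(512, txt_dim))

    def forward(self, img, txt):
        i_cmu, i_clv, i_pmu, i_plv = self.img_enc(img)
        t_cmu, t_clv, t_pmu, t_plv = self.txt_enc(txt)
        # Fuse the common latents (simple averaging here; other fusions are possible).
        z_common = reparameterize((i_cmu + t_cmu) / 2, (i_clv + t_clv) / 2)
        z_img = reparameterize(i_pmu, i_plv)
        z_txt = reparameterize(t_pmu, t_plv)
        z = torch.cat([z_common, z_img, z_txt], dim=1)  # unified representation
        recon_img, recon_txt = self.img_dec(z), self.txt_dec(z)
        kl = (kl_divergence(i_cmu, i_clv) + kl_divergence(t_cmu, t_clv) +
              kl_divergence(i_pmu, i_plv) + kl_divergence(t_pmu, t_plv))
        return z, recon_img, recon_txt, kl


def joint_loss(model, anchor, positive, negative, margin=1.0, beta=1e-3, gamma=1.0):
    """anchor/positive/negative are (img_feat, txt_feat) tuples; 'positive' is a
    socially linked image and 'negative' an unlinked one (sampling strategy assumed)."""
    za, ra_i, ra_t, kla = model(*anchor)
    zp, _, _, _ = model(*positive)
    zn, _, _, _ = model(*negative)
    recon = F.mse_loss(ra_i, anchor[0]) + F.mse_loss(ra_t, anchor[1])
    triplet = F.triplet_margin_loss(za, zp, zn, margin=margin)
    return recon + beta * kla + gamma * triplet
```

In this sketch the reconstruction and KL terms train the unified representation to capture both modalities, while the triplet term embeds the social links, mirroring the joint-embedding idea in the abstract; the relative weighting (beta, gamma) is an assumed hyperparameter, not a reported value.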

