Article

Multimodal deep generative adversarial models for scalable doubly semi-supervised learning

Journal

INFORMATION FUSION
Volume 68, Issue -, Pages 118-130

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2020.11.003

Keywords

Multiview learning; Multimodal fusion; Generative adversarial networks; Deep generative models; Semi-supervised learning

Funding

  1. National Natural Science Foundation of China [61976209, 62020106015, 61906188]
  2. Chinese Academy of Sciences (CAS) International Collaboration Key, China [173211KYSB20190024]
  3. Strategic Priority Research Program of CAS, China [XDB32040000]

Abstract

The paper proposes a novel doubly semi-supervised multimodal learning (DSML) framework for the comprehensive utilization of incomplete multi-modality data. By using a modality-shared latent space and multiple modality-specific generators, DSML effectively associates the modalities with one another. Experimental results demonstrate that DSML outperforms the baselines on semi-supervised classification, missing-modality imputation, and cross-modality retrieval.

The comprehensive utilization of incomplete multi-modality data is a difficult problem with strong practical value. Most previous multimodal learning algorithms require massive training data with complete modalities and annotated labels, which greatly limits their practicality. Although some existing algorithms can perform data imputation, they still have two disadvantages: (1) they cannot accurately control the semantics of the imputed modalities; and (2) they must establish an independent converter between every pair of modalities when extended to the multimodal case. To overcome these limitations, we propose a novel doubly semi-supervised multimodal learning (DSML) framework. Specifically, DSML uses a modality-shared latent space and multiple modality-specific generators to associate the modalities with one another. We divide the shared latent space into two independent parts, the semantic labels and the semantic-free styles, which allows us to easily control the semantics of the generated samples. In addition, each modality has its own separate encoder and classifier to infer the corresponding semantic and semantic-free latent variables. The DSML framework can be trained adversarially using our specially designed softmax-based discriminators. Extensive experimental results show that DSML outperforms the baselines on three tasks: semi-supervised classification, missing-modality imputation, and cross-modality retrieval.
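The structure described in the abstract can be illustrated with a minimal sketch. The Python/PyTorch code below is only a structural reading of the abstract, not the authors' implementation: the modality names, feature dimensions, the use of plain MLPs, and the omission of the softmax-based adversarial training are all assumptions made for illustration. It shows how a shared latent code split into a semantic label and a semantic-free style, combined with per-modality generators, encoders, and classifiers, supports controlled generation and missing-modality imputation.

# Minimal structural sketch of the DSML idea as described in the abstract;
# NOT the authors' implementation. All sizes, modality names, and the use of
# simple MLPs are assumptions; the softmax-based adversarial training is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10                              # assumed number of semantic labels
STYLE_DIM = 16                                # assumed size of the semantic-free style code
MODALITY_DIMS = {"image": 784, "text": 300}   # hypothetical modality feature sizes


def mlp(in_dim, out_dim, hidden=128):
    """Small fully connected block used for every component (an assumption)."""
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))


class DSMLSketch(nn.Module):
    """Shared latent space = (semantic label y, semantic-free style z);
    each modality m has its own generator G_m, encoder E_m, and classifier C_m."""

    def __init__(self):
        super().__init__()
        self.generators = nn.ModuleDict(      # map (y, z) -> x_m
            {m: mlp(NUM_CLASSES + STYLE_DIM, d) for m, d in MODALITY_DIMS.items()})
        self.encoders = nn.ModuleDict(        # infer the semantic-free style z from x_m
            {m: mlp(d, STYLE_DIM) for m, d in MODALITY_DIMS.items()})
        self.classifiers = nn.ModuleDict(     # infer the semantic label y from x_m
            {m: mlp(d, NUM_CLASSES) for m, d in MODALITY_DIMS.items()})

    def generate(self, modality, y_onehot, z):
        """Controlled generation: fixing y fixes the semantics of the sample."""
        return self.generators[modality](torch.cat([y_onehot, z], dim=-1))

    def impute(self, src_modality, x_src, dst_modality):
        """Impute a missing modality from an observed one via the shared latent space."""
        y_hat = F.softmax(self.classifiers[src_modality](x_src), dim=-1)
        z_hat = self.encoders[src_modality](x_src)
        return self.generate(dst_modality, y_hat, z_hat)


if __name__ == "__main__":
    model = DSMLSketch()
    x_image = torch.randn(4, MODALITY_DIMS["image"])      # batch with only the image modality observed
    x_text_hat = model.impute("image", x_image, "text")   # impute the missing text modality
    print(x_text_hat.shape)                               # torch.Size([4, 300])

Because every modality is encoded into the same (label, style) space, the inferred codes could presumably also serve cross-modality retrieval by nearest-neighbour search in that shared space, which is consistent with the retrieval task mentioned in the abstract.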
