Article

Variational cold-start resistant recommendation

Journal

INFORMATION SCIENCES
Volume 605, Pages 267-285

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2022.05.025

Keywords

Variational autoencoder; Cold-start; Social-trust information; Graph convolutional network

Funding

  1. National Natural Science Foundation of China [62176043, 62072077]

This paper proposes a recommendation model, CORE-VAE, that addresses the complexity and sparsity of social networks using a social-aware similarity function and a graph convolutional network. The model produces robust social-aware user representations, uses them to generate cold-start resistant rating vectors, and explores user rating information with an expressive variational autoencoder. Experiments show that CORE-VAE outperforms competitive baselines on real-world datasets.
Conventionally, cold-start limitations are managed by leveraging side information such as social-trust relationships. However, the relationships between users in social networks are complex, uncertain, and sparse. It is therefore necessary to extract beneficial social connections to make recommendation models cold-start resistant. Towards this end, we propose a novel recommendation model called Variational Cold-start Resistant Recommendation (CORE-VAE). More concretely, we employ a social-aware similarity function and a graph convolutional network (GCN) to generate robust social-aware user representations that account for the complexities, uncertainties, and sparse nature of the social-trust network. These social-aware representations then allow us to produce cold-start resistant rating vectors for all users. To explore the rich user rating information, we propose an expressive variational autoencoder (VAE) model. Unlike earlier VAE-based collaborative filtering (CF) models, CORE-VAE utilizes a novel prior distribution and a well-designed skip-generative network to considerably alleviate the posterior collapse issue. In addition, CORE-VAE captures the uncertainty of the latent space and ensures that observations and their accompanying latent variables have high mutual information. Together, these techniques yield better latent representations and hence more accurate recommendations. Through comprehensive empirical evaluation and analysis, we show that CORE-VAE outperforms numerous competitive baseline models on real-world datasets. (C) 2022 Elsevier Inc. All rights reserved.
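The abstract gives only a high-level description of the architecture. Purely as orientation, the following is a minimal PyTorch sketch, not the authors' code, of the pipeline it outlines: a GCN over the social-trust graph yields social-aware user representations, which condition a VAE over each user's rating vector. Every name here (SocialGCN, CoreVAESketch, loss_fn), the layer sizes, the decoder skip connection, and the standard-normal prior are illustrative assumptions; the paper's social-aware similarity function, novel prior, and skip-generative network are not reproduced.

```python
# Minimal sketch (assumed structure, not the authors' implementation) of a
# GCN-conditioned VAE for collaborative filtering, as outlined in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SocialGCN(nn.Module):
    """One GCN layer over a symmetrically normalized social-trust adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm, x):
        # adj_norm: (n_users, n_users) normalized adjacency; x: (n_users, in_dim) user features
        return torch.relu(self.lin(adj_norm @ x))


class CoreVAESketch(nn.Module):
    """VAE over rating vectors, conditioned on social-aware user embeddings.
    The decoder also sees the social embedding -- a guess at the role of the
    'skip-generative network' mentioned in the abstract."""
    def __init__(self, n_items, social_dim=64, latent_dim=32, hidden=256):
        super().__init__()
        self.gcn = SocialGCN(n_items, social_dim)
        self.enc = nn.Linear(n_items + social_dim, hidden)
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Linear(latent_dim + social_dim, n_items)  # skip from social embedding

    def forward(self, ratings, adj_norm):
        social = self.gcn(adj_norm, ratings)                      # social-aware user representations
        h = torch.tanh(self.enc(torch.cat([ratings, social], dim=-1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        logits = self.dec(torch.cat([z, social], dim=-1))
        return logits, mu, logvar


def loss_fn(logits, ratings, mu, logvar, beta=0.2):
    # Multinomial log-likelihood plus a KL term against a standard-normal prior.
    # The paper uses a novel prior; N(0, I) here is a simplification.
    recon = -(F.log_softmax(logits, dim=-1) * ratings).sum(-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return recon + beta * kl


if __name__ == "__main__":
    # Toy data: 100 users, 500 items, a sparse symmetric trust graph with self-loops.
    n_users, n_items = 100, 500
    ratings = (torch.rand(n_users, n_items) < 0.05).float()
    adj = ((torch.rand(n_users, n_users) < 0.02).float() + torch.eye(n_users)).clamp(max=1.0)
    adj = ((adj + adj.T) > 0).float()
    deg = adj.sum(-1)
    adj_norm = adj / torch.sqrt(deg[:, None] * deg[None, :])

    model = CoreVAESketch(n_items)
    logits, mu, logvar = model(ratings, adj_norm)
    print("loss:", loss_fn(logits, ratings, mu, logvar).item())
```

Under these assumptions, a beta weight below 1 on the KL term and the decoder skip connection are standard ways to mitigate posterior collapse, the failure mode the abstract says the novel prior and skip-generative network address.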
