Article

Interest-Aware Contrastive-Learning-Based GCN for Recommendation

Journal

IEEE ACCESS
Volume 10, Pages 126315-126325

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2022.3226369

Keywords

Task analysis; Convolutional neural networks; Collaboration; Recommender systems; Nonhomogeneous media; Data models; Collaborative filtering; Neural networks; graph neural networks; big data applications


This paper proposes an interest-aware contrastive-learning-based GCN model (IC-GCN) that effectively exploits signals from higher-order neighbors in recommender systems while mitigating the over-smoothing and data-quality problems of existing GCN models. Experimental results demonstrate the effectiveness of the IC-GCN model.
Graph convolutional networks (GCNs) have shown great potential in recommender systems. GCN models stack multiple graph-convolution layers to exploit signals from higher-order neighbors; in each layer, the embedding of a user or item is updated from its directly connected neighbors. This approach has two main problems. First, stacking too many graph-convolution layers causes the embeddings of different users or items to become indistinguishably similar (over-smoothing). Second, the observed interaction data have unfavorable characteristics, such as sparsity, noise, and a skewed distribution, that may impair the model's performance. This paper proposes an interest-aware contrastive-learning-based GCN (IC-GCN) model. IC-GCN applies an interest-aware mechanism that divides users into subgraphs according to their interests and performs multilayer graph convolution within each subgraph, so that all collaborative signals received from multi-hop neighbors are positive. Furthermore, IC-GCN adds contrastive learning as an auxiliary task: the interest-aware encoder receives two modified graphs generated by applying a node-dropout operator to the full interaction graph, and these two graphs yield two sets of embeddings that serve as additional views of the nodes. A contrastive loss function compares the two sets of embeddings. Extensive experiments demonstrate the effectiveness of the model.
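The auxiliary task described above can be sketched in a few lines: apply node dropout to the interaction graph to produce augmented views, then compare the resulting per-node embeddings with a contrastive (InfoNCE-style) loss in which matching nodes across views are positives and all other pairs are negatives. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation; the function names (`node_dropout`, `info_nce`) and all shapes are hypothetical, and the encoder itself is stubbed by a perturbed copy of the embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def node_dropout(adj, keep_prob, rng):
    # Zero out the rows and columns of randomly dropped nodes,
    # mimicking the node-dropout augmentation on the interaction graph.
    keep = rng.random(adj.shape[0]) < keep_prob
    out = adj.copy()
    out[~keep, :] = 0.0
    out[:, ~keep] = 0.0
    return out

def info_nce(z1, z2, tau=0.2):
    # InfoNCE-style contrastive loss between two embedding views:
    # row i of z1 and row i of z2 are a positive pair, all other
    # cross-view rows act as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # pairwise cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives sit on the diagonal

# Toy example: 6 nodes, 4-dim embeddings. In IC-GCN the two views would
# come from running the interest-aware encoder on two dropped-out graphs;
# here the second view is simply a perturbed copy of the first.
adj = (rng.random((6, 6)) > 0.5).astype(float)
view_graph = node_dropout(adj, keep_prob=0.8, rng=rng)
z1 = rng.normal(size=(6, 4))
z2 = z1 + 0.1 * rng.normal(size=(6, 4))
loss = info_nce(z1, z2)
print(float(loss))
```

In practice this loss would be weighted and added to the main recommendation (e.g. BPR) objective; the temperature `tau` controls how sharply the softmax separates the positive pair from the negatives.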

