Article

L-BGNN: Layerwise Trained Bipartite Graph Neural Networks

Journal

IEEE Transactions on Neural Networks and Learning Systems
Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2022.3171199

Keywords

Bipartite graphs; training; message passing; task analysis; generative adversarial networks; electronic commerce; semantics; adversarial learning; graph neural networks (GNNs); scalable learning; unsupervised learning


Learning low-dimensional representations of bipartite graphs enables e-commerce applications, such as recommendation, classification, and link prediction. A layerwise-trained bipartite graph neural network (L-BGNN) embedding method, which is unsupervised, efficient, and scalable, is proposed in this work. To aggregate the information across and within two partitions of a bipartite graph, a customized interdomain message passing (IDMP) operation and an intradomain alignment (IDA) operation are adopted by the proposed L-BGNN method. Furthermore, we develop a layerwise training algorithm for L-BGNN to capture the multihop relationship of large bipartite networks and improve training efficiency. We conduct extensive experiments on several datasets and downstream tasks of various scales to demonstrate the effectiveness and efficiency of the L-BGNN method as compared with state-of-the-art methods. Our codes are publicly available at https://github.com/TianXieUSC/L-BGNN.
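The abstract describes an interdomain message-passing (IDMP) step that aggregates features across the two partitions of a bipartite graph. The paper's exact operator is not given here, so the following is only a minimal generic sketch of interdomain aggregation (mean-pooling neighbor features from one partition, then a learned linear transform), with the function name and shapes chosen for illustration:

```python
import numpy as np

def interdomain_message_passing(x_u, x_v, edges, w):
    """One interdomain message-passing step on a bipartite graph.

    Each U-partition node mean-pools the features of its V-partition
    neighbors, then applies a linear transform followed by tanh.
    A generic GNN-style sketch, not the L-BGNN paper's exact IDMP.
    """
    n_u = x_u.shape[0]
    agg = np.zeros((n_u, x_v.shape[1]))  # summed neighbor features
    deg = np.zeros(n_u)                  # neighbor counts per U-node
    for u, v in edges:                   # edges run from U to V
        agg[u] += x_v[v]
        deg[u] += 1
    deg[deg == 0] = 1                    # isolated nodes keep zeros
    agg /= deg[:, None]                  # mean pooling
    return np.tanh(agg @ w)              # new U-side embeddings

# Toy bipartite graph: 3 U-nodes, 2 V-nodes; U-node 2 is isolated.
x_u = np.zeros((3, 2))
x_v = np.array([[1.0, 0.0], [0.0, 1.0]])
edges = [(0, 0), (1, 0), (1, 1)]
w = np.eye(2)
z_u = interdomain_message_passing(x_u, x_v, edges, w)
print(z_u.shape)  # (3, 2)
```

Stacking one such step per layer, and training each layer before moving to the next (as the layerwise algorithm in the abstract suggests), lets each additional layer extend the receptive field by one more hop across the bipartition.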

