Journal
WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020)
Volume -, Issue -, Pages 2704-2710
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3366423.3380027
Keywords
Graph Neural Networks; Heterogeneous Information Networks; Representation Learning; Graph Embedding; Graph Attention
Funding
- NSF [III-1705169, 1937599]
- NSF CAREER Award [1741634]
- Okawa Foundation
- Amazon Research Award
- Division of Computing and Communication Foundations
- Directorate for Computer & Information Science & Engineering [1937599] (Funding Source: National Science Foundation)
Abstract
Recent years have witnessed the emerging success of graph neural networks (GNNs) for modeling structured data. However, most GNNs are designed for homogeneous graphs, in which all nodes and edges belong to a single type, making them infeasible for representing heterogeneous structures. In this paper, we present the Heterogeneous Graph Transformer (HGT) architecture for modeling Web-scale heterogeneous graphs. To model heterogeneity, we design node- and edge-type dependent parameters to characterize the heterogeneous attention over each edge, empowering HGT to maintain dedicated representations for different types of nodes and edges. To handle Web-scale graph data, we design a heterogeneous mini-batch graph sampling algorithm, HGSampling, for efficient and scalable training. Extensive experiments on the Open Academic Graph of 179 million nodes and 2 billion edges show that the proposed HGT model consistently outperforms all the state-of-the-art GNN baselines by 9%-21% on various downstream tasks. The dataset and source code of HGT are publicly available at https://github.com/acbull/pyHGT.
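The abstract's core idea, node- and edge-type dependent attention parameters, can be sketched in a few lines. The names, shapes, and node/edge types below are illustrative assumptions for a toy graph, not the paper's actual implementation (see the linked pyHGT repository for that): each node type gets its own projection matrix and each edge type its own attention matrix, so attention over an edge depends on the types at both endpoints.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # hidden dimension (assumed for this toy example)

# Node-type dependent projections (hypothetical "paper" and "author" types).
W_node = {"paper": rng.normal(size=(d, d)), "author": rng.normal(size=(d, d))}
# Edge-type dependent attention matrices (hypothetical "writes" edge type).
W_edge = {"writes": rng.normal(size=(d, d))}

def attention_score(h_src, src_type, h_tgt, tgt_type, edge_type):
    """Score one edge using parameters chosen by the types it connects."""
    k = W_node[src_type] @ h_src   # key projected by the source node's type
    q = W_node[tgt_type] @ h_tgt   # query projected by the target node's type
    return (q @ W_edge[edge_type] @ k) / np.sqrt(d)  # scaled dot product

# One "paper" node attending over two "author" neighbors via "writes" edges.
h_paper = rng.normal(size=d)
h_authors = [rng.normal(size=d) for _ in range(2)]
scores = np.array([attention_score(h, "author", h_paper, "paper", "writes")
                   for h in h_authors])
attn = np.exp(scores - scores.max())
attn /= attn.sum()  # softmax over this target node's incoming edges
print(attn)         # per-neighbor attention weights, summing to 1
```

A homogeneous GNN would use a single `W` for all nodes and edges; the per-type lookup above is what lets dedicated representations emerge for each node and edge type.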