Article

Sparse Graph Attention Networks

Journal

IEEE Transactions on Knowledge and Data Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2021.3072345

Keywords

Graph neural networks; attention networks; sparsity learning

Abstract
Graph Neural Networks (GNNs) have proved to be an effective representation learning framework for graph-structured data, and have achieved state-of-the-art performance on many practical predictive tasks, such as node classification, link prediction, and graph classification. Among the variants of GNNs, Graph Attention Networks (GATs) learn to assign dense attention coefficients over all neighbors of a node for feature aggregation, and improve the performance of many graph learning tasks. However, real-world graphs are often very large and noisy, and GATs are prone to overfitting if not regularized properly. Even worse, the local aggregation mechanism of GATs may fail on disassortative graphs, where nodes within a local neighborhood provide more noise than useful information for feature aggregation. In this paper, we propose Sparse Graph Attention Networks (SGATs) that learn sparse attention coefficients under an $L_0$-norm regularization, and the learned sparse attention coefficients are then used by all GNN layers, resulting in an edge-sparsified graph. By doing so, we can identify noisy/task-irrelevant edges, and thus perform feature aggregation on the most informative neighbors. Extensive experiments on synthetic and real-world (assortative and disassortative) graph learning benchmarks demonstrate the superior performance of SGATs. In particular, SGATs can remove about 50-80 percent of the edges from large assortative graphs, such as PPI and Reddit, while retaining similar classification accuracies. On disassortative graphs, SGATs prune the majority of noisy edges and outperform GATs in classification accuracy by significant margins. Furthermore, the removed edges can be interpreted intuitively and quantitatively. To the best of our knowledge, this is the first graph learning algorithm that shows that graphs contain significant redundancies and that edge-sparsified graphs can achieve similar (on assortative graphs) or sometimes higher (on disassortative graphs) predictive performance than the original graphs. Our code is available at https://github.com/Yangyeeee/SGAT.
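The key mechanism the abstract describes, a single set of per-edge gates learned under an $L_0$ penalty and shared across all GNN layers, is typically made differentiable with the hard-concrete relaxation of Louizos et al. (2018). Below is a minimal PyTorch sketch of such an edge gate; it illustrates the general technique under that assumption and is not the authors' implementation (see the linked repository for that). All class, variable, and parameter names here are illustrative.

```python
import math

import torch
import torch.nn as nn


class HardConcreteGate(nn.Module):
    """Per-edge stochastic binary gate with a differentiable L0 penalty.

    A minimal sketch of the hard-concrete relaxation (Louizos et al., 2018);
    names and default hyperparameters are illustrative, not taken from the
    SGAT reference code.
    """

    def __init__(self, num_edges, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
        super().__init__()
        # One learnable gate logit per edge of the graph.
        self.log_alpha = nn.Parameter(torch.zeros(num_edges))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        if self.training:
            # Reparameterized sample from the binary concrete distribution.
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1.0 - 1e-6)
            s = torch.sigmoid(
                (torch.log(u) - torch.log(1.0 - u) + self.log_alpha) / self.beta
            )
        else:
            s = torch.sigmoid(self.log_alpha)
        # Stretch to (gamma, zeta) and clamp to [0, 1]; the clamp makes
        # exact zeros possible, which is what allows edges to be pruned.
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def l0_penalty(self):
        # Expected number of active (non-zero) gates, i.e. the
        # differentiable surrogate for the L0 norm of the edge mask.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum()
```

In an SGAT-style training loop, the sampled gate values would scale each edge's attention coefficient before aggregation, and a weighted `l0_penalty()` term would be added to the classification loss; after training, edges whose gates are exactly zero can be dropped, yielding the edge-sparsified graph the abstract describes.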

