Related References
Note: only a partial list of references is shown; download the original document for the complete bibliography.
Article
Computer Science, Artificial Intelligence
Sparse Graph Attention Networks
Yang Ye et al.
Summary: Graph Neural Networks (GNNs) are effective for representation learning on graph-structured data and have achieved state-of-the-art performance on various predictive tasks. Graph Attention Networks (GATs) improve graph learning by assigning dense attention coefficients to node neighbors. However, GATs are prone to overfitting on large and noisy graphs, and may fail on disassortative graphs. Sparse Graph Attention Networks (SGATs) learn sparse attention coefficients via $L_0$-norm regularization, so features are aggregated only from informative neighbors. SGATs outperform GATs on both assortative and disassortative graphs and can remove noisy edges from assortative graphs while maintaining classification accuracy. This is the first graph learning algorithm to demonstrate such redundancy in graphs, showing that edge-sparsified graphs can achieve predictive performance similar to or higher than that of the original graphs.
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2023)
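The core idea in the summary above — pruning uninformative edges with $L_0$-regularized gates before attention aggregation — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the deterministic hard-concrete gate follows the common $L_0$ relaxation (stretch a sigmoid past $[0,1]$ and clip, so many gates land exactly at 0), and the function names, parameters, and toy graph are all assumptions for illustration.

```python
import numpy as np

def hard_concrete_gate(log_alpha, gamma=-0.1, zeta=1.1):
    # Deterministic (test-time) form of the hard-concrete L0 gate:
    # stretch the sigmoid to (gamma, zeta) and clip to [0, 1],
    # so sufficiently negative log_alpha yields an exact 0 (edge pruned).
    s = 1.0 / (1.0 + np.exp(-log_alpha))
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

def sparse_attention_aggregate(x, adj, scores, log_alpha):
    # x: (N, F) node features; adj: (N, N) 0/1 adjacency;
    # scores: (N, N) raw attention logits; log_alpha: (N, N) gate params.
    gates = hard_concrete_gate(log_alpha) * adj      # zero out pruned edges
    e = np.where(gates > 0, np.exp(scores), 0.0)     # masked exponentials
    coef = gates * e                                 # gated attention weights
    denom = coef.sum(axis=1, keepdims=True) + 1e-9
    return (coef / denom) @ x                        # aggregate neighbors

# Toy graph: 3 nodes; very negative log_alpha prunes the 0-2 edge.
x = np.eye(3)
adj = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1]], dtype=float)
scores = np.zeros((3, 3))
log_alpha = np.array([[2., 2., -10.], [2., 2., -10.], [2., -10., 2.]])
h = sparse_attention_aggregate(x, adj, scores, log_alpha)
```

After the gate zeroes the 0-2 edge, node 0 aggregates only from itself and node 1, even though the adjacency matrix still lists node 2 as a neighbor; the surviving coefficients are renormalized so each row of `h` remains a convex combination of neighbor features.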
Article
Multidisciplinary Sciences
Human-level control through deep reinforcement learning
Volodymyr Mnih et al.
NATURE (2015)