Proceedings Paper

Interpreting and Unifying Graph Neural Networks with An Optimization Framework

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3442381.3449953

Keywords

Graph neural networks; network representation learning; deep learning

Funding

  1. National Natural Science Foundation of China [U20B2045, U1936104, 61972442, 61772082, 62002029]
  2. National Key Research and Development Program of China [2018YFB1402600]
  3. BUPT Excellent Ph.D. Students Foundation [CX2020311]

Abstract

This paper studies the propagation mechanisms of graph neural networks, proposes a unified optimization framework, summarizes the commonalities between various GNNs, and introduces novel optimization strategies to flexibly design new graph neural networks.
Graph Neural Networks (GNNs) have received considerable attention for learning on graph-structured data across a wide variety of tasks. The well-designed propagation mechanism, which has been demonstrated to be effective, is the most fundamental part of GNNs. Although most GNNs follow a message-passing scheme, little effort has been made to discover and analyze their essential relations. In this paper, we establish a surprising connection between different propagation mechanisms and a unified optimization problem, showing that despite the proliferation of various GNNs, their propagation mechanisms are in fact the optimal solutions to an objective combining a feature fitting function over a wide class of graph kernels with a graph regularization term. Our unified optimization framework, which summarizes the commonalities of several of the most representative GNNs, not only provides a macroscopic view for surveying the relations between different GNNs, but also opens up new opportunities for flexibly designing new GNNs. With the proposed framework, we discover that existing works usually utilize naive graph convolutional kernels for the feature fitting function, and we further develop two novel objective functions with adjustable graph kernels exhibiting low-pass or high-pass filtering capabilities, respectively. Moreover, we provide convergence proofs and expressive power comparisons for the proposed models. Extensive experiments on benchmark datasets show that the proposed GNNs not only outperform state-of-the-art methods but also effectively alleviate over-smoothing, further verifying the feasibility of designing GNNs within our unified optimization framework.
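To make the "propagation as an optimization problem" idea concrete, the sketch below shows a commonly used instance of such an objective: minimizing a feature fitting term plus a Laplacian regularization term, whose closed-form minimizer equals the fixed point of a personalized-PageRank-style propagation. This is a minimal illustration of the general framework described in the abstract, not the paper's exact formulation; the toy graph, feature matrix, and the weight `alpha` are assumptions for demonstration.

```python
import numpy as np

# Toy symmetric adjacency for a 4-node graph (hypothetical example).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# GCN-style normalization: add self-loops, then D^{-1/2} (A + I) D^{-1/2}.
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))
L = np.eye(4) - A_norm                      # normalized graph Laplacian

X = np.random.default_rng(0).normal(size=(4, 2))  # input node features
alpha = 1.0                                  # regularization weight (assumed)

# Closed-form minimizer of  ||Z - X||_F^2 + alpha * tr(Z^T L Z):
# setting the gradient to zero gives (I + alpha * L) Z = X.
Z_closed = np.linalg.solve(np.eye(4) + alpha * L, X)

# The same solution obtained by iterative propagation: each step mixes the
# original features with neighbor-aggregated ones, as in message passing.
Z = X.copy()
for _ in range(100):
    Z = (X + alpha * A_norm @ Z) / (1 + alpha)

print(np.allclose(Z, Z_closed, atol=1e-6))
```

Here the iteration is a contraction (the propagation weight `alpha / (1 + alpha)` times a normalized adjacency with spectral radius at most one), so it converges to the closed-form solution; different choices of fitting function and graph kernel in this template recover different GNN propagation rules.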

