Article

Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TNNLS.2020.3044146

Keywords

Laplace equations; Partitioning algorithms; Symmetric matrices; Clustering algorithms; Training; Topology; Task analysis; Graph classification; graph neural networks (GNNs); graph pooling; Kron reduction; Maxcut optimization

Funding

  1. Swiss National Science Foundation (SNF) [200021_172671]

This paper proposes a pooling operator for graph neural networks (GNNs) called Node Decimation Pooling (NDP). NDP generates coarser graphs through node decimation, Kron reduction, and sparsification while preserving the overall graph topology and reducing computational cost. Experimental results show that NDP is more efficient and achieves competitive performance compared to other graph pooling operators in various graph classification tasks.
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties, and they are fundamental for building deep GNNs that learn hierarchical representations. In this work, we propose the Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology. During training, the GNN learns new node representations and fits them to a pyramid of coarsened graphs, which is computed offline in a preprocessing stage. NDP consists of three steps. First, a node decimation procedure selects the nodes belonging to one side of the partition identified by a spectral algorithm that approximates the MAXCUT solution. Afterward, the selected nodes are connected with Kron reduction to form the coarsened graph. Finally, since the resulting graph is very dense, we apply a sparsification procedure that prunes the adjacency matrix of the coarsened graph to reduce the computational cost in the GNN. Notably, we show that it is possible to remove many edges without significantly altering the graph structure. Experimental results show that NDP is more efficient compared to state-of-the-art graph pooling operators while reaching, at the same time, competitive performance on a significant variety of graph classification tasks.
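
As a rough illustration of the three steps described above, the following is a minimal NumPy sketch of a single NDP coarsening level. It assumes a dense, symmetric, non-negative adjacency matrix; the function name ndp_coarsen, the pruning threshold eps, and the linear-algebra shortcuts (full eigendecomposition, direct solve for the Schur complement) are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def ndp_coarsen(A, eps=1e-2):
    """One level of Node Decimation Pooling (illustrative sketch).

    A   : dense symmetric adjacency matrix (n x n) with non-negative weights.
    eps : pruning threshold for the sparsification step (illustrative value).
    Returns the coarsened adjacency matrix and the indices of the kept nodes.
    """
    L = np.diag(A.sum(axis=1)) - A              # combinatorial Laplacian

    # 1) Node decimation: the sign pattern of the eigenvector associated with
    #    the largest Laplacian eigenvalue gives a spectral approximation of
    #    the MAXCUT partition; one side of the cut is kept.
    _, eigvecs = np.linalg.eigh(L)              # eigenvalues in ascending order
    v_max = eigvecs[:, -1]
    keep = np.where(v_max >= 0)[0]
    drop = np.where(v_max < 0)[0]
    if drop.size == 0:                          # nothing to decimate
        return A.copy(), keep

    # 2) Kron reduction: Schur complement of the Laplacian w.r.t. the dropped
    #    nodes (assumes every dropped node connects to some kept node, so that
    #    the block L_dd is nonsingular).
    L_kk = L[np.ix_(keep, keep)]
    L_kd = L[np.ix_(keep, drop)]
    L_dd = L[np.ix_(drop, drop)]
    L_red = L_kk - L_kd @ np.linalg.solve(L_dd, L_kd.T)

    # Recover an adjacency matrix from the reduced Laplacian
    # (negate the off-diagonal entries, zero the diagonal).
    A_red = np.diag(np.diag(L_red)) - L_red
    A_red[A_red < 0] = 0                        # clip numerical noise

    # 3) Sparsification: prune the small edges introduced by Kron reduction.
    A_red[A_red < eps] = 0
    return A_red, keep
```

Applying such a routine repeatedly would yield the pyramid of coarsened graphs that, according to the abstract, is computed offline and then used by the GNN at the different pooling levels during training.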
