Journal
KNOWLEDGE-BASED SYSTEMS
Volume 279
Publisher
ELSEVIER
DOI: 10.1016/j.knosys.2023.110923
Keywords
Graph neural network; Graph-shaped kernels; Node classification; Large graph
Graph neural networks have achieved great success in graph processing, but have limitations in feature aggregations and update mechanisms. To address these limitations, researchers propose a scalable graph convolution network (SGCN) with an expressive feature aggregation mechanism to enhance structural information learning.
Abstract
Graph neural networks (GNNs) have demonstrated great success in graph processing. However, current message-passing-based GNNs have limitations in their feature aggregation and update mechanisms, which rely on a fixed mode and therefore inadequately represent the richness of the neighborhood structure. Furthermore, the convolution layer in most GNNs lacks flexibility and multiple channels compared to convolutional neural networks (CNNs), whose multiple channels allow different kernels to capture shared patterns in images. To address these limitations, we propose a novel scalable graph convolution network (SGCN) that enhances structural information learning by introducing an expressive feature aggregation mechanism. Drawing inspiration from the design of CNNs, the SGCN includes graph-shaped kernels that perform multichannel convolutions on the subgraph structure. A sampler based on degree centrality is employed to reduce computation costs, and extensive experiments demonstrate that the SGCN achieves state-of-the-art performance on graph datasets of various scales (with accuracy improvements of up to 5.38%). © 2023 Elsevier B.V. All rights reserved.
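The two ideas sketched in the abstract — a degree-centrality sampler that keeps only the most central neighbors, and multiple learnable "graph-shaped kernels" acting as parallel convolution channels — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' SGCN: the function names (`degree_sample`, `multichannel_conv`), the top-k-by-degree sampling rule, and the sum-then-project aggregation are all illustrative choices.

```python
import numpy as np

def degree_sample(adj, node, k):
    """Degree-centrality sampler (illustrative): keep the k neighbors
    of `node` with the highest degree in the adjacency matrix `adj`."""
    neighbors = np.flatnonzero(adj[node])
    degrees = adj[neighbors].sum(axis=1)          # degree of each neighbor
    return neighbors[np.argsort(degrees)[::-1][:k]]

def multichannel_conv(adj, X, kernels, k=3):
    """Toy multichannel graph convolution: each 'kernel' is a weight
    matrix W (one channel) applied to features aggregated over the
    degree-sampled subgraph around each node; channel outputs are
    concatenated, loosely mirroring multichannel CNN convolutions."""
    n, _ = X.shape
    out_channels = [np.zeros((n, W.shape[1])) for W in kernels]
    for v in range(n):
        nbrs = degree_sample(adj, v, k)
        agg = X[v] + (X[nbrs].sum(axis=0) if len(nbrs) else 0.0)
        for c, W in enumerate(kernels):
            out_channels[c][v] = agg @ W          # per-channel projection
    return np.concatenate(out_channels, axis=1)

# Usage: 6-node random undirected graph, 4-dim features, 3 channels of width 2.
rng = np.random.default_rng(0)
n, d = 6, 4
adj = (rng.random((n, n)) < 0.5).astype(int)
np.fill_diagonal(adj, 0)
adj = np.maximum(adj, adj.T)                      # symmetrize
X = rng.random((n, d))
kernels = [rng.random((d, 2)) for _ in range(3)]
out = multichannel_conv(adj, X, kernels, k=2)     # shape (6, 3 * 2)
```

Sampling only the k highest-degree neighbors bounds the per-node aggregation cost regardless of graph size, which is the rough intuition behind using such a sampler for scalability on large graphs.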