Article

Wide and Deep Graph Neural Network With Distributed Online Learning

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 70, Pages 3862-3877

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSP.2022.3192606

Keywords

Robot kinematics; Signal processing algorithms; Convergence; Heuristic algorithms; Graph neural networks; Testing; Training; Distributed learning; Online learning; Stability analysis; Convergence analysis

Funding

  1. NSF CCF [1717120] (National Science Foundation, Directorate for Computer and Information Science and Engineering, Division of Computing and Communication Foundations)
  2. ARO [W911NF1710438]
  3. ARL DCIST CRA [W911NF-17-2-0181]

Abstract

This paper introduces the Wide and Deep GNN (WD-GNN), a graph neural network (GNN) architecture designed to be retrained at testing time with distributed online learning. The WD-GNN combines a wide (linear) and a deep (nonlinear) component to learn nonlinear representations at training time, while only the wide component is updated online at testing time. Experimental results demonstrate the potential of the WD-GNN for distributed online learning.
Graph neural networks (GNNs) are naturally distributed architectures for learning representations from network data. This renders them suitable candidates for decentralized tasks. In these scenarios, the underlying graph often changes with time due to link failures or topology variations, creating a mismatch between the graphs on which GNNs were trained and the ones on which they are tested. Online learning can be leveraged to retrain GNNs at testing time to overcome this issue. However, most online algorithms are centralized and usually offer guarantees only on convex problems, which GNNs rarely lead to. This paper develops the Wide and Deep GNN (WD-GNN), a novel architecture that can be updated with distributed online learning mechanisms. The WD-GNN consists of two components: the wide part is a linear graph filter and the deep part is a nonlinear GNN. At training time, the joint wide and deep architecture learns nonlinear representations from data. At testing time, the wide, linear part is retrained, while the deep, nonlinear one remains fixed. This often leads to a convex formulation. We further propose a distributed online learning algorithm that can be implemented in a decentralized setting. We also show the stability of the WD-GNN to changes of the underlying graph and analyze the convergence of the proposed online learning procedure. Experiments on movie recommendation, source localization and robot swarm control corroborate theoretical findings and show the potential of the WD-GNN for distributed online learning.
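
The wide-plus-deep split and the test-time retraining of the linear part described in the abstract can be made concrete with a small sketch. The Python snippet below is only an illustration under assumed choices: a polynomial graph filter for the wide part, a ReLU graph-filter cascade for the deep part, a squared loss, and a single centralized gradient step. The function names (graph_filter, deep_gnn, wd_gnn, online_update_wide) and all hyperparameters are hypothetical and are not taken from the paper; in particular, the decentralized, per-node implementation the paper proposes is not shown.

import numpy as np

def graph_filter(S, x, h):
    """Wide (linear) part: polynomial graph filter y = sum_k h[k] * S^k x."""
    y = np.zeros_like(x)
    Sk_x = x.copy()
    for k, hk in enumerate(h):
        if k > 0:
            Sk_x = S @ Sk_x          # shift the signal one more hop over the graph
        y = y + hk * Sk_x
    return y

def deep_gnn(S, x, layers):
    """Deep (nonlinear) part: cascade of graph filters with pointwise ReLU."""
    z = x
    for h in layers:
        z = np.maximum(graph_filter(S, z, h), 0.0)
    return z

def wd_gnn(S, x, h_wide, deep_layers):
    """WD-GNN output: sum of the wide linear filter and the deep nonlinear GNN."""
    return graph_filter(S, x, h_wide) + deep_gnn(S, x, deep_layers)

def online_update_wide(S, x, y_target, h_wide, deep_layers, step=1e-2):
    """One online gradient step on the wide coefficients only (deep part frozen).

    With the deep part fixed, the squared loss is convex in h_wide.
    """
    residual = wd_gnn(S, x, h_wide, deep_layers) - y_target
    grad = np.zeros_like(h_wide)
    Sk_x = x.copy()
    for k in range(len(h_wide)):
        if k > 0:
            Sk_x = S @ Sk_x
        grad[k] = residual @ Sk_x    # d(0.5*||residual||^2)/dh_k = residual^T S^k x
    return h_wide - step * grad

# Toy usage (hypothetical data): N-node graph, random shift operator and signal.
N, K = 20, 3
rng = np.random.default_rng(0)
S = rng.standard_normal((N, N)); S = (S + S.T) / (2 * N)   # symmetric, roughly normalized
x = rng.standard_normal(N)
y_target = rng.standard_normal(N)
h_wide = np.zeros(K)
deep_layers = [rng.standard_normal(K) * 0.1 for _ in range(2)]
h_wide = online_update_wide(S, x, y_target, h_wide, deep_layers)

Because the deep coefficients stay fixed, each online step acts on a least-squares problem in the wide coefficients, which is the convex formulation the abstract refers to; the paper's distributed variant would presumably carry out such updates locally at each node through neighborhood exchanges rather than with the centralized step sketched here.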

