4.7 Article

Topology and Content Co-Alignment Graph Convolutional Learning

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TNNLS.2021.3084125

Keywords

Network topology; Topology; Training; Learning systems; Convolution; Semantics; Noise measurement; Graph convolutional learning; graph mining; network embedding; network representation learning; neural networks

Funding

  1. US National Science Foundation (NSF) [OAC-2017597, IIS-1763452, CNS-1828181]

In this study, a co-alignment graph convolutional learning (CoGL) paradigm is proposed to align topology and content networks for maximum consistency; across six benchmarks, CoGL matches or outperforms existing state-of-the-art GNN models.
In traditional graph neural networks (GNNs), graph convolutional learning is carried out through topology-driven recursive node content aggregation for network representation learning. In reality, network topology and node content each provide unique and important information, and they are not always consistent because of noise, irrelevance, or missing links between nodes. A purely topology-driven feature aggregation over unaligned neighborhoods may degrade learning for nodes with poor structure-content consistency, because incorrect messages propagate over the whole network. In this brief, we instead advocate a co-alignment graph convolutional learning (CoGL) paradigm that aligns the topology and content networks to maximize their consistency. The central idea is to enforce that learning from the topology network stays consistent with the content network while simultaneously optimizing the content network to comply with the topology, yielding better representations. Given a network, CoGL first reconstructs a content network from node features and then co-aligns the content network and the original network through a unified optimization objective that minimizes: 1) a content loss; 2) a classification loss; and 3) an adversarial loss. Experiments on six benchmarks demonstrate that CoGL achieves comparable or even better performance than existing state-of-the-art GNN models.
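
To make the three-term objective concrete, below is a minimal sketch (not the authors' released implementation) of how a CoGL-style model could combine a content loss, a classification loss, and an adversarial loss in PyTorch. The class names (GCNLayer, CoAlignSketch), the cosine-similarity construction of the content graph, and the single-objective treatment of the adversarial term are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    # One graph-convolution step: A_hat @ X @ W, with A_hat a pre-normalized adjacency.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, x):
        # a_hat: (N, N) normalized adjacency, x: (N, F) node features
        return self.lin(a_hat @ x)

class CoAlignSketch(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.topo_enc = GCNLayer(in_dim, hid_dim)       # topology-driven view
        self.cont_enc = nn.Linear(in_dim, hid_dim)      # content-driven view
        self.classifier = nn.Linear(hid_dim, n_classes)
        self.discriminator = nn.Sequential(             # tries to tell the two views apart
            nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, a_hat, x, labels, train_mask):
        z_topo = F.relu(self.topo_enc(a_hat, x))
        z_cont = F.relu(self.cont_enc(x))

        # 1) Content loss: embeddings from the content view should reproduce a
        #    feature-similarity ("content") graph built from the raw node features.
        x_norm = F.normalize(x, dim=1)
        s_target = (x_norm @ x_norm.t()).clamp(min=0.0)   # cosine-similarity content graph
        s_pred = torch.sigmoid(z_cont @ z_cont.t())
        loss_content = F.mse_loss(s_pred, s_target)

        # 2) Classification loss on labeled nodes, using the topology-side embedding.
        logits = self.classifier(z_topo)
        loss_cls = F.cross_entropy(logits[train_mask], labels[train_mask])

        # 3) Adversarial loss: push the two embeddings toward consistency by making them
        #    hard for the discriminator to distinguish (single combined objective here;
        #    a full GAN setup would alternate generator/discriminator updates).
        d_topo = self.discriminator(z_topo)
        d_cont = self.discriminator(z_cont)
        loss_adv = F.binary_cross_entropy_with_logits(d_topo, torch.ones_like(d_topo)) \
                 + F.binary_cross_entropy_with_logits(d_cont, torch.zeros_like(d_cont))

        return loss_content + loss_cls + loss_adv

A training loop would simply minimize the returned combined loss with a standard optimizer over the full graph; a faithful reproduction would follow the brief's exact architecture and adversarial training schedule.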

Authors

Reviews

Primary Rating: 4.7 (not enough ratings)

Secondary Ratings (Novelty, Significance, Scientific rigor): not yet rated