Article

Semi-Supervised Mixture Learning for Graph Neural Networks With Neighbor Dependence

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TNNLS.2023.3263463

Keywords

Generalized expectation-maximization (GEM); graph neural networks (GNNs); neighbor dependence; semi-supervised learning (SSL)

Abstract

A graph neural network (GNN) is a powerful architecture for semi-supervised learning (SSL). However, the data-driven mode of GNNs raises some challenging problems. In particular, these models suffer from incomplete attribute learning, insufficient structure capture, and the inability to distinguish between node attributes and graph structure, especially on label-scarce or attribute-missing data. In this article, we propose a novel framework, called graph coneighbor neural network (GCoNN), for node classification. It is composed of two modules: GCoNN_Γ and GCoNN_Γ°. GCoNN_Γ is trained to establish the fundamental prototype for attribute learning on labeled data, while GCoNN_Γ° learns neighbor dependence on transductive data through pseudolabels generated by GCoNN_Γ. Next, GCoNN_Γ is retrained to improve the integration of node attributes and neighbor structure through feedback from GCoNN_Γ°. Iterating this procedure, GCoNN tends toward convergence. From a theoretical perspective, we analyze this iterative process within a generalized expectation-maximization (GEM) framework, which optimizes an evidence lower bound (ELBO) by amortized variational inference. Empirical evidence demonstrates that the proposed approach achieves state-of-the-art performance and outperforms other methods. We also apply GCoNN to brain functional networks; the results reveal response features across the brain that are physiologically plausible with respect to known language and visual functions.
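The abstract describes an alternating, pseudolabel-based training procedure between the two modules. The sketch below is a minimal illustration of that idea in plain PyTorch, not the authors' implementation: the toy graph, the ToyGNN architecture, the fixed inner-loop lengths, and all names (ToyGNN, train_step, base, coneighbor) are assumptions introduced here for illustration only.

```python
# Minimal sketch (assumption: not the authors' code) of the alternating
# pseudolabel training loop sketched in the abstract, using plain PyTorch,
# a dense adjacency matrix, and a toy two-layer mean-aggregation GNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGNN(nn.Module):
    """Two-layer mean-aggregation GNN (illustrative stand-in for either module)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Row-normalized neighbor averaging followed by two linear layers.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = F.relu(self.lin1(adj @ x / deg))
        return self.lin2(adj @ h / deg)

def train_step(model, opt, x, adj, targets, mask):
    model.train()
    opt.zero_grad()
    loss = F.cross_entropy(model(x, adj)[mask], targets[mask])
    loss.backward()
    opt.step()
    return loss.item()

# Toy data: 100 nodes, 16 features, 3 classes, random symmetric graph, 10 labeled nodes.
n, d, c = 100, 16, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.T + torch.eye(n)) > 0).float()   # symmetric with self-loops
y = torch.randint(0, c, (n,))
labeled = torch.zeros(n, dtype=torch.bool)
labeled[:10] = True
everyone = torch.ones(n, dtype=torch.bool)

base = ToyGNN(d, 32, c)        # attribute-learning module
coneighbor = ToyGNN(d, 32, c)  # neighbor-dependence (coneighbor) module
opt_b = torch.optim.Adam(base.parameters(), lr=0.01)
opt_c = torch.optim.Adam(coneighbor.parameters(), lr=0.01)

for it in range(5):  # outer iterations until (approximate) convergence
    # (1) Fit the attribute-learning module on the labeled nodes.
    for _ in range(50):
        train_step(base, opt_b, x, adj, y, labeled)
    # (2) Generate pseudolabels on all (transductive) nodes.
    with torch.no_grad():
        pseudo = base(x, adj).argmax(dim=1)
    pseudo[labeled] = y[labeled]          # keep ground truth where available
    # (3) Train the coneighbor module on pseudolabels to capture neighbor dependence.
    for _ in range(50):
        train_step(coneighbor, opt_c, x, adj, pseudo, everyone)
    # (4) Feed the coneighbor module's predictions back to retrain the base module.
    with torch.no_grad():
        feedback = coneighbor(x, adj).argmax(dim=1)
    feedback[labeled] = y[labeled]
    for _ in range(50):
        train_step(base, opt_b, x, adj, feedback, everyone)
```

Loosely, step (2) plays the role of an E-step (inferring labels for unlabeled nodes) and steps (3) and (4) play the role of M-step-like parameter updates; the ELBO and amortized variational inference machinery described in the abstract are not modeled in this sketch.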

