Article

Negative samples selecting strategy for graph contrastive learning

Journal

INFORMATION SCIENCES
Volume 613, Pages 667-681

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2022.09.024

Keywords

Graph neural networks; Contrastive learning; Semi-supervised learning

Funding

  1. National Natural Science Foundation of China [62272191, 61872161]
  2. Interdisciplinary and integrated innovation of JLU [JLUXKJC2020207]
  3. Science and Technology Development Program of Jilin Province [20220201153GX]
  4. Foundation of the Major Project of Science and Technology Innovation 2030-New Generation of Artificial Intelligence [2021ZD0112500]

Abstract

Graph neural networks (GNNs) have emerged as a successful method for graph-structured data. Because labeled data are expensive, contrastive learning has been adapted to the graph domain. In most existing node-level graph contrastive learning methods, when contrastive learning is applied to a given unlabeled node (the center node), its corresponding similar node (positive sample) is usually generated by data augmentation, and all other nodes in the graph serve as dissimilar nodes (negative samples). This leads to two major problems. First, the computational cost can be prohibitively expensive, especially when the graph is large. Second, treating nodes that share the center node's label as negative samples damages the learning process. To address these issues, we explore the feasibility of sampling only a subset of nodes for the graph contrastive learning process. Unlike previous self-supervised contrastive methods, we use joint training to exploit supervised signals as much as possible in contrastive learning. We therefore propose a Negative Samples Selecting Strategy that uses the classification prediction to guide the selection of negative samples for the sampled nodes. We further incorporate this strategy into contrastive learning on graphs and propose a framework named Graph Contrastive Learning with Negative Samples Selecting Strategy (GCNSS). We demonstrate that GCNSS can be trained much faster and with far less memory than graph contrastive learning baselines, and that it effectively boosts the performance of existing GNN models on semi-supervised node classification tasks across many different datasets. The code is available at: https://github.com/MR9812/GCNSS. (c) 2022 Published by Elsevier Inc.
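
To make the selection rule concrete, below is a minimal PyTorch sketch of one plausible instantiation of prediction-guided negative sampling: for each sampled center node, only sampled nodes whose predicted class differs from the center's are kept as negatives in an InfoNCE-style loss. The function name, the loss form, and the masking details are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

    import torch
    import torch.nn.functional as F

    def contrastive_loss_with_nss(z1, z2, logits, sample_idx, tau=0.5):
        # z1, z2:     [N, d] node embeddings from two augmented views
        # logits:     [N, C] output of the classification head (joint training)
        # sample_idx: [m] indices of the sampled subset of nodes
        # Negatives for a center node are restricted to sampled nodes whose
        # predicted class differs from the center's (hypothetical NSS variant).
        pred = logits.argmax(dim=1)                 # predicted labels for all nodes
        h1 = F.normalize(z1[sample_idx], dim=1)     # [m, d]
        h2 = F.normalize(z2[sample_idx], dim=1)     # [m, d]
        sim = torch.exp(h1 @ h2.t() / tau)          # [m, m] cross-view similarities

        p = pred[sample_idx]
        diff = p.unsqueeze(0) != p.unsqueeze(1)     # True where predicted labels differ
        pos = sim.diag()                            # positive: same node, other view
        neg = (sim * diff).sum(dim=1)               # keep only predicted-different negatives
        return -torch.log(pos / (pos + neg)).mean()

In a joint-training loop this loss would be added to the usual cross-entropy on labeled nodes, with sample_idx redrawn each epoch so that the subset, and hence the memory cost, stays small.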

