Article

Graph Barlow Twins: A self-supervised representation learning framework for graphs

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 256

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109631

Keywords

Representation learning; Self-supervised learning; Graph embedding

Funding

  1. National Science Centre, Poland [2021/41/N/ST6/03694, 2016/21/D/ST6/02948]
  2. Department of Artificial Intelligence


Abstract

The self-supervised learning (SSL) paradigm is an essential research area that aims to eliminate the need for expensive data labeling. Despite the great success of SSL methods in computer vision and natural language processing, most of them employ contrastive learning objectives that require negative samples, which are hard to define. This becomes even more challenging in the case of graphs and is a bottleneck for achieving robust representations. To overcome such limitations, we propose a framework for self-supervised graph representation learning, Graph Barlow Twins, which utilizes a cross-correlation-based loss function instead of negative samples. Moreover, it does not rely on non-symmetric neural network architectures, in contrast to BGRL, the state-of-the-art self-supervised graph representation learning method. We show that our method achieves results as competitive as those of the best self-supervised and fully supervised methods, while requiring fewer hyperparameters and substantially shorter computation time (ca. 30 times faster than BGRL). (c) The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
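The cross-correlation-based objective mentioned in the abstract follows the Barlow Twins formulation: embeddings of two augmented views are standardized, their empirical cross-correlation matrix is computed, and the loss pushes the diagonal toward 1 (invariance) and the off-diagonal toward 0 (redundancy reduction). The sketch below illustrates that general loss in PyTorch; the function name, the `lambda_offdiag` weight, and the tensor shapes are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def barlow_twins_loss(z_a, z_b, lambda_offdiag=5e-3, eps=1e-12):
    """Cross-correlation-based SSL loss (Barlow Twins style); no negative samples.

    z_a, z_b: (N, D) node embeddings of two augmented views of the same graph.
    lambda_offdiag: illustrative weight on the redundancy-reduction term.
    """
    n, d = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + eps)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + eps)
    # Empirical D x D cross-correlation matrix between the two views.
    c = (z_a.T @ z_b) / n
    # Invariance term: diagonal entries should approach 1.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: off-diagonal entries should approach 0.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag
```

Because the objective is symmetric in the two views, no predictor head, momentum encoder, or negative-sample mining is needed, which is the source of the simplicity and speed advantage the abstract claims over BGRL.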
