Article

A2AE: Towards adaptive multi-view graph representation learning via all-to-all graph autoencoder architecture

Journal

APPLIED SOFT COMPUTING
Volume 125

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2022.109193

Keywords

Multi-view graph; Graph autoencoder; Network embedding; Graph neural networks

Funding

  1. National Natural Science Foundation of China [61906002, 62076005, U20A20398]
  2. Natural Science Foundation of Anhui Province, China [2008085MF191, 2008085QF306, 1908085MF185]
  3. University Synergy Innovation Program of Anhui Province, China [GXXT-2021-002]


This paper proposes a novel all-to-all graph autoencoder model, named A2AE, for multi-view graph representation learning. It utilizes the rich relational information in multiple views and recognizes the importance of different views.
The multi-view graph is a fundamental data model used to describe complex networks in the real world. Learning representations of multi-view graphs is a vital step toward understanding complex systems and extracting knowledge accurately. However, most existing methods focus on a single view or simply add multiple views together, which prevents them from making the best of the rich relational information across views and ignores the differing importance of the views. In this paper, a novel all-to-all graph autoencoder, named A2AE, is proposed for multi-view graph representation learning. The all-to-all model first embeds the attributed multi-view graph into compact representations by semantically fusing the view-specific compact representations from multiple encoders; multiple decoders are then trained to reconstruct the graph structure and attributes. Finally, a self-training clustering module is attached for clustering tasks.

(c) 2022 Elsevier B.V. All rights reserved.
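The semantic fusion step described in the abstract, combining view-specific embeddings into one compact representation while weighting views by importance, can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the names `fuse_views` and `context` are hypothetical, and the scoring of each view against a learned context vector is one common attention-style choice, assumed here for concreteness.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_views(view_embeddings, context):
    """Fuse view-specific embeddings into one representation.

    view_embeddings: list of per-view embedding vectors (equal length)
    context: a (hypothetical) learned context vector used to score views
    Returns the fused embedding and the per-view attention weights.
    """
    # score each view by its dot product with the context vector
    scores = [sum(c * z for c, z in zip(context, emb))
              for emb in view_embeddings]
    # convert scores to importance weights that sum to 1
    weights = softmax(scores)
    # fused embedding = weighted sum of the view-specific embeddings
    dim = len(view_embeddings[0])
    fused = [sum(w * emb[d] for w, emb in zip(weights, view_embeddings))
             for d in range(dim)]
    return fused, weights

# toy usage: two 2-dimensional view embeddings
fused, weights = fuse_views([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```

In a full model the fused representation would feed the multiple decoders that reconstruct structure and attributes, with the context vector trained end to end; here it simply demonstrates that views better aligned with the context receive larger weights.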


