Journal
JOURNAL OF MACHINE LEARNING RESEARCH, Volume 24, Issue -, Pages -

Publisher
MICROTOME PUBL
Keywords
graph coarsening; graph learning; optimization; spectral properties; Laplacian matrix; clustering; graph classification; adjacency matrix; spectral similarity
Graph coarsening is a dimensionality-reduction technique for large-scale graph machine-learning problems. Existing methods focus only on the graph matrix; this paper proposes a novel optimization-based framework that uses both the graph matrix and the node features to learn a simplified graph structure.
Graph coarsening is a widely used dimensionality-reduction technique for approaching large-scale graph machine-learning problems. Given a large graph, graph coarsening aims to learn a smaller, tractable graph while preserving the properties of the original graph. Graph data consist of node features and a graph matrix (e.g., adjacency or Laplacian). Existing graph coarsening methods ignore the node features and rely solely on a graph matrix to simplify graphs. In this paper, we introduce a novel optimization-based framework for graph dimensionality reduction. The proposed framework unifies graph learning and dimensionality reduction: it takes both the graph matrix and the node features as input and jointly learns the coarsened graph matrix and the coarsened feature matrix while enforcing desired properties. The proposed optimization formulation is a multi-block non-convex optimization problem, which is solved efficiently by leveraging block majorization-minimization together with log-determinant, Dirichlet energy, and regularization terms. The proposed algorithms are provably convergent and practically amenable to numerous tasks. It is also established that the learned coarsened graph is epsilon in (0, 1) similar to the original graph. Extensive experiments elucidate the efficacy of the proposed framework for real-world applications. The code for all the experimental results is available at CODE.
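The core objects in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm; it only shows, under standard graph-coarsening conventions, how a membership (loading) matrix `C` maps a Laplacian `L` and feature matrix `X` to their coarsened counterparts, and how the Dirichlet energy tr(X^T L X) measures feature smoothness on the graph. The toy graph, partition, and features are illustrative assumptions.

```python
import numpy as np

# Toy weighted graph: 4 nodes, two pairs strongly connected internally.
A = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian (n x n)

X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0],
              [0.1, 0.9]])              # node features (n x d)

# Hypothetical coarsening: merge nodes {0,1} and {2,3}.
C = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])              # membership matrix (n x k)

L_c = C.T @ L @ C                       # coarsened Laplacian (k x k)
P = C.T / C.sum(axis=0)[:, None]       # averaging projector (k x n)
X_c = P @ X                             # coarsened features (k x d)

def dirichlet_energy(lap, feats):
    """tr(X^T L X): smoothness of the features over the graph."""
    return float(np.trace(feats.T @ lap @ feats))

print("coarsened Laplacian:\n", L_c)
print("coarsened features:\n", X_c)
print("energy (original, coarse):",
      dirichlet_energy(L, X), dirichlet_energy(L_c, X_c))
```

Because each row of `C` has exactly one nonzero entry, `L_c` inherits the zero row-sum property of a valid Laplacian; the paper's framework additionally learns `C` (and the coarsened matrices) via optimization rather than fixing a partition by hand as done here.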