Article

Data Analytics on Graphs Part III: Machine Learning on Graphs, from Graph Topology to Applications

Journal

Foundations and Trends in Machine Learning
Volume 13, Issue 4, Pages 332-530

Publisher

Now Publishers Inc.
DOI: 10.1561/2200000078-3

Keywords

graph theory; random data on graphs; big data on graphs; signal processing on graphs; machine learning on graphs; graph topology learning; systems on graphs; vertex-frequency estimation; graph neural networks; graphs and tensors

Abstract

Modern data analytics applications on graphs often operate on domains where the graph topology is not known a priori, so that its determination becomes part of the problem definition rather than serving as prior knowledge that aids the problem solution. Part III of this monograph starts with a comprehensive account of ways to learn the pertinent graph topology, ranging from the simplest case, where the physics of the problem already suggests a possible graph structure, through to general cases where the graph structure is to be learned from the data observed on a graph. Particular emphasis is placed on the use of standard relationship measures in this context, including the correlation and precision matrices, together with ways to combine these with available prior knowledge and structural conditions, such as the smoothness of the graph signals or the sparsity of graph connections. Next, for learning sparse graphs (that is, graphs with a small number of edges), the utility of the least absolute shrinkage and selection operator (LASSO) is addressed, along with its graph-specific variant, the graphical LASSO. For completeness, both variants of LASSO are derived in an intuitive way, starting from basic principles. An in-depth elaboration of the graph topology learning paradigm is provided through examples on physically well-defined graphs, such as electric circuits, linear heat transfer, social and computer networks, and spring-mass systems.

We also review the main trends in graph neural networks (GNNs) and graph convolutional networks (GCNs) from the perspective of graph signal filtering. Particular insight is given to the role of diffusion processes over graphs, to show that GCNs can be understood from the graph diffusion perspective. Given the largely heuristic nature of existing GCNs, their treatment through graph diffusion processes may also serve as a basis for new GCN designs. Tensor representation of lattice-structured graphs is considered next, and it is shown that tensors (multidimensional data arrays) can be treated as a special class of graph signals, whereby the graph vertices reside on a high-dimensional regular lattice structure. Finally, the concept of graph tensor networks is shown to provide a unifying framework for learning of big data on irregular domains.

This part of the monograph concludes with an in-depth account of emerging applications in financial data processing and underground transportation network modeling. More specifically, by means of portfolio cuts of an asset graph, we show how domain knowledge can be meaningfully incorporated into investment analysis, while the underground transportation example addresses the vulnerability of stations in the London underground network to traffic disruption.
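As a concrete illustration of the sparse topology-learning idea summarised above, the sketch below recovers a graph from observed signals with the graphical LASSO. This is a minimal example, not the monograph's own derivation: it assumes scikit-learn's GraphicalLasso estimator, a synthetic five-vertex path graph, and illustrative choices for the penalty weight and the edge-detection threshold.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground-truth sparse precision matrix for a 5-vertex path graph:
# nonzero off-diagonal entries only between neighbouring vertices.
Theta = np.eye(5)
for i in range(4):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.4

# Draw zero-mean graph signals whose covariance is the inverse of Theta.
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(5), Sigma, size=2000)

# Graphical LASSO: L1-penalised maximum-likelihood estimate of the precision matrix.
model = GraphicalLasso(alpha=0.05).fit(X)

# Nonzero off-diagonal entries of the estimated precision indicate learned edges.
A_hat = (np.abs(model.precision_) > 1e-3).astype(int)
np.fill_diagonal(A_hat, 0)
print(A_hat)  # should approximately recover the path-graph adjacency
```

The diffusion view of GCNs mentioned above amounts, for a single layer, to the propagation rule H_out = ReLU(D^(-1/2) (A + I) D^(-1/2) H W): node features are first diffused over the normalised adjacency with self-loops and then linearly transformed. The NumPy sketch below implements that one step for a toy four-vertex cycle; the feature dimensions and random weights are placeholders, and the symmetric normalisation is the commonly used form rather than a design taken from this monograph.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step viewed as diffusion: propagate node features
    over the symmetrically normalised adjacency (with self-loops), then apply
    a linear map and a ReLU nonlinearity."""
    A_tilde = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))   # inverse square-root degrees
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_hat @ H @ W, 0.0)             # diffuse, transform, ReLU

# Toy example: 4-vertex cycle graph, 3-dimensional node features, 2 output features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))       # input node features (placeholder values)
W = rng.normal(size=(3, 2))       # layer weights (randomly initialised placeholder)
print(gcn_layer(A, H, W).shape)   # (4, 2)
```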
