Article

A Novel Approach to Large-Scale Dynamically Weighted Directed Network Representation

Journal

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3132503

Keywords

Tensors; Computational modeling; Numerical models; Data models; Convergence; Analytical models; Adaptation models; Dynamically weighted directed network; terminal interaction pattern analysis system; latent factorization of tensors; high dimensional and incomplete tensor; link prediction; representation learning; latent feature

Funding

  1. National Key R&D Program of China [2020YFA0713900]
  2. National Natural Science Foundation of China [61772493]
  3. Natural Science Foundation of Chongqing (China) [cstc2019jcyjjqX0013]
  4. CAAI-Huawei MindSpore Open Fund [CAAIXSJLJJ-2020-004B, CAAIXSJLJJ-2021-035A]
  5. Pioneer Hundred Talents Program of Chinese Academy of Sciences
  6. Macao Science and Technology Development Fund [061/2020/A2]

This study proposes a novel and efficient approach to represent large-scale dynamically weighted directed networks (DWDNs), which can extract rich knowledge from incomplete DWDNs.
A dynamically weighted directed network (DWDN) is frequently encountered in various big data-related applications, such as the terminal interaction pattern analysis system (TIPAS) addressed in this study. It consists of large-scale dynamic interactions among numerous nodes. As the number of involved nodes grows drastically, it becomes impossible to observe their full interactions at each time slot, making the resultant DWDN High-Dimensional and Incomplete (HDI). An HDI DWDN, in spite of its incompleteness, contains rich knowledge regarding the involved nodes' various behavior patterns. To extract such knowledge from an HDI DWDN, this paper proposes a novel Alternating Direction Method of Multipliers (ADMM)-based Nonnegative Latent-factorization of Tensors (ANLT) model. It adopts three-fold ideas: a) building a data-density-oriented augmented Lagrangian function for efficiently handling an HDI tensor's incompleteness and nonnegativity; b) splitting the optimization task in each iteration into an elaborately designed series of subtasks, each solved on the basis of the previously solved ones following the ADMM principle to achieve fast convergence; and c) theoretically proving that convergence is guaranteed under its efficient learning scheme. Experimental results on six DWDNs from real applications demonstrate that the proposed ANLT significantly outperforms state-of-the-art models in both computational efficiency and prediction accuracy for missing links of an HDI DWDN. Hence, this study offers a novel and efficient approach to large-scale DWDN representation.
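To make the core idea concrete, the sketch below illustrates latent factorization of an HDI tensor trained only on observed entries: a DWDN is encoded as a third-order tensor (source node x target node x time slot), and nonnegative latent-factor matrices are fit so their rank-wise products reconstruct the observed link weights. Note this is not the paper's ANLT: it substitutes plain projected stochastic gradient descent for the ADMM-based subtask series, and all names (`train_nlft`, `entries`) are hypothetical.

```python
import numpy as np

def train_nlft(entries, shape, rank=4, lr=0.05, epochs=300, seed=0):
    """Fit nonnegative latent factors to the OBSERVED entries of a 3-way tensor.

    entries: list of (i, j, k, value) tuples -- observed links only, so the
             HDI tensor's missing entries never enter the loss.
    shape:   (I, J, K) = (#source nodes, #target nodes, #time slots).
    Returns factor matrices A (IxR), B (JxR), C (KxR) such that
    y[i, j, k] is approximated by sum_r A[i, r] * B[j, r] * C[k, r].
    """
    rng = np.random.default_rng(seed)
    I, J, K = shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    for _ in range(epochs):
        for i, j, k, y in entries:
            pred = np.sum(A[i] * B[j] * C[k])   # rank-wise reconstruction
            e = y - pred                        # residual on this observed link
            # Stochastic gradient step per factor row, then projection
            # onto the nonnegative orthant to keep all factors >= 0.
            A[i] = np.maximum(A[i] + lr * e * B[j] * C[k], 0.0)
            B[j] = np.maximum(B[j] + lr * e * A[i] * C[k], 0.0)
            C[k] = np.maximum(C[k] + lr * e * A[i] * B[j], 0.0)
    return A, B, C
```

Predicting a missing link (i, j, k) then amounts to evaluating `np.sum(A[i] * B[j] * C[k])`, which is how latent-factorization models score unobserved interactions for link prediction.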
