Article

A Novel Approach to Large-Scale Dynamically Weighted Directed Network Representation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3132503

Keywords

Tensors; Computational modeling; Numerical models; Data models; Convergence; Analytical models; Adaptation models; Dynamically weighted directed network; terminal interaction pattern analysis system; latent factorization of tensors; high dimensional and incomplete tensor; link prediction; representation learning; latent feature

Funding

  1. National Key R&D Program of China [2020YFA0713900]
  2. National Natural Science Foundation of China [61772493]
  3. Natural Science Foundation of Chongqing (China) [cstc2019jcyjjqX0013]
  4. CAAI-Huawei MindSpore Open Fund [CAAIXSJLJJ-2020-004B, CAAIXSJLJJ-2021-035A]
  5. Pioneer Hundred Talents Program of Chinese Academy of Sciences
  6. Macao Science and Technology Development Fund [061/2020/A2]

Abstract

This study proposes a novel and efficient approach to represent large-scale dynamically weighted directed networks (DWDNs), which can extract rich knowledge from incomplete DWDNs.
A dynamically weighted directed network (DWDN) is frequently encountered in various big data-related applications, such as the terminal interaction pattern analysis system (TIPAS) addressed in this study. It consists of large-scale dynamic interactions among numerous nodes. As the number of involved nodes grows drastically, it becomes impossible to observe their full interactions at each time slot, making the resulting DWDN High-Dimensional and Incomplete (HDI). An HDI DWDN, despite its incompleteness, contains rich knowledge regarding the involved nodes' various behavior patterns. To extract such knowledge from an HDI DWDN, this paper proposes a novel Alternating Direction Method of Multipliers (ADMM)-based Nonnegative Latent-factorization of Tensors (ANLT) model. It adopts three-fold ideas: a) building a data density-oriented augmented Lagrangian function to efficiently handle an HDI tensor's incompleteness and nonnegativity; b) splitting the optimization task in each iteration into an elaborately designed series of subtasks, each solved on the basis of the previously solved ones following the ADMM principle, to achieve fast convergence; and c) theoretically proving that convergence is guaranteed under its efficient learning scheme. Experimental results on six DWDNs from real applications demonstrate that the proposed ANLT significantly outperforms state-of-the-art models in both computational efficiency and prediction accuracy for the missing links of an HDI DWDN. Hence, this study offers a novel and efficient approach to large-scale DWDN representation.
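The core idea the abstract describes — representing an HDI tensor through nonnegative latent factors learned from the observed entries only — can be illustrated with a minimal sketch. Note this is NOT the authors' ANLT: it substitutes a simple projected-gradient update for the paper's ADMM-based scheme, and the function `lft_sketch` and all parameter choices are hypothetical, for illustration only.

```python
import numpy as np

def lft_sketch(idx, vals, shape, rank=4, lr=0.02, epochs=1500, seed=0):
    """Factorize an incomplete nonnegative 3-way tensor from observed entries only.

    idx  : (n_obs, 3) int array of (source node i, target node j, time slot k).
    vals : (n_obs,) observed nonnegative link weights.
    Returns factor matrices U, V, W with x_ijk ~= sum_r U[i,r] * V[j,r] * W[k,r].
    """
    rng = np.random.default_rng(seed)
    I, J, K = shape
    U = 0.1 * rng.random((I, rank))
    V = 0.1 * rng.random((J, rank))
    W = 0.1 * rng.random((K, rank))
    i, j, k = idx[:, 0], idx[:, 1], idx[:, 2]
    for _ in range(epochs):
        # Data-density-oriented: the loss is defined on known entries only,
        # so the HDI tensor's missing cells never enter the computation.
        pred = np.sum(U[i] * V[j] * W[k], axis=1)
        err = pred - vals
        gU = err[:, None] * (V[j] * W[k])
        gV = err[:, None] * (U[i] * W[k])
        gW = err[:, None] * (U[i] * V[j])
        np.add.at(U, i, -lr * gU)   # accumulate gradients per factor row
        np.add.at(V, j, -lr * gV)
        np.add.at(W, k, -lr * gW)
        # Enforce nonnegativity by projecting onto the nonnegative orthant
        # (the paper instead handles this constraint inside its ADMM updates).
        U = np.maximum(U, 0.0)
        V = np.maximum(V, 0.0)
        W = np.maximum(W, 0.0)
    return U, V, W
```

Once trained, a missing link's weight at any time slot is predicted by the inner product of the corresponding latent-factor rows, which is what enables link prediction on the unobserved cells of the HDI tensor.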

