Article

A proportional-integral-derivative-incorporated stochastic gradient descent-based latent factor analysis model

Journal

NEUROCOMPUTING
Volume 427, Issue -, Pages 29-39

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2020.11.029

Keywords

Big data; Stochastic gradient descent; Proportional integral derivative; PID controller; High-dimensional and sparse matrix; Latent factor analysis

Funding

  1. National Natural Science Foundation of China [61772493]
  2. Guangdong Province Universities and College Pearl River Scholar Funded Scheme (2019)
  3. Natural Science Foundation of Chongqing (China) [cstc2019jcyjjqX0013]

Abstract

The proposed PID-incorporated SGD-based LFA (PSL) model accelerates model convergence by rebuilding instant errors based on the PID principle. Empirical studies show that the PSL model achieves significantly higher computational efficiency and competitive prediction accuracy for missing data in high-dimensional and sparse matrices compared to state-of-the-art LFA models.

Large-scale relationships like user-item preferences in a recommender system are mostly described by a high-dimensional and sparse (HiDS) matrix. A latent factor analysis (LFA) model extracts useful knowledge from an HiDS matrix efficiently, where stochastic gradient descent (SGD) is frequently adopted as the learning algorithm. However, a standard SGD algorithm updates a decision parameter with the stochastic gradient on the instant loss only, without considering information described by prior updates. Hence, an SGD-based LFA model commonly consumes many iterations to converge, which greatly affects its practicability. On the other hand, a proportional-integral-derivative (PID) controller makes a learning model converge fast with the consideration of its historical errors from the initial state till the current moment. Motivated by this discovery, this paper proposes a PID-incorporated SGD-based LFA (PSL) model. Its main idea is to rebuild the instant error on a single instance following the principle of PID, and then substitute this rebuilt error into an SGD algorithm for accelerating model convergence. Empirical studies on six widely-accepted HiDS matrices indicate that compared with state-of-the-art LFA models, a PSL model achieves significantly higher computational efficiency as well as highly competitive prediction accuracy for missing data of an HiDS matrix. (c) 2020 Elsevier B.V. All rights reserved.
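The abstract's core idea can be sketched in a few lines: for each observed entry of the sparse matrix, compute the instant error, rebuild it as a PID signal (a proportional term for the current error, an integral term accumulating past errors on that instance, and a derivative term for the error's change), and feed the rebuilt error into a standard SGD update of the latent factors. The sketch below is a minimal, hypothetical illustration of this scheme, not the authors' implementation; all function and parameter names (`psl_train`, `kp`, `ki`, `kd`, the learning rate, and regularization constants) are assumptions for illustration.

```python
import numpy as np

def psl_train(ratings, num_users, num_items, rank=4, lr=0.01, reg=0.05,
              kp=1.0, ki=0.02, kd=0.01, epochs=100, seed=0):
    """Illustrative sketch of PID-incorporated SGD for latent factor analysis.

    ratings: list of (user, item, value) triples observed in a sparse matrix.
    The instant error on each instance is rebuilt as a PID signal before
    being substituted into the SGD update of the latent factor matrices.
    """
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((num_users, rank)) * 0.1   # user latent factors
    Q = rng.standard_normal((num_items, rank)) * 0.1   # item latent factors
    integral = np.zeros(len(ratings))   # accumulated past errors per instance
    prev_err = np.zeros(len(ratings))   # previous instant error per instance
    for _ in range(epochs):
        for idx, (u, i, r) in enumerate(ratings):
            err = r - P[u] @ Q[i]                # instant error (P term)
            integral[idx] += err                 # historical errors (I term)
            pid_err = (kp * err
                       + ki * integral[idx]
                       + kd * (err - prev_err[idx]))  # error change (D term)
            prev_err[idx] = err
            # Standard regularized SGD update, with the PID-rebuilt error
            # substituted for the raw instant error.
            P[u] = P[u] + lr * (pid_err * Q[i] - reg * P[u])
            Q[i] = Q[i] + lr * (pid_err * P[u] - reg * Q[i])
    return P, Q
```

With `ki = kd = 0` this reduces to plain SGD on the instant loss; the integral and derivative terms inject information from prior updates, which is what the paper credits for faster convergence.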
