Article

An efficient manifold regularized sparse non-negative matrix factorization model for large-scale recommender systems on GPUs

Journal

INFORMATION SCIENCES
Volume 496, Issue -, Pages 464-484

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2018.07.060

Keywords

Collaborative filtering recommender systems; Data mining; Euclidean distance and KL-divergence; GPU parallelization; Manifold regularization; Non-negative matrix factorization

Funding

  1. National Key Research and Development Program of China [2016YFB0201303]
  2. National Outstanding Youth Science Program of the National Natural Science Foundation of China [61625202]
  3. Key Program of the National Natural Science Foundation of China [61432005]
  4. Natural Science Foundation of China [61370097, 61672224]
  5. Natural Science Foundation of Hunan Province, China [2018JJ2063]
  6. National Key R&D Program of China [2016YT80201900]


Non-negative Matrix Factorization (NMF) plays an important role in many data mining applications for low-rank representation and analysis. Due to the sparsity caused by missing information in many high-dimensional settings, e.g., social networks or recommender systems, NMF cannot mine an accurate representation from the explicit information alone. Manifold learning can incorporate the intrinsic geometry of the data, combining neighborhood structure with the implicit information. Thus, manifold-regularized NMF (MNMF) can realize a more compact representation of the sparse data. However, MNMF suffers from (a) the construction of large-scale Laplacian matrices, (b) frequent large-scale matrix manipulation, and (c) the involvement of K-nearest-neighbor points, which leads to an over-writing problem under parallelization. To address these issues, a single-thread-based MNMF model is proposed for two types of divergence, i.e., Euclidean distance and Kullback-Leibler (KL) divergence; it depends only on multiplication and summation over the involved feature tuples and thus avoids large-scale matrix manipulation. Furthermore, this model removes the dependence among the feature vectors and is inherently amenable to fine-grained parallelization. On this basis, a CUDA-parallelized MNMF (CUMNMF) is presented for GPU computing. Experimental results show that CUMNMF achieves a 20X speedup over MNMF, together with lower time complexity and space requirements. (C) 2018 Published by Elsevier Inc.
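To make the Euclidean-distance variant of manifold-regularized NMF concrete, the following is a minimal NumPy sketch of the classic multiplicative-update scheme for the objective ||V - WH||_F^2 + lam * tr(H L H^T), where L = D - A is the graph Laplacian of a column-neighborhood graph A. This is an illustrative baseline (in the spirit of graph-regularized NMF), not the paper's feature-tuple-based single-thread algorithm or its CUDA implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def mnmf_euclidean(V, k, A, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Manifold-regularized NMF sketch: V (m x n) ~= W (m x k) @ H (k x n).

    Minimizes ||V - W H||_F^2 + lam * tr(H L H^T), where L = D - A is the
    Laplacian of the symmetric, non-negative neighborhood graph A (n x n).
    Multiplicative updates keep W and H non-negative throughout.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    D = np.diag(A.sum(axis=1))  # degree matrix of the neighborhood graph
    for _ in range(n_iter):
        # Standard NMF update for W (the manifold term does not involve W).
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # H update: the Laplacian splits as L = D - A, contributing
        # lam * H @ A to the numerator and lam * H @ D to the denominator.
        H *= (W.T @ V + lam * (H @ A)) / (W.T @ W @ H + lam * (H @ D) + eps)
    return W, H
```

Note that each entry of W and H can be updated independently within an iteration, which is the property the abstract's fine-grained GPU parallelization exploits; the over-writing hazard it mentions arises because neighboring columns share entries of A.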
