Article

Distributed GraphLab: A Framework for Machine Learning and Data Mining in the Cloud

Journal

Proceedings of the VLDB Endowment
Volume 5, Issue 8, Pages 716-727

Publisher

Association for Computing Machinery
DOI: 10.14778/2212351.2212354

Keywords

-

Funding

  1. ONR Young Investigator Program [N00014-08-1-0752]
  2. ARO [MURI W911NF0810242]
  3. ONR [PECASE-N00014-10-1-0672]
  4. National Science Foundation [IIS-0803333]
  5. Intel Science and Technology Center for Cloud Computing
  6. National Science Foundation
  7. AT&T Labs

Abstract

While high-level data-parallel frameworks, like MapReduce, simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms and can lead to inefficient learning systems. To help fill this critical void, we introduced the GraphLab abstraction which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, we extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees. We develop graph-based extensions to pipelined locking and data versioning to reduce network congestion and mitigate the effect of network latency. We also introduce fault tolerance to the GraphLab abstraction using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can be easily implemented by exploiting the GraphLab abstraction itself. Finally, we evaluate our distributed implementation of the GraphLab abstraction on a large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains over Hadoop-based implementations.
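To make the abstract's "asynchronous, dynamic, graph-parallel computation" concrete, the following is a minimal, serial, single-machine sketch of a GraphLab-style vertex update function for PageRank. The names (Vertex, pagerank_update, the queue-based scheduler) are illustrative stand-ins, not GraphLab's actual C++ API, and the real system executes such updates in parallel under configurable consistency guarantees; only the shape of the abstraction (read a vertex's scope, recompute its data, schedule neighbors whose inputs changed) is taken from the paper.

```cpp
// Minimal sketch of a GraphLab-style update function (PageRank).
// All type and function names here are hypothetical, for illustration only.
#include <cmath>
#include <cstddef>
#include <queue>
#include <vector>

struct Vertex {
    double rank = 1.0;                 // vertex data (PageRank value)
    std::vector<std::size_t> in_nbrs;  // in-neighbor ids
    std::vector<std::size_t> out_nbrs; // out-neighbor ids
};

using Graph = std::vector<Vertex>;

// Update function: reads the scope (vertex + neighbors), recomputes the
// vertex data, and -- the "dynamic" part -- only reschedules out-neighbors
// when the value changed enough to matter.
void pagerank_update(std::size_t vid, Graph& g,
                     std::queue<std::size_t>& sched) {
    constexpr double kDamping = 0.85, kTolerance = 1e-4;
    double sum = 0.0;
    for (std::size_t nbr : g[vid].in_nbrs)
        sum += g[nbr].rank / g[nbr].out_nbrs.size();
    double new_rank = (1.0 - kDamping) + kDamping * sum;
    double change = std::fabs(new_rank - g[vid].rank);
    g[vid].rank = new_rank;
    if (change > kTolerance)
        for (std::size_t nbr : g[vid].out_nbrs) sched.push(nbr);
}

int main() {
    // Tiny 3-vertex cycle: 0 -> 1 -> 2 -> 0.
    Graph g(3);
    for (std::size_t v = 0; v < 3; ++v) {
        g[v].out_nbrs = {(v + 1) % 3};
        g[(v + 1) % 3].in_nbrs = {v};
    }
    // Serial stand-in for GraphLab's scheduler: run updates to quiescence.
    std::queue<std::size_t> sched;
    for (std::size_t v = 0; v < 3; ++v) sched.push(v);
    while (!sched.empty()) {
        std::size_t v = sched.front();
        sched.pop();
        pagerank_update(v, g, sched);
    }
    return 0;
}
```

The key design point the sketch preserves is that termination is data-driven: computation stops when no update changes its vertex enough to reschedule neighbors, rather than after a fixed number of bulk-synchronous rounds.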
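The abstract also notes that the Chandy-Lamport snapshot can be implemented "by exploiting the GraphLab abstraction itself", i.e. as just another update function. Below is a hedged, serial sketch of that idea under the same hypothetical names as above: a vertex's first snapshot update saves its own data and schedules its neighbors, so the snapshot spreads through the graph like any other computation. This deliberately omits what the real algorithm must also handle, namely recording in-flight edge/channel state and running concurrently with application updates.

```cpp
// Sketch: snapshot expressed as a GraphLab-style update function.
// Names (SVertex, Snapshot, snapshot_update) are illustrative assumptions.
#include <cstddef>
#include <map>
#include <queue>
#include <vector>

struct SVertex {
    int data = 0;                   // application state to checkpoint
    bool snapshotted = false;       // has this vertex saved its state yet?
    std::vector<std::size_t> nbrs;  // undirected neighbors
};

struct Snapshot {
    std::map<std::size_t, int> saved;  // vertex id -> checkpointed data
};

// On first execution a vertex records its own data (the "marker" step) and
// schedules all neighbors; repeat executions are idempotent no-ops.
void snapshot_update(std::size_t vid, std::vector<SVertex>& g,
                     Snapshot& snap, std::queue<std::size_t>& sched) {
    if (g[vid].snapshotted) return;
    snap.saved[vid] = g[vid].data;   // save local state
    g[vid].snapshotted = true;
    for (std::size_t n : g[vid].nbrs) sched.push(n);  // propagate marker
}

int main() {
    // Path graph 0 - 1 - 2 with some per-vertex state.
    std::vector<SVertex> g(3);
    g[0] = {10, false, {1}};
    g[1] = {20, false, {0, 2}};
    g[2] = {30, false, {1}};
    Snapshot snap;
    std::queue<std::size_t> sched;
    sched.push(0);  // initiate the snapshot at a single vertex
    while (!sched.empty()) {
        std::size_t v = sched.front();
        sched.pop();
        snapshot_update(v, g, snap, sched);
    }
    return 0;  // snap.saved now holds a copy of every vertex's data
}
```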

