Article

Block splitting for distributed optimization

Journal

MATHEMATICAL PROGRAMMING COMPUTATION
Volume 6, Issue 1, Pages 77-102

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s12532-013-0061-8

Keywords

Distributed optimization; Alternating direction method of multipliers; Operator splitting; Proximal operators; Cone programming; Machine learning

Funding

  1. National Science Foundation Graduate Research Fellowship [DGE-0645962]

Abstract

This paper describes a general-purpose method for solving convex optimization problems in a distributed computing environment. In particular, if the problem data includes a large linear operator or matrix A, the method allows each sub-block of A to be handled on a separate machine. The approach works as follows. First, we define a canonical problem form called graph form, in which we have two sets of variables related by a linear operator A, such that the objective function is separable across these two sets of variables. Many types of problems are easily expressed in graph form, including cone programs and a wide variety of regularized loss minimization problems from statistics, such as logistic regression, the support vector machine, and the lasso. Next, we describe graph projection splitting, a form of Douglas-Rachford splitting or the alternating direction method of multipliers, to solve graph form problems serially. Finally, we derive a distributed block splitting algorithm based on graph projection splitting. In a statistical or machine learning context, this allows for training models exactly with a huge number of both training examples and features, such that each processor handles only a subset of both. To the best of our knowledge, this is the only general-purpose method with this property. We present several numerical experiments in both the serial and distributed settings.
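
To make the serial graph projection splitting idea concrete, the following is a minimal NumPy sketch of ADMM applied to a graph form problem (minimize f(y) + g(x) subject to y = Ax), specialized to the lasso with f(y) = (1/2)||y - b||^2 and g(x) = lam*||x||_1. The penalty parameter, step ordering, iteration count, and synthetic problem instance are illustrative assumptions, not details taken from the paper.

# Minimal sketch of graph projection splitting for the lasso:
#   minimize (1/2)||y - b||^2 + lam*||x||_1  subject to  y = Ax.
# Each iteration alternates a separable proximal step with a projection
# onto the graph {(x, y) : y = Ax}; rho and iters are illustrative choices.
import numpy as np

def graph_projection_splitting(A, b, lam, rho=1.0, iters=300):
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)      # primal variables
    xt, yt = np.zeros(n), np.zeros(m)    # scaled dual variables
    # Cache a Cholesky factor of I + A^T A for the graph projection
    #   Pi_A(c, d): solve (I + A^T A) x = c + A^T d, then y = A x.
    L = np.linalg.cholesky(np.eye(n) + A.T @ A)

    for _ in range(iters):
        # 1) proximal step, separable in x and y
        xh = np.sign(x - xt) * np.maximum(np.abs(x - xt) - lam / rho, 0.0)  # prox of lam*||.||_1
        yh = (rho * (y - yt) + b) / (rho + 1.0)                             # prox of (1/2)||. - b||^2
        # 2) projection onto the graph of A
        rhs = (xh + xt) + A.T @ (yh + yt)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        y = A @ x
        # 3) scaled dual update
        xt += xh - x
        yt += yh - y
    return x

# Tiny usage example on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    b = rng.standard_normal(50)
    print(graph_projection_splitting(A, b, lam=0.1))

In the distributed block splitting variant described in the abstract, the matrix A is partitioned into sub-blocks and the proximal and projection steps above are carried out per block with a consensus/exchange step tying them together; the sketch here only illustrates the serial iteration.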
