Article

Randomized Block Proximal Methods for Distributed Stochastic Big-Data Optimization

Journal

IEEE Transactions on Automatic Control
Volume 66, Issue 9, Pages 4000-4014

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TAC.2020.3027647

Keywords

Optimization; Convergence; Linear programming; Stochastic processes; Distributed algorithms; Approximation algorithms; Heuristic algorithms; Big data applications; Optimization methods; Stochastic systems

Funding

  1. European Research Council under the European Union [638992-OPT4SMART]


This article introduces a class of novel distributed algorithms for solving stochastic big-data convex optimization problems, combining consensus steps with updates on blocks of the decision variable. It establishes the convergence of a dynamic consensus algorithm and proves, in expected value, the convergence of the proposed method to the optimal cost. The algorithm is tested on synthetic data and on a real text dataset, showing promising results.
In this article, we introduce a class of novel distributed algorithms for solving stochastic big-data convex optimization problems over directed graphs. In the addressed set-up, the dimension of the decision variable can be extremely high and the objective function can be nonsmooth. The general algorithm consists of two main steps: a consensus step and an update on a single block of the optimization variable, which is then broadcast to neighbors. Three special instances of the proposed method, involving particular problem structures, are then presented. In the general case, the convergence of a dynamic consensus algorithm over random row stochastic matrices is shown. Then, the convergence of the proposed algorithm to the optimal cost is proven in expected value. Exact convergence is achieved when using diminishing (local) stepsizes, whereas approximate convergence is attained when constant stepsizes are employed. The convergence rate is shown to be sublinear and an explicit rate is provided in the case of constant stepsizes. Finally, the algorithm is tested on a distributed classification problem, first on synthetic data and, then, on a real, high-dimensional, text dataset.
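The two-step structure described in the abstract (a consensus step over row-stochastic weights, followed by a proximal update on a single randomly drawn block that is then broadcast to neighbors) can be sketched from one node's perspective as below. This is an illustrative sketch only, not the paper's exact scheme: the function names are invented, and an l1 soft-thresholding prox stands in for the generic nonsmooth term.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (illustrative choice for the nonsmooth term)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def local_update(x_i, neighbor_estimates, weights, stoch_grad, step, n_blocks, rng):
    """One iteration at node i: a consensus step, then a proximal
    gradient update on one randomly selected block.
    All names here are hypothetical; this is not the paper's exact algorithm."""
    # Consensus: weighted average of own and in-neighbors' estimates
    # (weights form one row of a row-stochastic matrix)
    z = weights[0] * x_i
    for w, x_j in zip(weights[1:], neighbor_estimates):
        z = z + w * x_j
    # Draw one block uniformly at random and update only that block
    d = x_i.size
    blocks = np.array_split(np.arange(d), n_blocks)
    ell = int(rng.integers(n_blocks))
    idx = blocks[ell]
    g = stoch_grad(z)  # noisy (stochastic) gradient of the smooth local cost
    z[idx] = soft_threshold(z[idx] - step * g[idx], step)
    # In the distributed protocol, only block ell would be broadcast to out-neighbors
    return z, ell
```

Updating and transmitting a single block per iteration is what makes the method suitable for extremely high-dimensional decision variables: per-iteration computation and communication scale with the block size rather than with the full dimension. With a diminishing `step` the abstract reports exact convergence in expected value; with a constant `step`, approximate convergence at an explicit sublinear rate.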


