Proceedings Paper

On the exponential convergence rate of proximal gradient flow algorithms

Conference

2018 IEEE CONFERENCE ON DECISION AND CONTROL (CDC)
Pages 4246-4251

Publisher

IEEE

Keywords

Distributed optimization; forward-backward envelope; global exponential stability; gradient flow; large-scale systems; proximal algorithms; primal-dual method; proximal augmented Lagrangian

Funding

  1. National Science Foundation [ECCS-1809833]
  2. Air Force Office of Scientific Research [FA9550-16-1-0009]


Many modern large-scale and distributed optimization problems can be cast into a form in which the objective function is the sum of a smooth term and a nonsmooth regularizer. Such problems can be solved via a proximal gradient method, which generalizes standard gradient descent to the nonsmooth setting. In this paper, we leverage tools from control theory to study global convergence of proximal gradient flow algorithms. We utilize the fact that the proximal gradient algorithm can be interpreted as a variable-metric gradient method on the forward-backward envelope. This continuously differentiable function can be obtained from the augmented Lagrangian associated with the original nonsmooth problem, and it enjoys a number of favorable properties. We prove that global exponential convergence can be achieved even in the absence of strong convexity. Moreover, for in-network optimization problems, we provide a distributed implementation of the gradient flow dynamics based on the proximal augmented Lagrangian and prove global exponential stability for strongly convex problems.
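To make the setup concrete, the following is a minimal sketch of the discrete-time proximal gradient iteration (forward-backward splitting) that the paper studies in its continuous-time gradient flow form. The function names and the lasso instance (smooth least-squares term plus an l1 regularizer, whose proximal operator is soft-thresholding) are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by alternating
    # a forward (gradient) step on the smooth term with a backward
    # (proximal) step on the nonsmooth regularizer.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                      # gradient of smooth term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x
```

Letting the step size go to zero yields the proximal gradient flow, the continuous-time dynamics whose global exponential convergence the paper analyzes via the forward-backward envelope.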
