Article

Distributed Continuous-Time Optimization: Nonuniform Gradient Gains, Finite-Time Convergence, and Convex Constraint Set

Journal

IEEE Transactions on Automatic Control
Volume 62, Issue 5, Pages 2239-2253

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TAC.2016.2604324

Keywords

Consensus; convex set constraint; distributed optimization; finite-time convergence; multi-agent systems; nonuniform gradient gains

Funding

  1. National Science Foundation, Directorate for Engineering, Division of Electrical, Communications and Cyber Systems [ECCS-1611423]
  2. National Natural Science Foundation of China [61203080, 61573082, 61528301]
  3. State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology

Abstract

In this paper, a distributed optimization problem with general differentiable convex objective functions is studied for continuous-time multi-agent systems with single-integrator dynamics. The objective is for multiple agents to cooperatively optimize a team objective function formed by a sum of local objective functions with only local interaction and information while explicitly taking into account nonuniform gradient gains, finite-time convergence, and a common convex constraint set. First, a distributed nonsmooth algorithm is introduced for a special class of convex objective functions that have a quadratic-like form. It is shown that all agents reach a consensus in finite time while minimizing the team objective function asymptotically. Second, a distributed algorithm is presented for general differentiable convex objective functions, in which the interaction gains of each agent can be self-adjusted based on local states. A corresponding condition is then given to guarantee that all agents reach a consensus in finite time while minimizing the team objective function asymptotically. Third, a distributed optimization algorithm with state-dependent gradient gains is given for general differentiable convex objective functions. It is shown that the distributed continuous-time optimization problem can be solved even though the gradient gains are not identical. Fourth, a distributed tracking algorithm combined with a distributed estimation algorithm is given for general differentiable convex objective functions. It is shown that all agents reach a consensus while minimizing the team objective function in finite time. Fifth, as an extension of the previous results, a distributed constrained optimization algorithm with nonuniform gradient gains and a distributed constrained finite-time optimization algorithm are given. It is shown that both algorithms can be used to solve a distributed continuous-time optimization problem with a common convex constraint set. Numerical examples are included to illustrate the obtained theoretical results.
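For illustration, below is a minimal Python/NumPy sketch of the general flavor of the algorithms summarized above: agents with single-integrator dynamics apply a control input that combines a local gradient term with a nonsmooth (signed) consensus term to cooperatively minimize a sum of local objective functions. The ring communication graph, the quadratic local objectives f_i(x) = 0.5*(x - c_i)^2, and the gains alpha and beta are assumptions made only for this example; they do not reproduce the paper's exact algorithms or convergence conditions.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): N agents with
# single-integrator dynamics x_i' = u_i cooperatively minimize
# sum_i f_i(x), where each local objective f_i is known only to agent i.
# The input combines a local gradient term with a signed (nonsmooth)
# consensus term over a fixed undirected ring graph; the graph, gains,
# and objectives below are assumptions for illustration.

N = 5                                        # number of agents
c = np.array([1.0, 3.0, -2.0, 0.5, 4.0])     # minimizers of local f_i(x) = 0.5*(x - c_i)^2
x = np.array([5.0, -4.0, 2.0, 0.0, -1.0])    # initial agent states

# Undirected ring graph: agent i communicates with agents (i-1) and (i+1) mod N.
neighbors = [[(i - 1) % N, (i + 1) % N] for i in range(N)]

alpha = 1.0    # gradient gain (uniform here; the paper allows nonuniform gains)
beta = 2.0     # gain on the signed consensus (disagreement) term
dt = 1e-3      # forward-Euler step approximating the continuous-time dynamics
T = 20.0       # simulation horizon

for _ in range(int(T / dt)):
    u = np.zeros(N)
    for i in range(N):
        grad_i = x[i] - c[i]                                     # gradient of local objective
        consensus = sum(np.sign(x[j] - x[i]) for j in neighbors[i])
        u[i] = -alpha * grad_i + beta * consensus                # local gradient + nonsmooth consensus
    x = x + dt * u                                               # Euler integration of x_i' = u_i

# The team objective sum_i 0.5*(x - c_i)^2 is minimized at the average of c.
print("final agent states:", np.round(x, 3))
print("team minimizer:    ", c.mean())
```

Because the signed consensus terms cancel in the network average, the average state follows a pure gradient flow toward the team minimizer, while a sufficiently large consensus gain drives the agents toward agreement; this is only a heuristic illustration of the interplay between consensus and gradient terms described in the abstract.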

