Article

Optimal consensus of a class of discrete-time linear multi-agent systems via value iteration with guaranteed admissibility

Journal

NEUROCOMPUTING
Volume 516, Issue -, Pages 1-10

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.10.032

Keywords

Multi-agent system; Reinforcement learning; Optimal consensus; Value iteration

Abstract

This paper investigates the optimal consensus problem for heterogeneous discrete-time (DT) linear multi-agent systems. The optimal consensus problem is formulated as finding a global Nash equilibrium solution subject to the defined local performance index. A reinforcement learning (RL) value iteration (VI) algorithm is introduced to obtain the optimal policies in the sense of Nash equilibrium. To ensure the effectiveness of the VI algorithm, the admissibility of the iterative control policies for multi-agent systems is considered. With theoretical analysis, a new termination criterion is established to guarantee the admissibility of the iterative control policies. Furthermore, an online learning framework is designed with an actor-critic neural network (NN) to implement the VI algorithm. Finally, two simulation examples are presented for leader-follower and leaderless multi-agent systems, respectively, to verify the effectiveness of the proposed method. © 2022 Elsevier B.V. All rights reserved.
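
Note: the listing gives only the abstract, not the paper's equations, so the following is an illustrative sketch of how a local performance index and value-iteration update are commonly written for discrete-time multi-agent consensus; the symbols e_i, u_i, Q_i, R_{ij}, and N_i are assumptions for illustration, not the paper's exact definitions.

J_i = \sum_{k=0}^{\infty} \Big( e_i^{\top}(k) Q_i e_i(k) + u_i^{\top}(k) R_{ii} u_i(k) + \sum_{j \in N_i} u_j^{\top}(k) R_{ij} u_j(k) \Big),

where e_i(k) denotes agent i's local neighborhood consensus error, u_i(k) its control input, and N_i its neighbor set. A VI scheme of the kind described then alternates, for iteration index s,

V_i^{(s+1)}(e_i(k)) = \min_{u_i(k)} \big\{ r_i(e_i(k), u_i(k), u_{-i}(k)) + V_i^{(s)}(e_i(k+1)) \big\},
u_i^{(s)}(e_i(k)) = \arg\min_{u_i(k)} \big\{ r_i(e_i(k), u_i(k), u_{-i}(k)) + V_i^{(s)}(e_i(k+1)) \big\},

starting from an initial value function that need not correspond to an admissible policy, which is why a termination criterion guaranteeing admissibility of the returned iterative policy, as the abstract emphasizes, is needed.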
