Journal
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS
Volume 70, Issue 5, Pages 2120-2132
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSI.2023.3246001
Keywords
Nonlinear MASs; optimized consensus; event-triggered control (ETC); reinforcement learning (RL); Hamilton-Jacobi-Bellman (HJB) equation
In this paper, we present a novel approach to the event-triggered optimized consensus tracking control problem for a class of uncertain nonlinear multi-agent systems (MASs). To optimize control performance, we employ an adaptive reinforcement learning (RL) algorithm based on the actor-critic architecture together with the backstepping method. The proposed RL-based optimized controller uses a novel event-triggered strategy that adjusts the sampling error online, reducing communication resource usage and computational complexity through intermittent transmission of state signals. Via Lyapunov-based stability analysis, we establish the boundedness of all signals in the closed-loop MAS and show that Zeno behavior is excluded. Numerical simulations of a practical multi-electromechanical system validate the effectiveness of the proposed scheme.
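To illustrate the general idea behind event-triggered sampling described in the abstract, the following is a minimal sketch for a generic first-order plant: the controller receives a new state sample only when the sampling error exceeds a threshold. The plant model, gain `k`, and threshold `delta` are illustrative assumptions, not the paper's actual system, triggering condition, or RL-based controller.

```python
# Minimal sketch of threshold-based event-triggered control
# (generic illustration; NOT the paper's MAS model or triggering law).

def simulate(T=10.0, dt=0.001, k=2.0, delta=0.05):
    x = 1.0          # plant state, x' = -x + u
    x_hat = x        # last transmitted (sampled) state held by the controller
    events = 0       # number of state transmissions
    steps = int(T / dt)
    for _ in range(steps):
        # Event condition: transmit only when the sampling error
        # |x - x_hat| exceeds the threshold delta.
        if abs(x - x_hat) > delta:
            x_hat = x
            events += 1
        u = -k * x_hat               # controller acts on the sampled state
        x += dt * (-x + u)           # forward-Euler step of x' = -x + u
    return events, steps, x

events, steps, x_final = simulate()
print(f"{events} transmissions over {steps} steps, |x(T)| = {abs(x_final):.3f}")
```

The state converges to a bounded neighborhood of the origin whose size scales with `delta`, while the number of transmissions is far smaller than the number of integration steps, which is the resource-saving effect the event-triggered strategy targets.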