Article

Dynamic Event-Triggered Reinforcement Learning-Based Consensus Tracking of Nonlinear Multi-Agent Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TCSI.2023.3246001

Keywords

Nonlinear MASs; optimized consensus; event-triggered control (ETC); reinforcement learning (RL); Hamilton-Jacobi-Bellman (HJB) equation

Abstract

In this paper, we present a novel approach to address the event-triggered optimized consensus tracking control problem in a class of uncertain nonlinear multi-agent systems (MASs). To optimize control performance, we employ an adaptive reinforcement learning (RL) algorithm based on the actor-critic architecture and utilize the backstepping method. The proposed RL-based optimized controller employs a novel event-triggered strategy, dynamically adjusting sampling errors online to reduce communication resource usage and computational complexity through the intermittent transmission of state signals. We establish the boundedness of all signals in the closed-loop MAS through stability analysis using the Lyapunov method, and demonstrate the prevention of Zeno behavior. Numerical simulations of a practical multi-electromechanical system are provided to validate the effectiveness of the proposed scheme.
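To make the triggering mechanism concrete, below is a minimal Python sketch of a generic dynamic event-triggered sampling rule of the kind the abstract describes. It is not the paper's control law: the single-integrator plant, the gains sigma, lam, and mu, and the internal variable eta are all illustrative assumptions.

def simulate(T=10.0, dt=1e-3, sigma=0.5, lam=2.0, mu=1.0):
    """Single-integrator agent x' = u regulated to zero under dynamic ETC."""
    x, eta = 1.0, 1.0      # plant state and internal threshold variable
    u_held = 0.0           # last transmitted control value (zero-order hold)
    events = 0
    for _ in range(int(T / dt)):
        u_desired = -2.0 * x                   # continuously computed control
        e = u_desired - u_held                 # sampling error
        # Dynamic trigger: transmit only when the error exceeds the static
        # threshold sigma*|x| plus a share of the internal variable eta.
        if abs(e) >= sigma * abs(x) + mu * eta:
            u_held = u_desired                 # event: update the actuator
            events += 1
            e = 0.0
        # Internal dynamics (Girard-style): eta decays and absorbs the
        # margin left between the static threshold and the current error.
        eta = max(eta + dt * (-lam * eta + sigma * abs(x) - abs(e)), 0.0)
        x += dt * u_held                       # plant update under held input
    return x, events

x_final, n_events = simulate()
print(f"final state {x_final:.4f} after {n_events} transmissions")

Because eta stays nonnegative, the dynamic threshold sigma*|x| + mu*eta is never smaller than its static counterpart sigma*|x|, so the dynamic rule transmits no more often than a static one; this is the mechanism by which such schemes save communication. In continuous-time analyses of such schemes, a positive minimum inter-event time (exclusion of Zeno behavior) is established separately, as the abstract notes for the proposed controller.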

