Article

Observer-based dynamic ETC optimized tracking of nonlinear systems with stochastic disturbances

Journal

Publisher

WILEY
DOI: 10.1002/oca.3020

Keywords

event-triggered control; Hamilton-Jacobi-Bellman equation; optimized control; output feedback; reinforcement learning; stochastic systems


This paper proposes a simple and efficient adaptive event-triggered optimized control (ETOC) scheme based on reinforcement learning (RL) for stochastic nonlinear systems. The scheme includes an online state observer that estimates unmeasured states and a dynamically adjustable event-triggered mechanism that reduces communication load. The RL algorithm is driven by the negative gradient of a simple positive function and employs an identifier-actor-critic architecture. The ETOC mechanism operates in the sensor-to-controller channel and activates control updates directly from the triggered states, which saves network resources. The theoretical analysis proves that all closed-loop signals remain bounded under the proposed output-feedback ETOC method. Overall, the paper presents a practical and effective RL-based ETOC scheme for stochastic nonlinear systems that can save communication resources while maintaining closed-loop stability. Finally, a simulation example is provided to validate the presented control algorithm.
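To illustrate the general idea of a dynamically adjustable event-triggered mechanism, the sketch below simulates a scalar stochastic plant whose controller only receives a new state when the measurement gap exceeds an adaptive threshold driven by an internal dynamic variable. All gains, the trigger rule, and the plant model are illustrative assumptions, not the paper's actual scheme.

```python
import random

def simulate(steps=500, dt=0.01, alpha=0.5, beta=0.1, lam=1.0, seed=0):
    """Count controller updates under a generic dynamic event-trigger.

    Hypothetical sketch: the trigger fires when lam*e^2 >= alpha*x^2 + eta,
    where e is the gap between the current state and the last transmitted
    one, and eta is a nonnegative dynamic variable that relaxes the rule.
    """
    rng = random.Random(seed)
    x = 1.0          # true plant state
    x_sent = x       # last state transmitted over the sensor-to-controller channel
    eta = 1.0        # internal dynamic variable of the trigger
    events = 0
    for _ in range(steps):
        u = -2.0 * x_sent                 # controller acts on the held state
        e = x - x_sent                    # measurement gap
        if lam * e * e >= alpha * x * x + eta:
            x_sent = x                    # transmit: control update event
            events += 1
        # the dynamic variable decays and absorbs the slack in the trigger
        eta = max(0.0, eta + dt * (-beta * eta + alpha * x * x - lam * e * e))
        # simple stochastic plant: dx = (x + u) dt + 0.05 dW (Euler-Maruyama)
        x += dt * (x + u) + 0.05 * (dt ** 0.5) * rng.gauss(0.0, 1.0)
    return events

print(simulate())  # number of transmissions, well below the 500 time steps
```

Because the threshold includes the strictly positive term eta, transmissions occur far less often than periodic sampling would require, which is the resource-saving effect the abstract describes.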

Authors


Reviews

Primary rating

4.5 (insufficient ratings)

Secondary ratings

Novelty: -
Significance: -
Scientific rigor: -
