Journal
OPTIMAL CONTROL APPLICATIONS & METHODS
Volume: -, Issue: -, Pages: -
Publisher
WILEY
DOI: 10.1002/oca.3020
Keywords
event-triggered control; Hamilton-Jacobi-Bellman equation; optimized control; output feedback; reinforcement learning; stochastic systems
Abstract
This paper proposes a simple and efficient adaptive event-triggered optimized control (ETOC) scheme using reinforcement learning (RL) for stochastic nonlinear systems. The scheme includes an online state observer to estimate unmeasured states and a dynamically adjustable event-triggered mechanism that reduces communication load. The RL algorithm is based on the negative gradient of a simple positive function and employs an identifier-actor-critic architecture. The proposed ETOC approach operates in the sensor-to-controller channel and directly activates control behavior through the triggered states, which saves network resources. The theoretical analysis proves that all closed-loop signals remain bounded under the proposed output-feedback ETOC method. Overall, this paper presents a practical and effective ETOC scheme using RL for stochastic nonlinear systems, which has the potential to save communication resources while maintaining closed-loop stability. Finally, a simulation example is provided to validate the presented control algorithm.
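To illustrate the general idea of a dynamically adjustable event-triggered mechanism in the sensor-to-controller channel, the sketch below transmits a sampled state only when its deviation from the last transmitted state exceeds a state-dependent threshold. This is a generic illustration, not the paper's specific triggering rule; the function name, the threshold form, and the parameters `base_threshold` and `adapt_rate` are all hypothetical.

```python
import numpy as np

def simulate_event_triggering(states, base_threshold=0.05, adapt_rate=0.5):
    """Illustrative event-trigger: transmit a sample only when its deviation
    from the last transmitted state exceeds a threshold that shrinks as the
    state norm shrinks (a dynamically adjustable triggering condition)."""
    last_sent = states[0]
    events = [0]  # the first sample is always transmitted
    for k, x in enumerate(states[1:], start=1):
        threshold = base_threshold + adapt_rate * np.linalg.norm(x)
        if np.linalg.norm(x - last_sent) > threshold:
            last_sent = x       # update the held state at the controller side
            events.append(k)    # record the triggering instant
    return events

# Example: a decaying trajectory triggers only a few transmissions,
# saving communication compared with sampling at every instant.
traj = [np.array([np.exp(-0.1 * k)]) for k in range(100)]
ev = simulate_event_triggering(traj)
```

Because the threshold adapts to the state norm, transmissions become sparser as the trajectory settles, which is the communication-saving effect the abstract describes.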
Authors