Journal
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL
Volume -, Issue -, Pages -
Publisher
WILEY
DOI: 10.1002/rnc.6988
Keywords
chaos control; deep reinforcement learning; Lorenz chaotic system; proximal policy optimization
This article presents a DRL-based control method for nonlinear chaotic systems without prior knowledge of the system's equations. Experimental results demonstrate that the PPO algorithm is the most efficient and effective for controlling chaotic systems.
Deep reinforcement learning (DRL) algorithms are well suited to modeling and controlling complex systems. Controlling chaos is a difficult task, and existing methods leave room for improvement. In this article, we present a DRL-based control method that can control a nonlinear chaotic system without any prior knowledge of the system's equations. We use proximal policy optimization (PPO) to train an agent. The environment is a Lorenz chaotic system, and our goal is to stabilize this chaotic system as quickly as possible and minimize the error by adding extra control terms to the system; the reward function therefore accounts for the total triaxial error. The experimental results demonstrate that the trained agent can rapidly suppress chaos in the system regardless of its random initial conditions. A comprehensive comparison of different DRL algorithms indicates that PPO is the most efficient and effective algorithm for controlling the chaotic system. Moreover, different maximum control forces were applied to determine the relationship between control force and controller performance. To verify the robustness of the controller, random disturbances were introduced during training and testing, and the empirical results indicate that the agent trained with random noise performs better. Because the chaotic system is highly nonlinear and extremely sensitive to initial conditions, DRL is well suited to modeling it.
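The setup described in the abstract can be sketched as a simple control environment: a Lorenz system with an additive control term on each axis, random initial conditions, and a reward equal to the negative total triaxial error. This is a minimal illustrative sketch, not the authors' implementation; the standard Lorenz parameters (σ=10, ρ=28, β=8/3), the Euler integration step, the choice of the C+ equilibrium as the stabilization target, and the class/parameter names are all assumptions.

```python
import numpy as np

class LorenzControlEnv:
    """Hypothetical sketch of the paper's environment: a Lorenz system with
    an additive, force-limited control term per axis. The reward is the
    negative total triaxial error from the C+ fixed point (assumed target)."""

    def __init__(self, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                 dt=0.01, max_force=10.0, seed=0):
        self.sigma, self.rho, self.beta = sigma, rho, beta
        self.dt, self.max_force = dt, max_force
        self.rng = np.random.default_rng(seed)
        # C+ fixed point of the uncontrolled Lorenz system:
        # x* = y* = sqrt(beta*(rho-1)), z* = rho-1
        r = np.sqrt(beta * (rho - 1.0))
        self.target = np.array([r, r, rho - 1.0])
        self.state = None

    def reset(self):
        # random initial condition, mirroring the paper's random starts
        self.state = self.rng.uniform(-20.0, 20.0, size=3)
        return self.state.copy()

    def step(self, action):
        # clip the agent's action to the maximum allowed control force
        u = np.clip(np.asarray(action, dtype=float),
                    -self.max_force, self.max_force)
        x, y, z = self.state
        dx = self.sigma * (y - x) + u[0]
        dy = x * (self.rho - z) - y + u[1]
        dz = x * y - self.beta * z + u[2]
        # forward-Euler integration step
        self.state = self.state + self.dt * np.array([dx, dy, dz])
        # reward: negative total (summed) triaxial error
        reward = -np.sum(np.abs(self.state - self.target))
        return self.state.copy(), reward

env = LorenzControlEnv()
obs = env.reset()
obs, reward = env.step([0.0, 0.0, 0.0])
```

An off-the-shelf PPO implementation would then be trained against `step`/`reset`; the `max_force` bound is where the paper's experiments with different maximum control forces would plug in.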