Article

Reinforcement learning to achieve real-time control of triple inverted pendulum

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.engappai.2023.107518

Keywords

Triple pendulum on a cart; Swing-up control; Reinforcement learning; Virtual experience replay

Abstract

This work utilizes reinforcement learning to achieve real-time control of a non-simulated triple inverted pendulum, using a structure-aware virtual experience replay method to enhance learning efficiency, and demonstrates its effectiveness on an actual system.
This work uses reinforcement learning (RL) to achieve the first-ever data-driven, model-free real-time control of an actual, not simulated, triple inverted pendulum (TIP). The swing-up control task for the TIP is formulated as a Markov decision process with a dense reward function and then executed in real time by a model-free RL approach. To increase the sample efficiency of learning, a structure-aware virtual experience replay (VER) method is proposed that works together with an off-policy actor-critic algorithm. VER exploits the geometric symmetry of the TIP to create virtual sample trajectories from measured ones, then uses the resulting multifold augmented dataset to train the actor and critic networks during learning. These structure-infused training data supply additional information and hence increase the convergence speed of network training. The proposed VER is combined with a state-of-the-art actor-critic algorithm and validated through numerical simulations; notably, including VER improves computational efficiency, reducing the required trials, training steps, and overall training time by approximately 66.67%. Finally, experiments demonstrate the real-time control capability of the proposed approach on an actual TIP system.
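The core of the VER idea described above is data augmentation of the replay buffer via the pendulum's mirror symmetry: each measured transition yields a second, "virtual" transition reflected across the vertical axis. A minimal sketch of that mechanism is given below, assuming (this is an illustration, not the paper's implementation) states are NumPy vectors of cart/joint positions and velocities measured from the upright configuration, so reflection simply negates the state and the force input, while a symmetric reward is left unchanged:

```python
import numpy as np

def mirror_transition(s, a, r, s_next):
    # Reflect a measured transition across the vertical axis:
    # cart position/velocity, joint angles/velocities (measured from
    # upright), and the force input all flip sign; a symmetric reward
    # is unchanged.
    return -s, -a, r, -s_next

class VirtualExperienceReplay:
    """Replay buffer that stores each real transition plus its mirrored twin."""

    def __init__(self, capacity=100_000, rng=None):
        self.buffer = []
        self.capacity = capacity
        self.rng = rng or np.random.default_rng()

    def add(self, s, a, r, s_next):
        # One real sample produces two stored samples (real + virtual),
        # doubling the data seen by the actor and critic networks.
        for t in ((s, a, r, s_next), mirror_transition(s, a, r, s_next)):
            if len(self.buffer) >= self.capacity:
                self.buffer.pop(0)
            self.buffer.append(t)

    def sample(self, batch_size):
        idx = self.rng.integers(len(self.buffer), size=batch_size)
        return [self.buffer[i] for i in idx]
```

An off-policy actor-critic learner would draw minibatches from `sample()` exactly as from an ordinary replay buffer; the augmentation is transparent to the algorithm, which is why this style of VER composes with existing off-policy methods.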

