Article

H∞ control of linear discrete-time systems: Off-policy reinforcement learning

Journal

AUTOMATICA
Volume 78, Issue -, Pages 144-152

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.automatica.2016.12.009

Keywords

H-infinity control; Off-policy reinforcement learning; Optimal control

Funding

  1. NSF grants [ECCS-1405173, IIS-1208623, ECCS-1101401, ECCS-1230040]
  2. ONR grants [N00014-13-1-0562, N000141410718]

Abstract

In this paper, a model-free solution to the H-infinity control of linear discrete-time systems is presented. The proposed approach employs off-policy reinforcement learning (RL) to solve the game algebraic Riccati equation online, using measured data along the system trajectories. As with existing model-free RL algorithms, no knowledge of the system dynamics is required. However, the proposed method has two main advantages. First, the disturbance input does not need to be adjusted in a specific manner; this makes the method more practical, since the disturbance cannot be specified in most real-world applications. Second, no bias is introduced by adding probing noise to the control input to maintain the persistence of excitation (PE) condition. Consequently, the convergence of the proposed algorithm is unaffected by probing noise. An example of H-infinity control for an F-16 aircraft is given, showing that the convergence of the new off-policy RL algorithm is insensitive to probing noise. (C) 2016 Elsevier Ltd. All rights reserved.
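The abstract is built around the game algebraic Riccati equation (GARE) that the off-policy RL algorithm solves from measured data alone. For orientation, the following is a minimal model-based sketch of that fixed-point problem: value iteration on the discrete-time zero-sum GARE for a toy system. All numerical values here (A, B, E, Q, R, gamma) are hypothetical, chosen purely for illustration, and this is not the paper's algorithm; the paper's contribution is reaching the same solution P from trajectory data, without knowledge of A, B, or E.

    import numpy as np

    # Toy problem data (hypothetical, for illustration only).
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])          # state matrix
    B = np.array([[0.0], [1.0]])        # control input matrix
    E = np.array([[1.0], [0.0]])        # disturbance input matrix
    Q = np.eye(2)                       # state weighting
    R = np.eye(1)                       # control weighting
    gamma = 5.0                         # H-infinity attenuation level

    # Value iteration on the game algebraic Riccati equation (GARE):
    #   P = Q + A'PA - [B'PA; E'PA]' G^{-1} [B'PA; E'PA],
    #   G = [[R + B'PB, B'PE], [E'PB, E'PE - gamma^2 I]].
    P = np.zeros((2, 2))
    for _ in range(1000):
        G = np.block([[R + B.T @ P @ B, B.T @ P @ E],
                      [E.T @ P @ B, E.T @ P @ E - gamma**2 * np.eye(1)]])
        H = np.vstack([B.T @ P @ A, E.T @ P @ A])
        P_next = Q + A.T @ P @ A - H.T @ np.linalg.solve(G, H)
        if np.linalg.norm(P_next - P) < 1e-12:
            P = P_next
            break
        P = P_next

    # Saddle-point gains: u_k = -K x_k (control), w_k = -L x_k (worst-case disturbance).
    KL = np.linalg.solve(G, H)
    K, L = KL[:1, :], KL[1:, :]
    print("P =\n", P)
    print("K =", K, "\nL =", L)

Note that in this model-based iteration the worst-case disturbance gain L is computed but never applied to the system; this mirrors the paper's first advantage, that the actual disturbance input need not be adjusted in a specific manner during learning.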
