Journal
2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC)
Volume: -, Issue: -, Pages: 4059-4064
Publisher
IEEE
DOI: 10.1109/CDC51059.2022.9992584
Keywords
-
Funding
- Ministero degli Affari Esteri e della Cooperazione Internazionale [PGR10067]
This paper discusses a linear quadratic optimal control problem in which the system dynamics is unknown and the feedback control is required to have a desired sparsity pattern. The authors propose a reinforcement learning framework based on Q-learning to address this problem. Numerical tests on a scenario with a randomly generated graph and unstable dynamics show the effectiveness of the algorithm in producing a stabilizing, sparse feedback control.
In this paper, we consider a Linear Quadratic optimal control problem under the assumptions that the system dynamics is unknown and that the designed feedback control has to comply with a desired sparsity pattern. An important application where this set-up arises is distributed control of network systems, where the aim is to find an optimal sparse controller matching the communication graph. To tackle the problem, we propose a Reinforcement Learning framework based on a Q-learning scheme that preserves a desired policy structure. At each time step, the performance of the current candidate feedback is first evaluated through the computation of its Q-function, and then a new sparse feedback matrix, improving on the previous one, is computed. We prove that at each iteration the scheme produces a stabilizing feedback control with the desired sparsity and with non-increasing cost, which in turn implies that every limit point of the computed feedback matrices is sparse and stabilizing. The algorithm is numerically tested on a distributed control scenario with a randomly generated graph and unstable dynamics.
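The two-step structure described in the abstract (evaluate the Q-function of the current feedback gain, then compute an improved gain with the prescribed sparsity) can be sketched as follows. This is a hypothetical, model-based stand-in, not the paper's algorithm: the paper's scheme is model-free (the Q-function is estimated from data, not from known `A`, `B`), and the plain entrywise projection onto the pattern used here is a simplification of the paper's structure-preserving update. All names (`sparse_policy_iteration`, `mask`, etc.) are illustrative.

```python
import numpy as np

def lyapunov(Acl, W, iters=500):
    # Solve P = W + Acl' P Acl by fixed-point iteration
    # (assumes the closed-loop matrix Acl is Schur stable).
    P = W.copy()
    for _ in range(iters):
        P = W + Acl.T @ P @ Acl
    return P

def sparse_policy_iteration(A, B, Q, R, K0, mask, steps=30):
    """Model-based sketch of structured LQR policy iteration.

    mask is a 0/1 matrix encoding the desired sparsity pattern of the
    gain K (u = -K x). K0 must be stabilizing and match the pattern.
    """
    K = K0.copy()
    for _ in range(steps):
        Acl = A - B @ K
        # Policy evaluation: cost-to-go matrix P of the current gain.
        P = lyapunov(Acl, Q + K.T @ R @ K)
        # Q-function blocks: Q(x, u) = [x; u]' H [x; u] with
        # H_uu = R + B'PB and H_ux = B'PA.
        Huu = R + B.T @ P @ B
        Hux = B.T @ P @ A
        # Unconstrained improvement step, then projection onto the
        # pattern (a simplification of the structured update).
        K = np.linalg.solve(Huu, Hux) * mask
    return K
```

On a decoupled system with a diagonal pattern the projection is exact, so the iteration recovers the per-channel optimal LQR gains; in general, naive projection alone does not come with the cost-decrease and stability guarantees the paper establishes for its scheme.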