Article

Data-driven optimal control with a relaxed linear program

Journal

AUTOMATICA
Volume 136, Issue -, Pages -

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.automatica.2021.110052

Keywords

Approximate dynamic programming; Optimal control; Data-driven control; Linear programming; Stochastic systems

Funding

European Research Council (ERC) [787845]


This paper introduces a relaxed version of the Bellman operator for q-functions and proves its monotone contraction property with a unique fixed point. Based on this operator, a relaxed linear program (RLP) is constructed, which has better scalability and computational efficiency compared to the standard LP formulation. The theoretical results are validated through simulations.
The linear programming (LP) approach has a long history in the theory of approximate dynamic programming. When it comes to computation, however, the LP approach often suffers from poor scalability. In this work, we introduce a relaxed version of the Bellman operator for q-functions and prove that it is still a monotone contraction mapping with a unique fixed point. In the spirit of the LP approach, we exploit the new operator to build a relaxed linear program (RLP). Compared to the standard LP formulation, our RLP has only one family of constraints and half the decision variables, making it more scalable and computationally efficient. For deterministic systems, the RLP trivially returns the correct q-function. For stochastic linear systems in continuous spaces, the solution to the RLP preserves the minimizer of the optimal q-function, hence retrieves the optimal policy. Theoretical results are backed up in simulation where we solve sampled versions of the LPs with data collected by interacting with the environment. For general nonlinear systems, we observe that the RLP again tends to preserve the minimizers of the solution to the LP, though the relative performance is influenced by the specific geometry of the problem. (c) 2021 The Authors. Published by Elsevier Ltd.
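The abstract describes the construction only at a high level. As a rough, non-authoritative illustration of how a sampled relaxed LP of this kind can be set up, the sketch below builds a tabular q-function LP with a single family of linear constraints, q(x, u) <= c(x, u) + gamma * q(x_next, w) for every observed transition and every candidate next input w, and solves it with scipy.optimize.linprog. The random toy MDP, the discount factor, and the one-transition-per-pair sampling scheme are illustrative assumptions, not the paper's exact formulation or benchmarks.

import numpy as np
from scipy.optimize import linprog

# Hedged sketch: a sampled, tabular relaxed LP (RLP) for q-functions.
# Decision variables: q(x, u) for every state-input pair (no separate
# value-function variables, hence a single family of constraints).
# Constraint family: for each observed transition (x, u, c, x_next) and
# each candidate next input w,   q(x, u) <= c + gamma * q(x_next, w).
# Objective: maximize the sum of q over all pairs (linprog minimizes,
# so the cost vector is negated).

gamma = 0.9
n_states, n_inputs = 4, 2
rng = np.random.default_rng(0)

# Hypothetical random MDP, used only to generate transition data.
P = rng.random((n_states, n_inputs, n_states))
P /= P.sum(axis=2, keepdims=True)
stage_cost = rng.random((n_states, n_inputs))

# "Data collected by interacting with the environment": one sampled
# transition per state-input pair (an assumption made for brevity).
data = []
for x in range(n_states):
    for u in range(n_inputs):
        x_next = rng.choice(n_states, p=P[x, u])
        data.append((x, u, stage_cost[x, u], x_next))

def var(x, u):
    # Flatten (x, u) into a single decision-variable index.
    return x * n_inputs + u

n_vars = n_states * n_inputs
A_ub, b_ub = [], []
for (x, u, c, x_next) in data:
    for w in range(n_inputs):
        row = np.zeros(n_vars)
        row[var(x, u)] += 1.0
        row[var(x_next, w)] -= gamma
        A_ub.append(row)
        b_ub.append(c)

res = linprog(c=-np.ones(n_vars),            # maximize the sum of q
              A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_vars, method="highs")

q_rlp = res.x.reshape(n_states, n_inputs)
policy = q_rlp.argmin(axis=1)                # greedy (cost-minimizing) inputs
print("q estimate:\n", q_rlp)
print("greedy policy:", policy)

With a discount factor below one the feasible set of this LP is bounded above by the fixed point of the relaxed operator, so the maximization recovers that fixed point on the sampled pairs; the greedy policy is then read off from the minimizers over the input dimension, mirroring the policy-retrieval step discussed in the abstract.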
