Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 34, Issue 2, Pages 635-649
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2021.3098985
Keywords
Optimal control; Heuristic algorithms; Convergence; Regulation; Performance analysis; Mathematical model; Approximation algorithms; Algebraic Riccati equation (ARE); fixed-point theory; off-policy reinforcement learning (RL); optimal control
This article presents a model-free λ-policy iteration (λ-PI) algorithm for the discrete-time linear quadratic regulation (LQR) problem. To solve the algebraic Riccati equation arising from the LQR problem in an iterative manner, we define two novel matrix operators, named the weighted Bellman operator and the composite Bellman operator. The λ-PI algorithm is first designed as a recursion with the weighted Bellman operator, and its equivalent formulation as a fixed-point iteration with the composite Bellman operator is then shown. The contraction and monotonicity properties of the composite Bellman operator guarantee the convergence of the λ-PI algorithm. In contrast to the PI algorithm, λ-PI does not require an admissible initial policy, and its convergence rate outperforms that of the value iteration (VI) algorithm. A model-free extension of the λ-PI algorithm is developed using the off-policy reinforcement learning technique. It is also shown that the off-policy variants of the λ-PI algorithm are robust against probing noise. Finally, simulation examples are conducted to validate the efficacy of the λ-PI algorithm.
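The abstract's fixed-point view can be illustrated with a minimal model-based sketch. The code below assumes the standard discrete-time LQR Bellman (Riccati) operator T(P) = Q + AᵀPA − AᵀPB(R + BᵀPB)⁻¹BᵀPA and a λ-weighted geometric mixture of its iterates as a stand-in for the paper's composite Bellman operator; the function names (`bellman_op`, `lambda_composite_op`), the truncation depth `K`, and the choice λ = 0.5 are all illustrative assumptions, and the paper's actual weighted/composite operators and model-free off-policy scheme are not reproduced here.

```python
import numpy as np

def bellman_op(P, A, B, Q, R):
    """Standard discrete-time LQR Bellman (Riccati) operator:
    T(P) = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA."""
    G = R + B.T @ P @ B
    return Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(G, B.T @ P @ A)

def lambda_composite_op(P, A, B, Q, R, lam=0.5, K=50):
    """Illustrative lambda-weighted operator: a truncated geometric mixture
    (1-lam) * sum_{k=0}^{K-1} lam^k T^{k+1}(P), renormalized so the
    truncated weights sum to one. lam=0 recovers one step of VI; lam->1
    approaches full policy evaluation, mirroring the PI/VI interpolation."""
    acc = np.zeros_like(P)
    Pk = P
    for k in range(K):
        Pk = bellman_op(Pk, A, B, Q, R)   # T^{k+1}(P)
        acc += (1.0 - lam) * lam**k * Pk
    return acc / (1.0 - lam**K)           # renormalize truncated weights

# Example: a controllable double-integrator-like system.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Start from P = 0 -- no admissible initial policy is needed.
P = np.zeros((2, 2))
for _ in range(200):
    P = lambda_composite_op(P, A, B, Q, R, lam=0.5)

# A fixed point of the mixture satisfies P = T(P), i.e. the ARE.
residual = np.linalg.norm(P - bellman_op(P, A, B, Q, R))
```

Because every iterate T^{k+1}(P*) of the ARE solution P* equals P* itself, P* is a fixed point of the mixture, which is why the iteration above lands on the same solution as VI while averaging deeper lookahead, loosely mirroring the convergence-rate advantage the abstract claims for λ-PI.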