Article

Logarithmic Regret for Episodic Continuous-Time Linear-Quadratic Reinforcement Learning over a Finite-Time Horizon

Publisher

MICROTOME PUBL

Keywords

continuous-time; stochastic control; linear-quadratic; episodic reinforcement learning; regret analysis

Abstract

This paper studies finite-time horizon continuous-time linear-quadratic reinforcement learning problems. A least-squares algorithm based on continuous-time observations and controls is proposed and a logarithmic regret bound is established. Furthermore, a practically implementable least-squares algorithm based on discrete-time observations and piecewise constant controls is introduced.
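For context, a minimal sketch of the finite-horizon linear-quadratic setting the summary refers to, in standard notation. The paper's exact dynamics and cost may differ; the symbols A, B, Q, R, G below are illustrative assumptions, with A and B the unknown state and control coefficients:

```latex
\begin{align*}
  % Controlled state dynamics; the coefficients A, B are unknown to the controller:
  dX_t &= (A X_t + B u_t)\,dt + dW_t, \qquad X_0 = x_0, \quad t \in [0,T],\\
  % Expected quadratic cost to be minimized over admissible controls u:
  J(u) &= \mathbb{E}\Big[\int_0^T \big(X_t^\top Q X_t + u_t^\top R u_t\big)\,dt
          + X_T^\top G\, X_T\Big],\\
  % Riccati differential equation behind the optimal feedback
  % u_t^* = -R^{-1} B^\top P(t) X_t; its regularity and robustness drive the
  % perturbation analysis mentioned in the abstract:
  P'(t) &= -A^\top P(t) - P(t) A + P(t) B R^{-1} B^\top P(t) - Q, \qquad P(T) = G.
\end{align*}
```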
We study finite-time horizon continuous-time linear-quadratic reinforcement learning problems in an episodic setting, where both the state and control coefficients are unknown to the controller. We first propose a least-squares algorithm based on continuous-time observations and controls, and establish a logarithmic regret bound of magnitude O((ln M)(ln ln M)), with M being the number of learning episodes. The analysis consists of two components: perturbation analysis, which exploits the regularity and robustness of the associated Riccati differential equation; and parameter estimation error, which relies on sub-exponential properties of continuous-time least-squares estimators. We further propose a practically implementable least-squares algorithm based on discrete-time observations and piecewise constant controls, which achieves similar logarithmic regret with an additional term depending explicitly on the time stepsizes used in the algorithm.
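To make the discrete-time variant concrete, here is a minimal sketch of the least-squares step: estimating the unknown coefficients (A, B) from states observed on a time grid with stepsize dt under piecewise constant controls. The Euler-type regression and the function name estimate_coefficients are illustrative assumptions, not the authors' exact estimator:

```python
import numpy as np

def estimate_coefficients(X, U, dt):
    """Least-squares estimate of (A, B) from discretely observed states.

    X : array of shape (N + 1, d) -- states X_{t_0}, ..., X_{t_N} on the grid
    U : array of shape (N, m)     -- piecewise constant control on each interval
    dt : float                    -- time stepsize of the observation grid

    Fits the Euler-type regression
        (X_{k+1} - X_k) / dt  ~  A X_k + B u_k + noise
    by stacking regressors Z_k = [X_k, u_k] and solving a linear least-squares
    problem. This is an illustrative sketch, not the paper's exact estimator.
    """
    dX = (X[1:] - X[:-1]) / dt             # finite-difference drift targets, (N, d)
    Z = np.hstack([X[:-1], U])             # regressors [X_k, u_k], (N, d + m)
    theta, *_ = np.linalg.lstsq(Z, dX, rcond=None)  # solves Z @ theta ~ dX
    d = X.shape[1]
    A_hat = theta[:d].T                    # first block of rows -> estimate of A
    B_hat = theta[d:].T                    # remaining rows -> estimate of B
    return A_hat, B_hat

# Hypothetical usage: simulate one scalar episode and recover (A, B).
rng = np.random.default_rng(0)
A, B, dt, N = np.array([[0.5]]), np.array([[1.0]]), 0.01, 1000
X = np.zeros((N + 1, 1))
U = rng.normal(size=(N, 1))                # piecewise constant exploratory controls
for k in range(N):
    X[k + 1] = X[k] + (A @ X[k] + B @ U[k]) * dt + np.sqrt(dt) * rng.normal(size=1)
A_hat, B_hat = estimate_coefficients(X, U, dt)
```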
