Article

Logarithmic Regret for Episodic Continuous-Time Linear-Quadratic Reinforcement Learning over a Finite-Time Horizon

Journal

Journal of Machine Learning Research
Volume 23, Issue -, Pages -

Publisher

Microtome Publishing

Keywords

continuous-time; stochastic control; linear-quadratic; episodic reinforcement learning; regret analysis

Abstract

This paper studies finite-time horizon continuous-time linear-quadratic reinforcement learning problems. A least-squares algorithm based on continuous-time observations and controls is proposed, and a logarithmic regret bound is established. Furthermore, a practically implementable least-squares algorithm based on discrete-time observations and piecewise constant controls is introduced.
We study finite-time horizon continuous-time linear-quadratic reinforcement learning problems in an episodic setting, where both the state and control coefficients are unknown to the controller. We first propose a least-squares algorithm based on continuous-time observations and controls, and establish a logarithmic regret bound of magnitude O((ln M)(ln ln M)), with M being the number of learning episodes. The analysis consists of two components: perturbation analysis, which exploits the regularity and robustness of the associated Riccati differential equation; and parameter estimation error, which relies on sub-exponential properties of continuous-time least-squares estimators. We further propose a practically implementable least-squares algorithm based on discrete-time observations and piecewise constant controls, which achieves similar logarithmic regret with an additional term depending explicitly on the time stepsizes used in the algorithm.
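Since the listing carries no code, the following is a minimal sketch of the kind of scheme the abstract describes, specialised to a scalar system and not taken from the paper itself: each episode is simulated under the piecewise constant greedy feedback obtained from the current Riccati solution, and the drift coefficients (A, B) are then refit by least squares on the observed state increments. All concrete choices below (the scalar dynamics dX_t = (A X_t + B u_t) dt + dW_t, the cost weights Q, R, G, the horizon, the step count, and the initial guesses) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# Illustrative problem data (assumptions, not values from the paper):
# scalar dynamics dX_t = (A X_t + B u_t) dt + dW_t on [0, T], with cost
# E[ int_0^T (Q X_t^2 + R u_t^2) dt + G X_T^2 ].
A_true, B_true = 0.3, 0.8   # unknown to the learner
Q, R, G, T = 1.0, 1.0, 1.0, 1.0
N = 100                     # time steps per episode
grid = np.linspace(0.0, T, N + 1)
dt = T / N

def feedback_gain(A, B):
    """Solve the scalar Riccati ODE P'(t) = -(2*A*P - B^2*P^2/R + Q),
    P(T) = G, backwards in time; return K(t_k) = B*P(t_k)/R on the grid,
    so that the greedy control is u_t = -K(t) X_t."""
    sol = solve_ivp(lambda t, P: -(2.0 * A * P - (B * P) ** 2 / R + Q),
                    (T, 0.0), [G], t_eval=grid[::-1], rtol=1e-8)
    return B * sol.y[0][::-1] / R   # reorder to forward time

def run_episode(K):
    """Simulate one episode (Euler-Maruyama) under the piecewise constant
    feedback u = -K(t_k) X(t_k); return regressors and state increments."""
    X, rows, incr = 0.0, [], []
    for k in range(N):
        u = -K[k] * X
        X_next = (X + (A_true * X + B_true * u) * dt
                  + np.sqrt(dt) * rng.standard_normal())
        rows.append([X * dt, u * dt])   # increment ~ A*(X dt) + B*(u dt)
        incr.append(X_next - X)
        X = X_next
    return rows, incr

# Episodic loop: act greedily w.r.t. the current estimate, then refit
# (A, B) by ordinary least squares on all increments seen so far.
A_hat, B_hat = 0.0, 1.0     # crude initial guesses (assumptions)
Z, y = [], []
for m in range(1, 51):
    rows, incr = run_episode(feedback_gain(A_hat, B_hat))
    Z += rows
    y += incr
    (A_hat, B_hat), *_ = np.linalg.lstsq(np.array(Z), np.array(y), rcond=None)

print(f"after {m} episodes: A_hat={A_hat:.3f}, B_hat={B_hat:.3f}")
```

One design point worth noting in this toy version: each regression row is proportional to (1, -K(t_k)), so a constant feedback gain would make the two columns collinear and (A, B) unidentifiable. The time-varying Riccati gain breaks that collinearity, which loosely mirrors why a greedy least-squares scheme of this kind can learn without injected exploration noise.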
