Article

Continuous-Time Q-Learning for Infinite-Horizon Discounted Cost Linear Quadratic Regulator Problems

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 45, Issue 2, Pages 165-176

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TCYB.2014.2322116

Keywords

Approximate dynamic programming (ADP); continuous-time dynamical systems; infinite-horizon discounted cost function; integral reinforcement learning (IRL); optimal control; Q-learning; value iteration (VI)

Funding

  1. Indo-U.S. Science and Technology Forum (IUSSTF), New Delhi, India, under NSF Grants [ECCS-1128050, IIS-1208623]
  2. AFOSR EOARD [13-3055]
  3. China NNSF Grant [61120106011]
  4. China Education Ministry Project 111 [B08015]
  5. Directorate for Computer & Information Science & Engineering
  6. Division of Information & Intelligent Systems [1208623] (Funding Source: National Science Foundation)
  7. Division of Electrical, Communications & Cyber Systems
  8. Directorate for Engineering [1128050, 1405173] (Funding Source: National Science Foundation)


This paper presents a Q-learning method for solving the discounted linear quadratic regulator (LQR) problem for continuous-time (CT) continuous-state systems. Most methods available in the existing literature for solving the LQR problem for CT systems require partial or complete knowledge of the system dynamics. Q-learning is effective for systems with unknown dynamics, but it has generally been well understood only for discrete-time systems. The contribution of this paper is a Q-learning methodology for CT systems that solves the LQR problem without any knowledge of the system dynamics. A natural and rigorously justified parameterization of the Q-function is given in terms of the state, the control input, and its derivatives. This parameterization enables the implementation of an online Q-learning algorithm for CT systems. Simulation results supporting the theoretical development are also presented.
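
The paper's algorithm is a continuous-time integral reinforcement learning scheme; as a rough illustration of the underlying idea only, the sketch below implements a standard discrete-time, model-free value-iteration Q-learning routine for a discounted LQR problem. The system matrices A and B, the weights, the discount factor, and the sample sizes are hypothetical choices and do not come from the paper, and the continuous-time parameterization of the Q-function in terms of the input derivatives described in the abstract is not reproduced here.

import numpy as np

# Illustrative sketch only: discrete-time Q-learning (value iteration)
# for a discounted LQR, NOT the authors' continuous-time algorithm.
# All matrices and constants below are hypothetical choices.

np.random.seed(0)
A = np.array([[1.0, 0.1], [0.0, 0.9]])   # assumed plant (unknown to the learner)
B = np.array([[0.0], [0.1]])
Qc = np.eye(2)                            # state weight
Rc = np.array([[0.1]])                    # input weight
gamma = 0.95                              # discount factor
n, m = 2, 1
p = n + m                                 # dimension of z = [x; u]

def quad_features(z):
    # Features for the symmetric quadratic form z^T H z (upper triangle).
    feats = []
    for i in range(p):
        for j in range(i, p):
            scale = 1.0 if i == j else 2.0
            feats.append(scale * z[i] * z[j])
    return np.array(feats)

def unpack_H(theta):
    # Rebuild the symmetric matrix H from its upper-triangular parameters.
    H = np.zeros((p, p))
    k = 0
    for i in range(p):
        for j in range(i, p):
            H[i, j] = H[j, i] = theta[k]
            k += 1
    return H

def greedy_value_and_gain(H):
    # Minimize z^T H z over u for fixed x: feedback gain K and value matrix M.
    Hxx, Hxu, Huu = H[:n, :n], H[:n, n:], H[n:, n:]
    K = np.linalg.solve(Huu, Hxu.T)       # greedy policy u* = -K x
    M = Hxx - Hxu @ K                     # min_u Q(x, u) = x^T M x
    return K, M

H = np.eye(p)                             # initial Q-function guess
for it in range(50):                      # value-iteration sweeps
    _, M = greedy_value_and_gain(H)
    Phi, y = [], []
    for _ in range(200):                  # random exploratory samples
        x = np.random.randn(n)
        u = np.random.randn(m)
        x_next = A @ x + B @ u            # transition generated by the plant
        cost = x @ Qc @ x + u @ Rc @ u
        target = cost + gamma * x_next @ M @ x_next
        Phi.append(quad_features(np.concatenate([x, u])))
        y.append(target)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = unpack_H(theta)                   # least-squares Bellman update of H

K, _ = greedy_value_and_gain(H)
print("Learned feedback gain K:", K)

Note that the update never uses A or B directly; they only generate the sample transitions, mirroring the model-free character of Q-learning emphasized in the abstract.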
