Article

Model-Free Reinforcement Learning by Embedding an Auxiliary System for Optimal Control of Nonlinear Systems

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TNNLS.2020.3042589

Keywords

Mathematical model; Trajectory; Heuristic algorithms; Optimal control; System dynamics; Artificial neural networks; Convergence; Approximate optimal control design; auxiliary trajectory; completely model-free; integral reinforcement learning (IRL)

Funding

  1. JSPS KAKENHI Grants-in-Aid for Scientific Research [17H03284]

Abstract

In this article, a novel integral reinforcement learning (IRL) algorithm is proposed to solve the optimal control problem for continuous-time nonlinear systems with unknown dynamics. A central challenge in such learning is rejecting the oscillation caused by the externally added probing noise. This article addresses the issue by embedding an auxiliary trajectory, designed as an exciting signal, to learn the optimal solution. First, the auxiliary trajectory is used to decompose the state trajectory of the controlled system. Then, using the decoupled trajectories, a model-free policy iteration (PI) algorithm is developed in which the policy evaluation step and the policy improvement step alternate until convergence to the optimal solution. Notably, an appropriate external input is introduced at the policy improvement step to eliminate the need for knowledge of the input-to-state dynamics. Finally, the algorithm is implemented on an actor-critic structure, where the output weights of the critic neural network (NN) and the actor NN are updated sequentially by least-squares methods. The convergence of the algorithm and the stability of the closed-loop system are guaranteed, and two examples demonstrate the effectiveness of the proposed algorithm.
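For context, the policy-evaluation step in standard IRL (the family this algorithm belongs to) can be written as an integral Bellman equation; the notation below (state cost Q, input weight R, interval length T) is ours, not necessarily the paper's. Along a trajectory of dx/dt = f(x) + g(x)u under a fixed policy u, the value function V satisfies

V(x(t)) = \int_{t}^{t+T} \left( Q(x(\tau)) + u(\tau)^{\top} R \, u(\tau) \right) \mathrm{d}\tau \; + \; V(x(t+T)),

which can be solved for V from measured state and input data alone, with no knowledge of the drift dynamics f(x). The subsequent policy-improvement step, u \leftarrow -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V(x), ordinarily still requires the input dynamics g(x); removing that requirement through an appropriate external input is the point the abstract highlights.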

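A minimal numerical sketch of such a model-free PI loop follows. Everything specific here is an illustrative assumption, not taken from the paper: the toy plant dx/dt = -x + u, the quadratic cost x^2 + u^2, the single critic feature x^2, and a known input gain g = 1 (the paper's auxiliary trajectory and external input are designed to remove this last assumption).

    import numpy as np

    rng = np.random.default_rng(0)

    def rollout(x0, k, T=0.05, dt=1e-3, noise=0.1):
        """Roll the toy plant dx/dt = -x + u under u = -k*x plus probing
        noise; return the terminal state and the integral of the running
        cost x^2 + u^2 over [0, T]."""
        x, cost = x0, 0.0
        for _ in range(int(T / dt)):
            u = -k * x + noise * rng.standard_normal()
            cost += (x**2 + u**2) * dt
            x += (-x + u) * dt  # the plant is used only to generate data
        return x, cost

    k = 0.1  # initial admissible (stabilizing) feedback gain
    for it in range(8):
        # Policy evaluation (model-free): fit the critic V(x) = w * x^2 to
        # the integral Bellman equation w*(x0^2 - xT^2) = integral cost,
        # in a least-squares sense over many short data segments.
        A, b = [], []
        for x0 in np.linspace(-2.0, 2.0, 20):
            xT, c = rollout(x0, k)
            A.append([x0**2 - xT**2])
            b.append(c)
        w = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0][0]
        # Policy improvement: u = -0.5 * R^{-1} * g * dV/dx = -w * x,
        # assuming g = 1 is known here (the paper's external input at this
        # step removes exactly this requirement).
        k = w
        print(f"iteration {it}: critic weight w = gain k = {w:.4f}")
    # For this plant the optimal gain is sqrt(2) - 1, about 0.4142.

The least-squares fit plays the role of the sequential critic update described in the abstract. Note that the probing noise that excites the data also perturbs the evaluation, and that residual bias is the oscillation issue the auxiliary-trajectory decomposition targets; in this toy setting the iteration still settles near the Riccati solution sqrt(2) - 1 ≈ 0.414.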