Journal
NEURAL COMPUTING & APPLICATIONS
Volume 23, Issue 7-8, Pages 1843-1850
Publisher
SPRINGER LONDON LTD
DOI: 10.1007/s00521-012-1249-y
Keywords
Adaptive dynamic programming; Reinforcement learning; Policy iteration; Adaptive optimal control; Neural network; Online control; Nonlinear system
Funding
- National Natural Science Foundation of China [61034002, 61233001, 61273140]
Abstract
This paper develops an online policy-iteration algorithm for infinite-horizon optimal control of continuous-time nonlinear systems. The method employs a discounted value function, which covers a more general class of optimal control problems. Without knowledge of the internal system dynamics, the algorithm converges uniformly online to the optimal control, which is the solution of the modified Hamilton-Jacobi-Bellman equation. Using two neural networks, the algorithm finds suitable approximations of both the optimal control and the optimal cost. Uniform convergence to the optimal control is shown, guaranteeing the stability of the nonlinear system. A simulation example illustrates the effectiveness and applicability of the approach.
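The abstract's policy-iteration scheme alternates between evaluating the cost of the current policy (solving the discounted Bellman/HJB equation for a fixed control) and improving the policy by minimizing the Hamiltonian. The sketch below is an illustrative, model-based linear-quadratic stand-in for that loop, not the paper's model-free neural-network algorithm: for a linear system with quadratic discounted cost, the value function is exactly V(x) = x'Px, so each policy-evaluation step reduces to a Lyapunov equation and each improvement step to a gain update. All system matrices, the discount rate `gamma`, and the function name `policy_iteration` are assumptions introduced for illustration.

```python
# Hedged sketch: policy iteration for a *discounted* continuous-time LQR
# problem, as a linear-quadratic analogue of the paper's nonlinear setting.
# With cost  J = ∫ exp(-gamma*t) (x'Qx + u'Ru) dt  and  V(x) = x'Px, the
# discounted HJB for a fixed gain u = -Kx becomes a Lyapunov equation in P.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative open-loop-stable system, so K0 = 0 is an admissible policy.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)           # state weighting
R = np.array([[1.0]])   # control weighting
gamma = 0.1             # discount rate in exp(-gamma*t)

def policy_iteration(A, B, Q, R, gamma, K0, iters=50, tol=1e-10):
    K = K0
    P_prev = np.zeros_like(A)
    for _ in range(iters):
        # Policy evaluation: the discount shifts the closed-loop matrix by
        # -(gamma/2) I, giving  Ac'P + P Ac = -(Q + K'RK).
        Ac = A - B @ K - 0.5 * gamma * np.eye(A.shape[0])
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        # Policy improvement: minimizing the Hamiltonian yields
        # u = -R^{-1} B'P x, i.e. K = R^{-1} B'P.
        K = np.linalg.solve(R, B.T @ P)
        if np.max(np.abs(P - P_prev)) < tol:
            break
        P_prev = P
    return P, K

P, K = policy_iteration(A, B, Q, R, gamma, np.zeros((1, 2)))
```

In the paper's setting the dynamics are nonlinear and partly unknown, so this exact Lyapunov solve is unavailable; the critic network plays the role of P (approximating the cost) and the actor network plays the role of K (approximating the control), with both tuned online from measured data.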