Article

Discrete-Time Optimal Control via Local Policy Iteration Adaptive Dynamic Programming

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 47, Issue 10, Pages 3367-3379

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCYB.2016.2586082

Keywords

Adaptive critic designs; adaptive dynamic programming (ADP); approximate dynamic programming; local policy iteration; neuro-dynamic programming; nonlinear systems; optimal control

Funding

  1. National Natural Science Foundation of China [61233001, 61273140, 61374105, 61503379, 61533017, 61304079, U1501251]

Abstract

In this paper, a discrete-time optimal control scheme is developed via a novel local policy iteration adaptive dynamic programming algorithm. In the discrete-time local policy iteration algorithm, the iterative value function and iterative control law are updated on a subset of the state space, which reduces the computational burden compared with the traditional policy iteration algorithm. Convergence properties of the local policy iteration algorithm are established, showing that the iterative value function is monotonically nonincreasing and converges to the optimum under some mild conditions. The admissibility of the iterative control law is proven, which shows that the control system can be stabilized under any of the iterative control laws, even when the control law is updated on only a subset of the state space. Finally, two simulation examples illustrate the performance of the developed method.
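The core idea described in the abstract, that improving the control law on only a subset of the state space still yields a monotonically nonincreasing value function, can be sketched on a toy finite-state problem. The dynamics, stage costs, discount factor, and subset-selection rule below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

# Toy finite-state, finite-action system (assumed for illustration only).
np.random.seed(0)
n_states, n_actions = 20, 4
next_state = np.random.randint(n_states, size=(n_states, n_actions))  # deterministic transitions
cost = np.random.rand(n_states, n_actions)                            # stage cost (utility)
gamma = 0.9                                                           # discount factor

policy = np.zeros(n_states, dtype=int)  # initial control law

def evaluate(policy, sweeps=200):
    """Fixed-point iteration for the value function of a given control law."""
    idx = np.arange(n_states)
    V = np.zeros(n_states)
    for _ in range(sweeps):
        V = cost[idx, policy] + gamma * V[next_state[idx, policy]]
    return V

for it in range(30):
    V = evaluate(policy)
    # Local step: improve the control law only on a subset of the state
    # space (here a random half); states outside the subset keep their
    # previous action, which is what makes the iteration "local".
    subset = np.random.choice(n_states, size=n_states // 2, replace=False)
    Q = cost + gamma * V[next_state]          # Q[s, a] under the current value
    policy[subset] = np.argmin(Q[subset], axis=1)

V_final = evaluate(policy)
```

Even with partial updates, each improved control law is no worse than its predecessor at every state, so the value function decreases pointwise toward the optimum, mirroring the convergence property stated above.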
