Article

Nonlinear-nonquadratic optimal and inverse optimal control for stochastic dynamical systems

Journal

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL
Volume 27, Issue 18, Pages 4723-4751

Publisher

WILEY
DOI: 10.1002/rnc.3829

Keywords

stochastic stability; Lyapunov functions; inverse optimal control; stochastic Hamilton-Jacobi-Bellman equation; polynomial cost functionals; multilinear forms

Funding

  1. Air Force Office of Scientific Research [FA9550-16-1-0100]


In this paper, we develop a unified framework for optimal nonlinear analysis and feedback control of nonlinear stochastic dynamical systems. Specifically, we provide a simplified, tutorial treatment of stochastic optimal control, focusing on the connections between stochastic Lyapunov theory and stochastic Hamilton-Jacobi-Bellman theory. In particular, we show that asymptotic stability in probability of the closed-loop nonlinear system is guaranteed by a Lyapunov function that is the solution to the steady-state form of the stochastic Hamilton-Jacobi-Bellman equation, thus guaranteeing both stochastic stability and optimality. In addition, we develop optimal feedback controllers for affine nonlinear systems using an inverse optimality framework tailored to the stochastic stabilization problem. These results are then used to extend nonlinear feedback controllers from the literature that minimize general polynomial and multilinear performance criteria. Copyright (c) 2017 John Wiley & Sons, Ltd.
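As context for the abstract's claim that the Lyapunov function solves the steady-state stochastic Hamilton-Jacobi-Bellman equation, a standard form of that equation for an affine Itô diffusion is sketched below. The notation (f, G, D, L, V) is generic and assumed here, not taken from the paper itself:

```latex
% Assumed affine It\^{o} diffusion and infinite-horizon cost (generic notation):
%   dx(t) = [f(x) + G(x)u]\,dt + D(x)\,dw(t),
%   J(x_0, u(\cdot)) = \mathbb{E}\!\left[\int_0^\infty L(x(t), u(t))\,dt\right].
% Steady-state stochastic HJB equation satisfied by the value function V:
0 = \min_{u}\left[\, L(x,u) + V'(x)\bigl(f(x) + G(x)u\bigr)
      + \tfrac{1}{2}\operatorname{tr}\bigl(D(x)^{\top} V''(x)\, D(x)\bigr) \right]
```

For a cost that is quadratic in the control, say L(x,u) = L_1(x) + u^T R_2(x) u with R_2(x) positive definite (again an assumed, standard structure), the pointwise minimization yields the familiar feedback u = -(1/2) R_2^{-1}(x) G(x)^T V'(x)^T; relative to the deterministic HJB equation, the only new term is the trace term arising from Itô's formula.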

Authors

