Article

A survey of average cost problems in deterministic discrete-time control systems

Journal

Journal of Mathematical Analysis and Applications

Publisher

Academic Press Inc. Elsevier Science
DOI: 10.1016/j.jmaa.2022.126906

Keywords

Average cost; Markov decision processes; Dynamic programming; Discrete-time systems

Abstract

This paper concerns optimal control problems for infinite-horizon discrete-time deterministic systems with the long-run average cost (AC) criterion. This optimality criterion can be traced back to a paper by Bellman [6] for a class of Markov decision processes (MDPs). We present a survey of some of the main approaches to study the AC problem, namely, the AC optimality (or dynamic programming) equation, the steady state approach, and the vanishing discount approach, emphasizing the difference between the deterministic control problem and the corresponding (stochastic) MDP. Several examples illustrate these approaches and related results. We also state some open problems. © 2022 Elsevier Inc. All rights reserved.
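
For orientation, the following is a minimal sketch of the three approaches named in the abstract, in generic notation (the symbols F, c, A(x), \rho, h, and V_\alpha below are standard choices in this literature, not necessarily the paper's own, and the stated identities hold only under appropriate assumptions):

```latex
% Deterministic control model: x_{t+1} = F(x_t, a_t), with a_t \in A(x_t) and
% running cost c(x_t, a_t). Long-run average cost (AC) of a policy \pi from x_0 = x:
\[
  J(x,\pi) \;=\; \limsup_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} c(x_t, a_t).
\]
% (1) AC optimality equation (ACOE): find a constant \rho and a function h with
\[
  \rho + h(x) \;=\; \min_{a \in A(x)} \bigl[\, c(x,a) + h\bigl(F(x,a)\bigr) \bigr];
\]
% under suitable conditions, \rho is the optimal average cost and any selector
% attaining the minimum defines an AC-optimal policy.
% (2) Steady state approach: optimize over equilibria of the dynamics,
\[
  \rho_{\mathrm{ss}} \;=\; \min \bigl\{\, c(x,a) \,:\, a \in A(x),\ F(x,a) = x \,\bigr\},
\]
% and ask when \rho_{\mathrm{ss}} coincides with the optimal average cost.
% (3) Vanishing discount approach: recover \rho from the \alpha-discounted value
% functions V_\alpha as the discount factor increases to one,
\[
  \rho \;=\; \lim_{\alpha \uparrow 1} (1-\alpha)\, V_\alpha(x).
\]
```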
