Article

A survey of average cost problems in deterministic discrete-time control systems

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jmaa.2022.126906

Keywords

Average cost; Markov decision processes; Dynamic programming; Discrete time systems

Abstract

This paper concerns optimal control problems for infinite-horizon discrete-time deterministic systems with the long-run average cost (AC) criterion. This optimality criterion can be traced back to a paper by Bellman [6] for a class of Markov decision processes (MDPs). We present a survey of some of the main approaches to study the AC problem, namely, the AC optimality (or dynamic programming) equation, the steady state approach, and the vanishing discount approach, emphasizing the difference between the deterministic control problem and the corresponding (stochastic) MDP. Several examples illustrate these approaches and related results. We also state some open problems. (c) 2022 Elsevier Inc. All rights reserved.
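The vanishing discount approach mentioned in the abstract can be illustrated numerically: for a discount factor β close to 1, the normalized discounted value (1 − β)V_β(x) approximates the optimal long-run average cost. Below is a minimal sketch on a toy two-state deterministic system; the dynamics, costs, and function names are our own illustration and are not taken from the paper.

```python
# Vanishing discount approach on a toy deterministic discrete-time system.
# Dynamics x_{t+1} = f(x, a) and one-stage cost c(x, a) on states {0, 1},
# actions {0, 1}. This example is illustrative only, not from the survey.

F = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}          # deterministic transitions
C = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 1.0}  # one-stage costs

def discounted_value(beta, tol=1e-10, max_iter=100_000):
    """Value iteration for V_beta(x) = min_a [c(x,a) + beta * V_beta(f(x,a))]."""
    V = {0: 0.0, 1: 0.0}
    for _ in range(max_iter):
        new = {x: min(C[x, a] + beta * V[F[x, a]] for a in (0, 1))
               for x in (0, 1)}
        if max(abs(new[x] - V[x]) for x in V) < tol:
            return new
        V = new
    return V

# As beta -> 1, (1 - beta) * V_beta(x) approaches the optimal average cost.
# Here the cheapest long-run behaviour is to stay in state 1 at cost 1 per
# step, so the optimal average cost is 1.0 from either initial state.
for beta in (0.9, 0.99, 0.999):
    V = discounted_value(beta)
    print(beta, [(1 - beta) * V[x] for x in (0, 1)])
```

The survey's point, however, is that for deterministic systems these limits need not behave as smoothly as in ergodic stochastic MDPs, which is why the deterministic case merits separate treatment.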
