Journal
JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS
Volume 522, Issue 1, Pages -
Publisher
ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jmaa.2022.126906
Keywords
Average cost; Markov decision processes; Dynamic programming; Discrete time systems
This paper concerns optimal control problems for infinite-horizon discrete-time deterministic systems with the long-run average cost (AC) criterion. This optimality criterion can be traced back to a paper by Bellman [6] for a class of Markov decision processes (MDPs). We present a survey of some of the main approaches to study the AC problem, namely, the AC optimality (or dynamic programming) equation, the steady state approach, and the vanishing discount approach, emphasizing the difference between the deterministic control problem and the corresponding (stochastic) MDP. Several examples illustrate these approaches and related results. We also state some open problems. (c) 2022 Elsevier Inc. All rights reserved.
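The abstract's vanishing discount approach can be illustrated on a toy example (not taken from the paper): for a deterministic two-state cycle with stage costs 1 and 3, the long-run average cost is (1 + 3) / 2 = 2, and the normalized discounted value (1 − α)V_α converges to that average cost as the discount factor α tends to 1. A minimal sketch, with all names and the example system hypothetical:

```python
# Toy illustration of the vanishing discount approach (hypothetical example):
# a deterministic two-state cycle 0 -> 1 -> 0 -> ... with stage costs
# c = (1.0, 3.0); the long-run average cost of the cycle is (1 + 3) / 2 = 2.

def discounted_value(alpha, c=(1.0, 3.0), n_terms=100_000):
    """Total discounted cost starting at state 0, truncated after n_terms stages."""
    total, state = 0.0, 0
    for t in range(n_terms):
        total += (alpha ** t) * c[state]
        state = 1 - state  # deterministic transition to the other state
    return total

# (1 - alpha) * V_alpha(0) should approach the average cost 2 as alpha -> 1.
for alpha in (0.9, 0.99, 0.999):
    print(alpha, (1 - alpha) * discounted_value(alpha))
```

Here (1 − α)V_α(0) = (1 + 3α)/(1 + α) in closed form, which makes the convergence to 2 explicit; the survey's point is that for general deterministic systems such limits need careful justification.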
Authors