Journal
JOURNAL OF APPLIED PROBABILITY
Volume 58, Issue 2, Pages 523-550
Publisher
CAMBRIDGE UNIV PRESS
DOI: 10.1017/jpr.2020.105
Keywords
Continuous-time Markov decision process; unbounded transition and cost rate; risk-sensitive average optimality equation; optimal policy; finite approximation
Funding
- National Natural Science Foundation of China [11931018, 61773411, 11961005]
- Guangdong Province Key Laboratory of Computational Science at the Sun Yat-Sen University [2020B1212060032]
This paper studies risk-sensitive average optimization for denumerable continuous-time Markov decision processes, deriving an average optimality equation and proving the existence of solutions. It also shows that average optimal policies for models with countably many states can be approximated by those of finite-state models.
This paper considers risk-sensitive average optimization for denumerable continuous-time Markov decision processes (CTMDPs), in which the transition and cost rates are allowed to be unbounded, and the policies can be randomized and history-dependent. We first derive the multiplicative dynamic programming principle and some new facts for risk-sensitive finite-horizon CTMDPs. Then, we establish the existence and uniqueness of a solution to the risk-sensitive average optimality equation (RS-AOE) through the results for risk-sensitive finite-horizon CTMDPs developed here, and also prove the existence of an optimal stationary policy via the RS-AOE. Furthermore, for the case of finitely many actions available at each state, we construct a sequence of finite-state CTMDP models whose optimal stationary policies can be obtained by a policy iteration algorithm in a finite number of iterations, and we prove that an average optimal policy for the case of countably many states can be approximated by those of the finite-state models. Finally, we illustrate the conditions and the iteration algorithm with an example.
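For the finite-state, finite-action case mentioned in the abstract, the RS-AOE reduces to a principal-eigenvalue problem, and policy iteration alternates eigenpair evaluation with pointwise improvement. The following is a minimal NumPy sketch of this generic scheme, not the paper's algorithm: the function name `rs_policy_iteration` and the encoding of the model (one rate matrix `Q[a]` and cost vector `c[a]` per action) are illustrative assumptions, and irreducibility of the rate matrices is assumed so that the Perron eigenpair is well defined.

```python
import numpy as np

def rs_policy_iteration(Q, c, max_iter=100):
    """Policy iteration for a finite risk-sensitive average-cost CTMDP.

    Hypothetical sketch: Q[a] is the (n, n) transition-rate matrix under
    action a (rows sum to zero), c[a] the cost-rate vector. For a fixed
    stationary policy f, the RS-AOE restricted to f reads
        lambda_f * h(i) = c(i, f(i)) h(i) + sum_j q(j | i, f(i)) h(j),
    so (lambda_f, h_f) is the principal eigenpair of the Metzler matrix
    diag(c_f) + Q_f (Perron-Frobenius).
    """
    n = Q[0].shape[0]
    policy = np.zeros(n, dtype=int)
    lam, h = 0.0, np.ones(n)
    for _ in range(max_iter):
        # Evaluation: principal eigenpair of diag(c_f) + Q_f.
        M = np.array([Q[policy[i]][i] for i in range(n)])
        M = M + np.diag([c[policy[i]][i] for i in range(n)])
        vals, vecs = np.linalg.eig(M)
        k = np.argmax(vals.real)          # eigenvalue with largest real part
        lam, h = vals[k].real, vecs[:, k].real
        if h.sum() < 0:                   # fix the sign so that h > 0
            h = -h
        # Improvement: minimize c(i,a) h(i) + sum_j q(j|i,a) h(j) over a.
        new_policy = np.array([
            min(range(len(Q)), key=lambda a: c[a][i] * h[i] + Q[a][i] @ h)
            for i in range(n)
        ])
        if np.array_equal(new_policy, policy):
            break                         # policy stable: stop iterating
        policy = new_policy
    return lam, h / h.max(), policy
```

Because the action set is finite, only finitely many stationary policies exist, which is why the abstract's claim that the algorithm terminates in a finite number of iterations is plausible for this scheme as well.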