Article

ON GRADUAL-IMPULSE CONTROL OF CONTINUOUS-TIME MARKOV DECISION PROCESSES WITH EXPONENTIAL UTILITY

Journal

ADVANCES IN APPLIED PROBABILITY
Volume 53, Issue 2, Pages 301-334

Publisher

CAMBRIDGE UNIV PRESS
DOI: 10.1017/apr.2020.64

Keywords

Continuous-time Markov decision processes; dynamic programming; gradual-impulse control; optimality equation

Funding

  1. Royal Society [IE160503]
  2. Daiwa Anglo-Japanese Foundation (UK) [4530/12801]
  3. EPSRC [EP/T018216/1, EP/I001328/1] Funding Source: UKRI


Abstract

We consider a gradual-impulse control problem of continuous-time Markov decision processes, where the system performance is measured by the expectation of the exponential utility of the total cost. We show, under natural conditions on the system primitives, the existence of a deterministic stationary optimal policy out of a more general class of policies that allow multiple simultaneous impulses, randomized selection of impulses with random effects, and accumulation of jumps. After characterizing the value function using the optimality equation, we reduce the gradual-impulse control problem to an equivalent simple discrete-time Markov decision process, whose action space is the union of the sets of gradual and impulsive actions.
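To make the performance criterion concrete: the exponential utility of the total cost means the controller minimizes E[exp(γ·C)], where C is the accumulated cost and γ > 0 is a risk-sensitivity parameter, rather than the risk-neutral E[C]. The following is a minimal illustrative sketch, not the paper's model: a hypothetical two-state toy in which state 0 is left for an absorbing state at an exponential rate depending on a gradual action, and the criterion is estimated by Monte Carlo. All names, rates, and cost parameters here are assumptions made up for illustration.

```python
import math
import random

# Hypothetical toy model (not from the paper): in state 0 the controller
# applies a gradual action a in {"slow", "fast"}; "fast" leaves state 0 at a
# higher rate but incurs a higher running cost per unit time.
RATES = {"slow": 0.5, "fast": 2.0}        # transition rate out of state 0
COST_RATES = {"slow": 1.0, "fast": 3.0}   # cost accrued per unit time in state 0

def total_cost(action, rng):
    """Cost accumulated until absorption: cost rate times an Exp(rate) sojourn."""
    sojourn = rng.expovariate(RATES[action])
    return COST_RATES[action] * sojourn

def exponential_utility(action, gamma, n=200_000, seed=0):
    """Monte Carlo estimate of E[exp(gamma * total cost)], to be minimized.

    Finite only when gamma * cost_rate < rate; this mirrors the kind of
    growth condition the exponential-utility criterion requires.
    """
    rng = random.Random(seed)
    return sum(math.exp(gamma * total_cost(action, rng)) for _ in range(n)) / n
```

In this toy the sojourn time is exponential, so E[exp(γ·c·T)] with T ~ Exp(λ) has the closed form λ/(λ − γc) whenever γc < λ, which gives a direct check on the simulation; as γ ↓ 0 the criterion's certainty equivalent log E[exp(γC)]/γ recovers the risk-neutral expected cost.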

