Article

Markov decision processes with burstiness constraints

Journal

EUROPEAN JOURNAL OF OPERATIONAL RESEARCH
Volume 312, Issue 3, Pages 877-889

Publisher

ELSEVIER
DOI: 10.1016/j.ejor.2023.07.045

Keywords

Dynamic programming; Constrained Markov decision processes; Burstiness constraints


We consider a Markov Decision Process (MDP), over a finite or infinite horizon, augmented by so-called (sigma, rho)-burstiness constraints. Such constraints, which were introduced within the framework of network calculus, are meant to limit some additive quantity to a given rate over any time interval, plus a term which allows for occasional and limited bursts. We introduce this class of constraints for MDP models, and formulate the corresponding constrained optimization problems. Due to the burstiness constraints, constrained optimal policies are generally history-dependent. We use a recursive form of the constraints to define an augmented-state model, for which sufficiency of Markov or stationary policies is recovered and the standard theory may be applied, albeit over a larger state space. The analysis is mainly devoted to a characterization of feasible policies, followed by application to the constrained MDP optimization problem. A simple queuing example serves to illustrate some of the concepts and calculations involved. (c) 2023 Elsevier B.V. All rights reserved.
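The abstract's key device, replacing the interval-by-interval burstiness constraint with an equivalent recursive form that fits into an augmented state, can be sketched as a standard token-bucket check. This is a minimal illustration under assumed conventions (the rate rho accrues per step after the first, and the budget is capped at sigma); the function names and the exact windowing convention are illustrative assumptions, not taken from the paper.

```python
def window_feasible(a, sigma, rho):
    """Direct (sigma, rho) check: for every interval [s, t],
    sum_{i=s}^{t} a[i] <= sigma + rho * (t - s)."""
    n = len(a)
    for s in range(n):
        total = 0.0
        for t in range(s, n):
            total += a[t]
            if total > sigma + rho * (t - s) + 1e-12:
                return False
    return True


def bucket_feasible(a, sigma, rho):
    """Equivalent recursive form: a burst budget b starts at sigma,
    refills at rate rho, is capped at sigma, and each a[t] must fit.
    The scalar b is exactly the extra state the augmented model carries."""
    b = sigma
    for x in a:
        if x > b + 1e-12:
            return False
        b = min(b - x + rho, sigma)
    return True
```

Because the recursive check depends only on the current budget `b`, appending `b` to the MDP state restores the Markov property, which is why stationary policies become sufficient in the augmented model.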

