Article

Bounded-parameter Markov decision processes

Journal

ARTIFICIAL INTELLIGENCE
Volume 122, Issue 1-2, Pages 71-109

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/S0004-3702(00)00047-3

Keywords

decision-theoretic planning; planning under uncertainty; approximate planning; Markov decision processes

Abstract

In this paper, we introduce the notion of a bounded-parameter Markov decision process (BMDP) as a generalization of the familiar exact MDP. A bounded-parameter MDP is a set of exact MDPs specified by giving upper and lower bounds on transition probabilities and rewards (all the MDPs in the set share the same state and action space). BMDPs form an efficiently solvable special case of the already known class of MDPs with imprecise parameters (MDPIPs). Bounded-parameter MDPs can be used to represent variation or uncertainty concerning the parameters of sequential decision problems in cases where no prior probabilities on the parameter values are available. Bounded-parameter MDPs can also be used in aggregation schemes to represent the variation in the transition probabilities for different base states aggregated together in the same aggregate state. We introduce interval value functions as a natural extension of traditional value functions. An interval value function assigns a closed real interval to each state, representing the assertion that the value of that state falls within that interval. An interval value function can be used to bound the performance of a policy over the set of exact MDPs associated with a given bounded-parameter MDP. We describe an iterative dynamic programming algorithm called interval policy evaluation that computes an interval value function for a given BMDP and specified policy. Interval policy evaluation on a policy π computes the most restrictive interval value function that is sound, i.e., that bounds the value function for π in every exact MDP in the set defined by the bounded-parameter MDP. We define optimistic and pessimistic criteria for optimality, and provide a variant of value iteration (Bellman, 1957) that we call interval value iteration that computes policies for a BMDP that are optimal with respect to these criteria. We show that each algorithm we present converges to the desired values in a polynomial number of iterations given a fixed discount factor. (C) 2000 Elsevier Science B.V. All rights reserved.
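The interval value iteration described in the abstract admits a compact sketch. The code below is a minimal illustration under assumptions, not the paper's implementation: the names (extremal_dist, interval_value_iteration), the array layout, and the convergence test are choices of this sketch. The inner step, picking the feasible transition distribution inside the probability bounds that maximizes or minimizes the expected next-state value, is handled here by a standard order-based greedy over successor states, one common way to solve that box-constrained optimization.

```python
import numpy as np

def extremal_dist(p_lo, p_hi, values, maximize):
    """Pick a distribution inside the box [p_lo, p_hi] (summing to 1) that
    maximizes or minimizes the expected value of `values`. Greedy: start
    every successor at its lower bound, then pour the leftover probability
    mass onto successors in value order (best-first to maximize,
    worst-first to minimize). Assumes sum(p_lo) <= 1 <= sum(p_hi)."""
    p = p_lo.astype(float).copy()
    slack = 1.0 - p.sum()              # probability mass still unassigned
    order = np.argsort(values)         # increasing value
    if maximize:
        order = order[::-1]            # decreasing value
    for s in order:
        take = min(p_hi[s] - p[s], slack)
        p[s] += take
        slack -= take
        if slack <= 0.0:
            break
    return p

def interval_value_iteration(P_lo, P_hi, R_lo, R_hi, gamma=0.9, tol=1e-6):
    """Iterate interval Bellman backups until both bounds converge.
    P_lo, P_hi: (nA, nS, nS) bounds on transition probabilities.
    R_lo, R_hi: (nS, nA) bounds on rewards.
    Returns (V_lo, V_hi): per-state lower and upper value bounds."""
    nA, nS, _ = P_lo.shape
    V_lo, V_hi = np.zeros(nS), np.zeros(nS)
    while True:
        new_lo, new_hi = np.empty(nS), np.empty(nS)
        for s in range(nS):
            # Pessimistic backup: nature picks the worst feasible model.
            new_lo[s] = max(
                R_lo[s, a] + gamma *
                extremal_dist(P_lo[a, s], P_hi[a, s], V_lo, False) @ V_lo
                for a in range(nA))
            # Optimistic backup: nature picks the best feasible model.
            new_hi[s] = max(
                R_hi[s, a] + gamma *
                extremal_dist(P_lo[a, s], P_hi[a, s], V_hi, True) @ V_hi
                for a in range(nA))
        if max(np.abs(new_lo - V_lo).max(), np.abs(new_hi - V_hi).max()) < tol:
            return new_lo, new_hi
        V_lo, V_hi = new_lo, new_hi
```

In terms of the abstract's criteria, a pessimistically optimal policy can be read off as the argmax action in the lower-bound backup, and an optimistically optimal one as the argmax in the upper-bound backup; interval policy evaluation is the same pair of backups with the policy's action fixed in place of the max over actions.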

Authors

Robert Givan, Sonia Leach, Thomas Dean
