Article

Variance reduced value iteration and faster algorithms for solving Markov decision processes

Journal

NAVAL RESEARCH LOGISTICS
Volume 70, Issue 5, Pages 423-442

Publisher

WILEY
DOI: 10.1002/nav.21992

Keywords

linear programming algorithm; Markov decision processes; value iteration

Summary

This paper provides faster algorithms for approximately solving discounted Markov decision processes in multiple parameter regimes. The algorithms run in nearly linear time and converge nearly linearly, improving upon the previous best algorithms. By carefully modifying approximate value iteration and combining its classic analysis with variance reduction techniques, the paper ensures monotonic progress towards the optimal value and uses sampling to obtain a linearly convergent linear programming algorithm.

Abstract

In this paper we provide faster algorithms for approximately solving discounted Markov decision processes in multiple parameter regimes. Given a discounted Markov decision process (DMDP) with $|S|$ states, $|A|$ actions, discount factor $\gamma \in (0, 1)$, and rewards in the range $[-M, M]$, we show how to compute an $\epsilon$-optimal policy, with probability $1 - \delta$, in time $\tilde{O}\bigl((|S|^2 |A| + |S||A|/(1-\gamma)^3) \log(M/\epsilon) \log(1/\delta)\bigr)$. (We use $\tilde{O}$ to hide polylogarithmic factors in the input parameters, that is, $\tilde{O}(f(x)) = O(f(x) \cdot \log(f(x))^{O(1)})$.) This contribution reflects the first nearly linear time, nearly linearly convergent algorithm for solving DMDPs for intermediate values of $\gamma$. We also show how to obtain improved sublinear time algorithms provided we can sample from the transition function in $O(1)$ time. Under this assumption we provide an algorithm which computes an $\epsilon$-optimal policy for $\epsilon \in (0, M\sqrt{1-\gamma}]$ with probability $1-\delta$ in time $\tilde{O}\bigl(|S||A| M^2 / ((1-\gamma)^4 \epsilon^2) \log(1/\delta)\bigr)$. Furthermore, we extend both these algorithms to solve finite horizon MDPs. Our algorithms improve upon the previous best for approximately computing optimal policies for fixed-horizon MDPs in multiple parameter regimes. Interestingly, we obtain our results by a careful modification of approximate value iteration. We show how to combine classic approximate value iteration analysis with new techniques in variance reduction. Our fastest algorithms leverage further insights to ensure that our algorithms make monotonic progress towards the optimal value. This paper is one of the few instances of using sampling to obtain a linearly convergent linear programming algorithm, and we hope the analysis may be useful more broadly.
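
To give a rough sense of how variance reduction combines with approximate value iteration, here is a minimal Python sketch on a randomly generated toy DMDP. It is an illustration of the general idea, not the paper's algorithm: the toy MDP, the sample count (8 per estimate), and the epoch structure (30 epochs of 10 inner iterations) are arbitrary choices, and the monotonicity machinery and the careful sample-size schedule from the paper's analysis are omitted.

```python
import numpy as np

# Sketch of variance-reduced sampled value iteration: within each epoch,
# compute the expensive expectation P[a, s] @ v0 exactly once at an anchor
# point v0, then estimate only the correction P[a, s] @ (v - v0) from a few
# samples. All constants here are illustrative, not those from the paper.

rng = np.random.default_rng(0)

n_states, n_actions = 30, 4
gamma = 0.9

# Random toy DMDP: P[a, s] is a distribution over next states; r[s, a] in [0, 1].
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((n_states, n_actions))

def sampled_expectation(a, s, v, n_samples):
    """Monte Carlo estimate of P[a, s] @ v from n_samples next-state draws."""
    next_states = rng.choice(n_states, size=n_samples, p=P[a, s])
    return v[next_states].mean()

v = np.zeros(n_states)
for epoch in range(30):
    v0 = v.copy()
    # Anchor: one exact (expensive, O(|S|^2 |A|)) computation per epoch.
    anchor = np.array([[P[a, s] @ v0 for a in range(n_actions)]
                       for s in range(n_states)])
    for _ in range(10):  # cheap sampled iterations within the epoch
        q = np.empty((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                # Variance-reduced estimate: exact anchor plus a sampled
                # correction. Because v - v0 is small within an epoch, the
                # correction has low variance and few samples suffice.
                corr = sampled_expectation(a, s, v - v0, n_samples=8)
                q[s, a] = r[s, a] + gamma * (anchor[s, a] + corr)
        v = q.max(axis=1)

# Sanity check against exact value iteration.
v_exact = np.zeros(n_states)
for _ in range(500):
    v_exact = (r + gamma * np.einsum('asj,j->sa', P, v_exact)).max(axis=1)
print("max error vs. exact value iteration:", np.abs(v - v_exact).max())
```

The point of the anchor-plus-correction split is that the per-iteration cost drops from one full matrix-vector product to a handful of samples per state-action pair, while the estimator stays centered on the true Bellman backup; the residual error is governed by the variance of the correction term, which shrinks as the epoch's iterate approaches its anchor.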
