Article

Approximate Information State for Approximate Planning and Reinforcement Learning in Partially Observed Systems

Journal

Journal of Machine Learning Research
Volume 23, Pages 1-83

Publisher

Microtome Publishing

Keywords

Partially observed reinforcement learning; partially observable Markov decision processes; approximate dynamic programming; information state; approximate information state


This paper proposes a theoretical framework for approximate planning and learning in partially observed systems, based on the concept of an information state. The authors give two definitions of information state, establish their properties, and prove that policies computed using an approximate information state (AIS) are approximately optimal with bounded loss. Additionally, the paper shows that several approximations in state, observation, and action spaces can be viewed as instances of AIS, and presents an AIS-based policy gradient algorithm.
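For reference, the two definitions can be written compactly. The notation below (H_t for the history, Y_t for the observation, A_t for the action, R_t for the reward, and Z_t = \sigma_t(H_t) for the information state) is assumed for illustration and is not taken from this page. Both definitions require sufficiency for the expected reward,

\[
\mathbb{E}[R_t \mid H_t, A_t] = \mathbb{E}[R_t \mid Z_t, A_t].
\]

Definition (i) additionally requires that the information state predict its own next value,

\[
\mathbb{P}(Z_{t+1} \mid H_t, A_t) = \mathbb{P}(Z_{t+1} \mid Z_t, A_t),
\]

while definition (ii) requires a recursive update Z_{t+1} = \phi(Z_t, Y_{t+1}, A_t) together with sufficiency for predicting the next observation,

\[
\mathbb{P}(Y_{t+1} \mid H_t, A_t) = \mathbb{P}(Y_{t+1} \mid Z_t, A_t).
\]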
We propose a theoretical framework for approximate planning and learning in partially observed systems. Our framework is based on the fundamental notion of information state. We provide two definitions of information state: (i) a function of history which is sufficient to compute the expected reward and predict its next value; (ii) a function of the history which can be recursively updated and is sufficient to compute the expected reward and predict the next observation. An information state always leads to a dynamic programming decomposition. Our key result is to show that if a function of the history (called approximate information state (AIS)) approximately satisfies the properties of the information state, then there is a corresponding approximate dynamic program. We show that the policy computed using this approximate dynamic program is approximately optimal, with a bounded loss of optimality. We show that several approximations in state, observation, and action spaces proposed in the literature can be viewed as instances of AIS. In some of these cases, we obtain tighter bounds. A salient feature of AIS is that it can be learnt from data. We present AIS-based multi-time-scale policy gradient algorithms and detailed numerical experiments with low-, moderate-, and high-dimensional environments.
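Since the abstract emphasizes that an AIS can be learnt from data, a concrete training loop helps fix ideas. The sketch below is a minimal illustration under assumed design choices (PyTorch, a GRU as the recursive AIS generator, a mean-squared-error surrogate for the next-observation prediction loss, one-hot discrete actions, and REINFORCE-style returns); it is not the authors' exact implementation, and all names and hyperparameters here are illustrative.

import torch
import torch.nn as nn

class AISAgent(nn.Module):
    def __init__(self, obs_dim, act_dim, ais_dim=32):
        super().__init__()
        # Recursive AIS generator: maps (previous AIS, observation,
        # previous action) to the next AIS, mirroring definition (ii).
        self.rnn = nn.GRUCell(obs_dim + act_dim, ais_dim)
        # AIS predictors for the two required properties: expected
        # reward and the next observation.
        self.reward_head = nn.Linear(ais_dim + act_dim, 1)
        self.obs_head = nn.Linear(ais_dim + act_dim, obs_dim)
        # The policy acts on the AIS instead of the full history.
        self.policy = nn.Sequential(
            nn.Linear(ais_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

def train_step(agent, batch, ais_opt, pi_opt, lam=0.5):
    # batch: previous AIS, current observation, previous/current one-hot
    # actions, reward, next observation, and a Monte Carlo return.
    z_prev, obs, prev_act, act, rew, next_obs, ret = batch
    z = agent.rnn(torch.cat([obs, prev_act], dim=-1), z_prev)
    za = torch.cat([z, act], dim=-1)
    # AIS losses: approximate sufficiency for the expected reward and
    # for predicting the next observation (MSE here stands in for the
    # distributional distance used in the paper).
    reward_loss = ((agent.reward_head(za).squeeze(-1) - rew) ** 2).mean()
    obs_loss = ((agent.obs_head(za) - next_obs) ** 2).mean()
    ais_loss = lam * reward_loss + (1.0 - lam) * obs_loss
    ais_opt.zero_grad()
    ais_loss.backward()
    ais_opt.step()
    # REINFORCE-style policy gradient on the detached AIS; the smaller
    # policy learning rate provides the second, slower time scale.
    logp = torch.log_softmax(agent.policy(z.detach()), dim=-1)
    chosen = logp.gather(1, act.argmax(dim=-1, keepdim=True)).squeeze(1)
    pi_loss = -(chosen * ret).mean()
    pi_opt.zero_grad()
    pi_loss.backward()
    pi_opt.step()
    return z.detach(), ais_loss.item(), pi_loss.item()

# Two optimizers, two time scales: the AIS components learn faster
# than the policy.
agent = AISAgent(obs_dim=4, act_dim=2)
ais_params = (list(agent.rnn.parameters())
              + list(agent.reward_head.parameters())
              + list(agent.obs_head.parameters()))
ais_opt = torch.optim.Adam(ais_params, lr=1e-3)
pi_opt = torch.optim.Adam(agent.policy.parameters(), lr=1e-4)

The two optimizers realize the multi-time-scale idea: the AIS components are updated with a faster learning rate than the policy, so the policy effectively sees a slowly changing, quasi-static state representation.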

Authors

Jayakumar Subramanian, Amit Sinha, Raihan Seraj, and Aditya Mahajan
