Article

Policy Evaluation in Continuous MDPs With Efficient Kernelized Gradient Temporal Difference

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 66, Issue 4, Pages 1856-1863

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2020.3029315

Keywords

Kernel; Complexity theory; Markov processes; Hilbert space; Convergence; Memory management; Automobiles; Iterative learning control; Markov processes; optimization methods; stochastic systems

Funding

  1. SMART Scholarship
  2. ARL DCIST CRA [W911NF-17-2-0181]
  3. NSF [DGE-1321851]
  4. Intel DevCloud and Intel Science and Technology Center for Wireless Autonomous Systems (ISTC-WAS)

Abstract

The paper introduces a memory-efficient nonparametric stochastic method that converges exactly to the Bellman fixed point and yields nonlinearly parameterized value function estimates of finite complexity. In the Mountain Car domain, the method converges faster and requires less memory than existing approaches.
We consider policy evaluation in infinite-horizon discounted Markov decision problems with continuous compact state and action spaces. We reformulate this task as a compositional stochastic program with a function-valued decision variable that belongs to a reproducing kernel Hilbert space (RKHS). We approach this problem via a new functional generalization of stochastic quasi-gradient methods operating in tandem with stochastic sparse subspace projections. The result is an extension of gradient temporal difference learning that yields nonlinearly parameterized value function estimates of the solution to the Bellman evaluation equation. We call this method parsimonious kernel gradient temporal difference learning. Our main contribution is a memory-efficient nonparametric stochastic method guaranteed to converge exactly to the Bellman fixed point with probability 1 under attenuating step-sizes, provided the value function belongs to the RKHS. Further, with constant step-sizes and compression budget, we establish mean convergence to a neighborhood and that the value function estimates have finite complexity. In the Mountain Car domain, we observe faster convergence to lower Bellman error solutions than existing approaches with a fraction of the required memory.
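To make the algorithmic idea concrete, the following is a minimal Python sketch of a kernelized gradient-TD-style update with budgeted dictionary compression. The class name, the Gaussian kernel choice, the scalar auxiliary TD-error tracker, and the magnitude-based pruning rule are illustrative assumptions, not the paper's exact construction; the authors' parsimonious KGTD uses functional stochastic quasi-gradients with KOMP-style sparse subspace projections.

```python
# Hedged sketch: kernel value-function estimate V(s) = sum_i w_i k(d_i, s),
# updated from sampled transitions and pruned to keep the dictionary small.
import numpy as np


def gaussian_kernel(x, y, bandwidth=0.5):
    """Gaussian (RBF) kernel between two state vectors."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * bandwidth ** 2))


class SketchKernelGTD:
    """Simplified stand-in for parsimonious kernel gradient TD learning."""

    def __init__(self, gamma=0.99, step=0.1, aux_step=0.1, budget=1e-3):
        self.gamma = gamma        # discount factor
        self.step = step          # primary step-size
        self.aux_step = aux_step  # auxiliary (slower time-scale) step-size
        self.budget = budget      # compression budget (assumption: scalar threshold)
        self.dictionary = []      # retained state atoms d_i
        self.weights = []         # kernel expansion coefficients w_i
        self.aux = 0.0            # running estimate of the expected TD error

    def value(self, s):
        return sum(w * gaussian_kernel(d, s)
                   for d, w in zip(self.dictionary, self.weights))

    def update(self, s, r, s_next):
        """One stochastic quasi-gradient step on transition (s, r, s_next)."""
        delta = r + self.gamma * self.value(s_next) - self.value(s)
        # Auxiliary variable tracks E[delta] on a slower time scale (compositional structure).
        self.aux += self.aux_step * (delta - self.aux)
        # Functional gradient step: descend the mean-squared Bellman error by
        # adding atoms at s and s_next with signed coefficients.
        self.dictionary.append(np.asarray(s, dtype=float))
        self.weights.append(self.step * self.aux)
        self.dictionary.append(np.asarray(s_next, dtype=float))
        self.weights.append(-self.step * self.gamma * self.aux)
        self._compress()

    def _compress(self):
        """Drop low-magnitude atoms (simplified stand-in for KOMP projection)."""
        kept = [(d, w) for d, w in zip(self.dictionary, self.weights)
                if abs(w) > self.budget]
        self.dictionary = [d for d, _ in kept]
        self.weights = [w for _, w in kept]


if __name__ == "__main__":
    # Toy usage on a synthetic one-dimensional chain under a fixed policy.
    rng = np.random.default_rng(0)
    agent = SketchKernelGTD()
    s = np.array([0.0])
    for _ in range(200):
        s_next = s + rng.normal(0.0, 0.1, size=1)
        r = float(-abs(s_next[0]))  # reward for staying near the origin
        agent.update(s, r, s_next)
        s = s_next
    print("dictionary size:", len(agent.dictionary), "V(0):", round(agent.value([0.0]), 3))
```

The compression step is the memory-control mechanism: without it the dictionary grows by two atoms per transition, whereas pruning keeps the representation's complexity bounded, which is the role the paper's compression budget plays.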

