Article; Proceedings Paper

Policy gradient in Lipschitz Markov Decision Processes

Journal

MACHINE LEARNING
Volume 100, Issue 2-3, Pages 255-283

Publisher

SPRINGER
DOI: 10.1007/s10994-015-5484-1

Keywords

Reinforcement learning; Markov Decision Process; Lipschitz continuity; Policy gradient algorithm

This paper shows how Lipschitz continuity properties of Markov Decision Processes can be exploited to safely speed up policy-gradient algorithms. Starting from Lipschitz-continuity assumptions on the state-transition model, the reward function, and the policies considered during learning, we show that both the expected return of a policy and its gradient are Lipschitz continuous w.r.t. the policy parameters. Leveraging these properties, we define policy-parameter updates that guarantee a performance improvement at each iteration. The proposed methods are empirically evaluated and compared with related approaches on several configurations of three popular control scenarios: the linear quadratic regulator, the mass-spring-damper system, and ship-steering control.
