Article; Proceedings Paper

Bandit Based Monte-Carlo Planning

Journal

MACHINE LEARNING: ECML 2006, PROCEEDINGS
Volume 4212, Issue -, Pages 282-293

Publisher

SPRINGER-VERLAG BERLIN
DOI: 10.1007/11871842_29

Keywords

-

Abstract

For large state-space Markovian Decision Problems, Monte-Carlo planning is one of the few viable approaches to finding near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent, and finite-sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.
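The abstract's core idea, treating action selection at each node of a Monte-Carlo search tree as a multi-armed bandit and choosing actions with a UCB1-style rule, can be sketched in a few dozen lines. The Python below is a minimal illustration under assumptions of mine, not the authors' implementation: the ChainMDP toy environment, the Node structure, the exploration constant c, and all names are invented for this example; only the UCB1-based selection reflects the paper's idea.

```python
import math
import random

# Hypothetical toy MDP for illustration (not from the paper): a chain of
# states 0..length; action 1 moves right, action 0 stays. Reward 1.0 is
# given on reaching the goal state at the end of the chain.
class ChainMDP:
    def __init__(self, length=6, horizon=10):
        self.length = length
        self.horizon = horizon

    def actions(self, state):
        return [0, 1]  # stay, move right

    def step(self, state, action):
        next_state = min(state + action, self.length)
        reward = 1.0 if next_state == self.length else 0.0
        return next_state, reward

class Node:
    def __init__(self):
        self.visits = 0
        self.value = 0.0    # running mean of sampled returns
        self.children = {}  # action -> Node

def uct_search(mdp, root_state, n_simulations=2000, c=1.4):
    root = Node()

    def rollout(state, depth):
        # Default policy: uniform random actions until the horizon.
        total = 0.0
        while depth < mdp.horizon:
            state, r = mdp.step(state, random.choice(mdp.actions(state)))
            total += r
            depth += 1
        return total

    def simulate(node, state, depth):
        if depth >= mdp.horizon:
            return 0.0
        untried = [a for a in mdp.actions(state) if a not in node.children]
        if untried:
            # Expansion: try an unvisited action once, then roll out.
            a = random.choice(untried)
            node.children[a] = Node()
            next_state, r = mdp.step(state, a)
            ret = r + rollout(next_state, depth + 1)
        else:
            # Selection: UCB1 over the child action-value estimates.
            a = max(node.children, key=lambda a: node.children[a].value
                    + c * math.sqrt(math.log(node.visits)
                                    / node.children[a].visits))
            next_state, r = mdp.step(state, a)
            ret = r + simulate(node.children[a], next_state, depth + 1)
        # Backup: update the chosen child's running mean and the counts.
        child = node.children[a]
        child.visits += 1
        child.value += (ret - child.value) / child.visits
        node.visits += 1
        return ret

    for _ in range(n_simulations):
        simulate(root, root_state, 0)
    # Recommend the most-visited action at the root.
    return max(root.children, key=lambda a: root.children[a].visits)

if __name__ == "__main__":
    print("Best root action:", uct_search(ChainMDP(), root_state=0))
```

Recommending the most-visited root action at the end is a common Monte-Carlo tree search convention rather than something this record specifies; recommending the action with the highest mean value would be an equally plausible choice in this sketch.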

Authors

Levente Kocsis, Csaba Szepesvári
