Journal
MACHINE LEARNING: ECML 2006, PROCEEDINGS
Volume 4212, Pages 282-293
Publisher
SPRINGER-VERLAG BERLIN
DOI: 10.1007/11871842_29
Keywords
-
Abstract
For large state-space Markovian Decision Problems, Monte-Carlo planning is one of the few viable approaches to finding near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent, and finite-sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.
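The bandit idea at the core of UCT is UCB1-style action selection: at each internal tree node, the action to sample next is the one maximizing the empirical mean reward plus an exploration bonus. The sketch below shows only this selection rule on a hypothetical statistics table; the names (`ucb1_select`, `stats`) and the simple interface are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def ucb1_select(stats, c=math.sqrt(2)):
    """Pick the next action at a UCT tree node via the UCB1 rule.

    stats: dict mapping action -> (visit_count n, total_reward w)
           (hypothetical bookkeeping format, for illustration).
    Returns the action maximizing  w/n + c * sqrt(ln(N) / n),
    where N is the total visit count of the node; unvisited
    actions are always tried first.
    """
    total = sum(n for n, _ in stats.values())
    best_action, best_value = None, -float("inf")
    for action, (n, w) in stats.items():
        if n == 0:
            return action  # expand unvisited actions before exploiting
        value = w / n + c * math.sqrt(math.log(total) / n)
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Toy usage: a two-armed bandit where arm "b" pays off more often.
random.seed(0)
stats = {"a": (0, 0.0), "b": (0, 0.0)}
for _ in range(500):
    a = ucb1_select(stats)
    reward = 1.0 if random.random() < (0.3 if a == "a" else 0.7) else 0.0
    n, w = stats[a]
    stats[a] = (n + 1, w + reward)
```

Over many iterations the exploration bonus shrinks for frequently tried actions, so sampling concentrates on the better arm while still occasionally revisiting the worse one, which is what gives UCT its consistency in the tree setting.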