Proceedings Paper

GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling

Publisher

ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE

Keywords

-

Funding

  1. NSF [1723381]
  2. AFOSR [FA9550-17-1-0165]
  3. ONR [N00014-18-12847]
  4. Honda Research Institute
  5. MIT-IBM Watson Lab
  6. SUTD Temasek Laboratories
  7. NSF Graduate Research Fellowships

Abstract

The study introduces a goal-literal babbling (GLIB) method inspired by human curiosity for efficient exploration in relational model-based reinforcement learning, with experimental results showing superior performance in various tasks.
We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards. Inspired by human curiosity, we propose goal-literal babbling (GLIB), a simple and general method for exploration in such problems. GLIB samples relational conjunctive goals that can be understood as specific, targeted effects that the agent would like to achieve in the world, and plans to achieve these goals using the transition model being learned. We provide theoretical guarantees showing that exploration with GLIB will converge almost surely to the ground truth model. Experimentally, we find GLIB to strongly outperform existing methods in both prediction and planning on a range of tasks, encompassing standard PDDL and PPDDL planning benchmarks and a robotic manipulation task implemented in the PyBullet physics simulator.

Video: https://youtu.be/F6lmrPT6TOY
Code: https://git.io/JIsTB
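The abstract's exploration loop — babble a conjunctive goal over known literals, plan toward it with the model learned so far, and update that model from the observed transition — can be sketched in miniature. The following is our own illustrative toy (propositional literals, add-effects only, a one-step planner, and hypothetical names like `TRUE_MODEL` and `plan`), not the paper's actual implementation:

```python
import random

random.seed(0)

ACTIONS = ["a1", "a2", "a3"]
TRUE_MODEL = {            # hidden ground-truth add-effects of each action
    "a1": {"p"},
    "a2": {"q"},
    "a3": {"p", "r"},
}

learned = {}              # action -> set of effect literals observed so far
seen_literals = set()     # literals the agent has ever seen hold

def step(state, action):
    """Apply the ground-truth transition (add-effects only, for simplicity)."""
    return state | TRUE_MODEL[action]

def plan(goal, state):
    """One-step planner over the *learned* model; None if no action helps."""
    for a, effects in learned.items():
        if goal <= (state | effects):
            return a
    return None

for episode in range(4):
    state = set()         # episodic resets, as in the paper's setting
    for t in range(5):
        # Goal babbling: sample a small conjunction of previously seen
        # literals and try to plan to it; otherwise act randomly.
        if seen_literals and random.random() < 0.5:
            goal = set(random.sample(sorted(seen_literals),
                                     k=min(2, len(seen_literals))))
            action = plan(goal, state) or random.choice(ACTIONS)
        else:
            action = random.choice(ACTIONS)
        next_state = step(state, action)
        # Model learning: record the effects observed for this action.
        learned.setdefault(action, set()).update(next_state - state)
        seen_literals |= next_state
        state = next_state

print(learned)
```

Because effects are recorded only from observed transitions, every learned effect set is a subset of the true one, and with enough episodes from the empty state the learned model approaches the ground truth — a toy analogue of the paper's almost-sure convergence guarantee.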

