Article

Planning for robotic exploration based on forward simulation

Journal

ROBOTICS AND AUTONOMOUS SYSTEMS
Volume 83, Pages 15-31

Publisher

ELSEVIER
DOI: 10.1016/j.robot.2016.06.008

Keywords

Partially observable Markov decision process; Active sensing; Robotic exploration; Mutual information; Sensor management

Funding

  1. TUT Graduate School
  2. TUT

We address the problem of controlling a mobile robot to explore a partially known environment. The robot's objective is to maximize the amount of information collected about the environment. We formulate the problem as a partially observable Markov decision process (POMDP) with an information-theoretic objective function, and solve it by applying forward-simulation algorithms with an open-loop approximation. We present a new sample-based approximation of mutual information that is useful in mobile robotics and can be seamlessly integrated with forward-simulation planning algorithms. We investigate the usefulness of POMDP-based planning for exploration and, to alleviate some of its weaknesses, propose combining it with frontier-based exploration. Experimental results in simulated and real environments show that, depending on the environment, POMDP-based planning can improve exploration performance over frontier exploration. (C) 2016 Elsevier B.V. All rights reserved.
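The abstract does not spell out the paper's sample-based mutual information approximation. As a rough illustration of the general idea only (a sketch, not the authors' estimator; the function name and examples are hypothetical), the snippet below computes a plug-in Monte Carlo estimate of I(M; Z) from joint samples of a map variable and an observation variable:

```python
from collections import Counter
import math

def sample_mi(pairs):
    """Plug-in estimate of mutual information I(M; Z), in nats,
    from a list of joint samples (m, z) of two discrete variables."""
    n = len(pairs)
    joint = Counter(pairs)                 # empirical joint counts
    pm = Counter(m for m, _ in pairs)      # marginal counts of m
    pz = Counter(z for _, z in pairs)      # marginal counts of z
    mi = 0.0
    for (m, z), c in joint.items():
        # p(m,z) * log( p(m,z) / (p(m) p(z)) ); the 1/n factors cancel
        # to c * n / (c_m * c_z)
        mi += (c / n) * math.log(c * n / (pm[m] * pz[z]))
    return mi

# A noiseless sensor (z always equals m) recovers the full entropy H(M):
deterministic = [(m, m) for m in (0, 1)] * 500
print(round(sample_mi(deterministic), 3))  # 0.693, i.e. ln 2

# Observations independent of the map carry no information:
independent = [(m, z) for m in (0, 1) for z in (0, 1)] * 250
print(round(sample_mi(independent), 3))  # 0.0
```

In an information-gathering objective of this kind, candidate control sequences would be scored by the estimated mutual information between the map and the observations they are forward-simulated to produce, with the robot executing the highest-scoring sequence.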
