Article

Importance sampling for online planning under uncertainty

Journal

INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
Volume 38, Issue 2-3, Pages 162-181

Publisher

SAGE PUBLICATIONS LTD
DOI: 10.1177/0278364918780322

Keywords

Planning under uncertainty; POMDP; importance sampling

Funding

  1. NUS AcRF Tier 1 grant [R-252-000587-112]
  2. US Air Force Research Laboratory [FA2386-15-1-4010]

Abstract

The partially observable Markov decision process (POMDP) provides a principled general framework for robot planning under uncertainty. Leveraging the idea of Monte Carlo sampling, recent POMDP planning algorithms have scaled up to various challenging robotic tasks, including real-time online planning for autonomous vehicles. To further improve online planning performance, this paper presents IS-DESPOT, which introduces importance sampling to DESPOT, a state-of-the-art sampling-based POMDP algorithm for planning under uncertainty. Importance sampling improves DESPOT's performance when there are critical but rare events that are difficult to sample. We prove that IS-DESPOT retains the theoretical guarantee of DESPOT. We demonstrate empirically that importance sampling significantly improves the performance of online POMDP planning for suitable tasks. We also present a general method for learning the importance sampling distribution.
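The abstract's core idea, using importance sampling to handle critical but rare events, can be illustrated outside the POMDP setting. The toy Python sketch below (not the paper's IS-DESPOT algorithm; the distributions and event are illustrative assumptions) estimates the probability of a rare Gaussian tail event two ways: plain Monte Carlo, which almost never samples the event, and importance sampling, which draws from a proposal concentrated on the rare region and reweights each sample by the likelihood ratio.

```python
import math
import random

def rare_event_probability(n=200_000, seed=0):
    """Estimate P(X > 4) for X ~ N(0, 1) with plain Monte Carlo
    and with importance sampling from a shifted proposal N(4, 1).
    Illustrative example only, not taken from the paper."""
    rng = random.Random(seed)

    def density(x, mu):
        # N(mu, 1) probability density
        return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

    # Plain Monte Carlo: the event {x > 4} is so rare that almost
    # every sample contributes nothing, giving a high-variance estimate.
    plain = sum(rng.gauss(0, 1) > 4 for _ in range(n)) / n

    # Importance sampling: sample from the proposal q = N(4, 1), which
    # puts half its mass on the rare region, and reweight each hit by
    # the likelihood ratio p(x) / q(x) to keep the estimate unbiased.
    total = 0.0
    for _ in range(n):
        x = rng.gauss(4, 1)
        if x > 4:
            total += density(x, 0) / density(x, 4)
    importance = total / n

    return plain, importance
```

The true value is about 3.17e-5; with the same sample budget, the importance-sampling estimate lands within a small fraction of a percent of it, while the plain estimate rests on only a handful of hits. The same principle motivates biasing DESPOT's scenario sampling toward critical events and correcting with importance weights.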
