Article

A Survey on Policy Search Algorithms for Learning Robot Controllers in a Handful of Trials

Journal

IEEE Transactions on Robotics
Volume 36, Issue 2, Pages 328-347 (2020)

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TRO.2019.2958211

Keywords

Autonomous agents; learning and adaptive systems; micro-data policy search (MDPS); robot learning

Funding

  1. European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme [637972]
  2. Helmholtz Association
  3. European Commission [731540, 780684]
  4. CHIST-ERA project HEAP
  5. European Union [739578]

Abstract

Most policy search (PS) algorithms require thousands of training episodes to find an effective policy, which is often infeasible on a physical robot. This survey article focuses on the opposite end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the term "big data," we refer to this challenge as "micro-data reinforcement learning." In this article, we show that a first strategy is to leverage prior knowledge on the policy structure (e.g., dynamic movement primitives), on the policy parameters (e.g., demonstrations), or on the dynamics (e.g., simulators). A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or of the dynamical model (e.g., model-based PS), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the kind of model and prior knowledge. The current scientific challenges essentially revolve around scaling up to complex robots, designing generic priors, and optimizing computing time.

Authors

Konstantinos Chatzilygeroudis, Vassilis Vassiliades, Freek Stulp, Sylvain Calinon, and Jean-Baptiste Mouret
