Article

Policy Search for Model Predictive Control With Application to Agile Drone Flight

Journal

IEEE TRANSACTIONS ON ROBOTICS
Volume 38, Issue 4, Pages 2114-2130

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TRO.2022.3141602

Keywords

Neural networks; Task analysis; Probabilistic logic; Logic gates; Predictive models; Vehicle dynamics; Drones; Learning agile flight; model predictive control (MPC); reinforcement learning (RL)

Funding

  1. National Centre of Competence in Research Robotics through the Swiss National Science Foundation
  2. European Union's Horizon 2020 Research and Innovation Program [871479]
  3. European Research Council (ERC) [864042]

Abstract

A novel policy-search-for-model-predictive-control framework is proposed, in which policy search automatically selects the high-level decision variables of an MPC. Formulating the MPC as a parameterized controller allows these policies to be optimized in a self-supervised manner.
Policy search and model predictive control (MPC) are two different paradigms for robot control: policy search has the strength of automatically learning complex policies using experienced data, and MPC can offer optimal control performance using models and trajectory optimization. An open research question is how to leverage and combine the advantages of both approaches. In this article, we provide an answer by using policy search for automatically choosing high-level decision variables for MPC, which leads to a novel policy-search-for-model-predictive-control framework. Specifically, we formulate the MPC as a parameterized controller, where the hard-to-optimize decision variables are represented as high-level policies. Such a formulation allows optimizing policies in a self-supervised fashion. We validate this framework by focusing on a challenging problem in agile drone flight: flying a quadrotor through fast-moving gates. Experiments show that our controller achieves robust and real-time control performance in both simulation and the real world. The proposed framework offers a new perspective for merging learning and control.
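To make the core idea concrete, the following is a minimal sketch of episodic policy search over a high-level MPC decision variable. It assumes a Gaussian policy over a single scalar z (e.g., the desired gate-traversal time that parameterizes the MPC cost), a placeholder `mpc_rollout_cost` standing in for the full MPC-plus-simulation rollout, and a reward-weighted maximum-likelihood policy update; the names, the temperature parameter, and the quadratic stand-in cost are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# --- Placeholder for the low-level MPC + simulator rollout -----------------
# In the paper's setting, the decision variable z (e.g., the desired time at
# which the quadrotor should traverse a moving gate) parameterizes the MPC
# cost, the MPC is solved over its horizon, and the rollout returns an
# episode cost. Here the rollout is a stand-in quadratic around an unknown
# optimum z*, with a little noise to mimic simulation variability.
def mpc_rollout_cost(z, z_star=1.8):
    """Run the parameterized MPC in simulation and return the episode cost."""
    return (z - z_star) ** 2 + 0.01 * np.random.randn()

# --- Episodic policy search over the high-level decision variable ----------
# A Gaussian "high-level policy" pi(z) = N(mu, sigma^2) proposes the decision
# variable; samples are re-weighted by their rollout cost and the policy is
# refit to the weighted samples (a reward-weighted maximum-likelihood update),
# so no expert labels are needed -- the MPC rollouts supervise themselves.
mu, sigma = 1.0, 0.5          # initial policy parameters
n_samples, n_iters = 20, 30   # samples per iteration, number of iterations
temperature = 5.0             # how greedily low-cost samples are favored

rng = np.random.default_rng(0)
for it in range(n_iters):
    z = rng.normal(mu, sigma, size=n_samples)              # sample decision variables
    costs = np.array([mpc_rollout_cost(zi) for zi in z])   # self-supervised signal
    # Convert costs to weights: lower cost -> higher weight.
    w = np.exp(-temperature * (costs - costs.min()))
    w /= w.sum()
    # Weighted maximum-likelihood refit of the Gaussian policy.
    mu = np.sum(w * z)
    sigma = np.sqrt(np.sum(w * (z - mu) ** 2) + 1e-6)

print(f"learned decision variable: mu = {mu:.3f}, sigma = {sigma:.3f}")
```

The same pattern extends, as the abstract suggests, to conditioning the high-level policy on observations (e.g., with a small neural network trained on the optimized samples) so that the decision variables can be predicted online for fast-moving gates; the sketch above shows only the unconditioned, episodic case.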

Authors

Yunlong Song; Davide Scaramuzza
