Journal
ANNUAL REVIEWS IN CONTROL
Volume 50, Pages 119-138
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.arcontrol.2020.06.001
Keywords
Inverse optimal control; Inverse reinforcement learning; Learning from demonstration; Imitation learning
Abstract
Inverse optimal control (IOC) is a powerful theory that addresses inverse problems in control systems, robotics, machine learning (ML), and optimization, where the goal is to recover the objective that underlies observed optimal behavior. This paper reviews the history of IOC and Inverse Reinforcement Learning (IRL) approaches and describes the connections and differences between them, covering a gap in the existing literature. The general formulation of IOC/IRL is presented, and the related methods are organized hierarchically: IOC methods fall into two classes, classic and modern. Classic IOC is typically formulated for control systems, while IRL, as a modern approach to IOC, is considered for machine learning problems. Although many IOC/IRL methods exist, a comprehensive categorization of them has been lacking. In addition to the IOC/IRL problems themselves, this paper elaborates, where necessary, on other relevant concepts such as Learning from Demonstration (LfD), Imitation Learning (IL), and Behavioral Cloning. Several challenges encountered in IOC/IRL problems are further discussed, including ill-posedness, non-convexity, data availability, non-linearity, the curses of complexity and dimensionality, feature selection, and generalizability. (C) 2020 Elsevier Ltd. All rights reserved.
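To make the core IRL idea the survey formalizes concrete, here is a minimal, self-contained sketch: given expert demonstrations, recover a reward function under which the demonstrated actions look optimal. The 1-D chain environment, the one-hot features, and the perceptron-style weight update below are all illustrative assumptions for this sketch, not a method from the paper; practical IRL algorithms (e.g., max-margin or max-entropy formulations) replace this naive update with a principled objective.

```python
# Toy IRL sketch: infer a linear reward r(s) = w . phi(s) so that the
# expert's demonstrated actions score at least as high as alternatives.
# Environment, features, and update rule are illustrative assumptions.

N = 5                      # chain states 0..4, goal at state 4
ACTIONS = (-1, +1)         # move left / move right

def phi(s):
    """One-hot state feature vector (an assumed feature choice)."""
    v = [0.0] * N
    v[s] = 1.0
    return v

def step(s, a):
    """Deterministic transition, clipped to the chain ends."""
    return min(max(s + a, 0), N - 1)

def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

# Expert demonstrations: from every non-goal state, move right.
demos = [(s, +1) for s in range(N - 1)]

w = [0.0] * N              # reward weights to be learned
for _ in range(20):        # a few perceptron-style passes
    for s, a_exp in demos:
        s_exp = step(s, a_exp)
        for a in ACTIONS:
            s_alt = step(s, a)
            # If an alternative successor looks at least as rewarding,
            # push the weights toward the expert's choice.
            if a != a_exp and dot(w, phi(s_alt)) >= dot(w, phi(s_exp)):
                w = [wi + pe - pa
                     for wi, pe, pa in zip(w, phi(s_exp), phi(s_alt))]

rewards = [dot(w, phi(s)) for s in range(N)]
print(rewards)  # prints [-2.0, 0.0, 0.0, 1.0, 1.0]
```

The recovered rewards increase toward the goal, which is enough to explain the expert's behavior, but they are not unique: any monotone reshaping of these values explains the same demonstrations. This non-uniqueness is exactly the ill-posedness challenge the abstract lists.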