Article

Learning Goal Conditioned Socially Compliant Navigation From Demonstration Using Risk-Based Features

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 6, Issue 2, Pages 651-658

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2020.3048657

Keywords

Navigation; Trajectory; Reinforcement learning; Entropy; Computational modeling; Two dimensional displays; Robot sensing systems; Inverse reinforcement learning; learning from demonstration; motion and path planning; robot navigation; social navigation

Funding

  1. Samsung Electronics

This letter presents a learning-based solution for socially compliant navigation of mobile robots, inferring navigational policies from human examples and validating its effectiveness through comparisons with classical algorithms and reinforcement learning agents. The proposed method and feature representation are found to produce higher quality trajectories and play a critical role in successful navigation.
One of the main challenges of operating mobile robots in social environments is safe and fluid navigation therein, specifically the ability to share a space with other human inhabitants by complying with the explicit and implicit rules that we humans follow during navigation. While these rules come naturally to us, they resist simple and explicit definitions. In this letter, we present a learning-based solution to the question of socially compliant navigation, that is, navigating while adhering to the navigational policies a person might use. We infer these policies by learning from human examples using inverse reinforcement learning techniques. In particular, this letter contributes an efficient sampling-based approximation to enable model-free deep inverse reinforcement learning, and a goal-conditioned risk-based feature representation that adequately captures local information surrounding the agent. We validate our approach by comparing against a classical algorithm and a reinforcement learning agent, and we evaluate our feature representation against similar representations from the literature. We find that the combination of our proposed method and our feature representation produces higher quality trajectories, and that our feature representation plays a critical role in successful navigation.
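The abstract refers to an efficient sampling-based approximation for model-free deep inverse reinforcement learning. The paper's own formulation is not reproduced here; the following is a minimal, generic sketch of the underlying idea in maximum-entropy IRL, where the intractable partition function is approximated by importance-weighting sampled trajectories. The linear reward, the learning rate, and the random stand-in feature data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 4
theta = np.zeros(n_features)  # weights of a (hypothetical) linear reward r(tau) = f(tau) @ theta

# Stand-in feature vectors: one row per demonstration / sampled trajectory.
demo_feats = rng.normal(size=(10, n_features))    # features of human demonstrations
sample_feats = rng.normal(size=(50, n_features))  # features of trajectories sampled from a proposal policy

for _ in range(100):
    # Self-normalised importance weights approximate exp(r) / Z without
    # enumerating all trajectories (the max is subtracted for stability).
    rewards = sample_feats @ theta
    w = np.exp(rewards - rewards.max())
    w /= w.sum()
    # MaxEnt IRL gradient: demo feature expectations minus (approximate)
    # model feature expectations under the current reward.
    grad = demo_feats.mean(axis=0) - w @ sample_feats
    theta += 0.1 * grad  # gradient ascent on the demonstration log-likelihood
```

In a deep variant the linear reward `f(tau) @ theta` would be replaced by a neural network, but the same sampled, importance-weighted estimate of the partition function makes the gradient tractable without a known transition model.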
