Article

Meta-Reinforcement Learning in Non-Stationary and Dynamic Environments

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2022.3185549

Keywords

Task analysis; Training; Robots; Adaptation models; Multitasking; Inference algorithms; Gaussian mixture model; Meta-reinforcement learning; task inference; task adaptation; robotic control

Abstract

In recent years, the subject of deep reinforcement learning (DRL) has developed very rapidly and is now applied in various fields, such as decision-making and control tasks. However, artificial agents trained with RL algorithms require large amounts of training data, unlike humans, who are able to learn new skills from very few examples. The concept of meta-reinforcement learning (meta-RL) has recently been proposed to enable agents to learn similar but new skills from a small amount of experience by leveraging a set of tasks with a shared structure. Due to the task representation learning strategy with few-shot adaptation, most recent work is limited to narrow task distributions and stationary environments, where tasks do not change within episodes. In this work, we address those limitations and introduce a training strategy that is applicable to non-stationary environments, as well as a task representation based on Gaussian mixture models to model clustered task distributions. We evaluate our method on several continuous robotic control benchmarks. Compared with state-of-the-art methods that are only applicable to stationary environments with few-shot adaptation, our algorithm first achieves competitive asymptotic performance and superior sample efficiency in stationary environments with zero-shot adaptation. Second, our algorithm learns to perform successfully in non-stationary settings as well as in a continual learning setting, while learning well-structured task representations. Last, our algorithm learns basic distinct behaviors and well-structured task representations in task distributions with multiple qualitatively distinct tasks.
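The abstract does not spell out how a Gaussian mixture model would represent a clustered task distribution, so the following is a minimal illustrative sketch only, not the authors' implementation: it fits a GMM over latent task embeddings and reads the posterior over mixture components as a soft task belief. The embedding array, its dimensionality, the component count, and the use of scikit-learn's GaussianMixture are all assumptions made for the example.

```python
# Illustrative sketch (NOT the paper's code): modeling a clustered task
# distribution with a Gaussian mixture model over latent task embeddings.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-in for learned task embeddings: one 8-dim latent
# vector per trajectory, e.g. produced by a task-inference encoder.
embeddings = np.concatenate([
    rng.normal(loc=-2.0, scale=0.3, size=(100, 8)),  # tasks of type A
    rng.normal(loc=+2.0, scale=0.3, size=(100, 8)),  # tasks of type B
])

# Fit a GMM over the latent space; each component can be read as one
# qualitatively distinct cluster of tasks.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(embeddings)

# At test time, the posterior over components gives a soft task assignment
# for a new trajectory's embedding; a policy conditioned on such a belief
# could in principle adapt zero-shot when the underlying task switches.
new_embedding = rng.normal(loc=2.0, scale=0.3, size=(1, 8))
print(gmm.predict_proba(new_embedding))  # approximately [[0.0, 1.0]]
```

In a non-stationary setting, one would re-evaluate this posterior as new transitions arrive within an episode, so a task switch shows up as a shift in the component responsibilities rather than requiring additional gradient updates.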
