Journal
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
Volume 128
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.engappai.2023.107496
Keywords
Reinforcement learning; Invariant representations; Bisimulation learning; Continuous control
This paper proposes a new representation learning method (RLF) that learns long-term dynamics using graph neural networks and trains the representation network with a new state metric inspired by the bisimulation relation. Experiments show that RLF learns more stable state embeddings in continuous control tasks, and that a policy learned on top of these embeddings achieves higher sample efficiency, better performance, and stronger generalization.
High-dimensional inputs limit the sample efficiency of deep reinforcement learning, making it harder to apply to real-world continuous control tasks, especially in uncertain environments. A good state embedding is crucial for an agent's performance on downstream tasks. The bisimulation metric is an effective representation learning approach that abstracts task-relevant, invariant latent embeddings of states based on behavioral similarity. However, because it considers only one-step transitions, we call the features it captures short-term dynamics. We argue that long-term dynamics are also important for state representation learning. In this paper, we present Invariant Representations Learning with Future Dynamics (RLF), which uses graph neural networks to learn long-term dynamics and trains the representation network with a new state metric inspired by the bisimulation relation. We evaluated our method on continuous control tasks from the DeepMind Control Suite and showed that RLF learns more stable embeddings than state-of-the-art representation learning methods for both state and pixel inputs. A policy learned on top of these embeddings achieves higher sample efficiency and performance, and generalizes well across tasks.
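To make the short-term-dynamics limitation concrete, the following is a minimal sketch of the standard one-step bisimulation-style objective (in the spirit of prior work such as Deep Bisimulation for Control), not the authors' RLF method: the distance between two state embeddings is regressed toward the difference in immediate rewards plus a discounted distance between next-state distributions. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def bisimulation_target(r_i, r_j, next_dist, gamma=0.99):
    """One-step bisimulation-style distance target.

    Two states should be embedded far apart if their immediate rewards
    differ, or if their next-state distributions differ (next_dist is a
    stand-in for e.g. a Wasserstein distance between transition models).
    """
    return abs(r_i - r_j) + gamma * next_dist

def representation_loss(z_i, z_j, r_i, r_j, next_dist, gamma=0.99):
    """Squared error between the current L1 embedding distance and the target."""
    d = np.linalg.norm(z_i - z_j, ord=1)
    target = bisimulation_target(r_i, r_j, next_dist, gamma)
    return (d - target) ** 2

# Illustration: embeddings whose distance already matches the target give zero loss.
z_i, z_j = np.array([1.0, 0.0]), np.array([0.0, 0.0])
loss = representation_loss(z_i, z_j, r_i=1.0, r_j=0.0, next_dist=0.0, gamma=0.5)
```

Because the target depends only on the next time step, this objective captures only short-term dynamics; RLF's contribution, per the abstract, is to extend the learning signal to long-term (multi-step) dynamics via graph neural networks.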