Article

Visual Pretraining via Contrastive Predictive Model for Pixel-Based Reinforcement Learning

Journal

SENSORS
Volume 22, Issue 17, Pages -

Publisher

MDPI
DOI: 10.3390/s22176504

Keywords

representation learning; vision-based deep reinforcement learning; deep reinforcement learning; sample efficiency

Funding

  1. Institute of Information & Communications Technology Planning & Evaluation (IITP) - Korea government (MSIT) [2021-0-02068]
  2. National Research Foundation of Korea (NRF) - Korea government (MSIT) [2022R1A2C201270611]

Abstract

To overcome the limitations of reward-driven representation learning in vision-based reinforcement learning (RL), an unsupervised learning framework, visual pretraining via contrastive predictive model (VPCPM), is proposed to learn representations decoupled from policy learning. Our method enables the convolutional encoder to perceive the underlying dynamics through a pair of forward and inverse models under the supervision of a contrastive loss, resulting in better representations. In experiments on a diverse set of vision control tasks, initializing the encoders with VPCPM significantly boosts the performance of state-of-the-art vision-based RL algorithms, with improvements of 44% for RAD and 10% for DrQ at 100 steps, respectively. Compared with prior unsupervised methods, VPCPM matches or outperforms all baselines. We further demonstrate that the learned representations generalize successfully to new tasks that share similar observation and action spaces.
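
The page gives no method details beyond the abstract, so the following is only a minimal sketch of the kind of contrastive forward/inverse-dynamics pretraining described above, written in PyTorch. The module names (PixelEncoder, ForwardModel, InverseModel), network sizes, and the InfoNCE-style contrastive formulation are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a VPCPM-style pretraining objective (assumed details).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelEncoder(nn.Module):
    """Convolutional encoder mapping stacked pixel frames to a latent vector."""
    def __init__(self, in_channels=9, feature_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(feature_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs / 255.0))  # assumes uint8 pixel input


class ForwardModel(nn.Module):
    """Predicts the next latent state from the current latent state and action."""
    def __init__(self, feature_dim=50, action_dim=6, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feature_dim),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))


class InverseModel(nn.Module):
    """Predicts the action taken between two consecutive latent states."""
    def __init__(self, feature_dim=50, action_dim=6, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feature_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, z, z_next):
        return self.net(torch.cat([z, z_next], dim=-1))


def vpcpm_loss(encoder, forward_model, inverse_model, obs, action, next_obs):
    """One pretraining step on a batch of transitions (o_t, a_t, o_{t+1}).

    The forward model's prediction is scored against the encoded next
    observations of the whole batch with an InfoNCE-style contrastive loss
    (positives on the diagonal); the inverse model is trained to regress
    the action from consecutive latents.
    """
    z = encoder(obs)                    # (B, D)
    z_next = encoder(next_obs)          # (B, D)
    z_pred = forward_model(z, action)   # (B, D) predicted next latent

    # Contrastive term: each predicted latent should match its own next latent
    # rather than the next latents of other transitions in the batch.
    logits = z_pred @ z_next.t()        # (B, B) similarity matrix
    labels = torch.arange(obs.size(0), device=obs.device)
    contrastive = F.cross_entropy(logits, labels)

    # Inverse-dynamics term (assuming continuous actions).
    inverse = F.mse_loss(inverse_model(z, z_next), action)

    return contrastive + inverse
```

In practice, such a loss would be minimized over transitions collected from the environment, and the pretrained encoder weights would then be used to initialize the encoder of a downstream RL agent such as RAD or DrQ, as the abstract describes.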

