Article

Sigmoid-weighted linear units for neural network function approximation in reinforcement learning

Journal

NEURAL NETWORKS
Volume 107, Issue -, Pages 3-11

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2017.12.012

Keywords

Reinforcement learning; Sigmoid-weighted linear unit; Function approximation; Tetris; Atari 2600; Deep learning

Funding

  1. New Energy and Industrial Technology Development Organization (NEDO)
  2. MEXT KAKENHI [16H06563, 17H06042]
  3. Okinawa Institute of Science and Technology Graduate University
  4. Grants-in-Aid for Scientific Research [17H06042] Funding Source: KAKEN

Abstract

In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed as its input multiplied by the sigmoid of the input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection can be competitive with DQN, without the need for a separate target network. We validate our proposed approach first by achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10 × 10 board, using TD(λ) learning and shallow dSiLU network agents, and then by outperforming DQN in the Atari 2600 domain with a deep Sarsa(λ) agent using SiLU and dSiLU hidden units. © 2017 The Author(s). Published by Elsevier Ltd.
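The abstract defines the SiLU as the input multiplied by its sigmoid, so the dSiLU follows directly by differentiating that product: d/dx [x·σ(x)] = σ(x)(1 + x(1 − σ(x))). A minimal NumPy sketch of both activations, plus the standard softmax (Boltzmann) action-selection rule the abstract mentions, might look as follows; the function names and the temperature parameter `tau` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    """SiLU: the input weighted by its own sigmoid, x * sigma(x)."""
    return x * sigmoid(x)

def dsilu(x):
    """dSiLU: the derivative of the SiLU with respect to x,
    d/dx [x * sigma(x)] = sigma(x) * (1 + x * (1 - sigma(x)))."""
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))

def softmax_action(q_values, tau=1.0, rng=None):
    """Softmax (Boltzmann) action selection over an array of action values.

    `tau` is a hypothetical temperature: large tau approaches uniform
    exploration, small tau approaches greedy selection.
    """
    rng = rng or np.random.default_rng()
    z = np.asarray(q_values, dtype=float) / tau
    z -= z.max()                       # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)
```

As a quick sanity check, `silu(np.array([-1.0, 0.0, 1.0]))` returns approximately `[-0.269, 0.0, 0.731]`, and `dsilu` peaks slightly above 1 for moderately positive inputs, which is what makes it usable as a bounded activation in its own right.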

Authors

Stefan Elfwing; Eiji Uchibe; Kenji Doya
