Review

A survey on modern trainable activation functions

Journal

NEURAL NETWORKS
Volume 138, Issue -, Pages 14-32

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2021.01.026

Keywords

Neural networks; Machine learning; Activation functions; Trainable activation functions; Learnable activation functions

Funding

  1. EU H2020 [785907, 945539]
  2. EU H2020-EIC-FET-PROACT2019 [951910]


Abstract

In the neural networks literature, there is strong interest in identifying and defining activation functions that can improve network performance. In recent years, the scientific community has shown renewed interest in activation functions that can be trained during the learning process, usually referred to as trainable, learnable, or adaptable activation functions; they appear to lead to better network performance. Diverse and heterogeneous models of trainable activation functions have been proposed in the literature. In this paper, we present a survey of these models. Starting from a discussion of how the term "activation function" is used in the literature, we propose a taxonomy of trainable activation functions, highlight common and distinctive properties of recent and past models, and discuss the main advantages and limitations of this type of approach. We show that many of the proposed approaches are equivalent to adding neuron layers which use fixed (non-trainable) activation functions together with some simple local rule that constrains the corresponding weight layers. (c) 2021 Elsevier Ltd. All rights reserved.
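To make the survey's subject concrete, the sketch below (illustrative only, not taken from the paper) shows one widely used trainable activation function, the parametric ReLU (PReLU), whose negative-side slope `a` is a learnable parameter updated by gradient descent alongside the ordinary network weights. The variable names and the toy gradient step are assumptions for illustration.

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: identity for positive inputs, slope `a` otherwise."""
    return np.where(x > 0, x, a * x)

def prelu_grad_a(x, a):
    """Gradient of PReLU w.r.t. the trainable slope `a` (x on the negative side, 0 elsewhere)."""
    return np.where(x > 0, 0.0, x)

# Toy update: `a` is trained like any other parameter.
x = np.array([-2.0, -0.5, 1.0, 3.0])
a = 0.25
y = prelu(x, a)                                    # forward pass
upstream = np.ones_like(x)                         # pretend dL/dy = 1 for each element
a -= 0.1 * np.sum(upstream * prelu_grad_a(x, a))   # gradient step on the slope
```

With `a` fixed at its initial value, PReLU is just Leaky ReLU; letting the learning process adjust `a` is what makes it a trainable activation function in the sense surveyed here.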

