Article

A simple and efficient architecture for trainable activation functions

Journal

NEUROCOMPUTING
Volume 370, Pages 1-15

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2019.08.065

Keywords

Neural networks; Machine learning; Activation functions; Trainable activation functions

Funding

  1. Italian national project Perception, Performativity and Cognitive Sciences - PRIN2015 - MIUR (Ministero dell'Istruzione, dell'Università e della Ricerca) [2015TM24JS_009]

Abstract

Automatically learning the best activation function for the task is an active topic in neural network research. Despite promising results, it is still challenging to find a method for learning an activation function that is both theoretically simple and easy to implement. Moreover, most of the methods proposed so far introduce new parameters or rely on different learning techniques. In this work, we propose a simple method to obtain a trained activation function by adding to the neural network local sub-networks with a small number of neurons. Experiments show that this approach can lead to better results than using a pre-defined activation function, without requiring a large number of additional parameters to be learned. (C) 2019 Elsevier B.V. All rights reserved.
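The abstract does not give the exact formulation of these local sub-networks, so the following is only a minimal sketch of the general idea: each activation is replaced by a tiny one-hidden-layer sub-network applied elementwise, whose few weights are trained along with the rest of the model. The hidden size, the ReLU nonlinearity, and the NumPy forward pass are all assumptions for illustration, not the paper's definition.

```python
import numpy as np

class TrainableActivation:
    """Hedged sketch: an activation function realized as a small
    one-hidden-layer sub-network applied elementwise.
    Hidden size and ReLU choice are assumptions, not the paper's spec."""

    def __init__(self, hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        # Only 3 * hidden parameters per activation, shared across inputs.
        self.w1 = rng.normal(size=hidden)  # input weights of the sub-network
        self.b1 = rng.normal(size=hidden)  # hidden biases
        self.w2 = rng.normal(size=hidden)  # output weights

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        # Feed each scalar pre-activation through the hidden units.
        h = np.maximum(0.0, np.outer(x.ravel(), self.w1) + self.b1)
        # Linear read-out gives the learned activation value.
        return (h @ self.w2).reshape(x.shape)

act = TrainableActivation()
y = act(np.array([-1.0, 0.0, 2.0]))
print(y.shape)  # (3,)
```

Because the sub-network's parameters are ordinary weights, they can be trained with the same gradient-based procedure as the rest of the network, which matches the abstract's claim that no separate learning technique is needed.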

