Article

A simple and efficient architecture for trainable activation functions

Journal

NEUROCOMPUTING
Volume 370, Pages 1-15

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2019.08.065

Keywords

Neural networks; Machine learning; Activation functions; Trainable activation functions

Funding

  1. Italian national project Perception, Performativity and Cognitive Sciences - PRIN2015 - MIUR (Ministero dell'Istruzione, dell'Università e della Ricerca) [2015TM24JS_009]


Automatically learning the best activation function for the task is an active topic in neural network research. At the moment, despite promising results, it is still challenging to determine a method for learning an activation function that is, at the same time, theoretically simple and easy to implement. Moreover, most of the methods proposed so far introduce new parameters or adopt different learning techniques. In this work, we propose a simple method to obtain a trained activation function which adds to the neural network local sub-networks with a small number of neurons. Experiments show that this approach could lead to better results than using a pre-defined activation function, without introducing the need to learn a large number of additional parameters. (C) 2019 Elsevier B.V. All rights reserved.
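The abstract does not give the exact form of the local sub-networks, but the idea can be sketched as follows: each activation is replaced by a tiny one-hidden-layer network whose few weights are trained along with the rest of the model. The function and parameter names below are hypothetical, and the ReLU hidden layer and identity-preserving initialization are assumptions for illustration.

```python
def subnet_activation(x, w1, b1, w2, b2):
    """Sketch of a trainable activation modeled as a small sub-network:
    f(x) = b2 + sum_k w2[k] * relu(w1[k] * x + b1[k]).
    The lists w1, b1, w2 (and the scalar b2) are the few extra
    parameters learned per activation."""
    return b2 + sum(w2k * max(0.0, w1k * x + b1k)
                    for w1k, b1k, w2k in zip(w1, b1, w2))

# Hypothetical initialization that starts the sub-network as the identity:
# relu(x) - relu(-x) == x, so training begins from a safe linear activation
# and gradient descent can then reshape it for the task.
w1, b1, w2, b2 = [1.0, -1.0], [0.0, 0.0], [1.0, -1.0], 0.0
```

With two hidden units per activation, only five scalars are added per neuron, consistent with the abstract's claim that the method avoids learning a large number of additional parameters.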

