Article

A novel type of activation function in artificial neural networks: Trained activation function

Journal

NEURAL NETWORKS
Volume 99, Pages 148-157

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2018.01.007

Keywords

Activation function; Trained activation function; Artificial neural network; Random weight artificial neural network

Abstract

Determining the optimal activation function in artificial neural networks is an important issue because it is directly linked to the success rates obtained. Unfortunately, there is no analytical way to determine it; the optimal activation function is generally found by trial and error or tuning. This paper presents a simpler and more effective approach to determining the optimal activation function. In this approach, which can be called a trained activation function, an activation function is trained for each particular neuron by linear regression. This training is performed on a training dataset consisting of the sums of the inputs of each hidden-layer neuron and the desired outputs. In this way, a different activation function is generated for each neuron in the hidden layer. The approach was employed in a random weight artificial neural network (RWN) and validated on 50 benchmark datasets. The success rates achieved by the RWN using trained activation functions were higher than those obtained by the RWN using traditional activation functions. The results show that the proposed approach is a successful, simple and effective way to determine the optimal activation function, instead of trial and error or tuning, in both randomized single-layer and multilayer ANNs. (C) 2018 Elsevier Ltd. All rights reserved.
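The abstract only outlines the method, so the following is a minimal, hypothetical sketch of how a per-neuron trained activation function could be combined with a random weight network. The random input weights, the simple least-squares linear fit from each neuron's input sums to the target, the single-output setting, and the least-squares output layer are illustrative assumptions based on the abstract, not the paper's exact procedure; in this reading, only the per-neuron regression coefficients and the output weights are trained, while the input weights stay random.

```python
import numpy as np

rng = np.random.default_rng(0)


def fit_trained_activation_rwn(X, y, n_hidden=20):
    """Sketch (assumptions, not the paper's exact method) of a 'trained
    activation function' random weight network:
    - input weights/biases are random and fixed (random weight ANN),
    - each hidden neuron's activation is a linear regression fitted from
      that neuron's input sums to the desired outputs,
    - output weights are then solved by ordinary least squares.
    """
    n_features = X.shape[1]

    # Random, untrained input weights and biases.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    S = X @ W + b  # input sums, one column per hidden neuron

    # Train one activation per neuron: linear fit from its sums to y.
    acts = [np.polyfit(S[:, j], y, deg=1) for j in range(n_hidden)]

    # Hidden-layer outputs produced by the trained activations.
    H = np.column_stack([np.polyval(a, S[:, j]) for j, a in enumerate(acts)])

    # Output weights by least squares.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, acts, beta


def predict(X, W, b, acts, beta):
    S = X @ W + b
    H = np.column_stack([np.polyval(a, S[:, j]) for j, a in enumerate(acts)])
    return H @ beta


# Toy usage on synthetic regression data.
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
params = fit_trained_activation_rwn(X, y, n_hidden=20)
print("train MSE:", np.mean((y - predict(X, *params)) ** 2))
```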
