Article

Hyperparameter optimization of deep neural network using univariate dynamic encoding algorithm for searches

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 178, Pages 74-83

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2019.04.019

Keywords

Hyperparameter optimization; Gradient-free optimization; Deep neural network; Convolution neural network; Autoencoder


This paper proposes a method for tuning the hyperparameters of a deep neural network using a univariate dynamic encoding algorithm for searches. Optimizing the hyperparameters of such a network is difficult because it has several parameters to configure; furthermore, training such a network is slow. The proposed method was tested on two neural network models, an autoencoder and a convolution neural network, using the Modified National Institute of Standards and Technology (MNIST) dataset. To optimize hyperparameters with the proposed method, the cost functions were chosen as the average of the difference between the decoded value and the original image for the autoencoder, and the inverse of the evaluation accuracy for the convolution neural network. The hyperparameters were optimized by the proposed method with fast convergence and few computational resources, and the results were compared with those of other optimization algorithms (namely, simulated annealing, the genetic algorithm, and the particle swarm algorithm) to demonstrate the effectiveness of the proposed methodology. (C) 2019 Elsevier B.V. All rights reserved.
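The two cost functions described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and "average of the difference" is read here as the mean absolute pixel difference between the decoded output and the original image.

```python
import numpy as np

def autoencoder_cost(decoded, original):
    """Autoencoder cost: mean absolute difference between the decoded
    output and the original image (one plausible reading of the
    abstract's 'average of the difference')."""
    return float(np.mean(np.abs(decoded - original)))

def cnn_cost(accuracy):
    """CNN cost: inverse of the evaluation accuracy, so that a
    minimizer of this cost maximizes accuracy."""
    return 1.0 / accuracy
```

A gradient-free optimizer such as uDEAS would then minimize these scalar costs over the hyperparameter search space, since neither cost is differentiable with respect to the hyperparameters themselves.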
