Proceedings Paper

An Experimental Study on Hyper-parameter Optimization for Stacked Auto-Encoders


Deep learning algorithms have shown their superiority, especially in addressing challenging machine learning tasks. However, deep learning algorithms reach their best performance only when their hyper-parameters have been successfully optimized, and the hyper-parameter optimization problem is non-convex and non-differentiable, so traditional optimization algorithms cannot address it well. Evolutionary algorithms are a class of meta-heuristic search algorithms, preferred for optimizing real-world problems largely because they impose no mathematical requirements on the problems to be optimized. Although most researchers in the deep learning community are aware of the effectiveness of evolutionary algorithms in optimizing the hyper-parameters of deep learning algorithms, many still believe that the grid search method is more effective when the number of hyper-parameters is small. To clarify this, we design a hyper-parameter optimization method based on particle swarm optimization, a widely used evolutionary algorithm, and perform 192 experimental comparisons on stacked auto-encoders, a class of deep learning algorithms with a relatively small number of hyper-parameters. We investigate the classification accuracy and computational complexity of the proposed method and compare them with those of the grid search method on eight widely used image classification benchmark datasets. The experimental results show that the proposed algorithm achieves comparable classification accuracy while saving 10x-100x in computational cost compared with the grid search method.
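To make the contrast with grid search concrete, below is a minimal Python sketch of a particle swarm optimization loop over two hypothetical stacked auto-encoder hyper-parameters (learning rate and hidden-layer width). This is not the paper's implementation: the function name evaluate_sae, the two-parameter search space, and the swarm settings are illustrative assumptions, and the objective is a toy surrogate so the sketch runs stand-alone; in practice it would train the stacked auto-encoder and return validation error.

import random

# Toy surrogate standing in for "train a stacked auto-encoder with these
# hyper-parameters and return validation error". The name evaluate_sae and
# the two-parameter space are illustrative assumptions, not the paper's setup.
def evaluate_sae(params):
    lr, hidden = params
    return (lr - 0.01) ** 2 + ((hidden - 256.0) / 512.0) ** 2

BOUNDS = [(1e-4, 1e-1), (32.0, 512.0)]  # (learning rate, hidden units)
N_PARTICLES, N_ITERS = 10, 20
W, C1, C2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients

# Initialise positions uniformly within bounds; velocities start at zero.
pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(N_PARTICLES)]
vel = [[0.0] * len(BOUNDS) for _ in range(N_PARTICLES)]
pbest = [p[:] for p in pos]
pbest_err = [evaluate_sae(p) for p in pos]
gbest_err, gbest = min(zip(pbest_err, pbest), key=lambda t: t[0])
gbest = gbest[:]

for _ in range(N_ITERS):
    for i in range(N_PARTICLES):
        for d, (lo, hi) in enumerate(BOUNDS):
            r1, r2 = random.random(), random.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            # Move the particle and clamp it back inside the search box.
            pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
        err = evaluate_sae(pos[i])
        if err < pbest_err[i]:
            pbest[i], pbest_err[i] = pos[i][:], err
        if err < gbest_err:
            gbest_err, gbest = err, pos[i][:]

print("best hyper-parameters found:", gbest, "error:", gbest_err)

Under this sketch the swarm makes N_PARTICLES x (N_ITERS + 1) = 210 objective evaluations, a budget that is fixed in advance, whereas an exhaustive grid over d hyper-parameters with k candidate values each costs k^d evaluations; this gap in evaluation counts is consistent with the 10x-100x savings the abstract reports.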
