Journal
IEEE ACCESS
Volume 6, Issue -, Pages 15844-15869
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2018.2810849
Keywords
Deep CNN; image classification; overfitting; generalization; anomaly detection; implicit regularization
Funding
- Shandong Provincial Natural Science Foundation [ZR2014FM030]
- National Natural Science Foundation of China [61571275]
Abstract
Optimization of deep networks is no longer a pressing problem, thanks to the variety of available gradient descent methods and improvements in network structure, such as activation functions and connectivity styles. Practical applicability therefore depends on generalization ability, which determines whether a network is effective. Regularization is an efficient way to improve the generalization ability of a deep CNN, because it makes it possible to train more complex models while keeping overfitting low. In this paper, we propose to optimize the feature boundary of a deep CNN through a two-stage training method (a pre-training process and an implicit regularization training process) to reduce overfitting. In the pre-training stage, we train a network model to extract image representations for anomaly detection. In the implicit regularization training stage, we re-train the network based on the anomaly detection results, regularizing the feature boundary and making it converge to the proper position. Experimental results on five image classification benchmarks show that the two-stage training method achieves state-of-the-art performance, and that it obtains even better results in conjunction with more sophisticated anomaly detection algorithms. Finally, we use a variety of strategies to explore and analyze how implicit regularization plays a role in the two-stage training process, and we explain how implicit regularization can be interpreted as data augmentation and model ensembling.
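The abstract's two-stage procedure (pre-train, score training samples with an anomaly detector, then re-train guided by those scores) can be sketched in miniature. The code below is only an illustration of that control flow, not the paper's method: it stands in a toy nearest-centroid "model" for the deep CNN and plain distance-to-centroid for the anomaly detector, and all function names are hypothetical.

```python
# Toy sketch of a two-stage training loop in the spirit of the abstract.
# Assumption: a nearest-centroid classifier replaces the deep CNN, and
# distance to the class centroid replaces the anomaly detection algorithm.

def train_centroids(samples, labels):
    """Stage 1 ('pre-training'): fit one centroid per class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(x)), x)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def anomaly_scores(samples, labels, centroids):
    """Score each sample by distance to its own class centroid."""
    return [sum((a - b) ** 2 for a, b in zip(x, centroids[y])) ** 0.5
            for x, y in zip(samples, labels)]

def two_stage_train(samples, labels, keep_ratio=0.9):
    """Stage 2: drop the most anomalous samples, then re-train,
    pulling the decision boundary away from outliers."""
    centroids = train_centroids(samples, labels)
    scores = anomaly_scores(samples, labels, centroids)
    order = sorted(range(len(samples)), key=lambda i: scores[i])
    kept = order[: max(1, int(keep_ratio * len(samples)))]
    return train_centroids([samples[i] for i in kept],
                           [labels[i] for i in kept])

# Usage: class 0 contains an outlier at (10, 10); after re-training,
# the class-0 centroid is computed from the three inlier points only.
samples = [[0, 0], [0, 1], [1, 0], [10, 10], [5, 5], [5, 6], [6, 5]]
labels = [0, 0, 0, 0, 1, 1, 1]
centroids = two_stage_train(samples, labels, keep_ratio=0.9)
```

The re-trained class-0 centroid lands near (1/3, 1/3) instead of being dragged toward (10, 10), which mirrors the paper's claim that re-training on anomaly-detection results lets the feature boundary "converge in the proper position".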