Article

A Novel Two-stage Learning Pipeline for Deep Neural Networks

Journal

NEURAL PROCESSING LETTERS
Volume 46, Issue 1, Pages 159-169

Publisher

SPRINGER
DOI: 10.1007/s11063-017-9578-6

Keywords

DNN; Two-stage structure; Big data

Funding

  1. Strategic Priority Research Program of the Chinese Academy of Sciences: Real-time Processing System of Massive Network Traffic Based on Sea-Cloud Collaboration [XDA060112030]

Abstract

In this work, a training method for Deep Neural Networks (DNNs) based on a two-stage structure is proposed. Local DNN models are trained on all local machines and uploaded to the center together with a portion of the training data. These local models are integrated into a new DNN model (the combination DNN). With another DNN model (the optimization DNN) connected to it, the combination DNN forms a global DNN model in the center. The global model achieves higher accuracy than the local DNN models while requiring less data to be uploaded, so upload bandwidth is saved and accuracy is maintained. Experiments are conducted on the MNIST, CIFAR-10, and LFW datasets. The results show that, with less training data uploaded, the global model achieves higher accuracy than the local models. The method is aimed in particular at big-data settings.
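The abstract describes the pipeline only at a high level, so the sketch below is one possible reading of it rather than the authors' implementation. It assumes a PyTorch setting in which each local machine trains a small classifier (here called LocalNet), the center concatenates the outputs of the frozen local models to form the combination DNN, and an optimization DNN is then trained on the small uploaded subset to produce the global model. All class names, layer sizes, and the synthetic data are illustrative assumptions.

```python
# Hedged sketch of the two-stage idea, not the paper's actual code.
import torch
import torch.nn as nn

class LocalNet(nn.Module):
    """A small DNN trained independently on one local machine (assumed)."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

class GlobalModel(nn.Module):
    """Combination DNN: frozen local models whose outputs are concatenated,
    followed by an optimization DNN trained in the center on the partial
    uploaded data."""
    def __init__(self, local_models, n_classes=10):
        super().__init__()
        self.locals = nn.ModuleList(local_models)
        for m in self.locals:                      # local models stay fixed
            for p in m.parameters():
                p.requires_grad = False
        self.optimization_dnn = nn.Sequential(
            nn.Linear(len(local_models) * n_classes, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        combined = torch.cat([m(x) for m in self.locals], dim=1)
        return self.optimization_dnn(combined)

# Usage sketch: train only the optimization DNN on the uploaded subset.
local_models = [LocalNet() for _ in range(3)]      # assume already trained locally
global_model = GlobalModel(local_models)
optimizer = torch.optim.Adam(global_model.optimization_dnn.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

uploaded_x = torch.randn(32, 784)                  # placeholder for the partial data
uploaded_y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = criterion(global_model(uploaded_x), uploaded_y)
loss.backward()
optimizer.step()
```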
