Journal
NEURAL PROCESSING LETTERS
Volume 46, Issue 1, Pages 159-169
Publisher
SPRINGER
DOI: 10.1007/s11063-017-9578-6
Keywords
DNN; Two-stage structure; Big data
Funding
- Strategic Priority Research Program - Real-time Processing System of Massive Network Traffic Based on Sea-cloud Collaboration of the Chinese Academy of Sciences [XDA060112030]
In this work, a training method for Deep Neural Networks (DNNs) based on a two-stage structure is proposed. Local DNN models are trained on all local machines and uploaded to the center together with a portion of the training data. These local models are integrated into a new DNN model (the combination DNN). With another DNN model (the optimization DNN) connected to it, the combination DNN forms a global DNN model at the center. This yields higher accuracy than the local DNN models while requiring a smaller amount of uploaded data, so upload bandwidth is saved and accuracy is maintained. Experiments are conducted on the MNIST, CIFAR-10 and LFW datasets. The results show that, with less training data uploaded, the global model achieves higher accuracy than the local models. The method is specifically aimed at big-data conditions.
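The two-stage structure described in the abstract can be sketched in a toy form. This is only an illustrative NumPy sketch, not the paper's implementation: each "local DNN" is reduced to a single softmax layer, the "combination DNN" is modeled as a concatenation of the frozen local models' outputs, and the "optimization DNN" is one trainable layer fitted on the small uploaded subset. All dataset sizes, model shapes, and the 10% upload fraction are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, n_classes, lr=0.5, epochs=200):
    """Train one softmax layer by gradient descent (stand-in for a DNN)."""
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        P = softmax(X @ W + b)
        G = (P - Y) / len(X)          # cross-entropy gradient
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

# Synthetic 2-class data split across 3 hypothetical "local machines".
n, d, k = 600, 4, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
shards = np.array_split(np.arange(n), 3)

# Stage 1: each machine trains a local model on its own shard.
local_models = [train_softmax(X[idx], y[idx], k) for idx in shards]

# Each machine uploads its frozen model plus a small slice of its data
# (10% here, an assumed fraction).
upload = np.concatenate([idx[: len(idx) // 10] for idx in shards])
Xu, yu = X[upload], y[upload]

# Stage 2: the "combination DNN" concatenates frozen local outputs ...
def combine(Xb):
    return np.hstack([softmax(Xb @ W + b) for W, b in local_models])

# ... and the "optimization DNN" is trained only on the uploaded subset.
Wg, bg = train_softmax(combine(Xu), yu, k)

def global_predict(Xb):
    return softmax(combine(Xb) @ Wg + bg).argmax(axis=1)

acc = (global_predict(X) == y).mean()
print(f"global accuracy: {acc:.2f}")
```

The point of the sketch is the data-flow, not the models: only the small uploaded subset and the compact local models travel to the center, while the optimization stage recovers accuracy on top of the combined local predictions.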