Journal
2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)
Pages 606-616
Publisher
IEEE Computer Soc
DOI: 10.1109/ICDCS47774.2020.00049
Funding
- National Key R&D Program of China [2017YFB1001801]
- NSFC [61872175]
- Natural Science Foundation of Jiangsu Province [BK20181252]
- Fundamental Research Funds for the Central Universities [14380060]
- Nanjing University Innovation and Creative Program for PhD Candidate [CXCY19-25]
- Collaborative Innovation Center of Novel Software Technology and Industrialization
Abstract
Federated learning achieves privacy-preserving training of models on mobile devices by iteratively aggregating model updates, rather than raw training data, at the server. Since excessive training iterations and model transfers incur heavy usage of computation and communication resources, selecting appropriate devices and excluding unnecessary model updates can reduce resource usage. We formulate an online time-varying non-linear integer program that minimizes the cumulative resource usage over time while achieving the desired long-term convergence of the model being trained. We design an online learning algorithm that makes fractional control decisions based on both previous system dynamics and previous training results, and an online randomized rounding algorithm that converts the fractional decisions into integers without violating any constraints. We rigorously prove that our online approach incurs only sub-linear dynamic regret for the optimality loss and sub-linear dynamic fit for the long-term convergence violation. We conduct extensive trace-driven evaluations and confirm the empirical superiority of our approach over alternative algorithms: it reduces resource usage by up to 27% while sacrificing only 4% in accuracy.
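The rounding step described above can be illustrated with a standard dependent-rounding technique. The sketch below is not the paper's exact algorithm; it shows one common way (systematic sampling with a single uniform draw) to turn a fractional device-selection vector `x` whose entries lie in [0, 1] and sum to an integer K into a 0/1 selection that picks exactly K devices, with each device i selected with probability `x[i]` — so a cardinality (budget) constraint is never violated. The function name `round_selection` is illustrative.

```python
import math
import random

def round_selection(x, rng=random):
    """Round fractional selections x (each x[i] in [0, 1], summing to an
    integer K) to a 0/1 vector with exactly K ones, where device i is
    selected with probability x[i].

    Illustrative systematic-sampling sketch, not the paper's algorithm:
    a single uniform offset u is drawn, and device i is selected whenever
    the cumulative mass crosses a point u + m for some integer m. Since
    each x[i] <= 1, at most one crossing falls in device i's interval.
    """
    u = rng.random()          # one shared uniform draw in [0, 1)
    picked = []
    cum = 0.0
    for xi in x:
        prev = cum
        cum += xi
        # selected iff some u + m (m integer) lies in (prev, cum]
        picked.append(1 if math.floor(cum - u) > math.floor(prev - u) else 0)
    return picked
```

For example, `round_selection([0.5, 0.5, 1.0])` always returns a vector with exactly two ones, the third device (fractional value 1.0) is always selected, and each of the first two is selected half the time in expectation. A single shared draw is what makes the outcome satisfy the budget exactly, rather than only in expectation as independent per-device coin flips would.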