Proceedings Paper

Resource-Efficient and Convergence-Preserving Online Participant Selection in Federated Learning

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/ICDCS47774.2020.00049

Funding

  1. National Key R&D Program of China [2017YFB1001801]
  2. NSFC [61872175]
  3. Natural Science Foundation of Jiangsu Province [BK20181252]
  4. Fundamental Research Funds for the Central Universities [14380060]
  5. Nanjing University Innovation and Creative Program for PhD Candidate [CXCY19-25]
  6. Collaborative Innovation Center of Novel Software Technology and Industrialization

Abstract

Federated learning achieves privacy-preserving model training on mobile devices by iteratively aggregating model updates, rather than raw training data, at the server. Because excessive training iterations and model transfers consume substantial computation and communication resources, selecting appropriate devices and excluding unnecessary model updates can reduce resource usage. We formulate an online time-varying non-linear integer program that minimizes the cumulative resource usage over time while achieving the desired long-term convergence of the model being trained. We design an online learning algorithm that makes fractional control decisions based on both previous system dynamics and previous training results, and an online randomized rounding algorithm that converts the fractional decisions into integers without violating any constraints. We rigorously prove that our online approach incurs only sub-linear dynamic regret for the optimality loss and sub-linear dynamic fit for the long-term convergence violation. Extensive trace-driven evaluations confirm the empirical superiority of our approach over alternative algorithms: up to a 27% reduction in resource usage at the cost of only a 4% loss in accuracy.
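The rounding step in the abstract can be illustrated with a minimal sketch of plain independent randomized rounding: each device's fractional selection value becomes its probability of being chosen, so the expected number of selected devices equals the fractional total. All names here are hypothetical, and this is a generic textbook technique, not the paper's actual scheme, which is additionally designed so that no resource or convergence constraint is ever violated.

```python
import random

def randomized_round(fractional, rng=None):
    """Independently round each fractional selection x_i in [0, 1] to {0, 1}.

    Device i is selected with probability x_i, so the expected number of
    selected devices equals sum(fractional).  Illustrative sketch only:
    the paper's rounding algorithm is more careful and guarantees that
    the integer decisions violate none of the problem's constraints.
    """
    rng = rng or random.Random()
    return [1 if rng.random() < x else 0 for x in fractional]

# Example: four devices with fractional selection weights from the
# (hypothetical) online learning step.
selection = randomized_round([0.9, 0.2, 0.7, 0.1], random.Random(42))
```

Because `random.random()` returns a value in [0, 1), a weight of exactly 1.0 always selects the device and 0.0 never does, so deterministic decisions pass through unchanged.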

