Article

UVeQFed: Universal Vector Quantization for Federated Learning

Journal

IEEE Transactions on Signal Processing
Volume 69, Pages 500-514

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSP.2020.3046971

Keywords

Servers; Training; Data models; Vector quantization; Numerical models; Convergence; Collaborative work; Federated learning; quantization

Funding

  1. Benoziyo Endowment Fund for the Advancement of Science
  2. Estate of Olga Klein-Astrachan
  3. European Union's Horizon 2020 research and innovation program [646804-ERC-COG-BNYQ]
  4. Israel Science Foundation [0100101]
  5. U.S. National Science Foundation [CCF-0939370, CCF-1908308]
  6. Key Area R&D Program of Guangdong Province [2018B030338001]


Traditional deep learning models are trained at a centralized server using data samples collected from users. Such data samples often include private information, which the users may not be willing to share. Federated learning (FL) is an emerging approach to training such learning models without requiring the users to share their data. FL consists of an iterative procedure in which, at each iteration, the users train a copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model. A major challenge that arises in this method is the need for each user to repeatedly transmit its learned model over the throughput-limited uplink channel. In this work, we tackle this challenge using tools from quantization theory. In particular, we identify the unique characteristics associated with conveying trained models over rate-constrained channels, and propose a suitable quantization scheme for such settings, referred to as universal vector quantization for FL (UVeQFed). We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only minimal distortion. We then theoretically analyze the distortion, showing that it vanishes as the number of users grows. We also characterize how models trained with conventional federated averaging combined with UVeQFed converge to the model that minimizes the loss function. Our numerical results demonstrate the gains of UVeQFed over previously proposed methods in terms of both the distortion induced by quantization and the accuracy of the resulting aggregated model.
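
To make the compression step described in the abstract concrete, the following minimal Python sketch runs one federated-averaging round in which each user applies subtractive dithered quantization to its model update before the rate-constrained uplink. This is an illustration only, not the authors' implementation: UVeQFed employs universal vector (lattice) quantization with lossless compression, whereas the sketch uses a scalar uniform quantizer, and the function names, step size, and per-user seeded dither are assumptions made for the example.

# Minimal sketch (assumptions noted above): federated averaging where each user
# compresses its model update with subtractive dithered uniform quantization
# before transmission, loosely mirroring the idea behind UVeQFed.
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(update, step, dither):
    """User side: add the shared dither, then map to the nearest quantization point."""
    return step * np.round((update + dither) / step)

def dithered_dequantize(q, dither):
    """Server side: subtract the same dither (subtractive dithering)."""
    return q - dither

def federated_round(global_model, local_updates, step=0.05):
    """One FL round: users quantize their updates, the server averages the reconstructions."""
    recovered = []
    for u, upd in enumerate(local_updates):
        # The dither is pseudo-random and seeded per user, so the server can
        # regenerate it without extra communication (an assumption of this sketch).
        dither = np.random.default_rng(u).uniform(-step / 2, step / 2, size=upd.shape)
        q = dithered_quantize(upd, step, dither)           # sent over the uplink
        recovered.append(dithered_dequantize(q, dither))   # reconstructed at the server
    return global_model + np.mean(recovered, axis=0)       # federated averaging

# Toy usage: 20 users, a 1000-parameter model.
model = np.zeros(1000)
updates = [rng.normal(scale=0.1, size=1000) for _ in range(20)]
model = federated_round(model, updates)
print("aggregation error:", np.linalg.norm(model - np.mean(updates, axis=0)))

Because the quantization errors of the different users are zero-mean and mutually independent under subtractive dithering, averaging the recovered updates at the server reduces the aggregate distortion as the number of users grows, which is the qualitative effect the paper analyzes and quantifies for its lattice-based scheme.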

Authors

Nir Shlezinger, Mingzhe Chen, Yonina C. Eldar, H. Vincent Poor, and Shuguang Cui
