4.7 Article

Communication-Efficient Federated Learning via Quantized Compressed Sensing

Journal

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
Volume 22, Issue 2, Pages 1087-1100

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TWC.2022.3201207

Keywords

Federated learning; quantized compressed sensing; distributed stochastic gradient descent; gradient compression; gradient reconstruction

Abstract

In this paper, we present a communication-efficient federated learning framework inspired by quantized compressed sensing. The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server (PS). Our strategy for gradient compression is to sequentially perform block sparsification, dimension reduction, and quantization. By leveraging both dimension reduction and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression. For accurate aggregation of local gradients from the compressed signals, we put forth an approximate minimum mean square error (MMSE) approach for gradient reconstruction using the expectation-maximization generalized-approximate-message-passing (EM-GAMP) algorithm. Assuming a Bernoulli Gaussian-mixture prior, this algorithm iteratively updates the posterior mean and variance of the local gradients from the compressed signals. We also present a low-complexity approach for gradient reconstruction, in which we use the Bussgang theorem to aggregate local gradients from the compressed signals and then compute an approximate MMSE estimate of the aggregated gradient using the EM-GAMP algorithm. We also provide a convergence rate analysis of the presented framework. Using the MNIST dataset, we demonstrate that the presented framework achieves performance almost identical to the no-compression case while significantly reducing the communication overhead of federated learning.
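To make the compression pipeline described in the abstract concrete, the following is a minimal NumPy sketch of a QCS-style compressor (block sparsification, random projection, uniform scalar quantization) together with a simplified server-side reconstruction. The function names, block size, sparsity level, measurement dimension, and quantizer resolution are illustrative assumptions rather than the paper's settings, and the least-squares recovery step is only a stand-in for the paper's EM-GAMP MMSE estimator.

```python
import numpy as np

def compress_gradient(grad, block_size=64, keep_ratio=0.1, reduced_dim=16,
                      num_levels=4, seed=0):
    """Illustrative QCS-style compressor: block sparsification, random
    projection, then uniform scalar quantization. Parameter names and
    defaults are assumptions, not the paper's exact configuration."""
    rng = np.random.default_rng(seed)
    g = np.asarray(grad, dtype=np.float64)
    g = np.concatenate([g, np.zeros((-len(g)) % block_size)])  # pad to full blocks
    blocks = g.reshape(-1, block_size)

    # 1) Block sparsification: keep only the largest-magnitude entries per block.
    k = max(1, int(keep_ratio * block_size))
    idx = np.argsort(np.abs(blocks), axis=1)[:, -k:]
    sparse = np.zeros_like(blocks)
    np.put_along_axis(sparse, idx, np.take_along_axis(blocks, idx, axis=1), axis=1)

    # 2) Dimension reduction with a random Gaussian sensing matrix that the
    #    server can regenerate from the shared seed.
    A = rng.standard_normal((reduced_dim, block_size)) / np.sqrt(reduced_dim)
    y = sparse @ A.T                                 # shape: (num_blocks, reduced_dim)

    # 3) Uniform mid-rise scalar quantization of the measurements.
    scale = np.max(np.abs(y)) + 1e-12
    step = 2.0 * scale / num_levels
    q = np.clip(np.floor(y / step), -num_levels // 2, num_levels // 2 - 1).astype(np.int8)
    return q, step, seed

def reconstruct_gradient(q, step, seed, block_size=64, reduced_dim=16, orig_len=None):
    """Simplified server-side stand-in for the paper's EM-GAMP MMSE
    reconstruction: dequantize the measurements, then invert the random
    projection with a minimum-norm least-squares (pseudo-inverse) estimate."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((reduced_dim, block_size)) / np.sqrt(reduced_dim)
    y_hat = (q + 0.5) * step                         # dequantized measurements (cell centers)
    blocks_hat = y_hat @ np.linalg.pinv(A).T         # per-block least-squares estimates
    g_hat = blocks_hat.reshape(-1)
    return g_hat if orig_len is None else g_hat[:orig_len]
```

Sharing only the random seed lets the server regenerate the sensing matrix, so each device transmits just the low-resolution quantized measurements; the paper's EM-GAMP reconstruction would further exploit the block sparsity through a Bernoulli Gaussian-mixture prior, which the pseudo-inverse stand-in above does not.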

