Article

Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach

Journal

IEEE TRANSACTIONS ON COMMUNICATIONS
Volume 71, Issue 1, Pages 173-186

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCOMM.2022.3224977

Keywords

Costs; Distance learning; Computer aided instruction; Rate-distortion; Distortion; Convergence; Computational modeling; Distributed learning; model aggregation; rate-distortion theory; vector Gaussian CEO problem

Abstract

One of the main focuses in distributed learning is improving the communication efficiency of model aggregation. Various compression methods have been proposed, but the minimum communication cost achievable at a given distortion of the gradient estimator is still unknown. This paper studies the fundamental limit of communication cost in distributed learning from a rate-distortion perspective and provides insight into the relationship between communication cost and gradient compression.

One of the main focuses in distributed learning is communication efficiency, since the model aggregated at each round of training can consist of millions to billions of parameters. Several model compression methods, such as gradient quantization and sparsification, have been proposed to improve the communication efficiency of model aggregation. However, the information-theoretic minimum communication cost for a given distortion of the gradient estimator is still unknown. In this paper, we study the fundamental limit of the communication cost of model aggregation in distributed learning from a rate-distortion perspective. By formulating model aggregation as a vector Gaussian CEO problem, we derive a rate-region bound and the sum-rate-distortion function for the model aggregation problem, which reveal the minimum communication rate required to meet a given upper bound on gradient distortion. We also analyze the per-iteration and total communication costs based on the sum-rate-distortion function together with the gradient statistics of real-world datasets. We find that the communication gain obtained by exploiting the correlation between worker nodes is significant for SignSGD, and that tolerating a high distortion of the gradient estimator can yield a low total communication cost in gradient compression.
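
To give a feel for how a sum-rate-distortion function translates into per-iteration and total communication costs, the following minimal Python sketch evaluates the closed-form sum-rate-distortion function of the classical symmetric scalar quadratic Gaussian CEO problem. This is the textbook scalar symmetric formula, not the paper's vector Gaussian result, and every parameter value below (source variance sigma_x2, observation-noise variance sigma_n2, number of workers L, model dimension, distortion target D, number of rounds) is a hypothetical placeholder rather than a gradient statistic from the paper.

from math import log2

def ceo_sum_rate(D, sigma_x2, sigma_n2, L):
    # Sum-rate-distortion function (bits per source sample) of the classical
    # symmetric scalar quadratic Gaussian CEO problem: a source X ~ N(0, sigma_x2)
    # observed by L workers through independent Gaussian noise of variance
    # sigma_n2, reconstructed at the server with mean-squared distortion <= D.
    # Valid for sigma_x2*sigma_n2/(L*sigma_x2 + sigma_n2) <= D <= sigma_x2.
    D_min = sigma_x2 * sigma_n2 / (L * sigma_x2 + sigma_n2)
    if not (D_min <= D <= sigma_x2):
        raise ValueError(f"D must lie in [{D_min:.4g}, {sigma_x2:.4g}]")
    rate_central = 0.5 * log2(sigma_x2 / D)  # rate a centralized encoder would already need
    rate_distrib = 0.5 * L * log2(1.0 / (1.0 - (sigma_n2 / L) * (1.0 / D - 1.0 / sigma_x2)))  # extra cost of distributed encoding
    return rate_central + rate_distrib

def communication_cost(D, sigma_x2, sigma_n2, L, model_dim, num_rounds):
    # Per-round and total uplink cost in bits, treating each of the model_dim
    # coordinates as an i.i.d. instance of the scalar CEO problem above.
    per_round = model_dim * ceo_sum_rate(D, sigma_x2, sigma_n2, L)
    return per_round, per_round * num_rounds

# Hypothetical gradient statistics, for illustration only.
per_round, total = communication_cost(D=0.1, sigma_x2=1.0, sigma_n2=0.5,
                                       L=8, model_dim=1_000_000, num_rounds=100)
print(f"per round: {per_round / 8e6:.2f} MB, total: {total / 8e9:.3f} GB")

The first term of the sum rate is the rate-distortion function a centralized encoder observing all workers' signals would need; the second term is the penalty paid for encoding at the workers separately, which is the quantity that correlation-aware schemes aim to reduce.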

