Review

Coding for Large-Scale Distributed Machine Learning

Journal

ENTROPY
Volume 24, Issue 9, Article 1284

Publisher

MDPI
DOI: 10.3390/e24091284

Keywords

error-control coding; gradient coding; random codes; ADMM

Funding

  1. Swedish Research Council (VR) [2021-04772]


This article provides a comprehensive and rigorous review of the principles and recent development of coding for large-scale distributed machine learning (DML), aiming to improve reliability and efficiency. Various coding schemes for different steps in DML are discussed, with potential directions for future works also provided.
This article aims to give a comprehensive and rigorous review of the principles and recent development of coding for large-scale distributed machine learning (DML). With increasing data volumes and the pervasive deployment of sensors and computing machines, machine learning has become more distributed, and the computing nodes and data volumes involved in learning tasks have grown significantly. Large-scale distributed learning systems face significant challenges in terms of delay, errors, and efficiency. To address these problems, various error-control and performance-boosting schemes have recently been proposed, such as the duplication of computing nodes. More recently, error-control coding has been investigated for DML to improve reliability and efficiency, offering benefits such as high efficiency and low complexity. Despite these benefits and the recent progress, however, a comprehensive survey of this topic is still lacking, especially for large-scale learning. This paper introduces the theories and algorithms of coding for DML. For primal-based DML schemes, we first discuss gradient coding with optimal code distance and then introduce random coding for gradient-based DML. For primal-dual-based DML, i.e., ADMM (alternating direction method of multipliers), we propose a separate coding method for the two steps of distributed optimization, and coding schemes for the different steps are discussed. Finally, a few potential directions for future work are given.
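To make the gradient-coding idea concrete, the following is a minimal sketch of the classic cyclic scheme of Tandon et al. ("Gradient Coding") for n = 3 workers tolerating s = 1 straggler; it is an illustration of the general technique, not the specific constructions reviewed in the paper. The toy dataset, model, and partition sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(6, 2)), rng.normal(size=6)   # toy dataset (assumption)
w = np.zeros(2)                                      # current model parameters
parts = np.split(np.arange(6), 3)                    # k = 3 data partitions

def partial_grad(idx):
    # Gradient of 0.5 * ||X w - y||^2 restricted to one data partition.
    return X[idx].T @ (X[idx] @ w - y[idx])

g = np.stack([partial_grad(idx) for idx in parts])   # (3, d) partial gradients
full = g.sum(axis=0)                                 # uncoded full gradient

# Encoding matrix B: worker i stores two partitions and sends B[i] @ g,
# a fixed linear combination (cyclic gradient code for n = 3, s = 1).
B = np.array([[0.5, 1.0,  0.0],
              [0.0, 1.0, -1.0],
              [0.5, 0.0,  1.0]])
coded = B @ g                                        # one row per worker

# The master decodes the full gradient from ANY n - s = 2 workers by
# finding coefficients a with a @ B_S = [1, 1, 1].
for survivors in [(0, 1), (0, 2), (1, 2)]:
    Bs = B[list(survivors)]
    a, *_ = np.linalg.lstsq(Bs.T, np.ones(3), rcond=None)
    decoded = a @ coded[list(survivors)]
    assert np.allclose(decoded, full)                # straggler tolerated
```

The key property is that every 2-row submatrix of B can linearly combine to the all-ones vector, so any single straggler can be ignored at the cost of each worker computing two partial gradients instead of one.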
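For the primal-dual side, the two alternating steps that coded ADMM schemes protect can be seen in plain consensus ADMM (in the form popularized by Boyd et al.): a local per-node minimization followed by a global averaging/dual update. The sketch below solves a distributed least-squares problem this way; the data sizes, penalty ρ, and iteration count are illustrative assumptions, and no coding is applied here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Three nodes, each holding a local least-squares block (illustrative data).
A = [rng.normal(size=(5, 2)) for _ in range(3)]
b = [rng.normal(size=5) for _ in range(3)]

rho = 1.0
z = np.zeros(2)                        # global consensus variable
x = [np.zeros(2) for _ in range(3)]    # local primal variables
u = [np.zeros(2) for _ in range(3)]    # scaled dual variables

for _ in range(300):
    # Step 1 (local): each node minimizes
    # 0.5*||A_i x - b_i||^2 + (rho/2)*||x - z + u_i||^2 in closed form.
    x = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(2),
                         Ai.T @ bi + rho * (z - ui))
         for Ai, bi, ui in zip(A, b, u)]
    # Step 2 (global): average to update z, then update the duals.
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
    u = [ui + xi - z for xi, ui in zip(x, u)]

# The consensus iterate matches the centralized solution of the stacked system.
w_star, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
assert np.allclose(z, w_star, atol=1e-4)
```

Because the two steps have different communication and computation patterns, it is natural (as the abstract notes) to design separate coding schemes for each.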

Authors


Reviews

Primary Rating

4.6
Not enough ratings

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -

Recommended

No Data Available