Article

Ternary Compression for Communication-Efficient Federated Learning

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2020.3041185

Keywords

Communication efficiency; deep learning; federated learning; ternary coding

Funding

  1. National Natural Science Foundation of China under Basic Science Center Program [61988101]
  2. National Natural Science Fund for Distinguished Young Scholars [61725301]
  3. International (Regional) Cooperation and Exchange [1720106008]
  4. National Natural Science Foundation of China [61590923]
  5. China Scholarship Council [201906745025]

Abstract

Federated learning is a privacy-preserving and secure approach to machine learning. This article proposes a federated trained ternary quantization (FTTQ) algorithm and a ternary federated averaging protocol (T-FedAvg) to reduce communication costs and even slightly improve performance on non-IID data.
Learning over massive data stored in different locations is essential in many real-world applications. However, sharing such data is challenging because of the growing demands for privacy and security that accompany the widespread use of smart mobile and Internet of Things (IoT) devices. Federated learning offers a potential solution for privacy-preserving and secure machine learning by jointly training a global model without uploading the data distributed across multiple devices to a central server. However, most existing work on federated learning adopts machine learning models with full-precision weights, and almost all of these models contain a large number of redundant parameters that do not need to be transmitted to the server, incurring excessive communication costs. To address this issue, we propose a federated trained ternary quantization (FTTQ) algorithm, which optimizes the quantized networks on the clients through a self-learning quantization factor. Theoretical proofs are given for the convergence of the quantization factors, the unbiasedness of FTTQ, and a reduced weight divergence. On the basis of FTTQ, we propose a ternary federated averaging protocol (T-FedAvg) to reduce both the upstream and downstream communication of federated learning systems. Empirical experiments training widely used deep learning models on publicly available data sets demonstrate that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data than the canonical federated learning algorithms.
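
To make the mechanism concrete, the sketch below illustrates the two ingredients the abstract describes: ternarizing client weights with a per-layer scaling factor, and averaging the decoded ternary updates on the server in FedAvg fashion. It is a minimal NumPy sketch, not the authors' implementation; the threshold rule (delta_ratio), the alpha initialization, and the function names are illustrative assumptions, and FTTQ in the paper learns the quantization factor during local training rather than fixing it.

    # Minimal sketch of ternary quantization plus FedAvg-style aggregation.
    # All names and the threshold rule are illustrative assumptions.
    import numpy as np

    def ternarize(w, alpha, delta_ratio=0.05):
        """Map full-precision weights w to {-alpha, 0, +alpha}."""
        delta = delta_ratio * np.max(np.abs(w))   # sparsity threshold
        mask = np.abs(w) > delta
        return alpha * np.sign(w) * mask

    def encode(w_ternary):
        """Upstream message: codes in {-1, 0, +1} plus one float per layer."""
        codes = np.sign(w_ternary).astype(np.int8)
        alpha = np.max(np.abs(w_ternary))         # recover the scaling factor
        return codes, alpha

    def tfedavg_round(codes_list, alphas, sizes):
        """Server step: decode each client's ternary weights and average
        them, weighted by local data size as in FedAvg."""
        total = sum(sizes)
        avg = np.zeros(codes_list[0].shape, dtype=np.float64)
        for codes, alpha, n in zip(codes_list, alphas, sizes):
            avg += (n / total) * alpha * codes
        return avg

    # Toy round with two clients holding different amounts of data.
    rng = np.random.default_rng(0)
    msgs = []
    for n in (100, 300):
        w = rng.normal(size=8)                    # stand-in for one layer
        alpha = np.mean(np.abs(w))                # simple init; FTTQ learns this
        codes, a = encode(ternarize(w, alpha))
        msgs.append((codes, a, n))

    codes_list, alphas, sizes = zip(*msgs)
    print(tfedavg_round(codes_list, alphas, sizes))

Because each weight is sent as one of three values (about 2 bits) plus a single scalar per layer, the upstream payload shrinks by roughly a factor of 16 relative to 32-bit floats, which is where the reported communication savings come from.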
