Article

Computation and Communication Efficient Adaptive Federated Optimization of Federated Learning for Internet of Things

Journal

ELECTRONICS
Volume 12, Issue 16, Pages -

Publisher

MDPI
DOI: 10.3390/electronics12163451

Keywords

federated learning; distributed machine learning; communication efficiency; privacy protection

Abstract

The proliferation of the Internet of Things (IoT) and the widespread use of devices with sensing, computing, and communication capabilities have motivated intelligent applications empowered by artificial intelligence. Classical artificial intelligence algorithms require centralized data collection and processing, which is challenging in realistic intelligent IoT applications because of growing data privacy concerns and distributed datasets. Federated Learning (FL) has emerged as a privacy-preserving distributed learning framework that enables IoT devices to train global models by sharing only model parameters. However, the inefficiency caused by frequent parameter transmissions significantly reduces FL performance. Existing acceleration algorithms fall into two main types, local update and parameter compression, which address the trade-offs between communication and computation and between communication and precision, respectively. Jointly considering these two trade-offs and adaptively balancing their impacts on convergence has remained unresolved. To solve this problem, this paper proposes a novel efficient adaptive federated optimization (FedEAFO) algorithm that improves the efficiency of FL by minimizing the learning error over two jointly considered variables, local update and parameter compression. FedEAFO enables FL to adaptively adjust these two variables and balance the trade-offs among computation, communication, and precision. Experimental results illustrate that, compared with state-of-the-art algorithms, FedEAFO achieves higher accuracies faster.
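The abstract describes the two control variables only at a high level. The sketch below is a minimal, illustrative federated-averaging loop, not the authors' FedEAFO implementation, in which the number of local updates per round and a top-k parameter-compression ratio appear as the two tunable quantities; all function and parameter names (make_client_data, local_update, compress_topk, local_steps, compress_ratio) are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: a generic FedAvg-style loop exposing the two
# control variables discussed in the abstract -- the number of local updates
# per round and a parameter-compression ratio. This is NOT the authors'
# FedEAFO algorithm; names and values are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n_clients=5, n_samples=200, dim=10):
    """Synthetic linear-regression data split across clients (mildly non-IID)."""
    w_true = rng.normal(size=dim)
    data = []
    for c in range(n_clients):
        X = rng.normal(loc=0.2 * c, size=(n_samples, dim))
        y = X @ w_true + 0.1 * rng.normal(size=n_samples)
        data.append((X, y))
    return data, w_true

def local_update(w, X, y, local_steps, lr=0.01):
    """Run `local_steps` gradient steps on one client's data.
    Trade-off: more local computation per round, fewer communication rounds."""
    w = w.copy()
    for _ in range(local_steps):
        grad = 2.0 / len(y) * X.T @ (X @ w - y)
        w -= lr * grad
    return w

def compress_topk(delta, ratio):
    """Keep only the largest-magnitude fraction `ratio` of the model delta.
    Trade-off: less communication per round, lower update precision."""
    k = max(1, int(ratio * delta.size))
    mask = np.zeros_like(delta)
    mask[np.argsort(np.abs(delta))[-k:]] = 1.0
    return delta * mask

def run_fl(rounds=50, local_steps=5, compress_ratio=0.3):
    data, w_true = make_client_data()
    w_global = np.zeros(w_true.size)
    for _ in range(rounds):
        deltas = []
        for X, y in data:
            w_local = local_update(w_global, X, y, local_steps)
            # Clients upload a compressed model delta instead of full parameters.
            deltas.append(compress_topk(w_local - w_global, compress_ratio))
        w_global += np.mean(deltas, axis=0)  # server-side aggregation
    return np.linalg.norm(w_global - w_true)

if __name__ == "__main__":
    # An adaptive scheme would tune (local_steps, compress_ratio) each round to
    # minimize learning error under resource budgets; here they are fixed so
    # the computation/communication/precision trade-off is visible.
    for steps, ratio in [(1, 1.0), (5, 0.3), (10, 0.1)]:
        err = run_fl(local_steps=steps, compress_ratio=ratio)
        print(f"local_steps={steps:2d} compress_ratio={ratio:.1f} "
              f"-> parameter error {err:.4f}")
```

In this sketch the two variables are held constant for the whole run; the adaptive adjustment described in the abstract would instead update them round by round, which is the part the paper addresses.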
