4.7 Article

Adaptive asynchronous federated learning

Related references

Note: Only a subset of the references is listed.
Article Automation & Control Systems

Privacy-Preserving Federated Learning for Industrial Edge Computing via Hybrid Differential Privacy and Adaptive Compression

Bin Jiang et al.

Summary: With the improvement of hardware computing power, edge computing of industrial data has been widely used in the past decade, greatly improving production efficiency. Compared to cloud computing, edge computing saves bandwidth consumption and ensures terminal data security to some extent. However, new attack types require better privacy protection in industrial edge computing. This article proposes a federated edge learning framework based on hybrid differential privacy and adaptive compression to protect the privacy of industrial data.

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS (2023)
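The entry above combines differential privacy with adaptive compression of model updates. As a rough illustration of how those two ingredients compose, the following sketch clips a client update, adds Gaussian noise, and then applies top-k sparsification whose ratio adapts to the update's norm; the function names, the adaptation rule, and all constants are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def dp_compress_update(update, clip_norm=1.0, noise_std=0.1, base_ratio=0.1, rng=None):
    """Hedged sketch: clip + Gaussian noise (DP step), then adaptive top-k sparsification.

    The adaptation rule (keep more coordinates when the noisy update has a large
    norm) is an illustrative assumption, not the scheme from the paper above.
    """
    rng = rng or np.random.default_rng()
    # Differential-privacy step: clip the update and add calibrated Gaussian noise.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noisy = clipped + rng.normal(0.0, noise_std, size=update.shape)

    # Adaptive compression step: keep a fraction of coordinates that grows with
    # the relative magnitude of the noisy update (capped at 50%).
    ratio = min(0.5, base_ratio * (1.0 + np.linalg.norm(noisy) / clip_norm))
    k = max(1, int(ratio * noisy.size))
    idx = np.argpartition(np.abs(noisy), -k)[-k:]   # indices of the k largest entries
    values = noisy[idx]
    return idx, values                               # sparse representation sent to the server

# Example: compress a fake 10,000-dimensional client update.
idx, vals = dp_compress_update(np.random.randn(10_000))
print(len(idx), "coordinates transmitted out of 10000")
```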

Article Computer Science, Theory & Methods

FedProc: Prototypical contrastive federated learning on non-IID data

Xutong Mu et al.

Summary: This paper proposes FedProc, a prototypical contrastive federated learning approach that uses prototypes as global knowledge to correct the drift of each client's local training. Experimental results show that FedProc can improve accuracy by 1.6% to 7.9% with an acceptable computational cost compared to state-of-the-art federated learning methods.

FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE (2023)
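FedProc's key idea, as summarized above, is to use class prototypes as global knowledge that pulls each client's local features back toward a shared representation. The snippet below is a minimal sketch of such a prototype-alignment penalty added to the usual local loss; FedProc's actual contrastive formulation, temperature, and weighting differ, so treat every name and constant here as an assumption.

```python
import numpy as np

def prototype_alignment_loss(features, labels, global_prototypes):
    """Hedged sketch of a prototype-alignment term: mean squared distance between
    each sample's feature vector and the global prototype of its class.

    `global_prototypes` maps class id -> prototype vector aggregated by the server.
    FedProc itself uses a contrastive formulation; this simpler penalty only
    illustrates the 'pull local features toward global prototypes' idea.
    """
    dists = [np.sum((f - global_prototypes[y]) ** 2) for f, y in zip(features, labels)]
    return float(np.mean(dists))

# Toy usage: 2 classes, 8-dimensional features.
protos = {0: np.zeros(8), 1: np.ones(8)}
feats = np.random.randn(4, 8)
labels = [0, 1, 0, 1]
local_loss = 0.0  # stand-in for the usual cross-entropy term
total = local_loss + 0.5 * prototype_alignment_loss(feats, labels, protos)
print(f"regularized local objective: {total:.3f}")
```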

Article Computer Science, Theory & Methods

Model compression and privacy preserving framework for federated learning

Xi Zhu et al.

Summary: This paper proposes a model-compression-based federated learning framework that effectively reduces model size and protects privacy while maintaining performance. The proposed perturbed model compression method and reconstruction algorithm together achieve these objectives.

FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE (2023)
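The framework above pairs a perturbed compression of the local model with a server-side reconstruction. As a loose analogue, the sketch below masks a random subset of parameters and perturbs the rest before upload, and the server "reconstructs" by averaging only the coordinates each client actually reported; the masking rate, noise level, and reconstruction rule are assumptions for illustration, not the paper's method.

```python
import numpy as np

def perturb_and_mask(model, keep_prob=0.3, noise_std=0.05, rng=None):
    """Hedged sketch: randomly drop most coordinates and perturb the kept ones."""
    rng = rng or np.random.default_rng()
    mask = rng.random(model.shape) < keep_prob
    values = model[mask] + rng.normal(0.0, noise_std, size=mask.sum())
    return mask, values

def reconstruct(client_reports, dim):
    """Server side: average each coordinate over the clients that reported it."""
    total = np.zeros(dim)
    counts = np.zeros(dim)
    for mask, values in client_reports:
        total[mask] += values
        counts[mask] += 1
    counts[counts == 0] = 1          # leave unreported coordinates at zero
    return total / counts

# Toy round with 5 clients sharing a 1,000-parameter model.
reports = [perturb_and_mask(np.random.randn(1_000)) for _ in range(5)]
global_model = reconstruct(reports, dim=1_000)
print(global_model.shape)
```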

Article Engineering, Electrical & Electronic

Communication-Efficient Federated Learning via Quantized Compressed Sensing

Yongjeong Oh et al.

Summary: In this paper, a communication-efficient federated learning framework is presented, inspired by quantized compressed sensing. The framework includes gradient compression for wireless devices and gradient reconstruction for a parameter server. By leveraging both dimension reduction and quantization, a higher compression ratio than one-bit gradient compression can be achieved. An approximate minimum mean square error (MMSE) approach for gradient reconstruction using the expectation-maximization generalized-approximate-message-passing (EM-GAMP) algorithm is proposed for accurate aggregation of local gradients from the compressed signals.

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS (2023)
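The scheme above compresses local gradients by combining dimension reduction with quantization and recovers them at the server with an EM-GAMP-based approximate-MMSE estimator. The sketch below shows only the general shape of the encoder under stated assumptions: a random Gaussian projection followed by uniform few-bit quantization, with a naive least-squares decoder standing in for EM-GAMP (which is not implemented here).

```python
import numpy as np

rng = np.random.default_rng(0)

def qcs_encode(grad, A, bits=2):
    """Hedged sketch of the encoder: random projection, then uniform quantization."""
    y = A @ grad                                   # dimension reduction (m << n)
    step = (y.max() - y.min()) / (2 ** bits - 1) + 1e-12
    q = np.round((y - y.min()) / step)             # quantization indices
    return q, y.min(), step

def naive_decode(q, y_min, step, A):
    """Stand-in decoder: dequantize, then least-squares inversion of the projection.
    The paper uses an approximate-MMSE EM-GAMP decoder instead."""
    y_hat = q * step + y_min
    return np.linalg.lstsq(A, y_hat, rcond=None)[0]

n, m = 1_000, 200                                  # gradient dimension, compressed dimension
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n)) # random sensing matrix
grad = rng.normal(size=n)
q, y_min, step = qcs_encode(grad, A)
grad_hat = naive_decode(q, y_min, step, A)
print("relative error:", np.linalg.norm(grad - grad_hat) / np.linalg.norm(grad))
```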

Article Computer Science, Theory & Methods

An adaptive federated learning scheme with differential privacy preserving

Xiang Wu et al.

Summary: This paper proposes a federated learning scheme that combines an adaptive gradient descent strategy with a differential privacy mechanism for multi-party collaborative modeling scenarios. The scheme improves modeling efficiency and performance under limited communication costs while providing strong privacy protection for the federated learning process.

FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE (2022)
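The scheme above couples an adaptive gradient descent strategy with a differential privacy mechanism. A minimal sketch of that combination, assuming an Adam-style adaptive step applied to a clipped, Gaussian-noised gradient (the paper's actual adaptation rule and privacy accounting are not reproduced here):

```python
import numpy as np

class DPAdaptiveSGD:
    """Hedged sketch: Adam-style adaptive step on a clipped, Gaussian-noised gradient.
    Hyperparameters and the privacy calibration are illustrative assumptions."""

    def __init__(self, dim, lr=0.01, beta1=0.9, beta2=0.999,
                 clip_norm=1.0, noise_std=0.1, eps=1e-8):
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.t = 0
        self.lr, self.b1, self.b2 = lr, beta1, beta2
        self.clip_norm, self.noise_std, self.eps = clip_norm, noise_std, eps
        self.rng = np.random.default_rng()

    def step(self, params, grad):
        # Differential-privacy step: clip the gradient and add Gaussian noise.
        grad = grad * min(1.0, self.clip_norm / (np.linalg.norm(grad) + 1e-12))
        grad = grad + self.rng.normal(0.0, self.noise_std, size=grad.shape)
        # Adaptive step: first/second moment estimates with bias correction.
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Toy usage: one noisy adaptive step on a 10-dimensional parameter vector.
opt = DPAdaptiveSGD(dim=10)
params = opt.step(np.zeros(10), grad=np.ones(10))
print(params[:3])
```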

Editorial Material Automation & Control Systems

Guest Editorial: Security and Privacy of Federated Learning Solutions for Industrial IoT Applications

Mohammad Shojafar et al.

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS (2022)

Article Computer Science, Theory & Methods

High-efficient hierarchical federated learning on non-IID data with progressive collaboration

Yunyun Cai et al.

Summary: Hierarchical federated learning (HFL) allows multiple edge aggregations at edge devices before one global aggregation, addressing the problems of non-IID data and the communication bottleneck in federated learning (FL). This paper introduces a high-efficiency HFL algorithm called FedPEC, which relies on progressive edge collaboration rather than unrealistic client allocation. Guided by an estimated number of collaborators, each edge device is assigned an appropriate collaborator set. Experimental results show that FedPEC outperforms state-of-the-art FL algorithms.

FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE (2022)
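FedPEC, summarized above, builds on the hierarchical FL pattern of several edge-level aggregations per global round. The sketch below shows only that generic two-level averaging loop under simplifying assumptions (uniform weights, a fixed client-to-edge assignment); FedPEC's progressive collaborator estimation is not reproduced.

```python
import numpy as np

def local_update(model, rng):
    """Stand-in for a client's local training: a small random perturbation."""
    return model + 0.01 * rng.normal(size=model.shape)

def hierarchical_round(global_model, edge_groups, edge_steps=3, rng=None):
    """Hedged sketch of one hierarchical FL round: each edge averages its clients
    `edge_steps` times, then the cloud averages the edge models once."""
    rng = rng or np.random.default_rng()
    edge_models = []
    for clients in edge_groups:                       # clients = number of clients at this edge
        edge_model = global_model.copy()
        for _ in range(edge_steps):                   # several edge aggregations ...
            updates = [local_update(edge_model, rng) for _ in range(clients)]
            edge_model = np.mean(updates, axis=0)
        edge_models.append(edge_model)
    return np.mean(edge_models, axis=0)               # ... before one global aggregation

# Toy usage: 3 edges serving 4, 2, and 5 clients respectively.
model = hierarchical_round(np.zeros(100), edge_groups=[4, 2, 5])
print(model[:3])
```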

Article Computer Science, Information Systems

An Efficient Framework for Clustered Federated Learning

Avishek Ghosh et al.

Summary: This paper addresses federated learning in which users are distributed and partitioned into clusters. It introduces a clustered federated learning framework whose Iterative Federated Clustering Algorithm (IFCA) alternately estimates users' cluster identities and optimizes the cluster models via gradient descent. IFCA is shown to guarantee convergence and to outperform baselines in various settings, including non-convex problems and ambiguous clustering structures.

IEEE TRANSACTIONS ON INFORMATION THEORY (2022)
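IFCA, summarized above, alternates between estimating each user's cluster identity (pick the cluster model with the lowest local loss) and running gradient steps on that cluster's model. A minimal sketch of that loop for a least-squares objective follows; the data, losses, and step size are toy assumptions.

```python
import numpy as np

def ifca_round(cluster_models, client_data, lr=0.1):
    """Hedged sketch of one IFCA round on least-squares clients.

    Each client picks the cluster model with the lowest local loss, computes a
    gradient on that model, and the server averages gradients per cluster."""
    grads = {j: [] for j in range(len(cluster_models))}
    for X, y in client_data:
        losses = [np.mean((X @ w - y) ** 2) for w in cluster_models]
        j = int(np.argmin(losses))                       # estimated cluster identity
        grads[j].append(2 * X.T @ (X @ cluster_models[j] - y) / len(y))
    new_models = []
    for j, w in enumerate(cluster_models):
        if grads[j]:                                     # update only clusters with members
            w = w - lr * np.mean(grads[j], axis=0)
        new_models.append(w)
    return new_models

# Toy usage: 2 clusters, 6 clients with random linear-regression data.
rng = np.random.default_rng(1)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(6)]
models = [rng.normal(size=5) for _ in range(2)]
models = ifca_round(models, clients)
print([m[:2] for m in models])
```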

Article Computer Science, Theory & Methods

Dynamic and adaptive fault-tolerant asynchronous federated learning using volunteer edge devices

Jose Angel Morell et al.

Summary: The combination of edge computing and federated learning, known as federated edge learning, provides a solution for processing and protecting a large amount of data from interconnected devices. This research focuses on adapting to the changing environment through asynchronous learning and utilizing volunteer device resources for shared model training.

FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE (2022)
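The work above relies on asynchronous learning so that volunteer devices can join, drop out, and report at their own pace. A minimal sketch of the server side of such a loop, assuming updates simply arrive on a queue and are blended into the global model as they come (the mixing rate and queue mechanics are illustrative assumptions, not the paper's design):

```python
import queue
import numpy as np

def async_server_loop(global_model, updates, mix=0.2, max_updates=100):
    """Hedged sketch of an asynchronous aggregator: blend each volunteer update
    into the global model the moment it arrives, never waiting for stragglers."""
    applied = 0
    while applied < max_updates:
        try:
            client_model = updates.get(timeout=1.0)    # whichever volunteer reports next
        except queue.Empty:
            break                                       # no volunteers right now
        global_model = (1 - mix) * global_model + mix * client_model
        applied += 1
    return global_model

# Toy usage: three "volunteer" updates already sitting in the queue.
q = queue.Queue()
for _ in range(3):
    q.put(np.random.randn(10))
print(async_server_loop(np.zeros(10), q)[:3])
```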

Article Computer Science, Theory & Methods

A state-of-the-art survey on solving non-IID data in Federated Learning

Xiaodong Ma et al.

Summary: This article analyzes the problems caused by non-IID data in federated learning and presents a series of challenges. By classifying and reviewing existing methods, it finds that non-IID data not only reduces the performance of FL models but also discourages users from active participation. Compared with data-sharing-based approaches, improving the federated learning algorithm itself is the more common way to address the problem.

FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE (2022)

Article Engineering, Electrical & Electronic

Fast-Convergent Federated Learning

Hung T. Nguyen et al.

Summary: This paper proposes a fast-convergent federated learning algorithm called FOLB, which speeds up the convergence of model training through intelligent sampling of devices and handles device heterogeneity; experiments demonstrate improvements in model accuracy, convergence speed, and stability across various tasks.

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS (2021)
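FOLB's intelligent sampling, per the summary above, biases device selection toward clients whose updates are expected to help convergence most. The sketch below samples devices with probability proportional to the norm of their last reported gradient; FOLB's actual criterion (based on how local and global gradients align) and its heterogeneity handling are not reproduced, so treat this as a loose illustration only.

```python
import numpy as np

def sample_devices(last_grad_norms, k, rng=None):
    """Hedged sketch: pick k devices with probability proportional to the norm of
    their most recent gradient (a crude proxy for 'expected contribution')."""
    rng = rng or np.random.default_rng()
    norms = np.asarray(last_grad_norms, dtype=float)
    probs = norms / norms.sum()
    return rng.choice(len(norms), size=k, replace=False, p=probs)

# Toy usage: 8 devices, select 3 for the next round.
print(sample_devices([0.1, 0.5, 0.2, 0.9, 0.3, 0.05, 0.7, 0.4], k=3))
```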

Article Engineering, Electrical & Electronic

Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning

Wenqi Shi et al.

Summary: In this paper, a joint device scheduling and resource allocation policy is proposed to maximize model accuracy within a given total training time budget for latency constrained wireless FL. The accuracy maximization problem is decomposed into two sub-problems and solved accordingly. Experimental results demonstrate the superiority of the proposed policy under various settings.

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS (2021)

Article Engineering, Electrical & Electronic

FedSA: A Semi-Asynchronous Federated Learning Mechanism in Heterogeneous Edge Computing

Qianpiao Ma et al.

Summary: Training machine learning models over distributed edge nodes with federated learning (FL) faces the challenges of edge heterogeneity, non-IID data, and constrained communication resources. This paper proposes a semi-asynchronous federated learning mechanism (FedSA) that addresses these challenges by aggregating local models in their order of arrival and determining the number of participating workers so as to minimize training completion time. FedSA also applies adaptive learning rates based on workers' participation frequency to improve training accuracy on non-IID data, and the mechanism is extended to dynamic and multi-task learning scenarios. Experimental results demonstrate the effectiveness of the proposed mechanism and algorithms.

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS (2021)
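The semi-asynchronous mechanism above aggregates local models in arrival order, waiting only for the first few workers each round, and adapts learning rates to each worker's participation frequency. A minimal sketch built on those two ideas follows; the choice of M, the inverse-frequency weighting, and the mixing rate are illustrative assumptions rather than FedSA's derived values.

```python
import numpy as np

def semi_async_round(global_model, arrivals, participation_counts, m=3, mix=0.5):
    """Hedged sketch: aggregate only the first `m` arrivals of the round, weighting
    each worker by an adaptive rate that shrinks with its participation frequency.

    `arrivals` is a list of (worker_id, local_model) pairs in the order they reached
    the server; `participation_counts` tracks rounds joined so far per worker."""
    selected = arrivals[:m]                              # first-come, first-aggregated
    weights = []
    for worker_id, _ in selected:
        participation_counts[worker_id] = participation_counts.get(worker_id, 0) + 1
        weights.append(1.0 / participation_counts[worker_id])   # adaptive per-worker rate
    weights = np.asarray(weights) / np.sum(weights)
    aggregate = sum(w * model for w, (_, model) in zip(weights, selected))
    return (1 - mix) * global_model + mix * aggregate, participation_counts

# Toy usage: 5 workers report, only the first 3 arrivals enter this round's aggregation.
arrivals = [(i, np.random.randn(10)) for i in [4, 1, 3, 0, 2]]
model, counts = semi_async_round(np.zeros(10), arrivals, participation_counts={})
print(model[:3], counts)
```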

Article Computer Science, Hardware & Architecture

SAFA: A Semi-Asynchronous Protocol for Fast Federated Learning With Low Overhead

Wentai Wu et al.

Summary: SAFA is a semi-asynchronous FL protocol proposed to address issues such as low round efficiency and poor convergence rate in extreme conditions. With novel designs in model distribution, client selection, and global aggregation, it mitigates the impacts of stragglers, crashes, and model staleness to boost efficiency and improve the quality of the global model.

IEEE TRANSACTIONS ON COMPUTERS (2021)

Article Engineering, Electrical & Electronic

Convergence Time Optimization for Federated Learning Over Wireless Networks

Mingzhe Chen et al.

Summary: This paper studies the convergence time of federated learning over a wireless network and proposes a probabilistic user selection scheme to minimize FL convergence time. By using artificial neural networks to estimate local FL models, the proposed approach can significantly reduce FL convergence time and improve accuracy compared to standard FL algorithms.

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS (2021)

Article Computer Science, Theory & Methods

FedSA: A staleness-aware asynchronous Federated Learning algorithm with non-IID data

Ming Chen et al.

Summary: This study reformulates federated learning and proposes a novel staleness-aware asynchronous algorithm, FedSA. Experimental results show that FedSA handles stale devices well in both non-IID and IID cases, surpassing existing methods.

FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE (2021)
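The staleness-aware algorithm above down-weights updates computed against an old version of the global model. A common way to express that idea, used here purely as a hedged sketch (the decay function and mixing rate are assumptions, not FedSA's), is to discount each arriving update by a factor that decays with its staleness:

```python
import numpy as np

def staleness_weight(staleness, a=0.5):
    """Polynomial staleness discount: fresher updates get weight close to 1."""
    return (1.0 + staleness) ** (-a)

def apply_async_update(global_model, client_model, client_round, server_round, mix=0.5):
    """Hedged sketch: blend an arriving update into the global model with a
    strength that shrinks the further behind the client's base model was."""
    staleness = server_round - client_round             # rounds elapsed since the client pulled
    alpha = mix * staleness_weight(staleness)
    return (1 - alpha) * global_model + alpha * client_model

# Toy usage: a fresh update (staleness 0) vs. a stale one (staleness 8).
g = np.zeros(5)
print(apply_async_update(g, np.ones(5), client_round=10, server_round=10))
print(apply_async_update(g, np.ones(5), client_round=2, server_round=10))
```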

Article Computer Science, Artificial Intelligence

Clustered Federated Learning: Model-Agnostic Distributed Multitask Optimization Under Privacy Constraints

Felix Sattler et al.

Summary: Federated learning is widely used for collaborative training of machine learning models under privacy constraints, but can yield suboptimal results when local data distributions diverge. Clustered FL is a novel framework that addresses this issue by grouping clients with jointly trainable data distributions based on geometric properties of the FL loss surface.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2021)
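Clustered FL, as summarized above, splits clients into groups when the geometry of their updates indicates incongruent data distributions, using pairwise similarities between client updates. A minimal sketch of that similarity computation with a simple spectral bipartition follows; the stopping criteria and recursive splitting schedule from the paper are omitted, and the partition rule here is a simplification.

```python
import numpy as np

def cosine_similarity_matrix(updates):
    """Pairwise cosine similarities between flattened client updates."""
    U = np.stack([u / (np.linalg.norm(u) + 1e-12) for u in updates])
    return U @ U.T

def bipartition(updates):
    """Hedged sketch: split clients into two groups by the sign of the leading
    eigenvector of the similarity matrix (a simplification of the paper's rule)."""
    S = cosine_similarity_matrix(updates)
    eigvals, eigvecs = np.linalg.eigh(S)
    leading = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
    group_a = [i for i, v in enumerate(leading) if v >= 0]
    group_b = [i for i, v in enumerate(leading) if v < 0]
    return group_a, group_b

# Toy usage: two latent clusters of client updates around opposite directions.
rng = np.random.default_rng(2)
updates = [rng.normal(1, 0.1, 20) for _ in range(3)] + [rng.normal(-1, 0.1, 20) for _ in range(3)]
print(bipartition(updates))
```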

Article Engineering, Electrical & Electronic

Client Selection and Bandwidth Allocation in Wireless Federated Learning Networks: A Long-Term Perspective

Jie Xu et al.

Summary: This study investigates resource allocation in wireless federated learning networks from a long-term perspective, showing that temporal client-selection patterns have a significant impact on learning performance. Data-driven experiments support the findings, and a new algorithm is proposed to achieve a long-term performance guarantee.

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS (2021)

Article Engineering, Electrical & Electronic

Scheduling Policies for Federated Learning in Wireless Networks

Howard H. Yang et al.

IEEE TRANSACTIONS ON COMMUNICATIONS (2020)

Article Computer Science, Theory & Methods

A Survey on Distributed Machine Learning

Joost Verbraeken et al.

ACM COMPUTING SURVEYS (2020)

Article Computer Science, Theory & Methods

Accelerating Federated Learning via Momentum Gradient Descent

Wei Liu et al.

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS (2020)