4.7 Article

Accelerating Federated Learning With Cluster Construction and Hierarchical Aggregation

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Information Systems

Adaptive Batch Size for Federated Learning in Resource-Constrained Edge Computing

Zhenguo Ma et al.

Summary: This study alleviates the synchronization barrier in federated learning through adaptive batch sizes. By studying the relationship between batch size and learning rate, the authors formulate a scaling rule that guides how the learning rate should be set for a given batch size. They theoretically analyze the convergence rate of the global model and propose an efficient algorithm that adaptively adjusts the batch size, with a correspondingly scaled learning rate, for heterogeneous devices, reducing waiting time and saving battery life (a minimal illustrative sketch follows this entry).

IEEE TRANSACTIONS ON MOBILE COMPUTING (2023)
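
The summary above refers to a rule that scales the learning rate with the batch size. As a rough illustration only, the sketch below pairs a throughput-based batch-size choice with the common linear-scaling heuristic (learning rate proportional to batch size); the constants, the adapt_batch_size heuristic, and the scaling rule itself are assumptions for illustration, not the algorithm from the paper.

# Illustrative sketch (not the paper's algorithm): adapt each device's batch size
# to its measured throughput, then scale the learning rate linearly with the
# batch size, following the widely used linear-scaling heuristic.

BASE_BATCH = 32        # reference batch size
BASE_LR = 0.01         # learning rate tuned for BASE_BATCH
MIN_BATCH, MAX_BATCH = 8, 256

def adapt_batch_size(samples_per_sec, target_round_secs, local_steps):
    """Pick a batch size a device can finish within the target round time."""
    budget = samples_per_sec * target_round_secs / local_steps
    return max(MIN_BATCH, min(MAX_BATCH, int(budget)))

def scaled_lr(batch_size):
    """Linear scaling rule: learning rate grows proportionally with batch size."""
    return BASE_LR * batch_size / BASE_BATCH

# Heterogeneous devices: (device id, measured samples/sec)
devices = [("phone", 40.0), ("tablet", 120.0), ("edge-box", 600.0)]
for name, rate in devices:
    b = adapt_batch_size(rate, target_round_secs=30.0, local_steps=20)
    print(f"{name}: batch={b}, lr={scaled_lr(b):.4f}")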

Article Engineering, Multidisciplinary

Optimal User-Edge Assignment in Hierarchical Federated Learning Based on Statistical Properties and Network Topology Constraints

Naram Mhaisen et al.

Summary: Distributed learning algorithms aim to utilize the diverse data stored on users' devices to learn a global phenomenon. However, when the data is strongly skewed, the performance of the global model can degrade. To tackle this issue, the paper proposes a hierarchical learning system that optimizes the user-edge assignment to improve model accuracy (a toy illustration of such an assignment follows this entry).

IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING (2022)
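
The user-edge assignment idea above can be illustrated with a toy greedy heuristic: assign each user to the edge server whose aggregate label histogram stays closest to uniform, subject to a capacity limit. This is only a sketch under assumed simplifications (greedy order, KL-to-uniform objective, fixed capacity), not the optimization formulated in the paper.

# Illustrative sketch: greedily assign users to edge servers so that each edge's
# aggregated label histogram stays close to uniform, under a per-edge capacity limit.

import numpy as np

def kl_to_uniform(hist):
    """KL divergence between a normalized label histogram and the uniform one."""
    p = hist / hist.sum()
    u = np.full_like(p, 1.0 / len(p))
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / u[mask])))

def assign_users(user_hists, num_edges, capacity):
    """user_hists: list of per-user label-count vectors (numpy arrays)."""
    num_classes = len(user_hists[0])
    edge_hists = [np.zeros(num_classes) for _ in range(num_edges)]
    loads = [0] * num_edges
    assignment = []
    for hist in user_hists:
        best_edge, best_score = None, None
        for e in range(num_edges):
            if loads[e] >= capacity:
                continue
            score = kl_to_uniform(edge_hists[e] + hist)
            if best_score is None or score < best_score:
                best_edge, best_score = e, score
        edge_hists[best_edge] += hist
        loads[best_edge] += 1
        assignment.append(best_edge)
    return assignment

rng = np.random.default_rng(0)
users = [rng.multinomial(200, rng.dirichlet(np.ones(10) * 0.3)) for _ in range(12)]
print(assign_users([u.astype(float) for u in users], num_edges=3, capacity=4))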

Article Engineering, Electrical & Electronic

Accelerating DNN Training in Wireless Federated Edge Learning Systems

Jinke Ren et al.

Summary: Training tasks for classical machine learning models are usually performed at remote cloud centers, which is time-consuming and resource-intensive and raises privacy and communication-latency concerns. To address this, the federated edge learning framework aggregates local learning updates at the network edge, aiming to accelerate the training process.

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS (2021)

Article Computer Science, Theory & Methods

Self-Balancing Federated Learning With Global Imbalanced Data in Mobile Systems

Moming Duan et al.

Summary: Federated Learning (FL) is a distributed deep learning method in which multiple devices contribute to training a neural network while keeping their data private. Data imbalance in mobile systems can degrade accuracy in FL applications, but the Astraea framework mitigates this through data augmentation and rescheduling. Compared to FedAvg, Astraea demonstrates higher accuracy and reduced communication traffic (a minimal sketch of the rebalancing idea follows this entry).

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS (2021)
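
Astraea's rebalancing is described above in terms of data augmentation and rescheduling. The sketch below shows one assumed simplification of the augmentation side: compute global class counts across clients and locally oversample the classes that fall below the global mean. The function names and the oversampling rule are illustrative, not Astraea's actual procedure.

# Illustrative sketch (assumed simplification of the rebalancing idea): measure
# global class imbalance across clients, then oversample minority classes on
# each client before local training.

import numpy as np

def global_class_counts(client_label_lists, num_classes):
    counts = np.zeros(num_classes)
    for labels in client_label_lists:
        counts += np.bincount(labels, minlength=num_classes)
    return counts

def augmentation_factors(counts):
    """Classes below the mean count get oversampled up toward the mean."""
    mean = counts.mean()
    return np.where(counts > 0, np.maximum(1.0, mean / np.maximum(counts, 1)), 1.0)

def oversample(labels, data, factors, rng):
    """Duplicate samples of minority classes on one client."""
    keep_idx = []
    for i, y in enumerate(labels):
        repeats = int(np.round(factors[y]))
        keep_idx.extend([i] * repeats)
    keep_idx = rng.permutation(keep_idx)
    return labels[keep_idx], data[keep_idx]

rng = np.random.default_rng(1)
clients = [rng.choice(5, size=100, p=[0.6, 0.2, 0.1, 0.05, 0.05]) for _ in range(4)]
counts = global_class_counts(clients, num_classes=5)
factors = augmentation_factors(counts)
labels, data = oversample(clients[0], np.arange(100), factors, rng)
print(counts, np.round(factors, 2), len(labels))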

Article Engineering, Electrical & Electronic

A Joint Learning and Communications Framework for Federated Learning Over Wireless Networks

Mingzhe Chen et al.

Summary: This article discusses the challenges of training federated learning algorithms over a realistic wireless network and proposes an optimization model to minimize the FL loss function, providing a method to improve identification accuracy.

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS (2021)

Article Computer Science, Hardware & Architecture

SAFA: A Semi-Asynchronous Protocol for Fast Federated Learning With Low Overhead

Wentai Wu et al.

Summary: SAFA is a semi-asynchronous FL protocol proposed to address issues such as low round efficiency and poor convergence rates under extreme conditions. With novel designs in model distribution, client selection, and global aggregation, it mitigates the impact of stragglers, crashes, and model staleness to boost efficiency and improve the quality of the global model (a minimal sketch of staleness-tolerant aggregation follows this entry).

IEEE TRANSACTIONS ON COMPUTERS (2021)
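
To make the semi-asynchronous idea concrete, here is a minimal sketch of an aggregator that merges cached client updates whose staleness is within a tolerance and discards the rest. The Update/SemiAsyncAggregator names, the staleness rule, and the sample-weighted averaging are assumptions for illustration, not SAFA's exact client selection or aggregation logic.

# Illustrative sketch: a semi-asynchronous aggregator that merges updates whose
# staleness (rounds since the model version they trained on) is within a
# tolerance, and drops the rest.

from dataclasses import dataclass

@dataclass
class Update:
    client_id: int
    base_round: int        # global round the client started from
    weights: list          # toy model: a flat list of floats
    num_samples: int

class SemiAsyncAggregator:
    def __init__(self, init_weights, staleness_tolerance=2):
        self.weights = list(init_weights)
        self.round = 0
        self.tol = staleness_tolerance
        self.cache = []    # updates received since the last aggregation

    def submit(self, update):
        self.cache.append(update)

    def aggregate(self):
        """Merge all cached updates that are fresh enough; drop overly stale ones."""
        fresh = [u for u in self.cache if self.round - u.base_round <= self.tol]
        self.cache = []    # stragglers beyond the tolerance are discarded here
        if fresh:
            total = sum(u.num_samples for u in fresh)
            self.weights = [
                sum(u.weights[i] * u.num_samples for u in fresh) / total
                for i in range(len(self.weights))
            ]
        self.round += 1
        return self.weights

agg = SemiAsyncAggregator([0.0, 0.0], staleness_tolerance=1)
agg.submit(Update(0, base_round=0, weights=[1.0, 2.0], num_samples=50))
agg.submit(Update(1, base_round=0, weights=[3.0, 4.0], num_samples=50))
print(agg.aggregate())   # [2.0, 3.0]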

Article Engineering, Civil

A Hierarchical Blockchain-Enabled Federated Learning Algorithm for Knowledge Sharing in Internet of Vehicles

Haoye Chai et al.

Summary: This paper proposes a hierarchical blockchain framework and a hierarchical federated learning algorithm for knowledge sharing among vehicles, ensuring the security and privacy of knowledge while improving sharing efficiency and learning quality.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2021)

Proceedings Paper Computer Science, Hardware & Architecture

To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices

Liang Li et al.

Summary: This work focuses on improving the energy efficiency of federated learning over mobile edge networks by enabling flexible communication compression that balances the energy consumption of local computing and wireless communication. The developed algorithm and compression-control scheme adapt to the varying computing and communication environments of participating devices, and extensive simulations show their efficacy in saving energy (see the sketch after this entry).

IEEE INFOCOM 2021 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (2021)
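
One assumed way to realize "flexible communication compression" is per-device top-k gradient sparsification, with the kept fraction chosen from a simple per-round byte budget; the budget model and thresholds below are illustrative, not the control scheme developed in the paper.

# Illustrative sketch (assumed mechanism): top-k gradient sparsification where
# each device picks its compression ratio from a communication budget, so
# slower links transmit fewer gradient entries.

import numpy as np

def choose_ratio(bytes_budget, grad_dim, bytes_per_entry=8):
    """Fraction of gradient entries a device can afford to upload this round."""
    return min(1.0, max(0.01, bytes_budget / (grad_dim * bytes_per_entry)))

def top_k_sparsify(grad, ratio):
    """Keep only the largest-magnitude entries; return (indices, values)."""
    k = max(1, int(len(grad) * ratio))
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

rng = np.random.default_rng(2)
grad = rng.normal(size=10_000)
for name, budget in [("good link", 40_000), ("weak link", 4_000)]:
    r = choose_ratio(budget, grad.size)
    idx, vals = top_k_sparsify(grad, r)
    print(f"{name}: ratio={r:.2f}, sent {len(idx)} of {grad.size} entries")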

Proceedings Paper Computer Science, Hardware & Architecture

Resource-Efficient and Convergence-Preserving Online Participant Selection in Federated Learning

Yibo Jin et al.

2020 IEEE 40TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS) (2020)

Proceedings Paper Computer Science, Hardware & Architecture

Offloading Dependent Tasks in Mobile Edge Computing with Service Caching

Gongming Zhao et al.

IEEE INFOCOM 2020 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (2020)

Proceedings Paper Computer Science, Hardware & Architecture

Network-Aware Optimization of Distributed Learning for Fog Computing

Yuwei Tu et al.

IEEE INFOCOM 2020 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (2020)

Article Engineering, Electrical & Electronic

HFEL: Joint Edge Association and Resource Allocation for Cost-Efficient Hierarchical Federated Edge Learning

Siqi Luo et al.

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS (2020)

Article Computer Science, Information Systems

Convergence of Edge Computing and Deep Learning: A Comprehensive Survey

Xiaofei Wang et al.

IEEE COMMUNICATIONS SURVEYS AND TUTORIALS (2020)

Article Engineering, Electrical & Electronic

Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism

Latif U. Khan et al.

IEEE COMMUNICATIONS MAGAZINE (2020)

Article Computer Science, Artificial Intelligence

Federated Machine Learning: Concept and Applications

Qiang Yang et al.

ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY (2019)

Article Computer Science, Hardware & Architecture

Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing

He Li et al.

IEEE NETWORK (2018)

Article Computer Science, Information Systems

LinkForecast: Cellular Link Bandwidth Prediction in LTE Networks

Chaoqun Yue et al.

IEEE TRANSACTIONS ON MOBILE COMPUTING (2018)

Review Computer Science, Information Systems

Edge Computing: Vision and Challenges

Weisong Shi et al.

IEEE INTERNET OF THINGS JOURNAL (2016)

Article Computer Science, Hardware & Architecture

Privacy and Big Data

Brian M. Gaff et al.

COMPUTER (2014)

Article Computer Science, Artificial Intelligence

Fast computation of Bipartite graph matching

Francesc Serratosa

PATTERN RECOGNITION LETTERS (2014)

Article Computer Science, Information Systems

Predictable 802.11 Packet Delivery from Wireless Channel Measurements

Daniel Halperin et al.

ACM SIGCOMM COMPUTER COMMUNICATION REVIEW (2010)

Article Computer Science, Artificial Intelligence

Frequency-sensitive competitive learning for scalable balanced clustering on high-dimensional hyperspheres

A Banerjee et al.

IEEE TRANSACTIONS ON NEURAL NETWORKS (2004)