Article

Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing

Journal

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
Volume 40, Issue 12, Pages 3373-3390

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/JSAC.2022.3213344

Keywords

Federated learning; blockchain; edge computing; actor-critic learning; network optimization

Funding

  1. ONR [N00014-22-1-2305, N00014-21-1-2472]
  2. NSF [CNS-2146171]


In this paper, we study a new latency optimization problem for blockchain-based federated learning (BFL) in multi-server edge computing. In this system model, distributed mobile devices (MDs) communicate with a set of edge servers (ESs) to handle both machine learning (ML) model training and block mining simultaneously. To assist ML model training for resource-constrained MDs, we develop an offloading strategy that enables MDs to transmit their data to an associated ES. We then propose a new decentralized ML model aggregation solution at the edge layer, based on a consensus mechanism, to build a global ML model via peer-to-peer (P2P) blockchain communications. The blockchain builds trust among MDs and ESs, facilitating reliable ML model sharing and cooperative consensus formation, and enables rapid elimination of manipulated models caused by poisoning attacks. We formulate latency-aware BFL as an optimization problem that minimizes system latency by jointly considering the data offloading decisions, MDs' transmit power, channel bandwidth allocation for data offloading, MDs' computational resource allocation, and hash power allocation. Given the mixed action space of discrete offloading decisions and continuous allocation variables, we propose a novel deep reinforcement learning scheme with a parameterized advantage actor-critic algorithm. We theoretically characterize the convergence properties of BFL in terms of the aggregation delay, mini-batch size, and number of P2P communication rounds. Numerical evaluation demonstrates the superiority of the proposed scheme over baselines in terms of model training efficiency, convergence rate, system latency, and robustness against model poisoning attacks.
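To make the offloading trade-off concrete, the sketch below models the two latency terms a device faces: upload-plus-edge-training delay versus local training delay. It uses the standard Shannon-rate transmission model common in edge computing formulations; the function names, parameters, and expressions are illustrative assumptions, not the paper's exact system model or notation.

```python
import math

def offload_latency(data_bits, bandwidth_hz, tx_power_w, channel_gain, noise_w,
                    cpu_cycles, es_freq_hz):
    """Illustrative latency when a mobile device offloads its data to an
    edge server (assumed model, not the paper's exact expressions):
    upload delay at the Shannon rate plus edge-side training delay."""
    rate = bandwidth_hz * math.log2(1 + tx_power_w * channel_gain / noise_w)  # bits/s
    t_tx = data_bits / rate               # upload delay (s)
    t_cmp = cpu_cycles / es_freq_hz       # edge-side training delay (s)
    return t_tx + t_cmp

def local_latency(cpu_cycles, md_freq_hz):
    """Latency when the device trains locally instead of offloading."""
    return cpu_cycles / md_freq_hz
```

Comparing the two values for each device corresponds to the binary offloading decision; the continuous variables (transmit power, bandwidth, and CPU-frequency allocations) enter these terms directly, which is why the paper optimizes them jointly over a mixed discrete-continuous action space.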

Authors

