Journal
IEEE-ACM TRANSACTIONS ON NETWORKING
Volume: -, Issue: -, Pages: -
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNET.2023.3294366
Keywords
Federated learning (FL); non-stationary; dynamic regret
Federated Learning (FL) is an emerging domain in the broader context of artificial intelligence research. FL methodologies assume distributed model training, consisting of a collection of clients and a server, with the main goal of achieving an optimal global model under restrictions on data sharing due to privacy concerns. It is worth highlighting that the diverse existing literature in FL mostly assumes stationary data generation processes; such an assumption is unrealistic in real-world conditions where concept drift occurs due to, for instance, seasonal or periodic observations, or faults in sensor measurements. In this paper, we introduce a multiscale algorithmic framework which combines the theoretical guarantees of the FedAvg and FedOMD algorithms in near-stationary settings with a non-stationary detection and adaptation technique to ameliorate FL generalization performance in the presence of concept drifts. The framework achieves a dynamic regret of Õ(min{√(LT), Δ^(1/3)·T^(2/3) + √T}) over T rounds with an underlying general convex loss function, where L is the number of times non-stationary drifts occurred and Δ is the cumulative magnitude of drift experienced within the T rounds.
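As background for readers unfamiliar with FedAvg (the abstract names the algorithm but the record contains no code), the following is a minimal sketch of federated averaging on a toy quadratic loss per client. The loss, learning rate, and function names are illustrative assumptions, not the paper's implementation, and the sketch omits the non-stationarity detection and restart machinery that the paper's framework adds on top.

```python
# Minimal FedAvg sketch: each client runs local SGD on its own
# quadratic loss f_k(w) = 0.5 * (w - c_k)^2, then the server
# averages the returned local models. Illustrative only.

def local_sgd(w, center, lr=0.1, steps=10):
    """Client update: the gradient of 0.5*(w - center)^2 is (w - center)."""
    for _ in range(steps):
        w -= lr * (w - center)
    return w

def fedavg_round(global_w, client_centers):
    """One FedAvg round: broadcast the global model, train locally, average."""
    local_models = [local_sgd(global_w, c) for c in client_centers]
    return sum(local_models) / len(local_models)

w = 0.0
for _ in range(50):  # repeated rounds drive w toward the joint optimum
    w = fedavg_round(w, [1.0, 3.0, 5.0])
# For these symmetric quadratic losses, the joint optimum is the mean
# of the client centers (3.0), which w approaches geometrically.
```

Under concept drift, the client centers above would shift over time, which is exactly the setting where a stationary FedAvg run accumulates regret and a detection-and-restart wrapper becomes useful.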