Proceedings Paper

BFU: Bayesian Federated Unlearning with Parameter Self-Sharing

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3579856.3590327

Keywords

Bayesian Federated Learning; Machine Unlearning; Data Privacy

Abstract

As the right to be forgotten has been legislated worldwide, many studies attempt to design machine unlearning mechanisms that enable data erasure from a trained model. Existing machine unlearning studies focus on centralized learning, where the server can access all users' data. However, in the popular scenario of federated learning (FL), the server cannot access users' training data. In this paper, we investigate the problem of machine unlearning in FL. We formalize a federated unlearning problem and propose a Bayesian federated unlearning (BFU) approach that implements unlearning for a trained FL model without sharing raw data with the server. Specifically, we first introduce an unlearning rate in BFU to balance the trade-off between forgetting the erased data and remembering the original global model, making it adaptive to different unlearning tasks. Then, to mitigate the accuracy degradation caused by unlearning, we propose BFU with parameter self-sharing (BFU-SS). BFU-SS treats data erasure and maintaining learning accuracy as two tasks and optimizes them jointly during unlearning. Extensive comparisons between our methods and the state-of-the-art federated unlearning method demonstrate the superiority of our proposed realizations.
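The abstract gives no formulas, but the role of the unlearning rate can be illustrated with a toy one-parameter example. The sketch below is a hypothetical illustration, not the paper's algorithm: it blends gradient ascent on an erased-data loss (forgetting) with gradient descent on the distance to the original global parameter (remembering), weighted by an `unlearning_rate` in [0, 1]. All names and the constant 2.0 (standing in for a model fitted to the erased data) are illustrative assumptions.

```python
def unlearn_step(theta, theta_orig, unlearning_rate, lr=0.1):
    """One toy unlearning step on a scalar parameter (illustrative only).

    Forgetting: gradient *ascent* on the erased-data loss
    0.5 * (theta - 2.0) ** 2, whose optimum 2.0 stands in for a model
    fitted to the erased data. Remembering: gradient descent on the
    squared distance to the original global parameter theta_orig.
    """
    forget = unlearning_rate * (theta - 2.0)                  # push away from erased-data fit
    remember = (1.0 - unlearning_rate) * (theta - theta_orig)  # stay near original model
    return theta + lr * (forget - remember)

theta_orig = 1.0
theta = theta_orig
for _ in range(200):
    theta = unlearn_step(theta, theta_orig, unlearning_rate=0.3)
```

With `unlearning_rate = 0.3` the iteration settles near 0.25: farther from the erased-data optimum (2.0) than the original model was, yet still close to `theta_orig`. Raising the rate forgets more aggressively at the cost of drifting further from the original model, which is the trade-off the abstract describes.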

