Article

Enhance membership inference attacks in federated learning

Journal

COMPUTERS & SECURITY
Volume 136

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2023.103535

Keywords

Machine learning; Federated learning (FL); Membership inference attack; Adaboost classifier; Sequence prediction confidence


In federated learning, models unintentionally memorize detailed information about their private training data, and because the central server's aggregation step requires users to upload their model parameters, the models remain susceptible to membership inference attacks. Existing membership inference attacks in federated learning, however, have had limited effectiveness. This paper proposes a new membership inference attack in federated learning that combines data poisoning with sequence prediction confidence. By injecting poisoned data, the attacker makes the model memorize detailed information about specific classes in the target private dataset to the greatest possible extent. The detailed private-data information of the target clients captured by the model is then exposed through its output confidence vectors. We aggregate the confidence information collected over multiple federated learning epochs and train an AdaBoost classifier to learn membership signals from it. Finally, we apply different thresholds to partition the prediction confidence scores output by the AdaBoost classifier, obtaining the membership information. Experiments on multiple datasets and models validate the effectiveness of the attack: it achieves high attack performance with minimal degradation of overall model accuracy.
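The pipeline described above (collect per-epoch confidences for a candidate record, learn membership signals with AdaBoost, then threshold the ensemble's score) can be sketched as follows. This is a minimal illustrative sketch, not the paper's code: synthetic per-epoch confidences stand in for values observed from the poisoned global model, the AdaBoost of decision stumps is hand-rolled for self-containment, and all numbers (epoch count, class means, thresholds) are assumptions chosen for illustration.

```python
import math
import random

random.seed(0)

EPOCHS = 5  # number of FL epochs whose confidences we aggregate per record

def make_sample(is_member):
    # Assumption for illustration: members tend to receive higher prediction
    # confidence from the (poisoned, overfitted) global model than non-members.
    base = 0.85 if is_member else 0.60
    return [min(1.0, max(0.0, random.gauss(base, 0.1))) for _ in range(EPOCHS)]

X = [make_sample(True) for _ in range(50)] + [make_sample(False) for _ in range(50)]
y = [1] * 50 + [-1] * 50  # +1 = member, -1 = non-member

GRID = [i / 20 for i in range(1, 20)]  # candidate stump thresholds

def train_stump(X, y, w):
    # Exhaustively pick the single-feature threshold rule with lowest weighted error.
    best = None
    for j in range(EPOCHS):
        for thr in GRID:
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (sign if xi[j] >= thr else -sign) != yi)
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = train_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, j, thr, sign))
        # Up-weight the samples this stump got wrong.
        w = [wi * math.exp(-alpha * yi * (sign if xi[j] >= thr else -sign))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def score(ensemble, x):
    return sum(a * (s if x[j] >= thr else -s) for a, j, thr, s in ensemble)

model = adaboost(X, y)
# Threshold the ensemble score to decide membership; 0 is the natural cut,
# and sweeping different thresholds trades precision against recall.
preds = [1 if score(model, x) > 0 else -1 for x in X]
acc = sum(p == t for p, t in zip(preds, y)) / len(y)
print(f"attack accuracy on this toy data: {acc:.2f}")
```

In practice the features would be the confidence values the target model assigns to each candidate record across federated training epochs, and one would use a library implementation such as scikit-learn's `AdaBoostClassifier` rather than the hand-rolled version above.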

