Article

PrivAim: A Dual-Privacy Preserving and Quality-Aware Incentive Mechanism for Federated Learning

Journal

IEEE TRANSACTIONS ON COMPUTERS
Volume 72, Issue 7, Pages 1913-1927

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TC.2022.3230904

Keywords

Incentive mechanism; differential privacy; federated learning; multi-dimensional reverse auction


Abstract

Privacy protection and incentive mechanism design are two fundamental problems in federated learning (FL), aiming respectively to protect the privacy of data owners and to stimulate them to share more resources. Recent works have proposed differential privacy (DP) based privacy-preserving incentive mechanisms to solve both problems simultaneously. However, almost all of them treat the privacy level as the only incentive item, without considering other factors such as data quantity and quality. Moreover, an untrusted server can further infer sensitive information from the bids that reflect the true costs of data owners. To solve these problems, in this paper we propose a dual-privacy preserving and quality-aware incentive mechanism, PrivAim, for federated learning. Specifically, PrivAim utilizes differential privacy to protect both the local models and the true costs against the untrusted parameter server, and carefully designs a multi-dimensional reverse auction mechanism that incentivizes data owners with high quality and low cost to participate in FL without revealing their true bids. We theoretically prove that PrivAim satisfies $\Delta b$-truthfulness, individual rationality, computational efficiency, and differential privacy. Extensive experiments show that PrivAim effectively protects bid privacy and achieves at least 21% and 6% improvements in social welfare and model accuracy, respectively, compared to the state of the art.
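The abstract combines two ideas: perturbing what data owners submit (local models and bid costs) with differential-privacy noise, and running a quality-aware reverse auction over the perturbed bids. The Python sketch below is a rough illustration of that combination only: it adds Laplace noise to bids and then greedily selects high-quality, low-cost owners under a budget. The function names, noise scale, scoring rule, and budget constraint are assumptions made for illustration and are not the actual PrivAim mechanism.

# Hypothetical sketch: DP-perturbed bids + quality-aware reverse auction.
# Not the authors' algorithm; all parameters and names are illustrative.

import numpy as np

def perturb_bids(true_bids, epsilon, sensitivity=1.0, rng=None):
    """Add Laplace noise with scale sensitivity/epsilon to each submitted bid,
    so the server never sees the true costs."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=len(true_bids))
    return np.asarray(true_bids, dtype=float) + noise

def select_winners(noisy_bids, qualities, budget):
    """Greedy reverse auction: rank data owners by quality per unit of
    (noisy) cost and admit them until the budget is exhausted."""
    bids = np.maximum(np.asarray(noisy_bids, dtype=float), 1e-9)  # clamp noise-induced negatives
    order = np.argsort(-(np.asarray(qualities, dtype=float) / bids))
    winners, spent = [], 0.0
    for i in order:
        if spent + bids[i] <= budget:
            winners.append(int(i))
            spent += bids[i]
    return winners

if __name__ == "__main__":
    true_bids = [3.0, 5.0, 2.5, 4.0]   # data owners' private costs
    qualities = [0.9, 0.7, 0.6, 0.95]  # e.g., validation-based quality scores
    noisy = perturb_bids(true_bids, epsilon=1.0)
    print("winners:", select_winners(noisy, qualities, budget=8.0))

In this toy setup the server only ever handles the noisy bids; a real mechanism would additionally perturb the local model updates and compensate for the bid noise when proving truthfulness, which is what the paper's theoretical analysis addresses.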

