Journal
IEEE TRANSACTIONS ON COMPUTERS
Volume 72, Issue 7, Pages 1913-1927
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TC.2022.3230904
Keywords
Incentive mechanism; differential privacy; federated learning; multi-dimensional reverse auction
Abstract
Privacy protection and incentive mechanisms are two fundamental problems in federated learning (FL): the former protects the privacy of data owners, while the latter stimulates them to share more resources. Recent works have proposed differential privacy (DP) based privacy-preserving incentive mechanisms to solve both problems simultaneously. However, almost all of them take the privacy level as the only incentive item, neglecting other factors such as data quantity and quality. Moreover, an untrusted server can infer sensitive information from the bids, which reflect the true costs of data owners. To solve these problems, this paper proposes PrivAim, a dual-privacy preserving and quality-aware incentive mechanism for federated learning. Specifically, PrivAim uses differential privacy to protect both the local models and the true costs against the untrusted parameter server, and carefully designs a multi-dimensional reverse auction that incentivizes data owners with high quality and low cost to participate in FL without revealing their true bids. We theoretically prove that PrivAim satisfies $\Delta b$-truthfulness, individual rationality, computational efficiency, and differential privacy. Extensive experiments show that PrivAim effectively protects bid privacy and achieves at least 21% and 6% improvement in social welfare and model accuracy, respectively, over the state-of-the-art.
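The abstract does not specify how PrivAim perturbs bids, only that differential privacy hides true costs from the untrusted server. As a rough illustration of the general idea (an assumption for exposition, not the paper's actual mechanism), a data owner could release an $\epsilon$-DP version of its bid via the standard Laplace mechanism, adding noise scaled to the bid's sensitivity:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_bid(bid: float, epsilon: float, sensitivity: float,
                rng: random.Random) -> float:
    """Release an epsilon-DP bid; `sensitivity` is the max bid range.

    Illustrative only: PrivAim's actual bid-protection scheme is not
    described in this abstract.
    """
    return bid + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
noisy = [perturb_bid(10.0, epsilon=1.0, sensitivity=1.0, rng=rng)
         for _ in range(5000)]
print(sum(noisy) / len(noisy))  # sample mean stays close to the true bid
```

Because the noise is zero-mean, an auctioneer ranking many noisy bids still sees roughly the right ordering on average, while any single reported bid reveals little about the owner's true cost.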