Article

Incentivizing Differentially Private Federated Learning: A Multidimensional Contract Approach

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 8, Issue 13, Pages 10639-10651

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2021.3050163

Keywords

Data models; Computational modeling; Collaborative work; Data privacy; Internet of Things; Contracts; Task analysis; Differential privacy; federated learning; multidimensional contract; incentive mechanism

Funding

  1. National Key Research and Development Program of China [2020YFB1807802]
  2. National Natural Science Foundation of China [61971148]
  3. Guangxi Natural Science Foundation, China [2018GXNSFDA281013]
  4. Foundation for Science and Technology Project of Guilin City [20190214-3]
  5. Natural Science Foundation of Guangdong Province [2018A030313306]
  6. U.S. National Science Foundation [CNS-1646607, CNS-1801925, CNS-2029569]
  7. National Science Foundation [CNS-2029685]

Abstract

This article introduces the application of federated learning in the Internet-of-Things (IoT) domain and the associated privacy issues, and proposes an incentive mechanism design. Based on the differentially private federated learning (DPFL) framework, the contributions and costs of the data owners are modeled to address the information asymmetry between the data owners and the model owner.
Federated learning is a promising tool in the Internet-of-Things (IoT) domain for training a machine learning model in a decentralized manner. Specifically, the data owners (e.g., IoT device consumers) keep their raw data and only share their local computation results to train the global model of the model owner (e.g., an IoT service provider). When executing the federated learning task, the data owners contribute their computation and communication resources. At the same time, the data owners face privacy risks: attackers may infer data properties or recover the raw data from the shared information. Considering these disadvantages, the data owners will be reluctant to use their data to participate in federated learning without a well-designed incentive mechanism. In this article, we deliberately design an incentive mechanism that jointly considers the task expenditure and the privacy issues of federated learning. Based on a differentially private federated learning (DPFL) framework that can prevent the privacy leakage of the data owners, we model the contribution as well as the computation, communication, and privacy costs of each data owner. The three types of costs are the data owners' private information, unknown to the model owner, which thus forms an information asymmetry. To maximize the utility of the model owner under such information asymmetry, we leverage a three-dimensional (3-D) contract approach to design the incentive mechanism. The simulation results validate the effectiveness of the proposed incentive mechanism with the DPFL framework compared to other baseline mechanisms.
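The abstract leaves the DPFL and contract details to the full paper. As a rough illustration only, the Python sketch below combines two generic building blocks consistent with the abstract's description: a Gaussian-mechanism local update (clip-and-add-noise) on the data owner's side, and a toy self-selection step in which a data owner with private computation, communication, and privacy costs picks an item from the model owner's contract menu. The function names, the linear cost model, and the parameter values (`clip_norm`, `noise_multiplier`, the menu itself) are assumptions made for this sketch, not taken from the paper.

```python
import numpy as np

# --- A generic differentially private local update (Gaussian mechanism) ---
# Each data owner clips its local update and adds Gaussian noise before sharing
# it with the model owner, so the raw data cannot be recovered from the shared
# computation results. The clipping bound and noise scale are illustrative.
def dp_local_update(local_gradient, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the local update to `clip_norm` and add Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(local_gradient)
    clipped = local_gradient * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)
    return clipped + noise

# --- A toy contract-menu self-selection step ---
# The model owner publishes a menu of (required effort, reward) items; each data
# owner, whose computation/communication/privacy costs are private information,
# picks the item maximizing its own utility. The linear cost model is a
# simplification for illustration, not the paper's utility function.
def choose_contract(menu, cmp_cost, com_cost, priv_cost):
    """Return the (effort, reward) item maximizing reward - total_unit_cost * effort."""
    unit_cost = cmp_cost + com_cost + priv_cost
    effort, reward = max(menu, key=lambda item: item[1] - unit_cost * item[0])
    # Participate only if the best item yields non-negative utility
    # (individual rationality); otherwise opt out.
    return (effort, reward) if reward - unit_cost * effort >= 0 else None

if __name__ == "__main__":
    grad = np.array([0.5, -1.2, 0.8])
    print("noisy update:", dp_local_update(grad))
    menu = [(1.0, 2.0), (2.0, 3.5), (3.0, 4.5)]  # (effort, reward) pairs, illustrative
    print("chosen item:", choose_contract(menu, cmp_cost=0.4, com_cost=0.3, priv_cost=0.5))
```

In the paper's setting the menu itself would be optimized by the model owner so that each data-owner type truthfully self-selects its intended item (incentive compatibility) while the model owner's utility is maximized; the snippet only shows the data owner's side of that interaction.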

