Article

Secure Federated Matrix Factorization

Journal

IEEE INTELLIGENT SYSTEMS
Volume 36, Issue 5, Pages 11-19

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/MIS.2020.3014880

Keywords

Servers; Encryption; Privacy; Data models; Mathematical model; Machine learning; IEEE Intelligent system; Security and Privacy Protection; Distributed system

Funding

  1. NSFC [61972008]
  2. National Key Research and Development Program of China [2018AAA0101100]

Abstract

To protect user privacy and meet law regulations, federated (machine) learning is obtaining vast interests in recent years. The key principle of federated learning is training a machine learning model without needing to know each user's personal raw private data. In this article, we propose a secure matrix factorization framework under the federated learning setting, called FedMF. First, we design a user-level distributed matrix factorization framework where the model can be learned when each user only uploads the gradient information (instead of the raw preference data) to the server. While gradient information seems secure, we prove that it could still leak users' raw data. To this end, we enhance the distributed matrix factorization framework with homomorphic encryption. We implement the prototype of FedMF and test it with a real movie rating dataset. Results verify the feasibility of FedMF. We also discuss the challenges for applying FedMF in practice for future research.
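The user-level distributed factorization described above can be sketched in a few lines: the server maintains the item-profile matrix, each user keeps their ratings and user profile strictly on-device, and only item-profile gradients are uploaded and aggregated. This is an illustrative plaintext sketch (all names, shapes, and hyperparameters are hypothetical, and the homomorphic-encryption layer that FedMF adds on top of the uploaded gradients is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim, lr = 4, 3, 0.05

# Server-side item profile matrix, shared with every user each round.
item_profiles = rng.normal(scale=0.1, size=(n_items, dim))

class User:
    def __init__(self, ratings):
        self.ratings = ratings  # {item_id: rating}; never leaves the device
        self.profile = rng.normal(scale=0.1, size=dim)  # kept locally

    def local_gradients(self, items):
        """Compute item-profile gradients from private ratings only."""
        grads = np.zeros_like(items)
        for i, r in self.ratings.items():
            err = self.profile @ items[i] - r      # prediction error
            grads[i] = err * self.profile          # uploaded to the server
            self.profile -= lr * err * items[i]    # local user-profile step
        return grads

users = [User({0: 4.0, 2: 1.0}), User({1: 5.0, 3: 2.0})]

for _ in range(1000):
    # The server sees and sums only gradients; raw ratings stay local.
    total = sum(u.local_gradients(item_profiles) for u in users)
    item_profiles -= lr * total
```

In FedMF these uploaded gradients would additionally be encrypted with an additively homomorphic scheme before leaving the device, so the server aggregates ciphertexts without ever seeing individual gradients, which, as the abstract notes, can otherwise leak the raw ratings.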
