4.7 Article

FederatedTrust: A solution for trustworthy federated learning

Publisher

Elsevier
DOI: 10.1016/j.future.2023.10.013

Keywords

Trustworthy federated learning; Trust assessment; AI governance; Privacy; Robustness; Fairness; Explainability; Accountability

Abstract

With the rapid expansion of the IoT and Edge Computing, centralized ML/DL methods face challenges due to distributed data silos and privacy concerns. Federated Learning (FL) has emerged as a solution that preserves data privacy by design, but the growing need for trust in model predictions calls for further research on trustworthy ML/DL. This paper introduces a comprehensive taxonomy of six pillars and over 30 metrics for evaluating the trustworthiness of FL models, and presents FederatedTrust, an algorithm that computes trustworthiness scores from them. Experimental results demonstrate the utility of FederatedTrust in a real-world IoT security use case.
The rapid expansion of the Internet of Things (IoT) and Edge Computing has presented challenges for centralized Machine and Deep Learning (ML/DL) methods due to the presence of distributed data silos that hold sensitive information. To address concerns regarding data privacy, collaborative and privacy-preserving ML/DL techniques like Federated Learning (FL) have emerged. FL ensures data privacy by design, as the local data of participants remains undisclosed during the creation of a global and collaborative model. However, data privacy and performance alone are insufficient, as there is a growing demand for trust in model predictions. Existing literature has proposed various approaches to trustworthy ML/DL (excluding data privacy), identifying robustness, fairness, explainability, and accountability as important pillars. Nevertheless, further research is required to identify trustworthiness pillars and evaluation metrics specifically relevant to FL models, as well as to develop solutions that can compute the trustworthiness level of FL models. This work examines the existing requirements for evaluating trustworthiness in FL and introduces a comprehensive taxonomy consisting of six pillars (privacy, robustness, fairness, explainability, accountability, and federation), along with over 30 metrics for computing the trustworthiness of FL models. Subsequently, an algorithm named FederatedTrust is designed based on the pillars and metrics identified in the taxonomy to compute the trustworthiness score of FL models. A prototype of FederatedTrust is implemented and integrated into the learning process of FederatedScope, a well-established FL framework. Finally, five experiments are conducted with different configurations of FederatedScope (varying the number of participants, selection rates, training rounds, and differential privacy) to demonstrate the utility of FederatedTrust in computing the trustworthiness of FL models. Three experiments employ the FEMNIST dataset, and two utilize the N-BaIoT dataset, considering a real-world IoT security use case.
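The abstract does not detail how the 30+ metrics and six pillars are combined into a single trustworthiness score. A two-level weighted aggregation is one plausible reading, sketched below: normalized metrics are averaged into per-pillar scores, and pillar scores into an overall trust score. The pillar names follow the paper's taxonomy, but the metric names, the weights, and the simple weighted-average scheme are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of a FederatedTrust-style trust score aggregation.
# Pillar names follow the paper's taxonomy; the metric names, weights,
# and two-level weighted average below are illustrative assumptions.

from typing import Dict

PILLARS = ["privacy", "robustness", "fairness",
           "explainability", "accountability", "federation"]

def pillar_score(metrics: Dict[str, float], weights: Dict[str, float]) -> float:
    """Aggregate normalized metrics (each in [0, 1]) into one pillar score."""
    total = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total

def trust_score(pillar_scores: Dict[str, float],
                pillar_weights: Dict[str, float]) -> float:
    """Second level: weighted average of the six pillar scores."""
    total = sum(pillar_weights.values())
    return sum(pillar_scores[p] * pillar_weights[p] for p in PILLARS) / total

# Illustrative usage with made-up metric values for the privacy pillar.
privacy = pillar_score(
    {"differential_privacy": 0.8, "entropy": 0.6, "indistinguishability": 0.7},
    {"differential_privacy": 0.5, "entropy": 0.25, "indistinguishability": 0.25},
)
scores = {p: 0.5 for p in PILLARS}
scores["privacy"] = privacy
print(trust_score(scores, {p: 1.0 for p in PILLARS}))  # equal pillar weights
```

In the actual prototype, the inputs to such an aggregation would be extracted from the FL framework's configuration and training logs (e.g., whether differential privacy is enabled, or how many participants were selected per round); that extraction logic is specific to the FederatedTrust implementation and is not reproduced here.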

