4.6 Review

How to keep text private? A systematic review of deep learning methods for privacy-preserving natural language processing

Journal

ARTIFICIAL INTELLIGENCE REVIEW
Volume 56, Issue 2, Pages 1427-1492

Publisher

SPRINGER
DOI: 10.1007/s10462-022-10204-6

Keywords

Deep learning; Privacy; Natural language processing; Differential privacy; Homomorphic encryption; Searchable encryption; Federated learning


This article systematically reviews more than sixty deep learning (DL) methods for privacy-preserving natural language processing (NLP) published between 2016 and 2020, covering their classification, the privacy threats they address, evaluation metrics, and open challenges in real-world scenarios.
Deep learning (DL) models for natural language processing (NLP) tasks often handle private data, demanding protection against breaches and disclosures. Data protection laws, such as the European Union's General Data Protection Regulation (GDPR), thereby enforce the need for privacy. Although many privacy-preserving NLP methods have been proposed in recent years, no categories to organize them have been introduced yet, making it hard to follow the progress of the literature. To close this gap, this article systematically reviews over sixty DL methods for privacy-preserving NLP published between 2016 and 2020, covering theoretical foundations, privacy-enhancing technologies, and analysis of their suitability for real-world scenarios. First, we introduce a novel taxonomy for classifying the existing methods into three categories: data safeguarding methods, trusted methods, and verification methods. Second, we present an extensive summary of privacy threats, datasets for applications, and metrics for privacy evaluation. Third, throughout the review, we describe privacy issues in the NLP pipeline in a holistic view. Further, we discuss open challenges in privacy-preserving NLP regarding data traceability, computation overhead, dataset size, the prevalence of human biases in embeddings, and the privacy-utility tradeoff. Finally, this review presents future research directions to guide successive research and development of privacy-preserving NLP models.
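To make the privacy-utility tradeoff mentioned above concrete, the sketch below illustrates one representative data safeguarding technique covered by the review: perturbing a sentence embedding with the Gaussian mechanism from differential privacy. This is an illustrative example, not code from the reviewed paper; the function and parameter names are hypothetical.

# Illustrative sketch (not from the reviewed paper): Gaussian-mechanism
# noise added to a text embedding, one of the differential-privacy
# building blocks surveyed in the review.
import numpy as np

def gaussian_mechanism(embedding: np.ndarray,
                       epsilon: float = 1.0,
                       delta: float = 1e-5,
                       l2_sensitivity: float = 1.0) -> np.ndarray:
    """Return a noisy copy of `embedding` that satisfies (epsilon, delta)-DP,
    assuming neighboring inputs differ by at most `l2_sensitivity` in L2 norm."""
    # Classic calibration of the Gaussian mechanism (valid for epsilon <= 1).
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return embedding + np.random.normal(0.0, sigma, size=embedding.shape)

# Example usage: normalize the embedding so the contribution of a single
# input is bounded (the L2 sensitivity is assumed to be 1 for illustration).
vec = np.random.randn(768)            # stand-in for a sentence embedding
vec = vec / np.linalg.norm(vec)
private_vec = gaussian_mechanism(vec, epsilon=0.5)

Smaller epsilon values inject more noise, strengthening the privacy guarantee but degrading downstream task accuracy; this is the privacy-utility tradeoff the authors list among the open challenges.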


Reviews

Main rating: 4.6 (not enough ratings)
Secondary ratings (Novelty, Significance, Scientific rigor): not yet rated
Recommendations: no data