Article

SensitiveNets: Learning Agnostic Representations with Application to Face Images

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2020.3015420

Keywords

Task analysis; Face; Privacy; Face recognition; Machine learning algorithms; Neural networks; Data protection; face analysis; biometrics; deep learning; agnostic; algorithmic discrimination; bias; privacy

Funding

  1. PRIMA [MSCA-ITN-2019-860315]
  2. TRESPASS-ETN [MSCA-ITN-2019-860813]
  3. BIBECA [RTI2018-101248-B-I00 MINECO]

Abstract

This work proposes a novel privacy-preserving neural network feature representation that suppresses sensitive information in a learned space while maintaining the utility of the data. New international regulations for personal data protection require data controllers to guarantee privacy and avoid discriminatory hazards when managing sensitive user data. In our approach, privacy and discrimination are related to each other. Instead of existing approaches aimed directly at fairness improvement, the proposed feature representation enforces the privacy of selected attributes. This way, fairness is not the objective, but the result of a privacy-preserving learning method. This approach guarantees that sensitive information cannot be exploited by any agent that processes the output of the model, ensuring both privacy and equality of opportunity. Our method is based on an adversarial regularizer that introduces a sensitive-information removal function into the learning objective. The method is evaluated on three different primary tasks (identity, attractiveness, and smiling) and three publicly available benchmarks. In addition, we present a new face annotation dataset with a balanced distribution between genders and ethnic origins. The experiments demonstrate that it is possible to improve privacy and equality of opportunity while retaining competitive performance, independently of the task.
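The core idea described in the abstract, an adversarial regularizer that adds a sensitive-information removal term to the primary learning objective, can be sketched in a simplified form. The following is a minimal illustration, not the paper's exact formulation: it penalizes a feature representation in proportion to how far an auxiliary classifier's sensitive-attribute predictions drift from chance level (0.5 for a binary attribute), so the total loss grows when sensitive information remains recoverable. The function name and the chance-level penalty are assumptions for illustration only.

```python
import numpy as np

def privacy_regularized_loss(task_loss, sensitive_probs, weight=1.0):
    """Combine a primary-task loss with a privacy penalty.

    task_loss:       scalar loss of the primary task (e.g. identity recognition)
    sensitive_probs: predicted probabilities of a binary sensitive attribute
                     (e.g. gender) produced by an auxiliary classifier that
                     reads the learned feature representation
    weight:          trade-off between utility and sensitive-information removal

    The penalty is 0 when every prediction sits at the uninformative
    chance level (0.5) and approaches 1 when predictions are confident,
    i.e. when the representation still leaks the sensitive attribute.
    """
    sensitive_probs = np.asarray(sensitive_probs, dtype=float)
    # distance of each prediction from chance, rescaled to [0, 1]
    leakage = 2.0 * np.mean(np.abs(sensitive_probs - 0.5))
    return task_loss + weight * leakage
```

In an actual training loop, the gradient of this penalty would be propagated back into the feature extractor (while the auxiliary classifier is trained adversarially to keep detecting the attribute), driving the representation toward agnosticism with respect to the protected attribute.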
