Article

AttrLeaks on the Edge: Exploiting Information Leakage from Privacy-Preserving Co-inference

Journal

CHINESE JOURNAL OF ELECTRONICS
Volume 32, Issue 1, Pages 1-12

Publisher

WILEY
DOI: 10.23919/cje.2022.00.031

Keywords

Deep learning; Privacy; Collaboration; Transforms; Feature extraction; Prediction algorithms; Iron; Collaborative inference; Private information leakage; Attribute inference attack

Summary

Collaborative inference accelerates deep neural network inference by extracting representations on the device and making predictions at the edge server. However, it can leak users' sensitive attributes. In this paper, we propose AttrLeaks, an attack framework that exploits the vulnerability of privacy-preserving co-inference by decoding uploaded representations into a vulnerable form and predicting private attributes from them. Experimental results show that AttrLeaks outperforms existing methods in terms of attack success rate.

Abstract

Collaborative inference (co-inference) accelerates deep neural network inference by extracting representations on the device and making predictions at the edge server, but it may disclose sensitive information about users' private attributes (e.g., race). Although many privacy-preserving mechanisms for co-inference have been proposed to eliminate privacy concerns, leakage of sensitive attributes can still occur during inference. In this paper, we explore privacy leakage against privacy-preserving co-inference by decoding the uploaded representations into a vulnerable form. We propose a novel attack framework named AttrLeaks, which consists of a shadow model of the feature extractor (FE), a susceptibility reconstruction decoder, and a private attribute classifier. Based on our observation that values in the inner layers of the FE (internal representations) are more susceptible to attack, the shadow model simulates the victim's FE in the black-box scenario and generates the internal representations. The susceptibility reconstruction decoder then transforms the victim's uploaded representations into the vulnerable form, which enables the malicious classifier to easily predict the private attributes. Extensive experimental results demonstrate that AttrLeaks outperforms the state of the art in terms of attack success rate.
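To make the threat model concrete, the following is a minimal toy sketch of the co-inference split and the attack pipeline described in the abstract. All shapes, weights, and function names are hypothetical illustrations (untrained random matrices standing in for the real networks), not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
D_IN, D_REP, N_CLS, N_ATTR = 32, 8, 5, 2

# Victim's on-device feature extractor (FE): raw input -> uploaded representation.
W_fe = rng.normal(size=(D_IN, D_REP))
def feature_extractor(x):
    return np.maximum(x @ W_fe, 0.0)  # ReLU representation sent to the edge server

# Edge server's benign task classifier works only on the uploaded representation.
W_task = rng.normal(size=(D_REP, N_CLS))
def server_predict(rep):
    return int(np.argmax(rep @ W_task))

# Attacker pipeline (AttrLeaks-style, heavily simplified):
#  1) a shadow model imitates the victim's FE in the black-box setting;
#  2) a reconstruction decoder maps the uploaded representation back to a
#     more "vulnerable" internal form;
#  3) an attribute classifier predicts the private attribute from that form.
W_shadow = W_fe + 0.05 * rng.normal(size=W_fe.shape)  # imperfect imitation of the FE
W_dec = rng.normal(size=(D_REP, D_REP))               # stand-in decoder weights
W_attr = rng.normal(size=(D_REP, N_ATTR))             # stand-in attribute classifier

def attack(rep):
    internal = np.maximum(rep @ W_dec, 0.0)   # reconstructed internal representation
    return int(np.argmax(internal @ W_attr))  # inferred private attribute (e.g., race)

x = rng.normal(size=D_IN)
rep = feature_extractor(x)    # what the device uploads
y_task = server_predict(rep)  # benign edge inference result
y_attr = attack(rep)          # attacker's attribute inference from the same upload
```

The key point the sketch illustrates is that the attacker never needs the raw input: the representation the device must upload for the legitimate task is the same object the decoder and attribute classifier operate on.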
