Article

Data Security Issues in Deep Learning: Attacks, Countermeasures, and Opportunities

Journal

IEEE COMMUNICATIONS MAGAZINE
Volume 57, Issue 11, Pages 116-122

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/MCOM.001.1900091

Keywords

-

Funding

  1. National Key R&D Program of China [2017YFB0802300, 2017YFB0802000]
  2. National Natural Science Foundation of China [61972454, 61802051, 61772121, 61728102, 61472065]
  3. Peng Cheng Laboratory Project of Guangdong Province [PCL2018KP004]
  4. Guangxi Key Laboratory of Cryptography and Information Security [GCIS201804]

Abstract

Benefiting from advances in algorithms, massive data, and powerful computing resources, deep learning has been applied across a wide variety of fields and has produced unparalleled results. It plays a vital role in everyday applications and is subtly changing the rules, habits, and behaviors of society. Inevitably, however, data-driven learning strategies introduce potential security and privacy threats, raising public and government concerns about their deployment in the real world. In this article, we focus on data security issues in deep learning. We first survey the potential threats in this area and then present the latest countermeasures based on various underlying technologies, discussing the challenges and research opportunities on both offense and defense. We then propose SecureNet, the first verifiable and privacy-preserving prediction protocol that protects model integrity and user privacy in deep neural networks (DNNs); it can significantly resist various security and privacy threats during the prediction process. We evaluate SecureNet on a real dataset, and the experimental results show its superior performance in detecting various integrity attacks against DNN models.
