3.8 Proceedings Paper

MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection

Publisher

IEEE
DOI: 10.1109/host45689.2020.9300276

Keywords

Machine Learning; Neural Networks; Side-Channel Analysis; Masking

Funding

  1. NSF [1850373]
  2. SRC GRC Task [2908.001]
  3. NSF Directorate for Computer & Information Science & Engineering
  4. NSF Division of Computer and Network Systems [1850373]

Abstract

Differential Power Analysis (DPA) has been an active area of research for the past two decades, studying both attacks that extract secret information from cryptographic implementations through power measurements and the corresponding defenses. Research on power side-channels has so far focused predominantly on implementations of ciphers such as AES, DES, RSA, and, more recently, post-quantum cryptography primitives (e.g., lattices). Meanwhile, machine-learning applications are becoming ubiquitous, including several scenarios where the machine-learning models are intellectual property requiring confidentiality. Extending side-channel analysis to machine-learning model extraction, however, is largely unexplored. This paper extends the DPA framework to neural-network classifiers. First, it demonstrates DPA attacks during inference that extract secret model parameters such as the weights and biases of a neural network. Second, it proposes the first countermeasures against these attacks by applying masking. The resulting design uses novel masked components such as masked adder trees for fully-connected layers and masked Rectified Linear Units for activation functions. Experiments on a SAKURA-X FPGA board show that first-order DPA attacks on the unprotected implementation succeed with only 200 traces, while the proposed protection increases the latency and area cost by 2.8x and 2.3x, respectively.
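The masking idea described in the abstract can be illustrated with a small software sketch. The Python snippet below is not the paper's hardware design; it only demonstrates first-order arithmetic masking, in which each secret weight is split into two random shares and a fully-connected layer's weighted sum is accumulated share-wise, so that no single intermediate value depends on a secret weight. The function names (mask, masked_weighted_sum, unmask) and the 16-bit modulus are illustrative assumptions.

import secrets

MOD = 1 << 16  # assumed 16-bit datapath width, purely for illustration

def mask(value):
    # Split a secret value into two arithmetic shares: value == (s0 + s1) mod MOD.
    s1 = secrets.randbelow(MOD)
    s0 = (value - s1) % MOD
    return s0, s1

def masked_weighted_sum(weight_shares, inputs):
    # Accumulate the dot product share-wise; each running sum alone is
    # statistically independent of the secret weights.
    acc0, acc1 = 0, 0
    for (w0, w1), x in zip(weight_shares, inputs):
        acc0 = (acc0 + w0 * x) % MOD
        acc1 = (acc1 + w1 * x) % MOD
    return acc0, acc1

def unmask(s0, s1):
    # Recombine shares (only done where the value may safely be exposed).
    return (s0 + s1) % MOD

# Sanity check: the masked accumulation matches the plain dot product mod 2^16.
weights = [3, 65535, 7]            # 65535 stands in for -1 mod 2^16
inputs = [10, 4, 2]
shares = [mask(w) for w in weights]
result = unmask(*masked_weighted_sum(shares, inputs))
assert result == sum(w * x for w, x in zip(weights, inputs)) % MOD

The nonlinear activation is the harder part: a masked sum cannot be compared against zero share-wise using purely arithmetic shares, which is where dedicated masked components such as the paper's masked Rectified Linear Unit come in.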
