Article

Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks With Adversarial Traces

Journal

IEEE Transactions on Information Forensics and Security

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TIFS.2020.3039691

Keywords

Training; Deep learning; Monitoring; Fingerprint recognition; Bandwidth; Reliability; Privacy; Anonymity system; Defense; Adversarial machine learning

Funding

  1. National Science Foundation (NSF) [1423163, 1722743, 1816851, 1433736]
  2. NSF Directorate for Education and Human Resources, Division of Graduate Education [1433736]
  3. NSF Directorate for Computer & Information Science & Engineering, Division of Computer and Network Systems [1423163, 1722743, 1816851]

Abstract

The paper introduces a novel defense technique called Mockingbird, which utilizes adversarial examples to resist Website Fingerprinting attacks, effectively reducing attack accuracy and bandwidth overhead.
Website Fingerprinting (WF) is a type of traffic analysis attack that enables a local passive eavesdropper to infer the victim's activity, even when the traffic is protected by a VPN or an anonymity system like Tor. Leveraging a deep-learning classifier, a WF attacker can attain over 98% accuracy on Tor traffic. In this paper, we explore a novel defense, Mockingbird, based on the idea of adversarial examples, which have been shown to undermine machine-learning classifiers in other domains. Since the attacker gets to design and train their attack classifier based on the defense, we first demonstrate that a straightforward technique for generating adversarial-example-based traces fails to protect against an attacker using adversarial training for robust classification. We then propose Mockingbird, a technique for generating traces that resist adversarial training by moving randomly in the space of viable traces rather than following more predictable gradients. The technique drops the accuracy of the state-of-the-art attack hardened with adversarial training from 98% to 42-58% while incurring only 58% bandwidth overhead. The attack accuracy is generally lower than against state-of-the-art defenses, and much lower when considering Top-2 accuracy, while incurring lower bandwidth overheads.
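The core idea described above, taking small random steps toward other viable traces instead of following a classifier's gradient, while only ever adding traffic, can be illustrated with a minimal sketch. This is not the authors' implementation: the burst-vector trace representation, the function names, and the step rule are all assumptions made for illustration.

```python
import numpy as np

def perturb_trace(source, target_pool, alpha=0.1, steps=10, rng=None):
    """Illustrative sketch of Mockingbird-style perturbation (assumed
    burst-vector representation; hypothetical, not the paper's algorithm).

    Instead of following a classifier's gradient (which adversarial
    training can anticipate), repeatedly move the source trace a small
    step toward a randomly chosen target trace from another site.
    Padding can only add cells, so the result never drops below the
    original trace and remains a viable trace an onion proxy could send.
    """
    rng = rng or np.random.default_rng()
    trace = source.astype(float).copy()
    for _ in range(steps):
        target = target_pool[rng.integers(len(target_pool))]
        delta = alpha * (target - trace)
        # Clip the step so each burst never shrinks below the original:
        # a defense can add dummy cells but cannot remove real ones.
        trace = np.maximum(trace + delta, source)
    return trace

def overhead(source, perturbed):
    """Bandwidth overhead: fraction of extra cells added by padding."""
    return (perturbed.sum() - source.sum()) / source.sum()
```

Because each run draws different random targets, a classifier hardened with adversarial training sees no stable perturbation pattern to learn, which is the intuition behind the reported 42-58% accuracy range.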

