Article

Universal adversarial examples and perturbations for quantum classifiers

Journal

NATIONAL SCIENCE REVIEW
Volume 9, Issue 6, Pages: -

Publisher

OXFORD UNIV PRESS
DOI: 10.1093/nsr/nwab130

Keywords

quantum machine learning; quantum classifiers; adversarial examples; measure concentration; quantum no-free-lunch theorem

Funding

  1. Tsinghua University [53330300320]
  2. National Natural Science Foundation of China [12075128]
  3. Shanghai Qi Zhi Institute


This paper studies the universality of adversarial examples and perturbations for quantum classifiers, providing both numerical evidence and analytical proofs for the existence of universal adversarial examples and universal adversarial perturbations. The vulnerability of quantum machine learning systems revealed here is crucial for the practical application of near-term and future quantum technologies to machine learning problems.
Quantum machine learning explores the interplay between machine learning and quantum physics, which may lead to unprecedented perspectives for both fields. In fact, recent works have shown strong evidence that quantum computers could outperform classical computers in solving certain notable machine learning tasks. Yet, quantum learning systems may also suffer from a vulnerability problem: adding a tiny, carefully crafted perturbation to the legitimate input data would cause the system to make incorrect predictions at a notably high confidence level. In this paper, we study the universality of adversarial examples and perturbations for quantum classifiers. Through concrete examples involving classifications of real-life images and quantum phases of matter, we show that there exist universal adversarial examples that can fool a set of different quantum classifiers. We prove that, for a set of k classifiers each receiving input data of n qubits, an O(ln k/2^n) increase of the perturbation strength is enough to ensure a moderate universal adversarial risk. In addition, for a given quantum classifier, we show that there exist universal adversarial perturbations, which can be added to different legitimate samples to turn them into adversarial examples for the classifier. Our results reveal the universality aspect of adversarial attacks on quantum machine learning systems, which is crucial for practical applications of both near-term and future quantum technologies to machine learning problems.
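To make the second result concrete, below is a minimal numerical sketch of a universal adversarial perturbation attack: a single perturbation delta, constrained to a fixed norm budget, is optimized so that adding it to many different legitimate input states flips a toy quantum classifier's predictions. The Haar-random "classifier" circuit, the additive state-vector perturbation model, the finite-difference gradient attack, and every hyper-parameter below are illustrative assumptions for this sketch, not the construction used in the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 4                  # number of qubits per input
d = 2 ** n             # Hilbert-space dimension

def haar_unitary(dim):
    # Sample a Haar-random unitary via QR decomposition.
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U = haar_unitary(d)    # stand-in for a trained quantum classifier circuit

def normalize(v):
    return v / np.linalg.norm(v)

def predict_prob(psi):
    # Probability of measuring the first qubit in |0> after the circuit.
    out = U @ psi
    return float(np.sum(np.abs(out[: d // 2]) ** 2))

# Legitimate samples: random pure states; the classifier's own clean
# predictions serve as the labels the universal perturbation must flip.
samples = [normalize(rng.normal(size=d) + 1j * rng.normal(size=d))
           for _ in range(20)]
signs = [1.0 if predict_prob(psi) > 0.5 else -1.0 for psi in samples]

def attack_loss(delta):
    # Mean signed margin over all samples; negative means "flipped".
    return sum(s * (predict_prob(normalize(psi + delta)) - 0.5)
               for psi, s in zip(samples, signs)) / len(samples)

eps = 0.3              # norm budget for the universal perturbation
delta = np.zeros(d, dtype=complex)
step, fd = 0.05, 1e-4

for _ in range(100):   # finite-difference gradient descent on the margin
    grad = np.zeros(d, dtype=complex)
    for i in range(d):
        e = np.zeros(d, dtype=complex)
        e[i] = fd
        g_re = (attack_loss(delta + e) - attack_loss(delta - e)) / (2 * fd)
        e[i] = 1j * fd
        g_im = (attack_loss(delta + e) - attack_loss(delta - e)) / (2 * fd)
        grad[i] = g_re + 1j * g_im
    delta -= step * grad
    if np.linalg.norm(delta) > eps:    # project back onto the budget
        delta *= eps / np.linalg.norm(delta)

flipped = sum(1 for psi, s in zip(samples, signs)
              if s * (predict_prob(normalize(psi + delta)) - 0.5) < 0)
print(f"universal perturbation flips {flipped}/{len(samples)} predictions")

A variation closer to physical implementations would realize the perturbation as a small unitary exp(-i*eps*H) acting on every input state; the additive model above is used only to keep the sketch short.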
