Article

Benchmarking adversarially robust quantum machine learning at scale

Journal

PHYSICAL REVIEW RESEARCH
Volume 5, Issue 2, Pages: -

Publisher

AMER PHYSICAL SOC
DOI: 10.1103/PhysRevResearch.5.023186

Keywords

-


Summary

In this study, the robustness of quantum ML networks, specifically quantum variational classifiers (QVC), was evaluated through rigorous training on simple and complex image datasets under a variety of high-end adversarial attacks. The results demonstrate that QVCs learn features that classical neural networks do not detect, giving them enhanced robustness against classical adversarial attacks and suggesting a potential quantum advantage for ML tasks. However, attacks crafted on quantum networks can also deceive classical neural networks. By combining quantum and classical network outcomes, an adversarial attack detection method is proposed.
Abstract

Machine learning (ML) methods such as artificial neural networks are rapidly becoming ubiquitous in modern science, technology, and industry. Despite their accuracy and sophistication, neural networks can be easily fooled by carefully designed malicious inputs known as adversarial attacks. While such vulnerabilities remain a serious challenge for classical neural networks, the extent to which they exist in the quantum ML setting is not fully understood. In this paper, we benchmark the robustness of quantum ML networks, such as quantum variational classifiers (QVC), at scale by performing rigorous training on both simple and complex image datasets and under a variety of high-end adversarial attacks. Our results show that QVCs offer notably enhanced robustness against classical adversarial attacks by learning features that are not detected by classical neural networks, indicating a possible quantum advantage for ML tasks. Remarkably, the converse is not true: attacks on quantum networks are also capable of deceiving classical neural networks. By combining quantum and classical network outcomes, we propose an adversarial attack detection technique. Traditionally, quantum advantage in ML systems has been sought through increased accuracy or algorithmic speed-up, but our study reveals the potential for a different kind of quantum advantage through the superior robustness of ML models, whose practical realization would address serious security concerns and reliability issues of ML algorithms employed in a myriad of applications, including autonomous vehicles, cybersecurity, and surveillance robotic systems.
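The detection idea described above (flagging inputs on which the quantum and classical models disagree) can be illustrated with a toy sketch. Everything below is an assumption for illustration only, not the paper's actual setup: a synthetic 1D two-class dataset, a logistic-regression stand-in for the classical network, a single-qubit variational classifier simulated analytically (expectation <Z> = cos(x + theta) after RY(x) encoding and a trainable RY(theta)), an FGSM-style perturbation in place of the paper's "high-end" attacks, and grid search in place of variational optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D two-class data: class 0 clustered near -1, class 1 near +1.
X = np.concatenate([rng.normal(-1.0, 0.3, 50), rng.normal(1.0, 0.3, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])

# --- "Classical" model: logistic regression trained by gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))   # sigmoid output
    gw = np.mean((p - y) * X)                # dL/dw for cross-entropy loss
    gb = np.mean(p - y)
    w -= 0.5 * gw
    b -= 0.5 * gb

def classical_predict(x):
    return (1.0 / (1.0 + np.exp(-(w * x + b))) > 0.5).astype(int)

# --- Minimal one-qubit "variational classifier" (simulated analytically):
# the state RY(theta) RY(x) |0> has <Z> = cos(x + theta); predict class 1
# when <Z> < 0. Grid search stands in for variational optimization.
def qvc_expectation(x, theta):
    return np.cos(x + theta)

thetas = np.linspace(0.0, 2.0 * np.pi, 200)
accs = [np.mean((qvc_expectation(X, t) < 0).astype(int) == y) for t in thetas]
theta = thetas[int(np.argmax(accs))]

def quantum_predict(x):
    return (qvc_expectation(x, theta) < 0).astype(int)

# --- FGSM-style attack on the classical model: step in the direction of the
# sign of the loss gradient with respect to the input.
def fgsm(x, y_true, eps=1.0):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    grad_x = (p - y_true) * w                # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# --- Detection rule from the abstract, in toy form: flag any input on which
# the classical and quantum classifiers disagree.
def flag_adversarial(x):
    return classical_predict(x) != quantum_predict(x)
```

Because both toy models share essentially the same 1D decision boundary, this sketch only demonstrates the detection mechanism, not the quantum robustness result itself, which in the paper relies on QVCs learning features the classical networks miss.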

