Article

Anomaly detection with variational quantum generative adversarial networks

Journal

QUANTUM SCIENCE AND TECHNOLOGY
Volume 6, Issue 4, Pages -

Publisher

IOP Publishing Ltd
DOI: 10.1088/2058-9565/ac0d4d

Keywords

quantum machine learning; anomaly detection; noisy intermediate-scale quantum; quantum generative adversarial networks; generative adversarial networks

Funding

  1. German Federal Ministry for Economic Affairs and Energy [01MK20005D]


Generative adversarial networks (GANs) are a machine learning framework comprising a generative model for sampling from a target distribution and a discriminative model for evaluating the proximity of a sample to the target distribution. GANs exhibit strong performance in imaging and anomaly detection tasks. However, they suffer from training instabilities, and sampling efficiency may be limited by the classical sampling procedure. We introduce variational quantum-classical Wasserstein GANs (WGANs) to address these issues and embed this model in a classical machine learning framework for anomaly detection. Classical WGANs improve training stability by using a cost function better suited for gradient descent. Our model replaces the generator of the WGAN with a hybrid quantum-classical neural net and leaves the classical discriminative model unchanged. This way, high-dimensional classical data enters only the classical model and need not be prepared in a quantum circuit. We demonstrate the effectiveness of this method on a credit card fraud dataset. For this dataset, our method shows performance on par with classical methods in terms of the F1 score. We analyze the influence of the circuit ansatz, layer width and depth, neural net architecture, parameter initialization strategy, and sampling noise on convergence and performance.
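To make the architecture described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: a two-qubit variational circuit (alternating RY rotations and a CNOT, a common hardware-efficient ansatz) serves as the generator, with per-qubit Pauli-Z expectation values as the generated sample, while the critic stays a purely classical function and the Wasserstein objective is the difference of the critic's mean scores on real and generated samples. All function names, the circuit shape, and the linear critic are hypothetical choices for this sketch.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, wire, n_qubits=2):
    """Apply a single-qubit gate to one wire of an n-qubit state vector."""
    ops = [np.eye(2)] * n_qubits
    ops[wire] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

# CNOT with qubit 0 as control, qubit 1 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def generator(params):
    """Hypothetical 2-qubit variational generator: RY layer, CNOT, RY layer.
    Returns the <Z> expectation on each qubit as a 2-dimensional sample."""
    state = np.zeros(4)
    state[0] = 1.0  # start in |00>
    state = apply_1q(state, ry(params[0]), 0)
    state = apply_1q(state, ry(params[1]), 1)
    state = CNOT @ state
    state = apply_1q(state, ry(params[2]), 0)
    state = apply_1q(state, ry(params[3]), 1)
    probs = state ** 2  # amplitudes are real for this RY/CNOT circuit
    # <Z> per qubit: +1 for basis states where that qubit is 0, -1 where it is 1
    z0 = probs[0] + probs[1] - probs[2] - probs[3]
    z1 = probs[0] - probs[1] + probs[2] - probs[3]
    return np.array([z0, z1])

def critic(samples, w):
    """Classical linear critic; the quantum part never touches it,
    so high-dimensional classical data enters only here."""
    return samples @ w

def wasserstein_loss(real, fake, w):
    """Wasserstein objective: the critic maximizes this difference of means,
    the (quantum) generator minimizes it."""
    return critic(real, w).mean() - critic(fake, w).mean()
```

In a full training loop, the generator parameters would be updated by gradient descent on `wasserstein_loss` (e.g. via the parameter-shift rule on real hardware), and anomaly scores would come from the trained critic; those steps are omitted here for brevity.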

