4.1 Article

Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise

Journal

Journal of the Audio Engineering Society
Volume 71, Issue 1-2, Pages 34-44

Publisher

Audio Engineering Society
DOI: 10.17743/jaes.2022.0060

Keywords

-

This paper proposes a defense method to enhance the robustness and security of automatic speech recognition (ASR) systems against adversarial examples. It introduces an algorithm for devastating and detecting adversarial examples that can attack advanced ASR systems.
An automatic speech recognition (ASR) system based on a deep neural network is vulnerable to adversarial examples, which can in particular cause a command-dependent ASR system to fail. A defense method against adversarial examples is proposed to improve the robustness and security of the ASR system: an algorithm for devastating and detecting adversarial examples that can attack current advanced ASR systems. Advanced text-dependent and command-dependent ASR systems are chosen as targets, and adversarial examples are generated by an optimization-based attack on the text-dependent ASR and by a genetic-algorithm-based attack on the command-dependent ASR. The defense is based on input transformation: random noise of different intensities and kinds is added to the input to devastate the perturbation previously added to a normal example. Experimental results show that the method performs well: after adding noise, the similarity of original speech can reach 99.68%, the similarity of adversarial examples can drop to zero, and the detection rate of adversarial examples can reach 94%.
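
To make the defense concrete, here is a minimal Python sketch of the two steps the abstract describes: an input transformation that overlays random noise on a waveform, and a detector that flags an input as adversarial when its transcription changes sharply after the transformation. Everything here is an illustrative assumption rather than the authors' code: the function names, the noise kinds and intensity, the `difflib` string-similarity metric, and the 0.5 detection threshold are all placeholders.

```python
import difflib

import numpy as np


def add_random_noise(waveform, kind="gaussian", intensity=0.01, rng=None):
    """Input transformation: add random noise of a given kind and intensity.

    Assumption: the waveform is a float array scaled to [-1, 1]. The noise
    kinds and the default intensity are illustrative, not the paper's settings.
    """
    rng = rng if rng is not None else np.random.default_rng()
    if kind == "gaussian":
        noise = rng.normal(0.0, intensity, size=waveform.shape)
    elif kind == "uniform":
        noise = rng.uniform(-intensity, intensity, size=waveform.shape)
    else:
        raise ValueError(f"unknown noise kind: {kind}")
    # The random noise devastates a finely optimized adversarial perturbation
    # while leaving normal speech largely intelligible.
    return np.clip(waveform + noise, -1.0, 1.0)


def detect_adversarial(transcribe, waveform, threshold=0.5, **noise_args):
    """Flag an input whose transcription collapses after the transformation.

    `transcribe` is any ASR callable mapping a waveform to text. Clean speech
    should transcribe almost identically before and after the noise (the paper
    reports up to 99.68% similarity), while an adversarial example's
    transcription collapses toward zero similarity.
    """
    before = transcribe(waveform)
    after = transcribe(add_random_noise(waveform, **noise_args))
    similarity = difflib.SequenceMatcher(None, before, after).ratio()
    return similarity < threshold, similarity
```

In practice `transcribe` would wrap the target text-dependent or command-dependent ASR system, and the noise kind, intensity, and threshold would be tuned so that clean speech stays above the similarity threshold while adversarial examples fall below it.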

