Article

Enhancing the anti-steganalysis ability of steganography via adversarial examples

Journal

MULTIMEDIA TOOLS AND APPLICATIONS
Volume -, Issue -, Pages -

Publisher

SPRINGER
DOI: 10.1007/s11042-023-15306-z

Keywords

Steganography; Deep learning; Adversarial example; Generative adversarial network


Abstract

Steganography can effectively conceal secret information in a carrier medium, enabling covert communication without drawing the attention of third parties and ensuring the safe, reliable transmission of confidential information. However, with the development of steganalysis, deep learning-based steganalyzers can accurately identify modification traces in steganographic covers, posing a serious threat to steganography. The focus of this research is therefore how to reduce the detection accuracy of deep learning-based steganalyzers. In this work, we design an Adversarial Example STeganography (AEST) method that hides a secret grayscale image inside a color cover image, producing a stego image that is difficult to distinguish with the naked eye. An attack module composed of the FGM and PGD adversarial attacks then adds small perturbations to generate adversarial steganographic images, reducing the detection accuracy of the steganalyzer. In addition, to limit the impact of the adversarial perturbations on secret-information recovery, we design a decoder based on adversarial training and a generative adversarial network. Experimental results show that AEST exhibits strong anti-steganalysis performance; for example, adversarial steganographic images generated with the PGD attack raise the detection error rate of the XuNet steganalyzer to 63.511%.
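The PGD attack the abstract refers to iteratively perturbs the stego image to lower the steganalyzer's confidence while keeping each pixel within a small L-infinity budget. The following is a minimal illustrative sketch, not the paper's implementation: it uses a toy logistic detector (a hypothetical stand-in for a network such as XuNet), and the names `pgd_attack`, `eps`, `alpha`, and `steps` are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, w, eps=0.03, alpha=0.01, steps=10):
    """Push a stego image toward the 'cover' class of a toy logistic detector.

    x : flattened image with pixel values in [0, 1]
    w : detector weights; p(stego) = sigmoid(w . x)
        (a hypothetical linear stand-in for a trained steganalyzer)
    """
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv)        # detector's current stego probability
        grad = p * w                  # gradient of -log(1 - p) w.r.t. x_adv
        # descend the detector's confidence along the gradient sign
        x_adv = x_adv - alpha * np.sign(grad)
        # project the perturbation back into the L-infinity eps-ball
        x_adv = x0 + np.clip(x_adv - x0, -eps, eps)
        # keep pixels in the valid range
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

Setting `steps=1` reduces this loop to a single signed-gradient step, i.e. the FGM-style variant also used by the attack module; the projection step is what distinguishes PGD from that one-shot attack.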

