Article

Faster and transferable deep learning steganalysis on GPU

Journal

JOURNAL OF REAL-TIME IMAGE PROCESSING
Volume 16, Issue 3, Pages 623-633

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s11554-019-00870-1

Keywords

Steganalysis; Deep learning; Transfer learning; GPU

Funding

  1. National Key Research and Development Program of China [2016QY01W0200]
  2. National Natural Science Foundation of China (NSFC) [U1636101, U1636219, U1736211]

Abstract

Steganalysis is an important and challenging problem in multimedia forensics. Many increasingly deep networks have been proposed to improve the detection of steganographic traces, and existing methods focus on ever-deeper structures. However, as a model deepens, gradients cannot be guaranteed to propagate through the weights of every module, which makes the network difficult to train; in addition, the deeper structure consumes more GPU computing resources. To reduce computation and accelerate training, we propose a novel architecture that combines batch normalization with shallow layers. To reduce the loss of the subtle stego signal, we decrease the depth and increase the width of the network and abandon max-pooling layers. To shorten the lengthy training required under different payloads, we propose two transfer learning schemes, parameter multiplexing and fine-tuning, that improve overall efficiency. We demonstrate the effectiveness of our method on two steganographic algorithms, WOW and S-UNIWARD. Compared with SRM and Ye.net, our model achieves better detection performance on the BOSSbase database with higher efficiency.
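
The abstract does not give the exact network configuration, so the following is only a minimal PyTorch-style sketch of the ideas it describes: a shallow, wide CNN with batch normalization and no max-pooling, plus a transfer scheme that reuses (multiplexes) trained parameters and fine-tunes them for a different payload. All layer counts, channel widths, and function names are assumptions for illustration, not the authors' published model.

    # Illustrative sketch only; hyperparameters and names are assumed, not from the paper.
    import torch
    import torch.nn as nn

    class ShallowWideStegNet(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            # Wide convolutional blocks with batch normalization.
            # Spatial size is reduced only by strided convolutions and global
            # average pooling, never by max-pooling, so the weak stego signal
            # is not discarded early.
            self.features = nn.Sequential(
                nn.Conv2d(1, 64, kernel_size=5, padding=2),
                nn.BatchNorm2d(64),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(128),
                nn.ReLU(inplace=True),
                nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(256),
                nn.ReLU(inplace=True),
            )
            self.pool = nn.AdaptiveAvgPool2d(1)      # global average pooling
            self.classifier = nn.Linear(256, num_classes)

        def forward(self, x):
            x = self.features(x)
            x = self.pool(x).flatten(1)
            return self.classifier(x)

    def transfer_and_finetune(src_model, finetune_classifier_only=True):
        # Parameter multiplexing: copy weights trained on one payload into a
        # fresh model, then fine-tune on the target payload (optionally
        # freezing the feature extractor).
        dst_model = ShallowWideStegNet()
        dst_model.load_state_dict(src_model.state_dict())
        if finetune_classifier_only:
            for p in dst_model.features.parameters():
                p.requires_grad = False
        return dst_model

In this reading, training on a high payload first and then calling transfer_and_finetune for a lower payload would avoid retraining from scratch, which is the efficiency gain the abstract attributes to its transfer learning schemes.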
