Article

Picking Up Quantization Steps for Compressed Image Classification

Journal

IEEE Transactions on Circuits and Systems for Video Technology
Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TCSVT.2022.3218104

Keywords

Quantization (signal); Image coding; Transform coding; Training; Neural networks; Sensitivity; Deep learning; Compressed images; quantization steps; image classification

Summary

In this paper, the sensitivity of deep neural networks to compressed images is addressed. The authors propose a method to reduce this sensitivity by utilizing neglected disposable coding parameters stored in compressed files. They introduce a novel quantization aware confidence (QAC), based on quantization steps, that reduces the influence of quantization on network training, and a quantization aware batch normalization (QABN) that alleviates the variance of feature distributions in classification networks. Experimental results show that the proposed method yields significant improvements on CIFAR-10, CIFAR-100, and ImageNet.

Abstract

The sensitivity of deep neural networks to compressed images hinders their usage in many real applications, which means classification networks may fail just after taking a screenshot and saving it as a compressed file. In this paper, we argue that neglected disposable coding parameters stored in compressed files could be picked up to reduce the sensitivity of deep neural networks to compressed images. Specifically, we resort to using one of the representative parameters, quantization steps, to facilitate image classification. Firstly, based on quantization steps, we propose a novel quantization aware confidence (QAC), which is utilized as sample weights to reduce the influence of quantization on network training. Secondly, we utilize quantization steps to alleviate the variance of feature distributions, where a quantization aware batch normalization (QABN) is proposed to replace batch normalization of classification networks. Extensive experiments show that the proposed method significantly improves the performance of classification networks on CIFAR-10, CIFAR-100, and ImageNet.
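The abstract gives only the high-level idea, so the following PyTorch sketch is a hedged illustration rather than the authors' implementation. It shows one way to pick up quantization steps from a JPEG file (via Pillow's stored quantization tables), to turn them into a per-image confidence used as a loss weight, and to condition batch normalization on a coarse quantization level. The helper names (read_jpeg_quant_steps, qac_weight, QABatchNorm2d, qac_weighted_loss), the exponential weighting, and the bucketed BatchNorm are all assumptions made for illustration, not the paper's exact QAC/QABN definitions.

```python
# Illustrative sketch only (names and formulas are assumptions, not the
# paper's exact QAC/QABN): read the JPEG quantization table with Pillow,
# derive a per-image confidence used as a loss weight, and keep separate
# BatchNorm statistics per coarse quantization level.

import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image


def read_jpeg_quant_steps(path):
    """Return the 64 luminance quantization steps stored in a JPEG file."""
    with Image.open(path) as img:
        # Pillow exposes JPEG quantization tables as {table_id: [64 ints]}.
        table = img.quantization[0]
    return torch.tensor(list(table), dtype=torch.float32)


def qac_weight(quant_steps, alpha=0.05):
    """Assumed QAC: coarser quantization (larger steps) -> smaller sample weight."""
    return torch.exp(-alpha * quant_steps.mean())


class QABatchNorm2d(nn.Module):
    """Assumed QABN: separate BatchNorm statistics per coarse quantization level."""

    def __init__(self, num_features, num_levels=4):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(num_features) for _ in range(num_levels))
        self.num_levels = num_levels

    def forward(self, x, level):
        # `level` is the quantization bucket of the current batch (0 = mildest).
        return self.bns[min(level, self.num_levels - 1)](x)


def qac_weighted_loss(logits, targets, weights):
    """Cross-entropy where each sample is weighted by its QAC value."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_sample).sum() / weights.sum()
```

A bucketed BatchNorm is only one plausible way to condition normalization on quantization strength; the paper's QABN may parameterize this differently.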
