Proceedings Paper

End-to-end Image Compression with Swin-Transformer

Publisher

IEEE
DOI: 10.1109/VCIP56404.2022.10008895

Keywords

Image compression; end-to-end compression; transformer; convolution

Funding

  1. National Natural Science Foundation of China [62022002]
  2. Hong Kong Research Grants Council General Research Fund (GRF) [11203220]
  3. Hong Kong Innovation and Technology Fund [PRP/059/20FX]


This paper proposes an end-to-end image compression framework using Swin-Transformer modules. Experimental results demonstrate that the proposed method outperforms existing methods in compressing both natural scene and screen content images.
In this paper, we propose an end-to-end image compression framework that incorporates Swin-Transformer modules to capture both localized and non-localized similarities in image compression. In particular, the Swin-Transformer modules are deployed in the analysis and synthesis stages, interleaved with convolution layers. The transformer layers provide more flexible receptive fields, so that spatially localized and non-localized redundancies can be eliminated more effectively. The proposed method exhibits a strong capability for signal aggregation and prediction, improving rate-distortion performance. Experimental results show that the proposed method is superior to existing methods on both natural scene and screen content images, achieving 22.46% BD-Rate savings compared with BPG. Over 30% BD-Rate gains are observed on screen content images compared with the classical hyper-prior end-to-end coding method.
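
The abstract describes interleaving Swin-Transformer blocks with convolution layers in the analysis (encoder) transform. Below is a minimal PyTorch-style sketch of that idea, assuming a simplified windowed self-attention block without shifted windows or relative position bias; all module names, channel counts, and window sizes are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn


class WindowAttentionBlock(nn.Module):
    """Self-attention within non-overlapping windows (simplified Swin-style
    block: no shifted windows, no relative position bias)."""

    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):  # x: (B, C, H, W), H and W divisible by window_size
        B, C, H, W = x.shape
        w = self.window_size
        # Partition the feature map into (H//w * W//w) windows of w*w tokens.
        t = x.view(B, C, H // w, w, W // w, w)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        # Pre-norm attention and MLP, each with a residual connection.
        q = self.norm1(t)
        t = t + self.attn(q, q, q, need_weights=False)[0]
        t = t + self.mlp(self.norm2(t))
        # Reverse the window partition back to (B, C, H, W).
        t = t.view(B, H // w, W // w, w, w, C)
        return t.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)


class AnalysisTransform(nn.Module):
    """Downsampling convolutions interleaved with windowed attention,
    mapping an input image to a compact latent representation."""

    def __init__(self, latent_channels=192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 128, kernel_size=5, stride=2, padding=2),
            WindowAttentionBlock(128),
            nn.Conv2d(128, 128, kernel_size=5, stride=2, padding=2),
            WindowAttentionBlock(128),
            nn.Conv2d(128, latent_channels, kernel_size=5, stride=2, padding=2),
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    y = AnalysisTransform()(torch.randn(1, 3, 256, 256))
    print(y.shape)  # (1, 192, 32, 32) latent, to be quantized and entropy coded

A matching synthesis transform would mirror this structure with transposed convolutions, with quantization and an entropy model (for example, a hyper-prior) between the two stages, as in standard learned image codecs.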

