Article

End-to-End Image Classification and Compression With Variational Autoencoders

Journal

IEEE Internet of Things Journal
Volume 9, Issue 21, Pages 21916-21931

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JIOT.2022.3182313

Keywords

Classification; compression; end-to-end; reconstruction; variational autoencoders (VAEs)

Funding

  1. National Science Foundation [2002927, 2002937]
  2. Directorate for Computer & Information Science & Engineering (CISE), National Science Foundation
  3. Division of Computer and Network Systems [2002937], National Science Foundation

Abstract

This study explores the joint optimization of a codec and a classifier to improve image classification accuracy, especially under limited network bandwidth. Built on variational autoencoders (VAEs), the proposed model achieves higher classification accuracy while reducing encoder size, increasing inference speed, and saving power compared with baseline models.
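The joint codec-classifier optimization described above can be viewed as rate-constrained classification. The formulation below is an illustrative sketch, not the paper's stated objective; the encoder E, classifier C, rate measure R, target rate R_target, and multiplier lambda are assumed notation.

\min_{\theta_E,\,\theta_C}\; \mathbb{E}_{(x,y)}\Big[\mathcal{L}_{\mathrm{CE}}\big(C_{\theta_C}(E_{\theta_E}(x)),\, y\big)\Big]
\quad \text{s.t.} \quad R\big(E_{\theta_E}(x)\big) \le R_{\mathrm{target}},

or, in unconstrained Lagrangian form,

\mathcal{L}(\theta_E, \theta_C) \;=\; \mathcal{L}_{\mathrm{CE}} + \lambda\, R\big(E_{\theta_E}(x)\big).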
The past decade has witnessed the rising dominance of deep learning and artificial intelligence across a wide range of applications. In particular, the ocean of wireless smartphones and IoT devices continues to fuel the tremendous growth of edge/cloud-based machine learning (ML) systems, including image/speech recognition and classification. To overcome the infrastructural barrier of limited network bandwidth in cloud ML, existing solutions have mainly relied on traditional compression codecs, such as JPEG, that were historically engineered for human end users rather than ML algorithms. Traditional codecs do not necessarily preserve the features important to ML algorithms under limited bandwidth, leading to potentially inferior performance. This work investigates application-driven optimization of programmable commercial codec settings for networked learning tasks such as image classification. Building on the foundation of variational autoencoders (VAEs), we develop an end-to-end networked learning framework that jointly optimizes the codec and the classifier, without reconstructing images, for a given data rate (bandwidth). Compared with the standard JPEG codec, the proposed VAE joint compression and classification framework improves classification accuracy by over 10% and 4% on the CIFAR-10 and ImageNet-1k data sets, respectively, at a data rate of 0.8 bits per pixel (bpp). Our proposed VAE-based models show 65%-99% reductions in encoder size, 1.5x-13.1x improvements in inference speed, and 25%-99% savings in power compared with baseline models. We further show that a simple decoder can reconstruct images with sufficient quality without compromising classification accuracy.
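To make the end-to-end idea concrete, the following PyTorch sketch trains a VAE-style encoder and a classifier jointly on the latent code, with a KL term standing in for the transmission rate. The layer sizes, the KL-as-rate proxy, and the weight rate_weight are illustrative assumptions rather than the paper's actual architecture or settings.

# Minimal sketch (PyTorch) of joint "compress-then-classify" training with a
# VAE-style encoder. Layer sizes, the KL-as-rate proxy, and rate_weight are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEEncoder(nn.Module):
    """Convolutional encoder that outputs mean/log-variance of a latent code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.fc_mu(h), self.fc_logvar(h)

class LatentClassifier(nn.Module):
    """Classifier that consumes the latent code directly (no image reconstruction)."""
    def __init__(self, latent_dim=64, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, z):
        return self.net(z)

def joint_loss(logits, labels, mu, logvar, rate_weight=0.01):
    # Cross-entropy drives classification accuracy; the KL term acts as a
    # proxy for the bit rate of the transmitted latent (assumed trade-off).
    ce = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return ce + rate_weight * kl

# One illustrative training step on random CIFAR-10-shaped data.
encoder, classifier = VAEEncoder(), LatentClassifier()
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

mu, logvar = encoder(images)
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
logits = classifier(z)
loss = joint_loss(logits, labels, mu, logvar)

opt.zero_grad()
loss.backward()
opt.step()

Because the classifier consumes the latent code directly, no decoder or image reconstruction is required on the classification path, mirroring the framework's stated design of optimizing the codec and classifier jointly without reconstructing images.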
