Model Compression for Deep Neural Networks: A Survey

Journal

Computers
Volume 12, Issue 3, Pages -

Publisher

MDPI
DOI: 10.3390/computers12030060

Keywords

deep neural networks; model compression; model pruning; parameter quantization; low-rank decomposition; knowledge distillation; lightweight model design


Abstract

With the rapid development of deep learning, deep neural networks (DNNs) have been widely applied in computer vision tasks. However, in the pursuit of performance, advanced DNN models have become increasingly complex, leading to large memory footprints and high computation demands that make real-time application difficult. To address these issues, model compression has become a focus of research, and compression techniques play an important role in deploying models on edge devices. This study analyzes model compression methods that help researchers reduce device storage requirements, speed up model inference, lower model complexity and training costs, and improve model deployment. Specifically, it surveys state-of-the-art techniques for model compression, including model pruning, parameter quantization, low-rank decomposition, knowledge distillation, and lightweight model design, and it discusses research challenges and directions for future work.
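Two of the techniques the abstract names, model pruning and parameter quantization, can be illustrated with a minimal NumPy sketch. This is not code from the surveyed paper; the function names and the symmetric int8 scheme are illustrative assumptions, showing unstructured magnitude pruning (zeroing the smallest-magnitude weights) and linear quantization of float weights to 8-bit integers.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization: map floats to int8 with one scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# Toy weight matrix standing in for one layer of a DNN
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.5)     # half the entries become zero
q, scale = quantize_int8(w)                   # 4x smaller storage than float32
dequant = q.astype(np.float32) * scale        # approximate reconstruction
```

Real systems refine both ideas (structured pruning of whole channels, per-channel scales, quantization-aware training), but the storage savings follow the same pattern: sparse masks and low-bit integer storage in place of dense float32 parameters.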

