Article

Parameters Compressing in Deep Learning

Journal

CMC-COMPUTERS MATERIALS & CONTINUA
Volume 62, Issue 1, Pages 321-336

Publisher

TECH SCIENCE PRESS
DOI: 10.32604/cmc.2020.06130

Keywords

Deep neural network; parameters compressing; matrix decomposition; tensor decomposition

Funding

  1. National Natural Science Foundation of China [61802030, 61572184]
  2. Science and Technology Projects of Hunan Province [2016JC2075]
  3. International Cooperative Project for Double First-Class, CSUST [2018IC24]

Abstract

With the popularity of deep learning tools in image decomposition and natural language processing, how to store the large number of parameters required by deep learning algorithms has become an urgent problem. These parameter sets are huge, often numbering in the millions. A feasible direction at present is to use sparse representation techniques, such as matrix decomposition and tensor decomposition, to compress the parameter matrix, thereby reducing both the number of parameters and the storage burden. To let vectors take advantage of the compression performance of matrix and tensor decomposition, we use reshaping and unfolding so that vectors serve as the inputs and outputs of Tensor-Factorized Neural Networks. We analyze how reshaping achieves the best compression ratio. From the relationship between the shape of a tensor and its number of parameters, we derive a lower bound on the number of parameters, and we verify this lower bound on several data sets.
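The compression idea summarized above can be illustrated with a minimal sketch. The dimensions, rank, and reshaping below are illustrative assumptions, not values from the paper: a rank-r factorization stores r(m+n) numbers instead of mn for a dense weight matrix, and a long vector is reshaped into a balanced higher-order tensor before entering a tensor-factorized layer.

```python
import numpy as np

# Hedged sketch: parameter counts for low-rank compression of a weight
# matrix. All shapes and the rank are illustrative assumptions.
m, n, r = 1024, 1024, 16          # layer dimensions and chosen rank

full_params = m * n               # dense weight matrix W has m*n entries
factored_params = r * (m + n)     # W ~= U @ V with U: m x r, V: r x n

print(full_params, factored_params)    # 1048576 32768
print(full_params / factored_params)   # compression ratio: 32.0

# Reshaping a vector so it can feed a tensor-factorized layer:
# a length-4096 input vector unfolds into an 8 x 8 x 8 x 8 tensor,
# whose balanced mode sizes favor a small decomposition footprint.
x = np.arange(4096, dtype=np.float32)
t = x.reshape(8, 8, 8, 8)
print(t.shape)                         # (8, 8, 8, 8)
```

The balanced reshape matters because the parameter count of a tensor factorization grows with the individual mode sizes, which is why the abstract's lower-bound analysis ties the tensor's shape to the number of parameters.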
