Proceedings Paper

Channel-wise Mixed-precision Assignment for DNN Inference on Constrained Edge Nodes

Publisher

IEEE

Keywords

Deep Learning; NAS; Quantization; TinyML

Funding

  1. ECSEL Joint Undertaking (JU) [101007321]
  2. European Union; France; Belgium; Czech Republic; Germany; Italy; Sweden; Switzerland; Turkey


Abstract

Quantization is widely employed in both cloud and edge systems to reduce the memory occupation, latency, and energy consumption of deep neural networks. In particular, mixed-precision quantization, i.e., the use of different bit-widths for different portions of the network, has been shown to provide excellent efficiency gains with limited accuracy drops, especially when the bit-width assignment is optimized by automated Neural Architecture Search (NAS) tools. State-of-the-art mixed-precision approaches work layer-wise, i.e., they use different bit-widths for the weight and activation tensors of each network layer. In this work, we widen the search space, proposing a novel NAS that selects the bit-width of each weight tensor channel independently. This gives the tool the additional flexibility of assigning higher precision only to the weights associated with the most informative features. Testing on the MLPerf Tiny benchmark suite, we obtain a rich collection of Pareto-optimal models in the accuracy vs. model size and accuracy vs. energy spaces. When deployed on the MPIC RISC-V edge processor, our networks reduce the memory and energy for inference by up to 63% and 27%, respectively, compared to a layer-wise approach, for the same accuracy.
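
To make the channel-wise search space concrete, the sketch below shows one possible way to expose per-output-channel bit-width choices on a convolutional layer in a differentiable NAS: each output channel owns a small vector of trainable logits over candidate precisions, the forward pass blends the correspondingly fake-quantized weights, and a size regularizer steers channels toward cheaper bit-widths. This is a minimal PyTorch illustration under assumed names and choices (MixedPrecConv2d, fake_quant, BITS = {2, 4, 8}), not the tool or code used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

BITS = (2, 4, 8)  # candidate weight bit-widths (illustrative choice)

def fake_quant(w, bits):
    # Symmetric fake quantization: round weights onto a `bits`-bit integer grid and
    # map them back to floats. A real search would pair this with a straight-through
    # estimator so gradients pass through the rounding step.
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

class MixedPrecConv2d(nn.Module):
    # Conv2d whose weight precision is searched independently for each output channel.
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)
        # One vector of architecture logits per output channel, one entry per bit-width.
        self.alpha = nn.Parameter(torch.zeros(out_ch, len(BITS)))

    def forward(self, x):
        theta = F.softmax(self.alpha, dim=-1)      # (out_ch, n_bits) mixing weights
        w = self.conv.weight                       # (out_ch, in_ch, kh, kw)
        # Blend the fake-quantized versions of the weights with per-channel coefficients,
        # so gradients reach both the weights and the architecture logits.
        wq = sum(theta[:, i].view(-1, 1, 1, 1) * fake_quant(w, b)
                 for i, b in enumerate(BITS))
        return F.conv2d(x, wq, self.conv.bias, self.conv.stride,
                        self.conv.padding, self.conv.dilation, self.conv.groups)

    def size_cost(self):
        # Expected weight memory in bits: softmax-weighted bit-width of each channel
        # times the number of weights in that channel. Used as a search regularizer.
        theta = F.softmax(self.alpha, dim=-1)
        exp_bits = (theta * torch.tensor(BITS, dtype=theta.dtype,
                                         device=theta.device)).sum(dim=-1)
        return (exp_bits * self.conv.weight[0].numel()).sum()

During the search, the task loss would be combined with a scaled sum of size_cost() over all such layers (e.g., loss = task_loss + lambda * total_size); after convergence, each channel keeps only its highest-scoring bit-width and is quantized accordingly for deployment.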
