3.8 Proceedings Paper

Resource-efficient DNNs for Keyword Spotting using Neural Architecture Search and Quantization

Publisher

IEEE Computer Society
DOI: 10.1109/ICPR48806.2021.9413191

Keywords

keyword spotting; neural architecture search; weight quantization

Funding

  1. Austrian Science Fund (FWF) [I2706-N31]


This paper introduces neural architecture search (NAS) for the automatic discovery of small models for keyword spotting (KWS) in limited-resource environments. We employ a differentiable NAS approach to optimize the structure of convolutional neural networks (CNNs), maximizing classification accuracy while minimizing the number of operations per inference. Using NAS only, we obtain a highly efficient model with 95.4% accuracy on the Google speech commands dataset at 494.8 kB of memory usage and 19.6 million operations. Additionally, weight quantization is used to reduce the memory consumption even further. We show that weight quantization to low bit-widths (e.g., 1 bit) can be used without substantial loss in accuracy. By increasing the number of input features from 10 MFCCs to 20 MFCCs, we increase the accuracy to 96.3% at 340.1 kB of memory usage and 27.1 million operations.
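The abstract does not spell out the search space, so the following is only a minimal sketch of the core mechanism behind differentiable NAS: a DARTS-style mixed operation in PyTorch, where a softmax over learnable architecture parameters makes the choice between candidate operations differentiable. The candidate set, names, and shapes below are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted sum of candidate operations (DARTS-style).

    The architecture parameters `alpha` are trained by gradient
    descent alongside the network weights; after search, the
    operation with the largest weight is kept.
    """
    def __init__(self, channels):
        super().__init__()
        # Illustrative candidate set; the paper searches CNN structures.
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.Conv2d(channels, channels, 5, padding=2, bias=False),
            nn.Identity(),
        ])
        # One learnable architecture parameter per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Example: relax one layer choice over a 16-channel feature map
# (spatial size is arbitrary here, e.g. an MFCC-derived input).
op = MixedOp(16)
y = op(torch.randn(1, 16, 40, 98))
```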
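Likewise, a minimal sketch of 1-bit weight quantization trained with a straight-through estimator, one common way to learn the low bit-width weights the abstract mentions. `BinarizeSTE` and `quantized_weight` are hypothetical names for illustration; the paper's exact quantization scheme may differ.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """1-bit quantization: sign() in the forward pass, straight-through
    estimator in the backward pass so the latent full-precision
    weights keep receiving gradients."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return w.sign()

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass gradients through where |w| <= 1, block them elsewhere.
        return grad_out * (w.abs() <= 1).float()

def quantized_weight(w):
    return BinarizeSTE.apply(w)

# Example: gradients flow back to the real-valued weights.
w = torch.randn(8, 8, requires_grad=True)
loss = quantized_weight(w).sum()
loss.backward()
```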
