3.8 Proceedings Paper

Resource-efficient DNNs for Keyword Spotting using Neural Architecture Search and Quantization

Publisher

IEEE Computer Society
DOI: 10.1109/ICPR48806.2021.9413191

Keywords

keyword spotting; neural architecture search; weight quantization

Funding

  1. Austrian Science Fund (FWF) [I2706-N31]

Abstract

This paper introduces neural architecture search (NAS) for the automatic discovery of small models for keyword spotting (KWS) in limited resource environments. We employ a differentiable NAS approach to optimize the structure of convolutional neural networks (CNNs) to maximize the classification accuracy while minimizing the number of operations per inference. Using NAS only, we were able to obtain a highly efficient model with 95.4% accuracy on the Google speech commands dataset with 494.8 kB of memory usage and 19.6 million operations. Additionally, weight quantization is used to reduce the memory consumption even further. We show that weight quantization to low bit-widths (e.g., 1 bit) can be used without substantial loss in accuracy. By increasing the number of input features from 10 MFCCs to 20 MFCCs, we were able to increase the accuracy to 96.3% at 340.1 kB of memory usage and 27.1 million operations.
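
In differentiable NAS, discrete architecture choices are relaxed into continuous parameters that are optimized by gradient descent alongside the network weights. The sketch below shows the core building block of a DARTS-style continuous relaxation, one common form of differentiable NAS; the candidate operations, class names, and `discretize` helper are illustrative assumptions, not the paper's actual search space or implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One searchable edge: a softmax-weighted sum over candidate ops."""

    def __init__(self, channels: int):
        super().__init__()
        # Candidate operations (illustrative; the paper's search space may differ).
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2, bias=False),
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            nn.Identity(),  # skip connection
        ])
        # Architecture parameters, optimized by gradient descent
        # together with the ordinary network weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Continuous relaxation: output is a weighted sum of all candidates.
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

def discretize(edge: MixedOp) -> nn.Module:
    """After the search converges, keep only the strongest op per edge."""
    return edge.ops[int(edge.alpha.argmax())]
```

To realize the accuracy-versus-operations trade-off described in the abstract, a penalty proportional to the expected operation count of the selected candidates can be added to the training loss, so the search is steered toward cheaper architectures.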
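Quantization-aware training for 1-bit weights typically keeps real-valued shadow weights, binarizes them in the forward pass, and uses a straight-through estimator (STE) for the backward pass. A minimal sketch, assuming PyTorch; `BinarizeSTE` and `BinaryConv2d` are hypothetical names, and the paper's exact quantization scheme may differ:

```python
import torch
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign-binarize weights in the forward pass; straight-through gradient."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        # A per-tensor scale keeps the binary weights at a sensible magnitude.
        return torch.sign(w) * w.abs().mean()

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # STE: copy the gradient through, but clip it where |w| > 1.
        return grad_out * (w.abs() <= 1.0).to(grad_out.dtype)

class BinaryConv2d(torch.nn.Conv2d):
    """Conv layer whose real-valued weights are binarized on the fly."""

    def forward(self, x):
        w_q = BinarizeSTE.apply(self.weight)
        return F.conv2d(x, w_q, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

At 1 bit per weight, storage drops by roughly 32x relative to float32, which is the mechanism behind the memory reductions reported above.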
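The input features are Mel-frequency cepstral coefficients (MFCCs). A quick illustration of extracting 20 instead of 10 coefficients, assuming librosa and 16 kHz Speech Commands audio; the paper's exact front-end settings (window, hop, mel filters) are assumptions left at library defaults here:

```python
import librosa

# Load a 1-second Speech Commands clip at 16 kHz (filename is illustrative).
y, sr = librosa.load("yes.wav", sr=16000)

mfcc10 = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=10)  # baseline: 10 coefficients
mfcc20 = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # larger input: 20 coefficients
print(mfcc20.shape)  # (20, num_frames)
```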
