4.6 Article

Towards Model Compression for Deep Learning Based Speech Enhancement

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TASLP.2021.3082282

Keywords

Speech enhancement; Tensors; Image coding; Quantization (signal); Training; Pipelines; Sensitivity analysis; Model compression; sparse regularization; pruning; quantization

Funding

  1. National Institute on Deafness and Other Communication Disorders (NIDCD) [R01 DC012048]
  2. Ohio Supercomputer Center

Abstract

The study proposes two compression pipelines that shrink DNN-based speech enhancement models using sparse regularization, iterative pruning, and clustering-based quantization. Experimental results show that the approach substantially reduces model sizes while largely preserving enhancement performance, and that it also works well for speaker separation.
The use of deep neural networks (DNNs) has dramatically elevated the performance of speech enhancement over the last decade. However, achieving strong enhancement performance typically requires a large DNN, which consumes substantial memory and computation, making it difficult to deploy such speech enhancement systems on devices with limited hardware resources or in applications with strict latency requirements. In this study, we propose two compression pipelines to reduce the model size for DNN-based speech enhancement, which incorporate three different techniques: sparse regularization, iterative pruning, and clustering-based quantization. We systematically investigate these techniques and evaluate the proposed compression pipelines. Experimental results demonstrate that our approach reduces the sizes of four different models by large margins without significantly sacrificing their enhancement performance. In addition, we find that the proposed approach performs well on speaker separation, which further demonstrates its effectiveness for compressing speech separation models.
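As a concrete illustration of the three techniques named in the abstract, below is a minimal NumPy sketch applied to a single weight matrix. The L1 penalty strength, the sparsity schedule, and the 16-entry (roughly 4-bit) codebook are illustrative assumptions, not values from the paper; in practice, fine-tuning would be interleaved between pruning rounds.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in for one trained layer

# 1) Sparse regularization: during training, an L1 penalty lambda * sum(|w|)
#    is added to the loss, pushing many weights toward zero. Its subgradient
#    (shown for illustration only; not applied in this static sketch):
lam = 1e-4
l1_grad = lam * np.sign(W)  # would be added to the task-loss gradient each step

# 2) Iterative pruning: repeatedly zero the smallest-magnitude weights,
#    fine-tuning between rounds; here only the masking step is shown.
def prune_step(W, sparsity):
    thresh = np.quantile(np.abs(W), sparsity)
    mask = np.abs(W) > thresh
    return W * mask, mask

for target in (0.5, 0.7, 0.9):       # gradually raise the sparsity level
    W, mask = prune_step(W, target)   # fine-tuning would happen between rounds

# 3) Clustering-based quantization: k-means over the surviving weights; each
#    weight is replaced by its cluster centroid, so only a small codebook
#    plus per-weight indices need to be stored.
def kmeans_quantize(values, k=16, iters=20):
    centroids = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        idx = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = values[idx == j].mean()
    return centroids, idx

nz = W[mask]                                 # quantize only the unpruned weights
codebook, codes = kmeans_quantize(nz, k=16)  # 16 clusters ~ 4-bit indices
W_q = np.zeros_like(W)
W_q[mask] = codebook[codes]

print(f"sparsity: {1.0 - mask.mean():.1%}, codebook size: {len(codebook)}")

After these steps, the layer can be stored as a sparse index structure, 4-bit cluster codes, and a 16-float codebook instead of dense 32-bit weights, which is the source of the size reduction the abstract describes.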
