Journal
IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING
Volume 29, Pages 1785-1794
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TASLP.2021.3082282
Keywords
Speech enhancement; Tensors; Image coding; Quantization (signal); Training; Pipelines; Sensitivity analysis; Model compression; sparse regularization; pruning; quantization; speech enhancement
Funding
- National Institute on Deafness and Other Communication Disorders (NIDCD) [R01 DC012048]
- Ohio Supercomputer Center
The study proposes two compression pipelines that reduce the size of DNN-based speech enhancement models by combining sparse regularization, iterative pruning, and clustering-based quantization. Experimental results show that this approach substantially reduces model sizes while maintaining enhancement performance, and that it also works well for speaker separation models.
The use of deep neural networks (DNNs) has dramatically elevated the performance of speech enhancement over the last decade. However, achieving strong enhancement performance typically requires a large DNN, which is both memory- and computation-intensive, making it difficult to deploy such speech enhancement systems on devices with limited hardware resources or in applications with strict latency requirements. In this study, we propose two compression pipelines to reduce the model size for DNN-based speech enhancement, incorporating three different techniques: sparse regularization, iterative pruning, and clustering-based quantization. We systematically investigate these techniques and evaluate the proposed compression pipelines. Experimental results demonstrate that our approach reduces the sizes of four different models by large margins without significantly sacrificing their enhancement performance. In addition, we find that the proposed approach performs well on speaker separation, which further demonstrates its effectiveness for compressing speech separation models.
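The abstract names two of the three techniques concretely: magnitude-based iterative pruning and clustering-based quantization. The record does not include the paper's exact procedure, so the following is only a minimal NumPy sketch of one pruning step and one k-means weight-sharing step applied to a single weight matrix; the function names and hyperparameters (sparsity level, number of clusters) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.8):
    """One pruning step: zero out the smallest-magnitude weights.

    `sparsity` is the target fraction of weights set to zero
    (an assumed hyperparameter, not taken from the paper).
    """
    flat = np.abs(w).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def kmeans_quantize(w, n_clusters=16, n_iter=20, seed=0):
    """Clustering-based quantization: share weights via 1-D k-means.

    Nonzero weights are clustered and each is replaced by its cluster
    centroid, so the layer only needs to store centroid values plus
    per-weight cluster indices (log2(n_clusters) bits each).
    """
    nz = w[w != 0.0]
    rng = np.random.default_rng(seed)
    centroids = rng.choice(nz, size=n_clusters, replace=False)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid, then re-center.
        assign = np.argmin(np.abs(nz[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            members = nz[assign == c]
            if members.size:
                centroids[c] = members.mean()
    # Rebuild the matrix with shared centroid values (zeros stay zero).
    q = w.copy()
    mask = q != 0.0
    vals = q[mask]
    assign = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
    q[mask] = centroids[assign]
    return q, centroids
```

In an iterative pipeline these steps would alternate with retraining, and sparse regularization during training would push more weights toward zero before pruning; this sketch shows only the two compression operations themselves.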