Article

Adversarial multi-task learning with inverse mapping for speech enhancement

Journal

APPLIED SOFT COMPUTING
Volume 120, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2022.108568

Keywords

Speech enhancement; Adversarial multi-task learning; Inverse mapping learning; Deep neural networks

Funding

  1. China Scholarship Council (CSC)
  2. 2020 MBIE Catalyst: Strategic - New Zealand-Singapore Data Science Research Programme, New Zealand

Abstract
Adversarial Multi-Task Learning (AMTL) has demonstrated a promising capability for information capturing and representation learning; however, it has hardly been explored in speech enhancement. In this paper, we propose a novel adversarial multi-task learning method with inverse mapping for speech enhancement. Our method focuses on strengthening the generator's ability to capture speech information and learn representations. To implement this method, two extra networks (namely P and Q) are developed to establish the inverse mapping from the generated distribution back to the input data domains. Correspondingly, two new loss functions (a latent loss and an equilibrium loss) are proposed for inverse mapping learning and for training the enhancement model together with the original adversarial loss. Our method achieves state-of-the-art performance in terms of speech quality (PESQ = 2.93, COVL = 3.55). For speech intelligibility, it also obtains competitive performance (STOI = 0.947). The experimental results demonstrate that our method effectively improves speech representation learning and speech enhancement performance. (c) 2022 Elsevier B.V. All rights reserved.
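The abstract names the moving parts (a generator, the original adversarial loss, two inverse-mapping networks P and Q, and a latent and an equilibrium loss) but not their exact formulations. The PyTorch sketch below is one plausible arrangement under stated assumptions: the network sizes, the least-squares adversarial loss, the L1 distances, and the weights lam_latent/lam_eq are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the adversarial multi-task setup with inverse mapping,
# under the assumptions stated above. Not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT = 257    # spectral feature size per frame (assumption)
LATENT = 64   # latent-code size (assumption)

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(), nn.Linear(256, n_out))

class Generator(nn.Module):
    """Encoder-decoder enhancer that exposes its latent code z."""
    def __init__(self):
        super().__init__()
        self.enc, self.dec = mlp(FEAT, LATENT), mlp(LATENT, FEAT)
    def forward(self, noisy):
        z = self.enc(noisy)
        return self.dec(z), z

G = Generator()
D = mlp(FEAT, 1)       # discriminator: clean vs. enhanced speech
P = mlp(FEAT, FEAT)    # inverse mapping: enhanced -> noisy input domain
Q = mlp(FEAT, LATENT)  # inverse mapping: enhanced -> generator latent

opt_g = torch.optim.Adam(
    list(G.parameters()) + list(P.parameters()) + list(Q.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

def train_step(noisy, clean, lam_latent=1.0, lam_eq=1.0):
    # 1) Discriminator update with a least-squares adversarial loss
    #    (the abstract only says the original adversarial loss is kept).
    enhanced, _ = G(noisy)
    d_real, d_fake = D(clean), D(enhanced.detach())
    d_loss = F.mse_loss(d_real, torch.ones_like(d_real)) \
           + F.mse_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Joint generator + inverse-mapping update.
    enhanced, z = G(noisy)
    d_fake = D(enhanced)
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))
    # "Latent loss": Q should recover the generator's latent code from the
    # generated speech (one plausible reading of the inverse mapping).
    latent_loss = F.l1_loss(Q(enhanced), z.detach())
    # "Equilibrium loss": P maps the generated speech back to the noisy
    # input domain, balancing generation against inverse mapping (assumption).
    eq_loss = F.l1_loss(P(enhanced), noisy)
    g_loss = adv + lam_latent * latent_loss + lam_eq * eq_loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage: random tensors stand in for noisy/clean spectral frames.
print(train_step(torch.randn(8, FEAT), torch.randn(8, FEAT)))
```

The key design point the sketch tries to capture is that P and Q are trained jointly with the generator, so the gradient of the inverse-mapping losses also shapes the generator's representation rather than only fitting the auxiliary networks.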
