Proceedings Paper

Unsupervised Adaptation of Recurrent Neural Network Language Models

Publisher

ISCA - International Speech Communication Association
DOI: 10.21437/Interspeech.2016-1342

Keywords

RNNLM; LHUC; unsupervised adaptation; fine-tuning; MGB-Challenge

Funding

  1. EPSRC Programme, Natural Speech Technology (NST) [EP/I031022/1]
  2. Core Research for Evolutional Science and Technology (CREST) from the Japan Science and Technology Agency (JST) (uDialogue project)
  3. European Union under H2020 project SUMMA [688139]
  4. EPSRC [EP/I031022/1] Funding Source: UKRI

Abstract

Recurrent neural network language models (RNNLMs) have been shown to consistently improve the word error rates (WERs) of large-vocabulary speech recognition systems that employ n-gram LMs. In this paper we investigate supervised and unsupervised discriminative adaptation of RNNLMs in a broadcast transcription task to target domains defined by either genre or show. We explore two approaches: (1) scaling the forward-propagated hidden activations (the Learning Hidden Unit Contributions, LHUC, technique) and (2) direct fine-tuning of the parameters of the whole RNNLM. To investigate the effectiveness of the proposed methods we carry out experiments on multi-genre broadcast (MGB) data following the MGB-2015 challenge protocol. We observe small but significant improvements in WER compared to a strong unadapted RNNLM.
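
To make the first adaptation scheme concrete, below is a minimal PyTorch sketch of LHUC-style adaptation of an RNNLM. The paper does not publish code, so everything here (the names `LHUCAdaptedRNNLM` and `lhuc_adapt`, the LSTM architecture, layer sizes, and optimiser settings) is an illustrative assumption, not the authors' implementation. The sketch only demonstrates the core idea: each forward-propagated hidden activation is rescaled by a learned per-unit amplitude 2*sigmoid(r), and during unsupervised adaptation only those amplitudes are updated on first-pass recognition hypotheses while the base RNNLM stays frozen.

```python
import torch
import torch.nn as nn

class LHUCAdaptedRNNLM(nn.Module):
    """Illustrative RNN LM with LHUC-style per-unit scaling of hidden activations.

    Hypothetical sketch: an LSTM language model whose hidden outputs are scaled
    by one learned amplitude per hidden unit before the softmax layer.
    """
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # One LHUC amplitude per hidden unit; r = 0 gives 2*sigmoid(0) = 1,
        # i.e. the unadapted model is recovered at initialisation.
        self.r = nn.Parameter(torch.zeros(hidden_dim))
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))       # (batch, time, hidden)
        h = h * (2.0 * torch.sigmoid(self.r))     # LHUC: rescale each hidden unit
        return self.out(h)                        # logits over the vocabulary

def lhuc_adapt(model, adaptation_batches, lr=0.1, epochs=3):
    """Unsupervised LHUC adaptation: update only the amplitudes `r`.

    `adaptation_batches` is assumed to yield (inputs, targets) pairs built from
    first-pass ASR hypotheses for the target genre or show.
    """
    for p in model.parameters():
        p.requires_grad_(False)                   # freeze the base RNNLM
    model.r.requires_grad_(True)                  # adapt only the LHUC amplitudes
    opt = torch.optim.SGD([model.r], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, targets in adaptation_batches:
            logits = model(inputs)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The second scheme, direct fine-tuning, would instead leave all parameters trainable (typically with a much smaller learning rate); LHUC restricts adaptation to one scalar per hidden unit, which is why it is attractive when only a small amount of possibly errorful in-domain text is available.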

