3.8 Proceedings Paper

Recurrent Neural Network Language Model Adaptation for Conversational Speech Recognition

Publisher

ISCA (International Speech Communication Association)
DOI: 10.21437/Interspeech.2018-1413

Keywords

ASR; recurrent neural network language model (RNNLM); neural language model adaptation; fast marginal adaptation (FMA); cache model; deep neural network (DNN); lattice rescoring

Funding

  1. DARPA LORELEI [HR0011-15-2-0024]
  2. NSF [CRI-1513128]
  3. IARPA MATERIAL award [FA8650-17-C-9115]

Abstract

We propose two adaptation models for recurrent neural network language models (RNNLMs) to capture topic effects and long-distance triggers for conversational automatic speech recognition (ASR). We use a fast marginal adaptation (FMA) framework to adapt an RNNLM. Our first model is effectively a cache model: word frequencies are estimated by counting words in a conversation (with utterance-level hold-one-out) from first-pass decoded word lattices, and the resulting distribution is then interpolated with a background unigram distribution. In the second model, we train a deep neural network (DNN) on conversational transcriptions to predict word frequencies given word frequencies from first-pass decoded word lattices. The second model can in principle capture trigger and topic effects but is harder to train. Experiments on three conversational corpora show modest WER and perplexity reductions with both adaptation models.
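For concreteness, the sketch below illustrates the first model as the abstract describes it: a cache unigram estimated with utterance-level hold-one-out and interpolated with a background unigram, used to rescale RNNLM scores under the FMA framework. This is a minimal illustration, not the authors' implementation; the function names, the interpolation weight lam, the scaling exponent beta, and the dropped per-history normalizer are all assumptions introduced here.

```python
import math
from collections import Counter

def cache_unigram(conversation_hyps, held_out_idx, background, lam=0.5):
    """First adaptation model (sketch): count words over all first-pass
    hypotheses in the conversation except the utterance currently being
    rescored (utterance-level hold-one-out), then interpolate the cache
    estimate with the background unigram distribution."""
    counts = Counter()
    for i, hyp in enumerate(conversation_hyps):
        if i == held_out_idx:  # hold out the current utterance
            continue
        counts.update(hyp)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {w: lam * counts[w] / total + (1.0 - lam) * p_bg
            for w, p_bg in background.items()}

def fma_log_prob(word, rnnlm_log_prob, adapted_uni, background_uni, beta=0.5):
    """Fast marginal adaptation (sketch): scale the RNNLM probability by
    the (adapted / background) unigram ratio raised to the power beta.
    The per-history normalizer of FMA is omitted here; in lattice
    rescoring it is commonly approximated or dropped."""
    floor = 1e-10  # probability floor for unseen words (assumption)
    ratio = (max(adapted_uni.get(word, floor), floor)
             / max(background_uni.get(word, floor), floor))
    return rnnlm_log_prob + beta * math.log(ratio)

# Toy usage with made-up data: rescore one word of utterance 0.
background = {"hello": 0.3, "world": 0.2, "topic": 0.5}
hyps = [["hello", "world"], ["topic", "topic", "hello"]]
adapted = cache_unigram(hyps, held_out_idx=0, background=background)
score = fma_log_prob("topic", rnnlm_log_prob=-2.3,
                     adapted_uni=adapted, background_uni=background)
```

The second model would replace `cache_unigram` with a DNN that maps first-pass lattice word frequencies to predicted word frequencies, leaving the FMA rescaling step unchanged; the abstract does not specify that network's architecture, so it is not sketched here.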