Proceedings Paper

Recurrent Neural Network Language Model Adaptation for Conversational Speech Recognition

Publisher

ISCA (International Speech Communication Association)
DOI: 10.21437/Interspeech.2018-1413

Keywords

ASR; recurrent neural network language model (RNNLM); neural language model adaptation; fast marginal adaptation (FMA); cache model; deep neural network (DNN); lattice rescoring

Funding

  1. DARPA LORELEI [HR0011-15-2-0024]
  2. NSF [CRI-1513128]
  3. IARPA MATERIAL award [FA8650-17-C-9115]

We propose two adaptation models for recurrent neural network language models (RNNLMs) to capture topic effects and long-distance triggers for conversational automatic speech recognition (ASR). We use a fast marginal adaptation (FMA) framework to adapt an RNNLM. Our first model is effectively a cache model: word frequencies are estimated by counting words in a conversation (with utterance-level hold-one-out) from first-pass decoded word lattices, and are then interpolated with a background unigram distribution. In the second model, we train a deep neural network (DNN) on conversational transcriptions to predict word frequencies given word frequencies from first-pass decoded word lattices. The second model can, in principle, capture trigger and topic effects, but it is harder to train. Experiments on three conversational corpora show modest WER and perplexity reductions with both adaptation models.
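
The first model can be made concrete with a short sketch. Under FMA, the adapted next-word distribution is proportional to the background RNNLM distribution rescaled by (p_adapted(w) / p_background(w))^beta and renormalized. The Python below is a minimal illustration under assumed inputs, not the paper's implementation: the function names, the interpolation weight `lam`, and the exponent `beta` are hypothetical placeholders.

```python
import numpy as np

def cache_unigram(lattice_counts, background, lam=0.5):
    """First adaptation model: a cache unigram estimated from word
    counts in the conversation's first-pass decoded lattices (with
    the current utterance held out), interpolated with a background
    unigram distribution. `lam` is an illustrative weight, not a
    value from the paper."""
    cache = np.zeros_like(background)
    for word_id, count in lattice_counts.items():
        cache[word_id] = count
    total = cache.sum()
    if total > 0:
        cache /= total
    return lam * cache + (1.0 - lam) * background

def fma_adapt(rnnlm_logprobs, adapted_unigram, background, beta=0.5):
    """Fast marginal adaptation (FMA): rescale the RNNLM's next-word
    distribution by (p_adapted / p_background)**beta and renormalize.
    Assumes `background` is smoothed so every entry is nonzero."""
    scores = np.exp(rnnlm_logprobs) * (adapted_unigram / background) ** beta
    return np.log(scores / scores.sum())

# Toy usage with a 5-word vocabulary (numbers are illustrative only).
V = 5
background = np.full(V, 1.0 / V)
counts = {0: 3, 2: 1}  # word counts read off the first-pass lattices
adapted = cache_unigram(counts, background)
rnnlm_logprobs = np.log(np.full(V, 1.0 / V))
print(fma_adapt(rnnlm_logprobs, adapted, background))
```

In this reading, the second model would replace the counting step with a DNN that maps lattice-derived word frequencies to predicted word frequencies, feeding the same FMA rescaling; the abstract does not specify that architecture, so no sketch is attempted here.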
