Journal
IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING
Volume 8, Issue 1, Pages 63-75
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/89.817454
Keywords
language modeling; spontaneous speech; variable n-grams
Recent progress in variable n-gram language modeling provides an efficient representation of n-gram models and makes training of higher-order n-grams possible. In this paper, we apply the variable n-gram design algorithm to conversational speech, extending the algorithm to learn skips and context-dependent classes to handle conversational speech characteristics such as filler words, repetitions, and other disfluencies. Experiments show that using the extended variable n-gram results in a language model that captures 4-gram context with less than half the parameters of a standard trigram while also improving the test perplexity and recognition accuracy.
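The core idea of a variable n-gram is that context length varies per history: long contexts are kept only where the data supports them, which is how 4-gram context can fit in fewer parameters than a full trigram. The sketch below is not the paper's design algorithm (which learns which contexts and skips to retain); it is a minimal illustration of variable-length context backoff with add-one smoothing, plus the perplexity metric the abstract reports. All function names and the toy corpus are illustrative assumptions.

```python
import math
from collections import defaultdict

def train(tokens, max_n=3):
    # Count n-grams of every order up to max_n; in a real variable
    # n-gram model, only selected contexts would be retained.
    counts = defaultdict(int)
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def prob(counts, total, vocab_size, history, word):
    # Use the longest history actually observed in training (a crude
    # stand-in for learned variable-length contexts), add-one smoothed.
    for k in range(len(history), 0, -1):
        ctx = tuple(history[-k:])
        if ctx in counts:
            return (counts.get(ctx + (word,), 0) + 1) / (counts[ctx] + vocab_size)
    # Fall back to a smoothed unigram when no context matches.
    return (counts.get((word,), 0) + 1) / (total + vocab_size)

def perplexity(counts, total, vocab, tokens, max_n=3):
    # Perplexity = exp of the average negative log-probability per token;
    # lower is better, as in the experiments the abstract summarizes.
    logp = 0.0
    for i, w in enumerate(tokens):
        hist = tokens[max(0, i - max_n + 1):i]
        logp += math.log(prob(counts, total, len(vocab), hist, w))
    return math.exp(-logp / len(tokens))
```

Scoring the training text itself with this toy model gives a small, finite perplexity; the paper's contribution is choosing *which* contexts (including skips and word classes for disfluencies) to keep, not the smoothing shown here.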