Proceedings Paper

FRAME-LEVEL SPECAUGMENT FOR DEEP CONVOLUTIONAL NEURAL NETWORKS IN HYBRID ASR SYSTEMS

Journal

2021 IEEE Spoken Language Technology Workshop (SLT)

Publisher

IEEE

DOI: 10.1109/SLT48900.2021.9383626

Keywords

speech recognition; frame-level SpecAugment; SNDCNN; data augmentation; hybrid ASR system


Summary

Inspired by SpecAugment, the paper proposes a frame-level SpecAugment method (f-SpecAugment) to improve the performance of deep CNNs in hybrid HMM ASR systems. By applying the transformations to each convolution window independently during training, f-SpecAugment reduces WER across different ASR tasks, remains effective even at large training-data scales, and yields benefits roughly equivalent to doubling the amount of training data for deep CNNs.

Abstract

Inspired by SpecAugment, a data augmentation method for end-to-end ASR systems, we propose a frame-level SpecAugment method (f-SpecAugment) to improve the performance of deep convolutional neural networks (CNNs) in hybrid HMM-based ASR systems. Like utterance-level SpecAugment, f-SpecAugment performs three transformations: time warping, frequency masking, and time masking. Instead of applying these transformations at the utterance level, f-SpecAugment applies them to each convolution window independently during training. We demonstrate that f-SpecAugment is more effective than utterance-level SpecAugment for deep CNN-based hybrid models. We evaluate f-SpecAugment on 50-layer Self-Normalizing Deep CNN (SNDCNN) acoustic models trained on up to 25,000 hours of data, and observe relative WER reductions of 0.5-4.5% across different ASR tasks in four languages. Because the benefits of augmentation techniques tend to diminish as training data grows, the large-scale training reported here is important for understanding the effectiveness of f-SpecAugment. Our experiments show that f-SpecAugment remains effective even with 25,000 hours of training data, and that its benefits are approximately equivalent to doubling the amount of training data for deep CNNs.
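To make the per-window idea concrete, the sketch below applies frequency and time masking to a single convolution window rather than to a whole utterance. It is only an illustration under stated assumptions: the function name, the parameters (num_freq_masks, max_freq_mask, num_time_masks, max_time_mask), the zero-filling of masked regions, and the omission of time warping are choices made here for brevity, not details taken from the paper.

# Minimal sketch of frame-level masking in the spirit of f-SpecAugment.
# Assumption: a "window" is the stack of frames feeding one convolution
# context of the deep CNN; parameter names and defaults are hypothetical,
# and time warping is left out to keep the example short.
import numpy as np

def f_specaugment_window(window, num_freq_masks=1, max_freq_mask=8,
                         num_time_masks=1, max_time_mask=3, rng=None):
    """Apply frequency and time masking to one convolution window.

    window: array of shape (num_frames, num_mel_bins), i.e. the frames
            that feed a single convolution context during training.
    """
    rng = rng or np.random.default_rng()
    out = window.copy()
    num_frames, num_bins = out.shape

    # Frequency masking: zero a randomly chosen band of mel channels.
    for _ in range(num_freq_masks):
        width = int(rng.integers(0, max_freq_mask + 1))
        start = int(rng.integers(0, max(1, num_bins - width)))
        out[:, start:start + width] = 0.0

    # Time masking: zero a randomly chosen span of frames in the window.
    for _ in range(num_time_masks):
        width = int(rng.integers(0, max_time_mask + 1))
        start = int(rng.integers(0, max(1, num_frames - width)))
        out[start:start + width, :] = 0.0

    return out

# Each window in a training batch is augmented independently, e.g.:
# augmented = np.stack([f_specaugment_window(w) for w in batch_of_windows])

The key difference from utterance-level SpecAugment, as described in the abstract, is that every convolution window receives its own independent random transformation during training, rather than all windows from an utterance sharing a single masked spectrogram.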

