Article

Maxout neurons for deep convolutional and LSTM neural networks in speech recognition

Journal

SPEECH COMMUNICATION
Volume 77, Pages 53-64

Publisher

ELSEVIER
DOI: 10.1016/j.specom.2015.12.003

Keywords

Maxout neuron; Convolutional neural network; Long short-term memory; Acoustic modeling; Speech recognition

Funding

  1. National Natural Science Foundation of China [61273268, 61370034, 61403224, 61005017]

Deep neural networks (DNNs) have achieved great success in acoustic modeling for speech recognition. However, DNNs with sigmoid neurons may suffer from the vanishing gradient problem during training. Maxout neurons are promising alternatives to sigmoid neurons. The activation of a maxout neuron is obtained by selecting the maximum value within a local region, which results in constant gradients during the training process. In this paper, we combine maxout neurons with two popular DNN structures for acoustic modeling, namely the convolutional neural network (CNN) and the long short-term memory (LSTM) recurrent neural network (RNN). The optimal network structures and training strategies for the models are explored. Experiments are conducted on the benchmark data sets released under the IARPA Babel Program. The proposed models achieve 2.5-6.0% relative improvements over their corresponding CNN or LSTM RNN baselines across six language collections. The state-of-the-art results on these data sets are achieved after system combination. (C) 2015 Elsevier B.V. All rights reserved.
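The maxout activation described in the abstract takes the maximum over a group of linear pieces, so its gradient with respect to the selected piece is constant (the gradient of a linear function). A minimal numpy sketch of a maxout layer is shown below; the function name, shapes, and `group_size` parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def maxout(x, W, b, group_size):
    # Affine transform: each maxout unit owns `group_size` linear pieces.
    # Hypothetical shapes: x (d,), W (d, num_units * group_size), b (num_units * group_size,)
    z = x @ W + b
    z = z.reshape(-1, group_size)   # one row of linear pieces per maxout unit
    return z.max(axis=1)            # select the maximum within each local group

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # 8 input features
W = rng.standard_normal((8, 6))     # 3 maxout units with 2 pieces each
b = rng.standard_normal(6)
y = maxout(x, W, b, group_size=2)
print(y.shape)                      # (3,)
```

During backpropagation, only the winning linear piece in each group receives gradient, which is why maxout sidesteps the saturation that causes vanishing gradients in sigmoid units.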
