Article

Maxout neurons for deep convolutional and LSTM neural networks in speech recognition

Journal

SPEECH COMMUNICATION
Volume 77, Pages 53-64

Publisher

ELSEVIER
DOI: 10.1016/j.specom.2015.12.003

Keywords

Maxout neuron; Convolutional neural network; Long short-term memory; Acoustic modeling; Speech recognition

Funding

  1. National Natural Science Foundation of China [61273268, 61370034, 61403224, 61005017]

Deep neural networks (DNNs) have achieved great success in acoustic modeling for speech recognition. However, DNNs with sigmoid neurons may suffer from the vanishing gradient problem during training. Maxout neurons are promising alternatives to sigmoid neurons. The activation of a maxout neuron is obtained by selecting the maximum value within a local region, which results in constant gradients during the training process. In this paper, we combine maxout neurons with two popular DNN structures for acoustic modeling, namely the convolutional neural network (CNN) and the long short-term memory (LSTM) recurrent neural network (RNN). The optimal network structures and training strategies for the models are explored. Experiments are conducted on the benchmark data sets released under the IARPA Babel Program. The proposed models achieve 2.5-6.0% relative improvements over their corresponding CNN or LSTM RNN baselines across six language collections. State-of-the-art results on these data sets are achieved after system combination. (C) 2015 Elsevier B.V. All rights reserved.
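As the abstract describes, a maxout neuron outputs the maximum value within a local group of linear units; since the gradient of the max passes through the winning unit with slope 1, it does not saturate the way a sigmoid does. A minimal NumPy sketch of this activation (the function name and `group_size` parameter are illustrative, not from the paper):

```python
import numpy as np

def maxout(z, group_size):
    """Maxout activation: partition the pre-activations `z` into
    consecutive groups of `group_size` units and keep the maximum
    of each group. `len(z)` must be divisible by `group_size`.
    The gradient w.r.t. the winning unit in each group is 1, so
    the activation itself never causes vanishing gradients."""
    z = np.asarray(z, dtype=float)
    return z.reshape(-1, group_size).max(axis=1)

# Four linear units, grouped in pairs -> two maxout outputs.
h = maxout([0.2, -1.5, 3.0, 0.7], group_size=2)  # -> [0.2, 3.0]
```

In a full layer, each group's pre-activations come from separate affine transforms of the same input, so the maxout unit behaves as a learned piecewise-linear activation.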
