Article

Learning word dependencies in text by means of a deep recurrent belief network

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 108, Pages 144-154

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2016.07.019

Keywords

Deep belief networks; Time-delays; Variable-order; Gaussian networks; Markov Chain Monte Carlo

Funding

  1. A*STAR Thematic Strategic Research Programme (TSRP) Grant [1121720013]
  2. Center for Computational Intelligence (C2I) at NTU
  3. Singapore-MIT Alliance in Computation and Systems Biology
  4. MIT Center for Computational Research in Economics and Management Science

Abstract

We propose a deep recurrent belief network with distributed time delays for learning multivariate Gaussians. Learning long time delays in deep belief networks is difficult because gradients vanish or explode as the delay increases. To mitigate this problem and make the learning of time delays more transparent, we use Gaussian networks with time delays to initialize the weights of each hidden neuron. Given knowledge of the short time delays, longer delays can then be learned in a hierarchical manner. In contrast to previous work, dynamic Gaussian Bayesian networks over the training samples are evolved using Markov Chain Monte Carlo to determine the initial weights of each hidden layer of neurons. In this way, time-delayed network motifs of increasing Markov order across layers can be modeled hierarchically by a deep model. To validate the proposed Variable-order Belief Network (VBN) framework, it is applied to modeling word dependencies in text. To explore the generality of VBN, it is further applied to a real-world scenario in which the dynamic movements of basketball players are modeled. Experimental results show that the proposed VBN achieves over 30% improvement in accuracy on real-world scenarios compared with state-of-the-art baselines. (C) 2016 Elsevier B.V. All rights reserved.
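As a rough illustration of the layer-wise initialization described in the abstract, the sketch below seeds one weight matrix per hidden layer from a time-delayed linear-Gaussian fit of increasing Markov order (layer k from lag k). This is a minimal sketch, not the authors' implementation: the function names are hypothetical, and a plain per-lag least-squares fit stands in for the dynamic Gaussian Bayesian networks that the paper evolves with Markov Chain Monte Carlo.

import numpy as np

def fit_lag_coefficients(series, lag):
    # Least-squares fit predicting x[t] from x[t - lag] for a multivariate
    # series; a stand-in for a learned time-delayed Gaussian network.
    X, Y = series[:-lag], series[lag:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef                      # (dim x dim) coefficients for this delay

def init_layer_weights(series, max_order):
    # One initial weight matrix per hidden layer: deeper layers correspond
    # to time-delayed dependencies of higher Markov order.
    return [fit_lag_coefficients(series, k) for k in range(1, max_order + 1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, dim = 500, 8                  # toy multivariate Gaussian time series
    series = rng.normal(size=(T, dim)).cumsum(axis=0)
    for k, W in enumerate(init_layer_weights(series, max_order=3), start=1):
        print(f"layer {k} (Markov order {k}): weight shape {W.shape}")

In this toy version, the per-lag coefficient matrices would serve only as starting points for the hidden-layer weights; the deep recurrent belief network would still be trained afterwards as usual.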

