Article

Lattice-to-sequence attentional Neural Machine Translation models

Journal

NEUROCOMPUTING
Volume 284, Pages 138-147

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2018.01.010

Keywords

Neural Machine Translation; Word lattice; Recurrent Neural Network; Gated Recurrent Unit

Funding

  1. Natural Science Foundation of China [61573294, 61672440]
  2. Ph.D. Programs Foundation of Ministry of Education of China [20130121110040]
  3. Foundation of the State Language Commission of China [WT135-10, YB135-49]
  4. Natural Science Foundation of Fujian Province [2016J05161]
  5. Fund of Research Project of Tibet Autonomous Region of China [Z2014A18G2-13]
  6. National Key Technology R&D Program [2012BAH14F03]

Abstract

The dominant Neural Machine Translation (NMT) models usually resort to word-level modeling to embed input sentences into semantic space. However, it may not be optimal for the encoder modeling of NMT, especially for languages where tokenizations are usually ambiguous: On one hand, there may be tokenization errors which may negatively affect the encoder modeling of NMT. On the other hand, the optimal tokenization granularity is unclear for NMT. In this paper, we propose lattice-to-sequence attentional NMT models, which generalize the standard Recurrent Neural Network (RNN) encoders to lattice topology. Specifically, they take as input a word lattice which compactly encodes many tokenization alternatives, and learn to generate the hidden state for the current step from multiple inputs and hidden states in previous steps. Compared with the standard RNN encoder, the proposed encoders not only alleviate the negative impact of tokenization errors but are more expressive and flexible as well for encoding the meaning of input sentences. Experimental results on both Chinese-English and Japanese-English translations demonstrate the effectiveness of our models. (C) 2018 Elsevier B.V. All rights reserved.
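
To make the encoder idea concrete: each lattice node updates its hidden state from its token embedding and a combination of the hidden states of all its predecessor nodes, so competing tokenizations share one compact structure. The sketch below is a minimal illustration under that reading, using a standard PyTorch GRUCell with mean-pooling over predecessors; the class and argument names here are illustrative, and the paper's own combination strategies and gating may differ.

# Minimal sketch (not the authors' implementation): encode a word lattice with a
# GRU by pooling the hidden states of all predecessor nodes at each lattice node.
# Assumes nodes are given in topological order; predecessor pooling is a mean,
# whereas the paper explores its own combination strategies.
import torch
import torch.nn as nn

class LatticeGRUEncoder(nn.Module):
    def __init__(self, vocab_size, emb_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.cell = nn.GRUCell(emb_size, hidden_size)
        self.hidden_size = hidden_size

    def forward(self, token_ids, predecessors):
        """token_ids: one token id per lattice node, in topological order.
        predecessors: list of lists; predecessors[i] holds the indices of
        nodes whose edges enter node i (empty for the start node)."""
        states = []
        for i, tok in enumerate(token_ids):
            x = self.embed(torch.tensor([tok]))                   # (1, emb_size)
            if predecessors[i]:
                # Pool the hidden states of all predecessor nodes.
                prev = torch.stack([states[j] for j in predecessors[i]]).mean(dim=0)
            else:
                prev = torch.zeros(1, self.hidden_size)           # start of lattice
            states.append(self.cell(x, prev))                     # (1, hidden_size)
        # All node states serve as annotation vectors for the attention layer.
        return torch.cat(states, dim=0)

# Toy usage: a 4-node lattice where node 3 is reachable from node 1 or node 2,
# i.e. two competing tokenizations merge back into a single path.
enc = LatticeGRUEncoder(vocab_size=100, emb_size=8, hidden_size=16)
annotations = enc([5, 17, 23, 42], [[], [0], [0], [1, 2]])
print(annotations.shape)  # torch.Size([4, 16])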
