Article

Recurrent graph encoder for syntax-aware neural machine translation

Journal

International Journal of Machine Learning and Cybernetics

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s13042-022-01682-9

Keywords

Syntax-aware; Neural machine translation; Self-attention network; Recurrent graph; Recurrent neural network

This article introduces a recurrent graph syntax encoder called RGSE, which models syntactic dependencies and sequential information simultaneously in neural machine translation tasks. The study shows that models equipped with RGSE outperform several strong syntax-aware baselines.
Self-attention networks (SAN) have achieved promising performance on a variety of NLP tasks, e.g., neural machine translation (NMT), as they can directly build dependencies among words. However, they are weaker at learning positional information than recurrent neural networks (RNN). Two natural questions arise: (1) Can we design an RNN-based component that is directly guided by syntax dependencies? (2) Does such a syntax-enhanced sequence modeling component benefit existing NMT architectures, e.g., RNN-based NMT and Transformer-based NMT? To answer these questions, we propose a simple yet effective recurrent graph syntax encoder, dubbed RGSE, which exploits off-the-shelf syntax dependencies together with the intrinsic recurrence property of RNNs, so that RGSE models syntactic dependencies and sequential information (i.e., word order) simultaneously. Experimental studies on various neural machine translation tasks demonstrate that RNN and Transformer models equipped with RGSE gain consistent, significant improvements over several strong syntax-aware baselines, with only a minuscule increase in parameters. Extensive analysis further illustrates that RGSE improves syntactic and semantic preservation over SAN and shows superior robustness to syntactic noise compared with existing syntax-aware NMT models.
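
The abstract gives only a high-level description of RGSE, so the following is a minimal, illustrative sketch (not the authors' code) of the general idea it describes: a recurrent encoder reads the sentence in order, while each input is augmented with information aggregated from its dependency-graph neighbours, so the hidden states carry both word order and syntax. The class name RecurrentGraphEncoder, the mean-over-neighbours aggregation, the use of a GRU, and all shapes and hyperparameters are assumptions made here for illustration.

# Hypothetical sketch of a "recurrent graph" encoder in the spirit of RGSE.
# All design choices below are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class RecurrentGraphEncoder(nn.Module):
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # GRU input = [word embedding ; aggregated neighbour embedding]
        self.gru = nn.GRU(2 * d_model, d_model, batch_first=True)

    def forward(self, tokens: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) token ids
        # adj:    (batch, seq_len, seq_len) dependency adjacency (1.0 = arc)
        x = self.embed(tokens)                        # (B, T, D)
        deg = adj.sum(-1, keepdim=True).clamp(min=1)  # avoid divide-by-zero
        neigh = adj @ x / deg                         # mean over dependency neighbours
        out, _ = self.gru(torch.cat([x, neigh], dim=-1))
        return out                                    # order- and syntax-aware states

# Toy usage with a 5-token sentence and a made-up dependency arc.
enc = RecurrentGraphEncoder(vocab_size=100, d_model=16)
tokens = torch.randint(0, 100, (1, 5))
adj = torch.zeros(1, 5, 5)
adj[0, 1, 0] = adj[0, 0, 1] = 1.0  # symmetric arc between tokens 0 and 1
states = enc(tokens, adj)
print(states.shape)  # torch.Size([1, 5, 16])

In this sketch the graph aggregation injects syntax before the recurrence runs, so a single left-to-right pass yields states reflecting both signals, which is the intuition the abstract describes; the actual RGSE design may differ.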
