Proceedings Paper

Enhanced LSTM for Natural Language Inference

Publisher

Association for Computational Linguistics (ACL)
DOI: 10.18653/v1/P17-1152

Funding

  1. Science and Technology Development of Anhui Province, China [2014z02006]
  2. Fundamental Research Funds for the Central Universities [WK2350000001]
  3. Strategic Priority Research Program of the Chinese Academy of Sciences [XDB02070006]

Abstract

Reasoning and inference are central to both human and artificial intelligence. Modeling inference in human language is very challenging. With the availability of large annotated data (Bowman et al., 2015), it has recently become feasible to train neural-network-based inference models, which have been shown to be very effective. In this paper, we present a new state-of-the-art result, achieving an accuracy of 88.6% on the Stanford Natural Language Inference (SNLI) dataset. Unlike previous top models that use very complicated network architectures, we first demonstrate that carefully designed sequential inference models based on chain LSTMs can outperform all previous models. Building on this, we further show that explicitly considering recursive architectures in both local inference modeling and inference composition yields additional improvement. In particular, incorporating syntactic parsing information contributes to our best result: it further improves performance even when added to an already very strong model.
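
For readers who want a concrete picture of the sequential model the abstract describes, below is a minimal PyTorch sketch of the chain-LSTM inference pipeline: BiLSTM input encoding, soft-alignment attention for local inference, enhancement with difference and element-wise product features, BiLSTM inference composition, and pooling. The class name and layer sizes are illustrative assumptions, and some details of the published model (e.g., the feed-forward projection applied before composition) are omitted; this is not the authors' released implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SequentialInferenceModel(nn.Module):
        """Illustrative chain-LSTM inference model in the spirit of the paper:
        encode -> local inference (soft alignment) -> enhance -> compose -> classify."""

        def __init__(self, vocab_size, embed_dim=300, hidden_dim=300, num_classes=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Input encoding: bidirectional chain LSTM over each sentence.
            self.encoder = nn.LSTM(embed_dim, hidden_dim,
                                   bidirectional=True, batch_first=True)
            # Inference composition: a second BiLSTM over the enhanced features
            # (four concatenated 2*hidden_dim vectors -> 8*hidden_dim input).
            self.composer = nn.LSTM(8 * hidden_dim, hidden_dim,
                                    bidirectional=True, batch_first=True)
            self.classifier = nn.Sequential(
                nn.Linear(8 * hidden_dim, hidden_dim), nn.Tanh(),
                nn.Linear(hidden_dim, num_classes),
            )

        def forward(self, premise_ids, hypothesis_ids):
            a, _ = self.encoder(self.embed(premise_ids))     # (B, La, 2H)
            b, _ = self.encoder(self.embed(hypothesis_ids))  # (B, Lb, 2H)

            # Local inference: soft-align each token with the other sentence.
            e = torch.bmm(a, b.transpose(1, 2))              # (B, La, Lb)
            a_tilde = torch.bmm(F.softmax(e, dim=2), b)      # b content aligned to a
            b_tilde = torch.bmm(F.softmax(e, dim=1).transpose(1, 2), a)

            # Enhancement: concatenate with difference and element-wise product.
            m_a = torch.cat([a, a_tilde, a - a_tilde, a * a_tilde], dim=2)
            m_b = torch.cat([b, b_tilde, b - b_tilde, b * b_tilde], dim=2)

            v_a, _ = self.composer(m_a)
            v_b, _ = self.composer(m_b)

            # Pooling: average and max over time for both sentences, then classify.
            v = torch.cat([v_a.mean(1), v_a.max(1).values,
                           v_b.mean(1), v_b.max(1).values], dim=1)
            return self.classifier(v)

In this sketch, the model is called with integer token-id tensors of shape (batch, length), e.g. logits = SequentialInferenceModel(vocab_size=30000)(premise_ids, hypothesis_ids); the two softmax directions implement the bidirectional soft alignment that underlies the local inference modeling step.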

Authors

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen
