Proceedings Paper

Learning Bounded Context-Free-Grammar via LSTM and the Transformer: Difference and Explanations

This study compares the practical differences between LSTM and Transformer models on natural language processing tasks and proposes an explanation based on their latent space decomposition patterns. The experimental results show that LSTM models have difficulty capturing the stack and stack operations of the underlying pushdown automaton, while Transformer models are far less affected.
Long Short-Term Memory (LSTM) and the Transformer are two popular neural architectures used in natural language processing tasks. Theoretical results show that both are Turing-complete and can represent any context-free language (CFL). In practice, it is often observed that Transformer models have better representation power than LSTM models, but the reason is barely understood. We study such practical differences between LSTM and the Transformer and propose an explanation based on their latent space decomposition patterns. To achieve this goal, we introduce an oracle training paradigm, which forces the decomposition of the latent representation of LSTM and the Transformer and supervises it with the transitions of the Pushdown Automaton (PDA) corresponding to the CFL. With the forced decomposition, we show that the performance upper bounds of LSTM and the Transformer in learning CFLs are close: both of them can simulate a stack and perform stack operations along with state transitions. However, the absence of forced decomposition leads to the failure of LSTM models to capture the stack and stack operations, while it has only a marginal impact on the Transformer model. Lastly, we connect the experiments on the prototypical PDA to a real-world parsing task to re-verify the conclusions.
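
As a concrete illustration of the oracle supervision described in the abstract, the following minimal Python sketch simulates the PDA of a bounded Dyck-1 language and records, for every token, the stack action taken and the resulting stack contents. These are the kind of per-step targets an oracle training paradigm could use to supervise a dedicated slice of the model's latent representation. The choice of grammar, the MAX_DEPTH bound, and all names below are illustrative assumptions, not taken from the paper.

# Minimal sketch (not the authors' code): a PDA for the bounded Dyck-1
# language, used to generate oracle (action, stack) targets per token.
# MAX_DEPTH and the token set are illustrative assumptions.

MAX_DEPTH = 4          # boundedness assumption: stack depth is capped
PUSH, POP, NOOP = "push", "pop", "noop"


def pda_step(stack, token):
    """One PDA transition on a '(' / ')' token; returns (new_stack, action)."""
    if token == "(":
        if len(stack) >= MAX_DEPTH:
            raise ValueError("input exceeds the bounded stack depth")
        return stack + ["("], PUSH
    if token == ")":
        if not stack:
            raise ValueError("unbalanced input")
        return stack[:-1], POP
    return stack, NOOP


def oracle_trace(tokens):
    """Run the PDA and record, for every prefix, the action taken and the
    stack contents -- the per-step targets that forced decomposition
    would supervise the split latent representation with."""
    stack, trace = [], []
    for tok in tokens:
        stack, action = pda_step(stack, tok)
        trace.append({"token": tok, "action": action, "stack": list(stack)})
    return trace


if __name__ == "__main__":
    for step in oracle_trace(list("(()())")):
        print(step)

In the setting the abstract describes, such per-step targets would supervise the decomposed portion of the LSTM or Transformer hidden state alongside the usual sequence objective; the sketch above only shows how the oracle trace itself can be produced.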
