Article

Persistent hidden states and nonlinear transformation for long short-term memory

Journal

NEUROCOMPUTING
Volume 331, Pages 458-464

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2018.11.069

Keywords

Recurrent neural networks; Persistent hidden states; Affine transformation; Nonlinear transformation

Funding

  1. National Research Foundation of Korea (NRF) - Ministry of Education [2017R1D1A1B03033341]
  2. Institute for Information & Communications Technology Promotion (IITP) - Korea government (MSIT) [2018-0-00749]
  3. National Research Foundation of Korea [2017R1D1A1B03033341] (funding source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS))

Abstract

Recurrent neural networks (RNNs) have attracted much attention, with great success in applications such as speech recognition and neural machine translation. Long short-term memory (LSTM) is one of the most popular RNN units in deep learning applications. LSTM transforms the input and the previous hidden state into the next state through an affine transformation, multiplicative gating operations, and a nonlinear activation function, which yields a good data representation for a given task. The affine transformation includes rotation and reflection, which change the semantic or syntactic information carried by each dimension of the hidden state. However, since a model interprets the output sequence of the LSTM over the whole input sequence, each dimension of the state should keep the same type of semantic or syntactic information regardless of its position in the sequence. In this paper, we propose a simple variant of the LSTM unit, the persistent recurrent unit (PRU), in which each dimension of the hidden state keeps persistent information across time, so that the state space retains the same meaning over the whole sequence. In addition, to improve the nonlinear transformation power, we add a feedforward layer to the PRU structure. In the experiments, we evaluate the proposed methods on three different tasks, and the results confirm that they outperform the conventional LSTM. (C) 2018 Elsevier B.V. All rights reserved.
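
The abstract describes the mechanism only at a high level, so the following is a minimal, hypothetical NumPy sketch of a PRU-style cell as the abstract reads: the previous hidden state enters the update only through element-wise (per-dimension) operations rather than a full affine transformation, so each dimension keeps its meaning across time, and an added feedforward layer supplies extra nonlinear transformation power. All class and variable names, the specific gating equations, and the initialization are assumptions for illustration, not the paper's actual PRU specification.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PRUSketch:
    """Hypothetical sketch of a persistent-recurrent-unit-style cell.

    Based only on the abstract: the hidden state is updated per
    dimension (element-wise gating, no affine mixing of h across
    dimensions), and a feedforward layer adds nonlinear power.
    The exact equations are assumptions, not the paper's.
    """

    def __init__(self, input_size, hidden_size, seed=None):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # The input is still projected with full learned matrices ...
        self.W_z = rng.uniform(-s, s, (hidden_size, input_size))
        self.W_g = rng.uniform(-s, s, (hidden_size, input_size))
        # ... but the hidden state enters only through element-wise
        # (diagonal) weights, so each dimension keeps its own meaning.
        self.u_g = rng.uniform(-s, s, hidden_size)
        self.b_z = np.zeros(hidden_size)
        self.b_g = np.zeros(hidden_size)
        # Extra feedforward layer applied to the updated state.
        self.W_f = rng.uniform(-s, s, (hidden_size, hidden_size))
        self.b_f = np.zeros(hidden_size)

    def step(self, x, h_prev):
        # Candidate update computed from the current input only.
        z = np.tanh(self.W_z @ x + self.b_z)
        # Gate combines the input with an element-wise view of h_prev.
        g = sigmoid(self.W_g @ x + self.u_g * h_prev + self.b_g)
        # Per-dimension blend: dimension i of h depends only on
        # dimension i of h_prev, so the state stays persistent.
        h = g * h_prev + (1.0 - g) * z
        # Added nonlinear feedforward transformation of the state.
        out = np.tanh(self.W_f @ h + self.b_f)
        return h, out

if __name__ == "__main__":
    cell = PRUSketch(input_size=8, hidden_size=16, seed=0)
    h = np.zeros(16)
    for x in np.random.default_rng(1).normal(size=(5, 8)):
        h, out = cell.step(x, h)
    print(out.shape)  # (16,)

Replacing the matrix acting on h_prev with a diagonal (element-wise) weight is what removes rotation and reflection of the hidden state between time steps in this sketch; the feedforward layer then restores expressive power on top of the persistent state.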
