LSTM details

 

Ref:

http://prog3.com/sbdm/blog/lanran2/article/details/50603861

https://apaszke.github.io/lstm-explained.html

Paper:

https://arxiv.org/pdf/1511.07889.pdf

What is RNN

RNN (Recurrent Neural Network) is a class of artificial neural networks whose connections form directed cycles. This internal state allows the network to exhibit dynamic temporal behaviour. Unlike feedforward neural networks, an RNN can use its internal memory to process input sequences of arbitrary length, which makes it well suited to tasks such as unsegmented handwriting recognition and speech recognition. – Baidu Encyclopedia

Here is the abstract form of the RNN recurrence:
h_t = \theta \, \phi(h_{t-1}) + \theta_x x_t
y_t = \theta_y \, \phi(h_t)
Notice that at every time step the RNN reuses the hidden-layer output h_{t-1} produced at the previous step.
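As a quick illustration, here is a minimal Torch sketch of this recurrence; the size 3, the random weight matrices theta, theta_x, theta_y and the choice phi = tanh are assumptions made only for the example:

require 'torch'
local n = 3
local theta   = torch.randn(n, n)   -- recurrent weights
local theta_x = torch.randn(n, n)   -- input weights
local theta_y = torch.randn(n, n)   -- output weights
local h  = torch.zeros(n)                                    -- h_0
local xs = {torch.randn(n), torch.randn(n), torch.randn(n)}  -- a toy input sequence
for t = 1, #xs do
    h = theta * torch.tanh(h) + theta_x * xs[t]   -- h_t = theta * phi(h_{t-1}) + theta_x * x_t
    local y = theta_y * torch.tanh(h)             -- y_t = theta_y * phi(h_t)
end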

The shortcoming of the traditional RNN – the vanishing gradient problem

We denote the loss function by E; the gradient is then:
\frac{\partial E}{\partial \theta} = \sum_{t=1}^{S} \frac{\partial E_t}{\partial \theta}
\frac{\partial E_t}{\partial \theta} = \sum_{k=1}^{t} \frac{\partial E_t}{\partial y_t} \frac{\partial y_t}{\partial h_t} \frac{\partial h_t}{\partial h_k} \frac{\partial h_k}{\partial \theta}
\frac{\partial h_t}{\partial h_k} = \prod_{i=k+1}^{t} \frac{\partial h_i}{\partial h_{i-1}} = \prod_{i=k+1}^{t} \theta^{T} \mathrm{diag}[\phi'(h_{i-1})]
\left\lVert \frac{\partial h_i}{\partial h_{i-1}} \right\rVert \le \lVert \theta^{T} \rVert \, \lVert \mathrm{diag}[\phi'(h_{i-1})] \rVert \le \gamma_\theta \gamma_\phi
\left\lVert \frac{\partial h_t}{\partial h_k} \right\rVert \le (\gamma_\theta \gamma_\phi)^{t-k}
When \gamma_\theta \gamma_\phi < 1, this bound is a product of factors smaller than 1, so the gradient shrinks rapidly as t - k grows. LSTM was invented to solve exactly this problem.
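A tiny numeric illustration of the bound; the values \gamma_\theta \gamma_\phi = 0.9 and t - k = 50 are arbitrary assumptions:

local bound = 1
for i = 1, 50 do          -- multiply 50 factors of 0.9 together
    bound = bound * 0.9
end
print(bound)              -- ~0.005: the upper bound on the gradient has almost vanished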

LSTM introduction

Definition: LSTM (Long Short-Term Memory)
is a recurrent neural network architecture, first published in 1997. Thanks to its unique design, LSTM is well suited to processing and predicting important events in time series with very long intervals and delays. – Baidu Encyclopedia

When LSTM is mentioned, it is almost always accompanied by a diagram like the one shown below:

As can be seen from the figure, in addition to the input there are three gates: 1) the input gate; 2) the forget gate; 3) the output gate.
As in the RNN above, the inputs are x_t and h_{t-1}, while the outputs are h_t and c_t (the cell state). The cell state is the key to LSTM and is what gives it memory. The LSTM formulas are:
1) Input gate:
i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i) = \sigma(\mathrm{linear}_{xi}(x_t) + \mathrm{linear}_{hi}(h_{t-1}))
where \sigma denotes the sigmoid function.
2) Forget gate: decides whether to discard or keep the memory.
f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)
3) Output gate:
o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)
4) Cell update (candidate):
g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g)
5) Cell state update:
c_t = f_t \odot c_{t-1} + i_t \odot g_t
6) Final LSTM output:
h_t = o_t \odot \tanh(c_t)
These are all the formulas involved in a single LSTM cell; a minimal tensor-level sketch of one step is given below. After that we explain why LSTM solves the vanishing gradient problem of the RNN.
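The sketch below implements the six formulas above with plain Torch tensor operations; the layer size of 4, the random initialisation and the variable names are assumptions made only for illustration:

require 'torch'
local n = 4
-- gate parameters (hypothetical, randomly initialised)
local Wxi, Whi, bi = torch.randn(n, n), torch.randn(n, n), torch.zeros(n)
local Wxf, Whf, bf = torch.randn(n, n), torch.randn(n, n), torch.zeros(n)
local Wxo, Who, bo = torch.randn(n, n), torch.randn(n, n), torch.zeros(n)
local Wxg, Whg, bg = torch.randn(n, n), torch.randn(n, n), torch.zeros(n)
local xt, prev_h, prev_c = torch.randn(n), torch.randn(n), torch.zeros(n)

local it = torch.sigmoid(Wxi * xt + Whi * prev_h + bi)   -- 1) input gate
local ft = torch.sigmoid(Wxf * xt + Whf * prev_h + bf)   -- 2) forget gate
local ot = torch.sigmoid(Wxo * xt + Who * prev_h + bo)   -- 3) output gate
local gt = torch.tanh(Wxg * xt + Whg * prev_h + bg)      -- 4) cell update (candidate)
local ct = torch.cmul(ft, prev_c) + torch.cmul(it, gt)   -- 5) cell state update
local ht = torch.cmul(ot, torch.tanh(ct))                -- 6) final output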

Differentiating the cell state update gives \partial c_t / \partial c_{t-1} \approx f_t (ignoring the dependence of the gates on the state), so \partial c_t / \partial c_k is roughly a product of forget gate activations. Because the forget gate can stay very close to 1, each factor is very close to 1, the gradient hardly decays, and the vanishing gradient problem is avoided.

 

Torch nngraph

Before writing an LSTM in Torch, we need to learn a Torch tool called nngraph. It can be installed with the following command:

luarocks install nngraph

A detailed introduction to nngraph: https://github.com/torch/nngraph/
nngraph makes it convenient to build neural network modules as graphs. Let us first use nngraph to build a simple module:
z = x_1 + x_2 \odot \mathrm{linear}(x_3)
This module has three inputs, x_1, x_2 and x_3, and one output z (\odot is elementwise multiplication). The Torch implementation is as follows:

require 'nngraph'

x1 = nn.Identity()()
x2 = nn.Identity()()
x3 = nn.Identity()()
L  = nn.CAddTable()({x1, nn.CMulTable()({x2, nn.Linear(20, 10)(x3)})})
mlp = nn.gModule({x1, x2, x3}, {L})

First we define x1, x2 and x3 with nn.Identity()(). For linear(x3) we write x4 = nn.Linear(20, 10)(x3), which defines a linear layer with 20 input units and 10 output units. For x_2 \odot \mathrm{linear}(x_3) we use x5 = nn.CMulTable()({x2, x4}); for x_1 + x_2 \odot \mathrm{linear}(x_3) we use nn.CAddTable()({x1, x5}). Finally, nn.gModule({inputs}, {outputs}) wraps the graph into a module.
We use the forward method to test whether our Module is correct:

h1 = torch.Tensor{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
h2 = torch.Tensor(10):fill(1)
h3 = torch.Tensor(20):fill(2)
b  = mlp:forward({h1, h2, h3})
parameters = mlp:parameters()[1]
bias       = mlp:parameters()[2]
result = torch.cmul(h2, (parameters * h3 + bias)) + h1

First we define the three inputs h1, h2 and h3, then call mlp:forward({h1, h2, h3}) to obtain the output b. The weight matrix and bias of the linear layer are read out of mlp:parameters() into the variables parameters and bias, and we compute z = h1 + h2 \odot \mathrm{linear}(h3) by hand as result = torch.cmul(h2, parameters * h3 + bias) + h1. Comparing b and result shows they are identical, so our module is correct.
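nngraph can also draw the module graph, which is handy for checking the structure; this is a small sketch that assumes graphviz is installed, and 'mlp' is an arbitrary output file name:

graph.dot(mlp.fg, 'MLP', 'mlp')   -- render the forward graph of mlp and save it under the name 'mlp'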

Writing the LSTM module with nngraph

Now we use nngraph to write the LSTM module described above; the code is as follows:

require 'nngraph'

function lstm(xt, prev_c, prev_h)
    local function new_input_sum()
        -- one fresh pair of input-to-hidden and hidden-to-hidden linear layers per gate
        local i2h = nn.Linear(400, 400)
        local h2h = nn.Linear(400, 400)
        return nn.CAddTable()({i2h(xt), h2h(prev_h)})
    end
    local input_gate  = nn.Sigmoid()(new_input_sum())
    local forget_gate = nn.Sigmoid()(new_input_sum())
    local output_gate = nn.Sigmoid()(new_input_sum())
    local gt = nn.Tanh()(new_input_sum())
    local ct = nn.CAddTable()({nn.CMulTable()({forget_gate, prev_c}),
                               nn.CMulTable()({input_gate, gt})})
    local ht = nn.CMulTable()({output_gate, nn.Tanh()(ct)})
    return ct, ht
end

xt     = nn.Identity()()
prev_c = nn.Identity()()
prev_h = nn.Identity()()
LSTM   = nn.gModule({xt, prev_c, prev_h}, {lstm(xt, prev_c, prev_h)})

Here xt and prev_h are the inputs and prev_c is the cell state. The function then applies each of the formulas above in turn, and the final outputs are ct (the new cell state) and ht (the output). The order of the computations in the code matches the formulas exactly, so we do not walk through them one by one again.
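As a quick sanity check, we can push one step through the resulting module; this is a minimal sketch, and the tensor names x, c0 and h0 as well as the zero initial state are assumptions:

local x  = torch.randn(400)   -- input at the current time step
local c0 = torch.zeros(400)   -- previous cell state
local h0 = torch.zeros(400)   -- previous output
local c1, h1 = unpack(LSTM:forward({x, c0, h0}))
print(c1:size(), h1:size())   -- both are 400-dimensional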


