{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "### Implementing an LSTM with PyTorch\n",
    "\n",
    "> Python 3.7\n",
    ">\n",
    "> 2023/02/13"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "outputs": [],
   "source": [
    "import torch\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "import os\n",
    "import time\n",
    "import torch.nn as nn\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "#### Recurrent Neural Networks\n",
    "\n",
    "A recurrent neural network (RNN) is a class of neural networks with **short-term memory**. In an RNN, a neuron receives not only signals from other neurons but also its own signal from the previous step, forming a network with cycles.\n",
    "\n",
    "RNNs typically use neurons with self-feedback and can process sequences of arbitrary length.\n",
    "\n",
    "The RNN state is updated by\n",
    "$$h_t = f(h_{t-1},x_t)$$\n",
    "\n",
    "![](./../img/rnn.png)\n",
    "\n",
    "During training, however, RNNs suffer from vanishing or exploding gradients, which makes it hard to model dependencies between states separated by long time intervals. If the output $y_t$ at time $t$ depends on the input $x_k$ at time $k$, a simple RNN struggles to capture this dependency when the gap $t-k$ is large; this is known as the long-term dependency problem."
   ],
   "metadata": {
    "collapsed": false
   }
  },
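  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "# A minimal sketch of the update h_t = f(h_{t-1}, x_t) above, with f = tanh.\n",
    "# Illustrative only (not part of the original notebook); all sizes are made up.\n",
    "input_dim, hidden_dim = 10, 20\n",
    "W = torch.randn(hidden_dim, input_dim)   # input-to-hidden weights\n",
    "U = torch.randn(hidden_dim, hidden_dim)  # hidden-to-hidden weights\n",
    "b = torch.zeros(hidden_dim)\n",
    "\n",
    "x_t = torch.randn(input_dim)       # input at time t\n",
    "h_prev = torch.zeros(hidden_dim)   # h_{t-1}\n",
    "h_t = torch.tanh(W @ x_t + U @ h_prev + b)  # one recurrent step\n",
    "print(h_t.shape)"
   ],
   "metadata": {
    "collapsed": false
   }
  },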
  {
   "cell_type": "markdown",
   "source": [
    "#### LSTM\n",
    "\n",
    "To alleviate the long-term dependency problem, a **gating** mechanism is added on top of the plain RNN to control how information accumulates: new information is added selectively, and previously accumulated information is forgotten selectively. Networks of this kind are called gated RNNs.\n",
    "\n",
    "One improvement strategy:\n",
    "\n",
    "$$h_t = h_{t-1} + g(x_t,h_{t-1},\\theta)$$\n",
    "\n",
    "so that the relation between $h_t$ and $h_{t-1}$ is both linear and nonlinear.\n"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "LSTM makes two main improvements on top of the formula above.\n",
    "\n",
    "1. A new internal state\n",
    "    LSTM introduces a new internal state $c_t$ dedicated to linear recurrent information transfer;\n",
    "    $c_t$ records the history up to the current time step:\n",
    "    $$\n",
    "        c_t = f_t \\odot c_{t-1} + i_t \\odot \\bar{c}_t\n",
    "    $$\n",
    "    $$\n",
    "        h_t = o_t \\odot \\tanh(c_t)\n",
    "    $$\n",
    "    Here $\\bar{c}_t$ is the **candidate state** obtained by a nonlinear transformation:\n",
    "    $$\n",
    "        \\bar{c}_t = \\tanh(W_c x_t + U_c h_{t-1} + b_c)\n",
    "    $$\n",
    "\n",
    "2. Gating mechanism\n",
    "    LSTM adds gates to control the paths along which information flows. Conceptually, a gate is a binary variable taking the value 0 or 1, meaning closed or open.\n",
    "    * Input gate $i_t$: controls how much of the current candidate state $\\bar{c}_t$ should be saved\n",
    "    * Forget gate $f_t$: controls how much of the previous internal state $c_{t-1}$ should be forgotten\n",
    "    * Output gate $o_t$: controls how much of the current internal state $c_t$ should be passed to the external state $h_t$\n",
    "\n",
    "The gates in an LSTM take values in (0, 1) and are computed as follows:\n",
    "![](./../img/ifo.png)\n",
    "where $\\sigma$ is the logistic function.\n",
    "![](./../img/lstm.png)\n",
    "In brief:\n",
    "![](./../img/lstm_.png)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
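  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "# A hedged sketch (not part of the original notebook) of one LSTM step, written\n",
    "# out gate by gate to mirror the equations above; sizes and weights are illustrative.\n",
    "d_in, d_h = 10, 20\n",
    "Wi, Wf, Wo, Wc = (torch.randn(d_h, d_in) for _ in range(4))  # input weights\n",
    "Ui, Uf, Uo, Uc = (torch.randn(d_h, d_h) for _ in range(4))   # recurrent weights\n",
    "bi = bf = bo = bc = torch.zeros(d_h)\n",
    "\n",
    "x_t = torch.randn(d_in)\n",
    "h_prev, c_prev = torch.zeros(d_h), torch.zeros(d_h)\n",
    "\n",
    "i_t = torch.sigmoid(Wi @ x_t + Ui @ h_prev + bi)  # input gate\n",
    "f_t = torch.sigmoid(Wf @ x_t + Uf @ h_prev + bf)  # forget gate\n",
    "o_t = torch.sigmoid(Wo @ x_t + Uo @ h_prev + bo)  # output gate\n",
    "c_bar = torch.tanh(Wc @ x_t + Uc @ h_prev + bc)   # candidate state\n",
    "\n",
    "c_t = f_t * c_prev + i_t * c_bar  # elementwise: f_t * c_{t-1} + i_t * c_bar_t\n",
    "h_t = o_t * torch.tanh(c_t)       # elementwise: o_t * tanh(c_t)\n",
    "print(c_t.shape, h_t.shape)"
   ],
   "metadata": {
    "collapsed": false
   }
  },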
  {
   "cell_type": "markdown",
   "source": [
    "#### nn.LSTM\n",
    "\n",
    "##### Main constructor parameters\n",
    "\n",
    "input_size: dimensionality of each input vector x\n",
    "\n",
    "hidden_size: number of units in the hidden layer\n",
    "\n",
    "num_layers: number of stacked LSTM layers\n",
    "\n",
    "bias: whether the layers use bias terms; defaults to True\n",
    "\n",
    "batch_first: if set to True, batch_size becomes the first dimension of the input and output tensors\n",
    "\n",
    "\n",
    "##### Input format (three inputs, with batch_first=False):\n",
    "input(seq_len, batch, input_size)\n",
    "\n",
    "h_0(num_layers * num_directions, batch, hidden_size)\n",
    "\n",
    "c_0(num_layers * num_directions, batch, hidden_size)\n",
    "\n",
    "\n",
    "##### Output format:\n",
    "output(seq_len, batch, hidden_size * num_directions)\n",
    "\n",
    "h_n(num_layers * num_directions, batch, hidden_size)\n",
    "\n",
    "c_n(num_layers * num_directions, batch, hidden_size)\n"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([5, 3, 20]) torch.Size([2, 3, 20]) torch.Size([2, 3, 20])\n"
     ]
    }
   ],
   "source": [
    "rnn = nn.LSTM(10, 20, 2)\n",
    "# input vectors of size 10, hidden state of size 20, 2 stacked LSTM layers (num_layers defaults to 1 and could then be omitted)\n",
    "\n",
    "input = torch.randn(5, 3, 10)\n",
    "# input shape: seq_len=5, batch_size=3, input_size=10 (still the dimensionality of x). Each call processes 3 sequences of 5 steps, each step a 10-dim vector\n",
    "\n",
    "# initial hidden state and cell state; they usually have the same shape:\n",
    "# 2 LSTM layers, batch_size=3, hidden size 20\n",
    "h0 = torch.randn(2, 3, 20)\n",
    "c0 = torch.randn(2, 3, 20)\n",
    "\n",
    "# with 2 stacked layers, output holds the last layer's hidden state at every time step; its first dimension is the sequence length, independent of the number of layers\n",
    "# hn, cn are the final hidden and cell states of all layers\n",
    "output, (hn, cn) = rnn(input, (h0, c0))\n",
    "# three inputs and three outputs, with the shapes described in the cell above\n",
    "\n",
    "print(output.size(), hn.size(), cn.size())"
   ],
   "metadata": {
    "collapsed": false
   }
  },
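  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "# A small aside (not in the original notebook): with batch_first=True the same\n",
    "# module expects and returns (batch, seq_len, feature) instead of\n",
    "# (seq_len, batch, feature); the h/c shapes are unchanged. When h0/c0 are\n",
    "# omitted they default to zeros.\n",
    "rnn_bf = nn.LSTM(10, 20, 2, batch_first=True)\n",
    "x = torch.randn(3, 5, 10)  # (batch=3, seq_len=5, input_size=10)\n",
    "out_bf, (h_bf, c_bf) = rnn_bf(x)\n",
    "print(out_bf.size(), h_bf.size(), c_bf.size())"
   ],
   "metadata": {
    "collapsed": false
   }
  },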
  {
   "cell_type": "markdown",
   "source": [
    "#### Training an MNIST classifier with an LSTM"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "cpu\n"
     ]
    }
   ],
   "source": [
    "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
    "print(device)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Hyperparameters\n",
    "\"\"\"\n",
    "batch_size = 128\n",
    "sequence_length = 28 # sequence length: each 28x28 image is read as 28 time steps\n",
    "input_size = 28      # each time step is one image row of 28 pixels\n",
    "hidden_size = 128\n",
    "num_layers = 2\n",
    "output_size = 10\n",
    "epochs = 4\n",
    "lr = 0.01"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "outputs": [],
   "source": [
    "data_dir = './../dataset/'\n",
    "# download MNIST only if the dataset directory does not exist yet\n",
    "download_mnist = not os.path.exists(data_dir)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "data_size 60000\n",
      "batch_size 128\n"
     ]
    }
   ],
   "source": [
    "train_dataset = torchvision.datasets.MNIST(root=data_dir,\n",
    "                                           train=True,\n",
    "                                           transform=transforms.ToTensor(),\n",
    "                                           download=download_mnist)\n",
    "\n",
    "test_dataset = torchvision.datasets.MNIST(root=data_dir,\n",
    "                                          train=False,\n",
    "                                          transform=transforms.ToTensor())\n",
    "\n",
    "train_loader = torch.utils.data.DataLoader(dataset=train_dataset,\n",
    "                                           batch_size=batch_size,\n",
    "                                           shuffle=True)\n",
    "\n",
    "test_loader = torch.utils.data.DataLoader(dataset=test_dataset,\n",
    "                                          batch_size=batch_size,\n",
    "                                          shuffle=False)\n",
    "print('data_size',len(train_dataset))\n",
    "print('batch_size',batch_size)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([128, 1, 28, 28])\n",
      "torch.Size([128])\n",
      "(batch_size, sequence_length, input_size) torch.Size([128, 28, 28])\n"
     ]
    }
   ],
   "source": [
    "for i,(images,labels) in enumerate(train_loader):\n",
    "    if i==0:\n",
    "        print(images.shape)\n",
    "        print(labels.shape)\n",
    "        images = images.reshape(-1,sequence_length,input_size)\n",
    "        print('(batch_size, sequence_length, input_size)',images.shape)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "outputs": [],
   "source": [
    "class LSTMnet(nn.Module):\n",
    "    def __init__(self,input_size,hidden_size,num_layers,output_size):\n",
    "        super(LSTMnet,self).__init__()\n",
    "        self.hidden_size = hidden_size\n",
    "        self.num_layers = num_layers\n",
    "        self.lstm = nn.LSTM(input_size,hidden_size,num_layers,batch_first=True) # batch_first=True puts batch_size in the first dimension\n",
    "        self.fc = nn.Linear(hidden_size,output_size)\n",
    "\n",
    "    def forward(self,x):\n",
    "        h_0 = torch.zeros(self.num_layers,x.size(0),self.hidden_size).to(device)\n",
    "        c_0 = torch.zeros(self.num_layers,x.size(0),self.hidden_size).to(device)\n",
    "\n",
    "        out,_ = self.lstm(x,(h_0,c_0))\n",
    "        out = self.fc(out[:,-1,:]) # keep only the last time step's h_t and map it to a 10-dim output vector\n",
    "        return out"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "total_step 469\n",
      "Epoch [1/4], Step [100/469], Loss: 0.3491\n",
      "Epoch [1/4], Step [200/469], Loss: 0.1503\n",
      "Epoch [1/4], Step [300/469], Loss: 0.1969\n",
      "Epoch [1/4], Step [400/469], Loss: 0.0619\n",
      "one epoch cost 30.08 seconds\n",
      "Epoch [2/4], Step [100/469], Loss: 0.0565\n",
      "Epoch [2/4], Step [200/469], Loss: 0.0374\n",
      "Epoch [2/4], Step [300/469], Loss: 0.0793\n",
      "Epoch [2/4], Step [400/469], Loss: 0.0860\n",
      "one epoch cost 30.45 seconds\n",
      "Epoch [3/4], Step [100/469], Loss: 0.0599\n",
      "Epoch [3/4], Step [200/469], Loss: 0.0235\n",
      "Epoch [3/4], Step [300/469], Loss: 0.0731\n",
      "Epoch [3/4], Step [400/469], Loss: 0.0607\n",
      "one epoch cost 26.30 seconds\n",
      "Epoch [4/4], Step [100/469], Loss: 0.0410\n",
      "Epoch [4/4], Step [200/469], Loss: 0.0291\n",
      "Epoch [4/4], Step [300/469], Loss: 0.0110\n",
      "Epoch [4/4], Step [400/469], Loss: 0.0683\n",
      "one epoch cost 25.25 seconds\n"
     ]
    }
   ],
   "source": [
    "model = LSTMnet(input_size,hidden_size,num_layers,output_size).to(device)\n",
    "\n",
    "criterion = nn.CrossEntropyLoss() # cross-entropy loss\n",
    "optimizer = torch.optim.Adam(model.parameters(),lr = lr)\n",
    "\n",
    "total_step = len(train_loader)\n",
    "print('total_step {}'.format(total_step))\n",
    "for epoch in range(epochs):\n",
    "    ts = time.time()\n",
    "    for i, (images,labels) in enumerate(train_loader):\n",
    "        images = images.reshape(-1,sequence_length,input_size).to(device)\n",
    "        labels = labels.to(device)\n",
    "\n",
    "        outputs = model(images)\n",
    "        loss = criterion(outputs,labels)\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        if (i+1) % 100 == 0:\n",
    "            print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'\n",
    "                   .format(epoch+1, epochs, i+1, total_step, loss.item()))\n",
    "    te = time.time()\n",
    "    print('one epoch cost {:.2f} seconds'.format(te-ts))\n"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test Accuracy of the model on the 10000 test images: 98.24 %\n"
     ]
    }
   ],
   "source": [
    "# Test the model\n",
    "model.eval()\n",
    "with torch.no_grad():\n",
    "    correct = 0\n",
    "    total = 0\n",
    "    for images, labels in test_loader:\n",
    "        images = images.reshape(-1, sequence_length, input_size).to(device)\n",
    "        labels = labels.to(device)\n",
    "        outputs = model(images)\n",
    "        _, predicted = torch.max(outputs.data, 1)\n",
    "        total += labels.size(0)\n",
    "        correct += (predicted == labels).sum().item()\n",
    "\n",
    "    print('Test Accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total))"
   ],
   "metadata": {
    "collapsed": false
   }
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
