{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "677f8e60",
   "metadata": {},
   "source": [
    "### RNN taxonomy\n",
    "\n",
    "By input/output structure:\n",
    "1. N vs N\n",
    "2. N vs 1\n",
    "3. 1 vs N\n",
    "4. N vs M\n",
    "\n",
    "By internal architecture:\n",
    "1. Vanilla RNN\n",
    "2. LSTM\n",
    "3. Bi-LSTM\n",
    "4. GRU\n",
    "5. Bi-GRU"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "78a0386d",
   "metadata": {},
   "source": [
    "#### 1. Vanilla RNN\n",
    "\n",
    "![](./pics/RNN.png)\n",
    "\n",
    "**Strengths**:\n",
    "\n",
    "1. Simple internal structure and low demand on compute resources; performs well on short-sequence tasks\n",
    "\n",
    "**Weaknesses**:\n",
    "\n",
    "1. Poor at modeling long-range dependencies: backpropagating through a long sequence makes the gradient computation unstable, producing vanishing or exploding gradients"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e9380e50",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "rnn = nn.RNN(input_size=5, hidden_size=6, num_layers=2)\n",
    "h0 = torch.randn(2, 3, 6)  # (num_layers, batch_size, hidden_size)\n",
    "\n",
    "x = torch.randn(1, 3, 5)  # (seq_len, batch_size, input_size)\n",
    "y, hn = rnn(x, h0)\n",
    "\n",
    "y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6eb782bd",
   "metadata": {},
   "outputs": [],
   "source": [
    "hn"
   ]
  },
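  {
   "cell_type": "markdown",
   "id": "a3f91c02",
   "metadata": {},
   "source": [
    "The vanishing-gradient claim above can be sketched empirically: backpropagate from only the last timestep of a long sequence and compare the gradient norms reaching the first and last input steps (a minimal sketch; the sequence length and layer sizes here are arbitrary choices, not from the original example)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a3f91c03",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative only: a single-layer tanh RNN on a length-100 sequence.\n",
    "demo_rnn = nn.RNN(input_size=5, hidden_size=6, num_layers=1)\n",
    "long_x = torch.randn(100, 3, 5, requires_grad=True)\n",
    "out, _ = demo_rnn(long_x)\n",
    "out[-1].sum().backward()  # backprop from the final timestep only\n",
    "\n",
    "# The gradient reaching the first timestep is typically orders of\n",
    "# magnitude smaller than the one at the last timestep.\n",
    "long_x.grad[0].norm(), long_x.grad[-1].norm()"
   ]
  },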
  {
   "cell_type": "markdown",
   "id": "6ea67a6b",
   "metadata": {},
   "source": [
    "#### 2. LSTM\n",
    "\n",
    "![Long Short-Term Memory](./pics/LSTM.png)\n",
    "\n",
    "\n",
    "![Long Short-Term Memory](./pics/bi-LSTM.png)\n",
    "\n",
    "1. Forget gate\n",
    "2. Input gate\n",
    "3. Output gate\n",
    "4. Cell state\n",
    "\n",
    "**Strengths**:\n",
    "\n",
    "1. Effectively mitigates the vanishing and exploding gradients that can occur on long sequences\n",
    "\n",
    "**Weaknesses**:\n",
    "\n",
    "1. Less efficient to train: the extra gates make each timestep considerably more expensive than in a vanilla RNN"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5cfa0b7b",
   "metadata": {},
   "outputs": [],
   "source": [
    "lstm = nn.LSTM(input_size=5, hidden_size=6, num_layers=2, bidirectional=True)\n",
    "\n",
    "# first dim is num_layers * num_directions = 2 * 2 = 4\n",
    "h0 = torch.randn(4, 3, 6)\n",
    "c0 = torch.randn(4, 3, 6)\n",
    "\n",
    "x = torch.randn(1, 3, 5)  # (seq_len, batch_size, input_size)\n",
    "\n",
    "# the LSTM carries both a hidden state and a cell state\n",
    "y, (hn, cn) = lstm(x, (h0, c0))\n",
    "\n",
    "y.shape  # (seq_len, batch_size, num_directions * hidden_size)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e7da9c3d",
   "metadata": {},
   "outputs": [],
   "source": [
    "hn.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5ec66241",
   "metadata": {},
   "outputs": [],
   "source": [
    "cn"
   ]
  },
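  {
   "cell_type": "markdown",
   "id": "b4e82d11",
   "metadata": {},
   "source": [
    "For the bidirectional LSTM above, `y` concatenates both directions along the last dimension: `y[..., :6]` is the forward pass and `y[..., 6:]` the backward pass. With `seq_len == 1` both directions see the same single timestep, so the top-layer slices of `hn` should match the output (a quick consistency check, not part of the original example)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4e82d12",
   "metadata": {},
   "outputs": [],
   "source": [
    "# hn is ordered layer-major: [layer0 fwd, layer0 bwd, layer1 fwd, layer1 bwd]\n",
    "torch.allclose(y[0, :, :6], hn[2]), torch.allclose(y[0, :, 6:], hn[3])"
   ]
  },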
  {
   "cell_type": "markdown",
   "id": "acba8e3c",
   "metadata": {},
   "source": [
    "#### 3. GRU\n",
    "\n",
    "![](./pics/GRU.png)\n",
    "\n",
    "1. Reset gate\n",
    "2. Update gate\n",
    "\n",
    "**Strengths**:\n",
    "1. Effectively suppresses vanishing and exploding gradients when capturing long-range semantic dependencies\n",
    "2. Lower computational cost than the LSTM (two gates instead of three, and no separate cell state)\n",
    "\n",
    "**Weaknesses**:\n",
    "1. Does not fully eliminate the vanishing-gradient problem\n",
    "2. Cannot be parallelized across timesteps (a fundamental drawback of all RNN architectures)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "86d43dcf",
   "metadata": {},
   "outputs": [],
   "source": [
    "gru = nn.GRU(input_size=5, hidden_size=6, num_layers=2, bidirectional=False)\n",
    "\n",
    "h0 = torch.randn(2, 3, 6)  # (num_layers, batch_size, hidden_size)\n",
    "x = torch.randn(1, 3, 5)   # (seq_len, batch_size, input_size)\n",
    "\n",
    "# unlike the LSTM, the GRU has no separate cell state\n",
    "y, hn = gru(x, h0)\n",
    "\n",
    "y"
   ]
  },
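  {
   "cell_type": "markdown",
   "id": "c5d73e21",
   "metadata": {},
   "source": [
    "The claim that the GRU is computationally cheaper than the LSTM can be made concrete by counting parameters: per layer the GRU has three gate blocks where the LSTM has four, so at equal sizes the GRU carries roughly 3/4 of the weights (a minimal sketch; the single-layer sizes below are arbitrary)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c5d73e22",
   "metadata": {},
   "outputs": [],
   "source": [
    "def count_params(m):\n",
    "    return sum(p.numel() for p in m.parameters())\n",
    "\n",
    "# same input_size/hidden_size so the comparison is apples to apples\n",
    "lstm_small = nn.LSTM(input_size=5, hidden_size=6)\n",
    "gru_small = nn.GRU(input_size=5, hidden_size=6)\n",
    "count_params(lstm_small), count_params(gru_small)  # GRU/LSTM ratio is 3/4"
   ]
  },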
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "904940d5",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "torchX",
   "language": "python",
   "name": "torchx"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
