{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "# 第25章 循环神经网络"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## 习题25.1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "&emsp;&emsp;Jordan提出的循环神经网络如图25.15所示。试写出这种神经网络的公式，并与Elman提出的简单循环神经网络做比较。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "![习题25.1图](../images/25-1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答：**  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答思路：**\n",
    "\n",
    "1. 给出简单循环神经网络（S-RNN）的定义\n",
    "2. 给出Jordan提出的循环神经网络的公式\n",
    "3. 比较Jordan RNN和S-RNN"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答步骤：**  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第1步：简单循环神经网络（S-RNN）的定义**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据书中第25.1.1节的定义25.1的简单循环神经网络（即Elman提出的简单循环神经网络）的定义：\n",
    "\n",
    "> **定义25.1（简单循环神经网络）** 称以下的神经网络为简单循环神经网络。神经网络以序列数据$x_1, x_2, \\cdots, x_T$为输入，每一项是一个实数向量。在每一个位置上重复使用同一个神经网络结构。在第$t$个位置上（$t = 1, 2, \\cdots, T$），神经网络的隐层或中间层以$x_t$和$h_{t-1}$为输入，以$h_t$为输出，其间有以下关系成立：\n",
    "> $$ h_t = \\tanh (U \\cdot h_{t - 1} + W \\cdot x_t + b) \\tag{25.1} $$\n",
    "> 其中，$x_t$表示第$t$个位置上的输入，是一个实数向量$(x_{t,1}, x_{t,2}, \\cdots, x_{t,n})^T$；$h_{t - 1}$表示第$t - 1$个位置的状态，也是一个实数向量$(h_{t-1,1}, h_{t-1,2}, \\cdots, h_{t-1,m})^T$；$h_t$表示第$t$个位置的状态$(h_{t,1}, h_{t,2}, \\cdots, h_{t,m})^T$，也是一个实数向量；$U,W$是权重矩阵；$b$是偏置向量。神经网络的输出层以$h_t$为输入，$p_t$为输出，有以下关系成立：\n",
    "> $$ p_t = \\text{softmax} (V \\cdot h_t + c) \\tag{25.2} $$\n",
    "> 其中，$p_t$表示第$t$个位置上的输出，是一个概率向量$(p_{t,1}, p_{t,2}, \\cdots,p_{t,l})^T$，满足$\\displaystyle p_{t,i} \\geqslant 0 \\ (i = 1, 2, \\cdots, l), \\sum_{i = 1}^l p_{t,i} = 1$；$V$是权重矩阵；$c$是偏置向量。神经网络输出序列数据$p_1, p_2, \\cdots, p_T$，每一项是一个概率向量。  \n",
    "> &emsp;&emsp;以上公式还可以写作\n",
    "> $$ r_t = U \\cdot h_{t - 1} + W \\cdot x_t + b \\tag{25.3} $$\n",
    "> \n",
    "> $$ h_t = \\tanh(r_t) \\tag{25.4} $$\n",
    "> \n",
    "> $$ z_t = V \\cdot h_t + c \\tag{25.5} $$\n",
    "> \n",
    "> $$ p_t = \\text{softmax}(z_t) \\tag{25.6} $$\n",
    "> \n",
    "> 其中，$r_t$是隐层的净输入向量，$z_t$是输出层的净输入向量。隐层的激活函数通常是双曲正切函数，也可以是其他激活函数；输出层的激活函数通常是软最大化函数。"
   ]
  },
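  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;为便于理解式(25.1)～(25.2)，下面给出S-RNN正向传播的一个极简NumPy示意（其中维度与随机参数均为假设的演示值，并非书中内容）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def srnn_forward(xs, U, W, b, V, c):\n",
    "    # 简单循环神经网络的正向传播：每个位置重复使用同一组参数\n",
    "    h = np.zeros(U.shape[0])                   # h_0 初始化为零向量\n",
    "    ps = []\n",
    "    for x in xs:\n",
    "        h = np.tanh(U @ h + W @ x + b)         # 式(25.1)\n",
    "        z = V @ h + c\n",
    "        p = np.exp(z - z.max())\n",
    "        p = p / p.sum()                        # softmax，式(25.2)\n",
    "        ps.append(p)\n",
    "    return ps\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "n, m, l, T = 4, 3, 5, 6                        # 输入、状态、输出维度与序列长度（假设值）\n",
    "U, W, b = rng.normal(size=(m, m)), rng.normal(size=(m, n)), np.zeros(m)\n",
    "V, c = rng.normal(size=(l, m)), np.zeros(l)\n",
    "ps = srnn_forward([rng.normal(size=n) for _ in range(T)], U, W, b, V, c)\n",
    "print(len(ps), ps[0].sum())                    # 每个位置输出一个概率向量\n",
    "```\n",
    "\n",
    "每个$p_t$的各分量非负且和为1，与定义中概率向量的约束一致。"
   ]
  },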
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第2步：给出Jordan提出的循环神经网络的公式**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据图25.15的循环神经网络架构，可得循环神经网络的公式如下：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{r}_t = \\boldsymbol{U} \\cdot \\boldsymbol{p}_{t-1} + W \\cdot \\boldsymbol{x}_t + \\boldsymbol{b} \\\\\n",
    "\\boldsymbol{h}_t = \\tanh (\\boldsymbol{r}_t) \\\\\n",
    "\\boldsymbol{z}_t = V \\cdot \\boldsymbol{h}_t + \\boldsymbol{c} \\\\\n",
    "\\boldsymbol{p}_t = \\text{softmax} (\\boldsymbol{z}_{t})\n",
    "$$"
   ]
  },
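  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;Jordan RNN与S-RNN的区别可以用代码直观对比：循环连接的输入由$h_{t-1}$换成了$p_{t-1}$，此时$U$的形状为$m \\times l$（以下维度与随机参数均为假设的演示值）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def jordan_forward(xs, U, W, b, V, c):\n",
    "    # Jordan RNN：以上一位置的输出概率向量 p_{t-1} 作为循环输入\n",
    "    p = np.zeros(V.shape[0])                   # p_0 初始化为零向量\n",
    "    ps = []\n",
    "    for x in xs:\n",
    "        r = U @ p + W @ x + b                  # 与S-RNN的唯一区别：U 作用于 p_{t-1} 而非 h_{t-1}\n",
    "        h = np.tanh(r)\n",
    "        z = V @ h + c\n",
    "        p = np.exp(z - z.max())\n",
    "        p = p / p.sum()\n",
    "        ps.append(p)\n",
    "    return ps\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "n, m, l, T = 4, 3, 5, 6\n",
    "U, W, b = rng.normal(size=(m, l)), rng.normal(size=(m, n)), np.zeros(m)\n",
    "V, c = rng.normal(size=(l, m)), np.zeros(l)\n",
    "ps = jordan_forward([rng.normal(size=n) for _ in range(T)], U, W, b, V, c)\n",
    "print(len(ps), ps[-1].sum())\n",
    "```"
   ]
  },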
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第3步：比较Jordan RNN和S-RNN**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 相同点：\n",
    "    1. 两者都是描述动态系统的非线性模型\n",
    "    2. 两者都满足循环神经网络的基本特点，包括可以处理任意长度的序列数据、属于自回归模型、具有强大的表示能力\n",
    "    3. 两者在时间（位置）维度上都难以并行计算"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 不同点：\n",
    "    1. Jordan RNN将经softmax处理得到的输出层$p_{t-1}$作为下一位置隐层的输入，而S-RNN直接将隐层状态$h_{t-1}$作为下一位置隐层的输入\n",
    "    2. Jordan RNN循环传递的是概率向量$p_{t-1}$：其维度受输出类别数限制，各分量非负且和为1，经softmax压缩后损失了隐层中携带的信息；而S-RNN直接在相邻位置的隐层之间建立联系，表示能力更强，拓展性和普适性更好，因此后续的循环神经网络多以S-RNN作为基础结构。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 习题25.2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;写出循环神经网络的层归一化的公式。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答：**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答思路：**  \n",
    "\n",
    "1. 给出层归一化的基本概念\n",
    "2. 写出循环神经网络的层归一化的公式"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答步骤：**  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第1步：层归一化的基本概念**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据书中第23.2.5节的层归一化的描述：\n",
    "\n",
    "> &emsp;&emsp;层归一化（layer normalization）是另一种防止内部协变量偏移的方法。其基本想法与批量归一化相同，但是是在每一层的神经元上进行归一化，而不是在每一个批量的样本上进行归一化。优点是实现简单，也没有批量大小的超参数需要调节。  \n",
    "> &emsp;&emsp;层归一化在每一层的神经元的净输入上进行。假设当前层的神经元的净输入是$z = (z_1, z_2, \\cdots, z_m)^T$，其中$z_j$是第$j$个神经元的净输入，$m$是神经元个数。训练和预测时，首先计算这一层的神经元的净输入的均值与方差（无偏估计）。\n",
    "> $$ \\mu = \\frac{1}{m} \\sum_{j = 1}^m z_j \\tag{23.65} $$\n",
    "> $$ \\sigma^2 = \\frac{1}{m - 1} \\sum_{j = 1}^m (z_j - \\mu)^2 \\tag{23.66} $$\n",
    "> \n",
    "> 然后对每一个神经元的净输入进行归一化，得到数值：\n",
    "> \n",
    "> $$ \\bar{z}_j = \\frac{z_j - \\mu}{\\sqrt{\\sigma^2 + \\epsilon}}, \\ j= 1, 2, \\cdots, m \\tag{23.67} $$\n",
    "> \n",
    "> 其中，$\\epsilon$是一个很小的正数。之后再进行仿射变换，得到数值：\n",
    "> \n",
    "> $$ \\tilde{z}_j = \\gamma \\cdot \\bar{z}_j + \\beta, \\ j = 1, 2, \\cdots, m \\tag{23.68} $$\n",
    "> \n",
    "> 其中，$\\gamma$和$\\beta$是参数。最后将归一化加仿射变换的结果作为这一层神经元的实际净输入。在每一层都做相同的处理。神经网络的每一层都有两个参数$\\gamma$和$\\beta$。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第2步：写出循环神经网络的层归一化的公式**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;对于第$l$层循环神经网络层输入为$z^{(l)}$，其层归一化后的输出$\\tilde{z}^{(l)}$为：\n",
    "$$\n",
    "\\tilde{z}^{(l)} = \\gamma \\cdot \\frac{z^{(l)}-\\mu^{(l)}}{\\sqrt{\\sigma^{(l)}+\\epsilon}} + \\beta \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "记作$L N_{\\gamma, \\beta}\\left(z^{(l)}\\right)$。其中，$\\gamma$、$\\beta$分别是缩放和平移的参数向量，与$z^{(l)}$的维度相同；均值与方差（无偏估计）分别为$\\displaystyle \\mu^{(l)}=\\frac{1}{n^{(l)}} \\sum_{i=1}^{n^{(l)}} z_i^{(l)}$，$\\displaystyle \\sigma^{(l)} = \\frac{1}{n^{(l)} - 1} \\sum_{i=1}^{n^{(l)}} (z_i^{(l)} - \\mu^{(l)})^2$，$n^{(l)}$是该层的神经元个数。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在循环神经网络中，假设在$t$时刻，隐层为$h_t$，其归一化的公式为\n",
    "\n",
    "$$\n",
    "z_t=U h_{t-1}+W x_t + b\\\\\n",
    "h_t=f\\left(L N_{\\gamma, \\beta}\\left(z_t\\right)\\right)\n",
    "$$\n",
    "\n",
    "其中$x_t$ 表示第 $t$ 个位置上的输入，$U,W$ 为权重矩阵，$f(\\cdot)$是激活函数。"
   ]
  },
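  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;上述公式可以写成如下的NumPy示意（演示代码，维度为假设值，$\\gamma$、$\\beta$取初始值$\\boldsymbol{1}$和$0$）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def layer_norm(z, gamma, beta, eps=1e-5):\n",
    "    # 对一层神经元的净输入做层归一化：先标准化，再仿射变换\n",
    "    mu = z.mean()\n",
    "    var = z.var(ddof=1)                 # 无偏估计，对应式(23.66)\n",
    "    return gamma * (z - mu) / np.sqrt(var + eps) + beta\n",
    "\n",
    "def rnn_step_ln(h_prev, x, U, W, b, gamma, beta):\n",
    "    z = U @ h_prev + W @ x + b          # 先计算净输入 z_t\n",
    "    return np.tanh(layer_norm(z, gamma, beta))\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "m, n = 5, 3\n",
    "U, W, b = rng.normal(size=(m, m)), rng.normal(size=(m, n)), np.zeros(m)\n",
    "h = rnn_step_ln(np.zeros(m), rng.normal(size=n), U, W, b, np.ones(m), np.zeros(m))\n",
    "zn = layer_norm(rng.normal(size=m), np.ones(m), np.zeros(m))\n",
    "print(zn.mean())                        # 归一化后（仿射变换前）均值近似为0\n",
    "```"
   ]
  },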
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 习题25.3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;比较前馈神经网络的反向传播算法与循环神经网络的反向传播算法的异同。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答：**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答思路：**  \n",
    "\n",
    "1. 给出前馈神经网络的反向传播算法\n",
    "2. 给出循环神经网络的反向传播算法\n",
    "3. 比较两者的异同"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答步骤：**   "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第1步：前馈神经网络的反向传播算法**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据书中第23.2.3节的算法23.3的前馈神经网络的反向传播算法：\n",
    "\n",
    "> **算法23.3 （前馈神经网络的反向传播算法）**    \n",
    "输入：神经网络$f(\\boldsymbol{x};\\boldsymbol{\\theta})$，参数向量$\\boldsymbol{\\theta}$，一个样本$(\\boldsymbol{x}, \\boldsymbol{y})$  \n",
    "输出：更新的参数向量$\\boldsymbol{\\theta}$  \n",
    "超参数：学习率$\\eta$  \n",
    "> 1.正向传播，得到各层输出$\\boldsymbol{h}^{(1)}, \\boldsymbol{h}^{(2)}, \\cdots, \\boldsymbol{h}^{(s)}$  \n",
    "> $$ \\boldsymbol{h}^{(0)} = \\boldsymbol{x} $$\n",
    "> For $t = 1,2, \\cdots, s$，do {\n",
    "> $$ z^{(t)} = \\boldsymbol{W}^{(t)} \\boldsymbol{h}^{(t - 1)} + \\boldsymbol{b}^{(t)} \\\\\n",
    "\\boldsymbol{h}^{(t)} = a (z^{(t)}) $$\n",
    "> } \n",
    "> \n",
    "> $$ f(\\boldsymbol{x}) = \\boldsymbol{h}^{(s)} $$\n",
    "> \n",
    "> 2.反向传播，得到各层误差$\\boldsymbol{\\delta}^{(s)},\\cdots, \\boldsymbol{\\delta}^{(2)}, \\boldsymbol{\\delta}^{(1)}$，同时计算各层的梯度，更新各层的参数。  \n",
    "> 计算输出层的误差\n",
    "> $$ \\boldsymbol{\\delta}^{(s)} = \\boldsymbol{h}^{(s)} - \\boldsymbol{y} $$\n",
    "> For $t = s, \\cdots, 2, 1$，do {  \n",
    "> &emsp;&emsp;计算第$t$层的梯度\n",
    "> $$ \\nabla_{\\boldsymbol{W}^{(t)}} L = \\boldsymbol{\\delta}^{(t)} \\cdot {\\boldsymbol{h}^{(t - 1)}}^T \\\\\n",
    "\\nabla_{\\boldsymbol{b}^{(t)}} L = \\boldsymbol{\\delta}^{(t)} $$\n",
    ">  &emsp;&emsp;根据梯度下降公式更新第$t$层的参数  \n",
    "> $$ \\boldsymbol{W}^{(t)} \\leftarrow \\boldsymbol{W}^{(t)} - \\eta \\nabla_{\\boldsymbol{W}^{(t)}} L \\\\\n",
    "\\boldsymbol{b}^{(t)} \\leftarrow \\boldsymbol{b}^{(t)} - \\eta \\nabla_{\\boldsymbol{b}^{(t)}} L $$\n",
    ">  &emsp;&emsp; If ($t > 1$) {  \n",
    ">  &emsp;&emsp; &emsp;&emsp;将第$t$层的误差传到第$t - 1$层\n",
    "> $$ \\boldsymbol{\\delta}^{(t - 1)} = \\frac{\\partial a}{\\partial z^{(t - 1)}} \\odot \\left ( {\\boldsymbol{W}^{(t)}}^T \\cdot \\boldsymbol{\\delta}^{(t)} \\right ) $$  \n",
    ">  &emsp;&emsp;}  \n",
    "> }\n",
    "> 3.返回更新的参数向量"
   ]
  },
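  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;算法23.3的一次参数更新可以用一个两层网络的NumPy示意如下（网络规模与样本均为假设的演示值）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def ffn_backprop_step(x, y, params, eta=0.1):\n",
    "    # 两层前馈网络（隐层tanh，输出层softmax）的一次反向传播更新\n",
    "    W1, b1, W2, b2 = params\n",
    "    # 1. 正向传播\n",
    "    z1 = W1 @ x + b1\n",
    "    h1 = np.tanh(z1)\n",
    "    z2 = W2 @ h1 + b2\n",
    "    p = np.exp(z2 - z2.max())\n",
    "    p = p / p.sum()\n",
    "    # 2. 反向传播：输出层误差 delta = p - y\n",
    "    d2 = p - y\n",
    "    gW2, gb2 = np.outer(d2, h1), d2\n",
    "    d1 = (1 - h1 ** 2) * (W2.T @ d2)    # 将误差传回隐层\n",
    "    gW1, gb1 = np.outer(d1, x), d1\n",
    "    # 3. 梯度下降更新\n",
    "    return (W1 - eta * gW1, b1 - eta * gb1, W2 - eta * gW2, b2 - eta * gb2)\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x, y = rng.normal(size=4), np.array([0.0, 1.0, 0.0])\n",
    "params = (rng.normal(size=(5, 4)), np.zeros(5), rng.normal(size=(3, 5)), np.zeros(3))\n",
    "for _ in range(20):                     # 在单个样本上反复更新，损失应当下降\n",
    "    params = ffn_backprop_step(x, y, params)\n",
    "```"
   ]
  },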
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第2步：循环神经网络的反向传播算法**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据书中第25.1.2节的算法25.1的循环神经网络的反向传播算法：\n",
    "\n",
    "> **算法25.1（随时间的反向传播算法）**    \n",
    "> 输入：循环神经网络$f(x;\\theta)$，参数$\\theta$，样本$(x_1,x_2,...,x_T)$ 和 $(y_1,y_2,...,y_T)$。  \n",
    "> 输出：更新的参数$\\theta$  \n",
    "> 超参数：学习率 $\\eta$\n",
    "> 1.正向传播，得到各个位置的输出  \n",
    "> For $t = 1, 2, \\cdots, T$，do {  \n",
    "> &emsp;&emsp;将信号从前向后传播，计算隐层的输出$h_t$和输出层的输出$p_t$  \n",
    "> $$ r_t = U \\cdot h_{t - 1} + W \\cdot x_t + b \\\\\n",
    "h_t = \\tanh (r_t) \\\\\n",
    "z_t = V \\cdot h_t + c \\\\\n",
    "p_t = \\text{softmax}(z_t) $$\n",
    "> }\n",
    "> 2.反向传播，得到各个位置的梯度  \n",
    "> For $t = T, \\cdots, 2, 1$，do {  \n",
    "> &emsp;&emsp;计算输出层的梯度$\\displaystyle \\frac{\\partial{L}}{\\partial{z_t}}$  \n",
    "> $$ \\frac{\\partial{L}}{\\partial{z_t}} = p_t - y_t $$\n",
    "> &emsp;&emsp;将梯度从后向前传播，计算隐层的梯度$\\displaystyle \\frac{\\partial{L}}{\\partial{r_t}}$  \n",
    "> &emsp;&emsp;If ($t < T$) {  \n",
    "> $$ \\frac{\\partial{L}}{\\partial{r_t}} = \\text{diag}(\\boldsymbol{1} - \\tanh^2 r_t) \\cdot U^T \\cdot \\frac{\\partial{L}}{\\partial{r_{t+1}}} + \\text{diag}(\\boldsymbol{1} - \\tanh^2 r_t) \\cdot V^T \\cdot \\frac{\\partial{L}}{\\partial{z_t}} $$\n",
    "> &emsp;&emsp;} else {  \n",
    "> $$ \\frac{\\partial{L}}{\\partial{r_T}} = \\text{diag}(\\boldsymbol{1} - \\tanh^2 r_T) \\cdot V^T \\cdot \\frac{\\partial{L}}{\\partial{z_T}} $$\n",
    "> &emsp;&emsp;}  \n",
    "> }  \n",
    "> 3.进行参数更新  \n",
    "> 计算梯度\n",
    "> $$ \\frac{\\partial{L}}{\\partial{c}} = \\sum_{t = 1}^T \\frac{\\partial{L}}{\\partial{z_t}} \\\\\n",
    "\\frac{\\partial{L}}{\\partial{V}} = \\sum_{t = 1}^T \\frac{\\partial{L}}{\\partial{z_t}} \\cdot h_t^T \\\\\n",
    "\\frac{\\partial{L}}{\\partial{b}} = \\sum_{t = 1}^T \\frac{\\partial{L}}{\\partial{r_t}} \\\\\n",
    "\\frac{\\partial{L}}{\\partial{U}} = \\sum_{t = 1}^T \\frac{\\partial{L}}{\\partial{r_t}} \\cdot h_{t - 1}^T \\\\\n",
    "\\frac{\\partial{L}}{\\partial{W}} = \\sum_{t = 1}^T \\frac{\\partial{L}}{\\partial{r_t}} \\cdot x_t^T $$\n",
    "> 根据梯度下降公式更新参数\n",
    "> $$ c \\leftarrow c - \\eta \\frac{\\partial{L}}{\\partial{c}} \\\\\n",
    "V \\leftarrow V - \\eta \\frac{\\partial{L}}{\\partial{V}} \\\\\n",
    "b \\leftarrow b - \\eta \\frac{\\partial{L}}{\\partial{b}} \\\\\n",
    "W \\leftarrow W - \\eta \\frac{\\partial{L}}{\\partial{W}} \\\\\n",
    "U \\leftarrow U - \\eta \\frac{\\partial{L}}{\\partial{U}} $$\n",
    "> 4.返回更新的参数"
   ]
  },
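  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;算法25.1的梯度计算可以按下面的NumPy示意实现（省略参数更新步骤；维度与随机参数均为假设的演示值）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def bptt(xs, ys, U, W, b, V, c):\n",
    "    # 随时间的反向传播：正向传播后，自后向前累积各位置的梯度\n",
    "    T, m = len(xs), U.shape[0]\n",
    "    hs, rs, ps = [np.zeros(m)], [], []             # hs[t] 即 h_{t-1}，h_0 取零向量\n",
    "    for x in xs:                                   # 1. 正向传播\n",
    "        r = U @ hs[-1] + W @ x + b\n",
    "        hs.append(np.tanh(r))\n",
    "        z = V @ hs[-1] + c\n",
    "        p = np.exp(z - z.max())\n",
    "        ps.append(p / p.sum())\n",
    "        rs.append(r)\n",
    "    dzs, drs = [None] * T, [None] * T\n",
    "    for t in range(T - 1, -1, -1):                 # 2. 反向传播\n",
    "        dzs[t] = ps[t] - ys[t]                     # 输出层梯度\n",
    "        drs[t] = (1 - np.tanh(rs[t]) ** 2) * (V.T @ dzs[t])\n",
    "        if t < T - 1:                              # 加上来自下一位置隐层的梯度\n",
    "            drs[t] += (1 - np.tanh(rs[t]) ** 2) * (U.T @ drs[t + 1])\n",
    "    # 3. 参数共享：梯度对所有位置求和\n",
    "    gU = sum(np.outer(drs[t], hs[t]) for t in range(T))\n",
    "    gW = sum(np.outer(drs[t], xs[t]) for t in range(T))\n",
    "    gV = sum(np.outer(dzs[t], hs[t + 1]) for t in range(T))\n",
    "    return gU, gW, sum(drs), gV, sum(dzs)\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "xs = [rng.normal(size=3) for _ in range(5)]\n",
    "ys = [np.eye(2)[t % 2] for t in range(5)]\n",
    "U, W, b = rng.normal(size=(4, 4)) * 0.5, rng.normal(size=(4, 3)), np.zeros(4)\n",
    "V, c = rng.normal(size=(2, 4)), np.zeros(2)\n",
    "gU, gW, gb, gV, gc = bptt(xs, ys, U, W, b, V, c)\n",
    "print(gU.shape, gV.shape)\n",
    "```\n",
    "\n",
    "可以用数值微分（中心差分）检验梯度实现的正确性。"
   ]
  },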
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第3步：比较两者的异同**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 相同点：\n",
    "    1. 两者的反向传播学习的过程步骤相同，都是正向传播、反向传播、参数更新、返回更新的参数\n",
    "    2. 两者在反向传播过程中都会因矩阵连乘，导致梯度消失和梯度爆炸"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 不同点：\n",
    "    1. 在循环神经网络的反向传播中，由于各位置共享同一权重矩阵，矩阵连乘近似于同一矩阵的连续自乘，梯度消失与梯度爆炸的风险更严重；\n",
    "    2. 循环神经网络的反向传播算法需要在时间上展开，计算量更大；\n",
    "    3. 循环神经网络的反向传播算法中，每一个位置的参数共享，梯度为所有位置上梯度之和。而前馈神经网络的反向传播算法中没有参数共享；\n",
    "    4. 循环神经网络的反向传播算法中，隐层的梯度来自输出层的梯度和下一个位置的隐层梯度两个方向。而前馈神经网络的反向传播算法中，隐层梯度只来自输出层的梯度。"
   ]
  },
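  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;不同点1中的梯度消失现象可以用下面的小实验直观感受：反向传播时梯度反复左乘 $\\text{diag}(\\boldsymbol{1} - \\tanh^2 r_t) \\cdot U^T$，当该矩阵的范数小于1时，梯度范数随传播距离指数衰减（矩阵规模、缩放系数与对角元取值均为假设的模拟值）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "m = 8\n",
    "U = 0.1 * rng.normal(size=(m, m))        # 范数较小的循环权重矩阵（假设值）\n",
    "g = np.ones(m)\n",
    "norms = []\n",
    "for _ in range(50):\n",
    "    d = rng.uniform(0.1, 1.0, size=m)    # 模拟 1 - tanh^2 r_t 的取值，位于 (0, 1] 内\n",
    "    g = d * (U.T @ g)                    # 每向前传播一个位置乘一次\n",
    "    norms.append(np.linalg.norm(g))\n",
    "print(norms[0], norms[-1])               # 梯度范数指数衰减（梯度消失）\n",
    "```"
   ]
  },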
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 习题25.4"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;写出LSTM模型的反向传播算法公式。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答：**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答思路：**  \n",
    "\n",
    "1. 给出LSTM的基本概念\n",
    "2. 写出LSTM的反向传播算法公式推导\n",
    "3. 写出LSTM的反向传播算法"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答步骤：**   "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第1步：LSTM的基本概念**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据书中第25.2.1节的定义25.2的长短期记忆网络（LSTM）：  \n",
    "\n",
    "> &emsp;&emsp;**定义25.2（长短期记忆网络）** 以下的循环神经网络称为长短期记忆网络。在循环网络的每一个位置上有状态和记忆元，以及输入门、遗忘门、输出门，构成一个单元。第$t$个位置上$(t = 1, 2, \\cdots, T)$的单元是以当前位置的输入$x_t$、之前位置的记忆元$c_{t - 1}$、之前位置的状态$h_{t - 1}$为输入，以当前位置的状态$h_t$和当前位置的记忆元$c_t$为输出的函数，由以下方式计算。\n",
    "> $$ i_t = \\sigma(U_i \\cdot h_{t - 1} + W_i \\cdot x_t + b_i) \\tag{25.20} $$\n",
    "> $$ f_t = \\sigma(U_f \\cdot h_{t - 1} + W_f \\cdot x_t + b_f) \\tag{25.21} $$\n",
    "> $$ o_t = \\sigma(U_o \\cdot h_{t - 1} + W_o \\cdot x_t + b_o) \\tag{25.22} $$\n",
    "> $$ \\tilde{c}_t = \\tanh (U_c \\cdot h_{t - 1} + W_c \\cdot x_t + b_c) \\tag{25.23} $$\n",
    "> $$ c_t = i_t \\odot \\tilde{c}_t + f_t \\odot c_{t - 1} \\tag{25.24} $$\n",
    "> $$ h_t = o_t \\odot \\tanh(c_t) \\tag{25.25} $$\n",
    "> 这里$i_t$是输入门，$f_t$是遗忘门，$o_t$是输出门，$\\tilde{c}_t$是中间结果。状态$h_t$、记忆元$c_t$、输入门$i_t$、遗忘门$f_t$、输出门$o_t$都是向量，其维度相同。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![习题25.4解答图](../images/25-4.png)"
   ]
  },
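  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;式(25.20)～(25.25)的单步计算可以示意如下（维度与随机参数均为假设的演示值）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sigmoid(a):\n",
    "    return 1.0 / (1.0 + np.exp(-a))\n",
    "\n",
    "def lstm_step(x, h_prev, c_prev, params):\n",
    "    # LSTM单元的单步计算，对应式(25.20)~(25.25)\n",
    "    Ui, Wi, bi, Uf, Wf, bf, Uo, Wo, bo, Uc, Wc, bc = params\n",
    "    i = sigmoid(Ui @ h_prev + Wi @ x + bi)    # 输入门\n",
    "    f = sigmoid(Uf @ h_prev + Wf @ x + bf)    # 遗忘门\n",
    "    o = sigmoid(Uo @ h_prev + Wo @ x + bo)    # 输出门\n",
    "    g = np.tanh(Uc @ h_prev + Wc @ x + bc)    # 中间结果\n",
    "    c = i * g + f * c_prev                    # 记忆元\n",
    "    h = o * np.tanh(c)                        # 状态\n",
    "    return h, c\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "m, n = 4, 3\n",
    "params = tuple(\n",
    "    a for _ in range(4)\n",
    "    for a in (rng.normal(size=(m, m)), rng.normal(size=(m, n)), np.zeros(m))\n",
    ")\n",
    "h, c = lstm_step(rng.normal(size=n), np.zeros(m), np.zeros(m), params)\n",
    "print(h, np.abs(h).max())     # 由于 o 与 tanh(c) 均有界，|h| 的各分量都小于1\n",
    "```"
   ]
  },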
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第2步：写出LSTM的反向传播算法公式推导**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;由上述LSTM算法公式，现为了区分$\\tilde{c}_t$ 和 $c_t$，使用 $g_t$ 代替 $\\tilde{c}_t$："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "i_t = \\sigma\\left(\\tilde{i}_t\\right) = \\sigma(U_i \\cdot h_{t - 1} + W_i \\cdot x_t + b_i) \\\\\n",
    "f_t = \\sigma\\left(\\tilde{f}_t\\right) = \\sigma(U_f \\cdot h_{t - 1} + W_f \\cdot x_t + b_f) \\\\\n",
    "o_t = \\sigma\\left(\\tilde{o}_t\\right) = \\sigma(U_o \\cdot h_{t - 1} + W_o \\cdot x_t + b_o) \\\\\n",
    "g_t = \\tanh(\\tilde{g}_t) = \\tilde{c}_t = \\tanh (U_c \\cdot h_{t - 1} + W_c \\cdot x_t + b_c) \\\\\n",
    "c_t = i_t \\odot g_t + f_t \\odot c_{t - 1} \\\\\n",
    "h_t = o_t \\odot \\tanh(c_t) \\\\\n",
    "z_t = V \\cdot h_t + c \\\\\n",
    "p_t = \\text{softmax}(z_t)\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这里$i_t$是输入门，$f_t$是遗忘门，$o_t$是输出门，$g_t$是中间结果。状态$h_t$、记忆元$c_t$、输入门$i_t$、遗忘门$f_t$、输出门$o_t$都是向量，其维度相同。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;现考虑，已知$\\displaystyle \\frac{\\partial L}{\\partial z_t}, \\frac{\\partial L}{\\partial c_{t+1}}, \\frac{\\partial L}{\\partial \\tilde{o}_{t+1}}, \\frac{\\partial L}{\\partial \\tilde{f}_{t+1}}, \\frac{\\partial L}{\\partial \\tilde{i}_{t+1}}, \\frac{\\partial L}{\\partial \\tilde{g}_{t+1}}$ 求某个隐层的梯度时，首先应该找到该隐层的输出层，然后分别计算输出层的梯度乘以输出层对该隐层的梯度，最后相加即可得到该隐层的梯度。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;假设某一隐层的输出$h_t$，现计算$\\displaystyle \\frac{\\partial L}{\\partial h_t}$时，可找到该层神经元的后一层所有已连接的神经元的净输出 $ z_t, \\tilde{o}_{t+1}, \\tilde{f}_{t+1}, \\tilde{i}_{t+1}, \\tilde{g}_{t+1}$ ，然后分别计算该隐层的输出层的梯度（如$\\displaystyle \\frac{\\partial L}{\\partial z_t}$）与输出层的神经元对该隐层$h_t$的梯度的乘积（如 $\\displaystyle \\frac{\\partial L}{\\partial z_t} U^T_c $），最后相加即可得到该隐层的梯度:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial h_{t}}\n",
    "= \\frac{\\partial L}{\\partial z_t} V^T + \\frac{\\partial L}{\\partial \\tilde{o}_{t+1}} U_o^T + \\frac{\\partial L}{\\partial \\tilde{f}_{t+1}} U_f^T + \\frac{\\partial L}{\\partial \\tilde{i}_{t+1}} U_i^T + \\frac{\\partial L}{\\partial \\tilde{g}_{t+1}} U_c^T \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "根据上述计算过程，可计算各个计算公式的中间输出结果的梯度："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\begin{aligned}\n",
    "\\frac{\\partial L}{\\partial c_t}\n",
    "&= \\frac{\\partial L}{\\partial \\tanh (c_t)} \\frac{d \\tanh(c_t)}{d c_t} + \\frac{\\partial L}{\\partial c_{t+1}} \\odot f_{t+1} \\\\\n",
    "&= \\left( \\frac{\\partial L}{\\partial h_t} \\odot o_t \\right ) \\cdot \\text{diag}(\\boldsymbol{1} - \\tanh^2 c_t) + \\frac{\\partial L}{\\partial c_{t + 1}} \\odot f_{t + 1} \n",
    "\\end{aligned}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\begin{aligned}\n",
    "\\frac{\\partial L}{\\partial \\tilde{g}_t} \n",
    "&= \\frac{\\partial L}{\\partial g_t} ( 1 - g_t^2 ) \\\\\n",
    "&= (1 - g_t^2) \\cdot \\frac{\\partial L}{\\partial c_t} \\odot i_t \n",
    "\\end{aligned}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\begin{aligned}\n",
    "\\frac{\\partial L}{\\partial \\tilde{i}_t}\n",
    "&= \\frac{\\partial L}{\\partial i_t} i_t ( 1 - i_t ) \\\\\n",
    "&= i_t ( 1 - i_t ) \\cdot \\frac{\\partial L}{\\partial c_t} \\odot g_t\n",
    "\\end{aligned}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\begin{aligned}\n",
    "\\frac{\\partial L}{\\partial \\tilde{f}_t}\n",
    "&= \\frac{\\partial L}{\\partial f_t} f_t (1 - f_t) \\\\\n",
    "&= f_t (1 - f_t) \\cdot \\frac{\\partial L}{\\partial c_t} \\odot c_{t-1}\n",
    "\\end{aligned}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\begin{aligned}\n",
    "\\frac{\\partial L}{\\partial \\tilde{o}_t}\n",
    "&= \\frac{\\partial L}{\\partial o_t} o_t (1 - o_t) \\\\\n",
    "&= o_t (1 - o_t) \\cdot \\frac{\\partial L}{\\partial h_t} \\odot \\tanh (c_t)\n",
    "\\end{aligned}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial x_t} = \\frac{\\partial L}{\\partial \\tilde{o}_t} W_o^T + \\frac{\\partial L}{\\partial \\tilde{f}_t} W_f^T + \\frac{\\partial L}{\\partial \\tilde{i}_t} W_i^T + \\frac{\\partial L}{\\partial \\tilde{g}_t} W_c^T\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可求得各个参数的梯度，注意参数是在每个位置共享的，需要对所有位置求和。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial U_o} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{o}_t} \\cdot h_{t-1}^T \\\\\n",
    "\\frac{\\partial L}{\\partial U_f} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{f}_t} \\cdot h_{t-1}^T \\\\\n",
    "\\frac{\\partial L}{\\partial U_i} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{i}_t} \\cdot h_{t-1}^T \\\\\n",
    "\\frac{\\partial L}{\\partial U_c} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{g}_t} \\cdot h_{t-1}^T \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial V} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial z_t} \\cdot h_t^T \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial W_o} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{o}_t} \\cdot x_t^T \\\\\n",
    "\\frac{\\partial L}{\\partial W_f} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{f}_t} \\cdot x_t^T \\\\\n",
    "\\frac{\\partial L}{\\partial W_i} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{i}_t} \\cdot x_t^T \\\\\n",
    "\\frac{\\partial L}{\\partial W_c} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{g}_t} \\cdot x_t^T \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial b_i} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{i}_t} \\\\\n",
    "\\frac{\\partial L}{\\partial b_f} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{f}_t} \\\\\n",
    "\\frac{\\partial L}{\\partial b_o} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{o}_t} \\\\\n",
    "\\frac{\\partial L}{\\partial b_c} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{g}_t} \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第3步：写出LSTM的反向传播算法**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "输入：LSTM网络 $y=f(x;\\theta)$，参数$\\theta$，样本$(x_1, x_2, \\cdots, x_T)$ 和 $(y_1, y_2, \\cdots, y_T)$。  \n",
    "输出：更新的参数$\\theta$  \n",
    "超参数：学习率$\\eta$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. 正向传播，得到各个位置的输出"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For $t = 1, 2, \\cdots, T$ do {   \n",
    "&emsp;&emsp;将信号从前向后传播，计算隐层的输出$h_t$ 和输出层的输出$p_t$ \n",
    "$$\n",
    "i_t = \\sigma\\left(\\tilde{i}_t\\right) = \\sigma(U_i \\cdot h_{t - 1} + W_i \\cdot x_t + b_i) \\\\\n",
    "f_t = \\sigma\\left(\\tilde{f}_t\\right) = \\sigma(U_f \\cdot h_{t - 1} + W_f \\cdot x_t + b_f) \\\\\n",
    "o_t = \\sigma\\left(\\tilde{o}_t\\right) = \\sigma(U_o \\cdot h_{t - 1} + W_o \\cdot x_t + b_o) \\\\\n",
    "g_t = \\tanh(\\tilde{g}_t) = \\tilde{c}_t = \\tanh (U_c \\cdot h_{t - 1} + W_c \\cdot x_t + b_c) \\\\\n",
    "c_t = i_t \\odot g_t + f_t \\odot c_{t - 1} \\\\\n",
    "h_t = o_t \\odot \\tanh(c_t) \\\\\n",
    "z_t = V \\cdot h_t + c \\\\\n",
    "p_t = \\text{softmax}(z_t)\n",
    "$$\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. 反向传播，得到各个位置的梯度"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For $t =T, \\cdots, 2, 1$ do {  \n",
    "&emsp;&emsp;计算输出层的梯度$\\displaystyle \\frac{\\partial{L}}{\\partial{z_t}} $\n",
    "$$\n",
    "\\frac{\\partial{L}}{\\partial{z_t}} = p_t - y_t\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;将梯度从后向前传播，计算隐层的梯度$\\displaystyle \\frac{\\partial L}{\\partial \\tilde{i}_t}, \\frac{\\partial L}{\\partial \\tilde{f}_t}, \\frac{\\partial L}{\\partial \\tilde{o}_t}$  \n",
    "&emsp;&emsp;If $(t < T)$ {\n",
    "$$\n",
    "\\frac{\\partial L}{\\partial \\tilde{i}_t} = i_t ( 1 - i_t ) \\cdot \\frac{\\partial L}{\\partial c_t} \\odot g_t \\\\\n",
    "\\frac{\\partial L}{\\partial \\tilde{f}_t} = f_t (1 - f_t) \\cdot \\frac{\\partial L}{\\partial c_t} \\odot c_{t-1} \\\\\n",
    "\\frac{\\partial L}{\\partial \\tilde{o}_t} = o_t (1 - o_t) \\cdot \\frac{\\partial L}{\\partial h_t} \\odot \\tanh (c_t)\n",
    "$$\n",
    "&emsp;&emsp;&emsp;&emsp;其中\n",
    "$$\n",
    "\\frac{\\partial L}{\\partial c_t} = \\left( \\frac{\\partial L}{\\partial h_t} \\odot o_t \\right ) \\cdot \\text{diag}(\\boldsymbol{1} - \\tanh^2 c_t) + \\frac{\\partial L}{\\partial c_{t + 1}} \\odot f_{t + 1} \\\\\n",
    "\\frac{\\partial L}{\\partial h_{t}}\n",
    "= \\frac{\\partial L}{\\partial z_t} V^T + \\frac{\\partial L}{\\partial \\tilde{o}_{t+1}} U_o^T + \\frac{\\partial L}{\\partial \\tilde{f}_{t+1}} U_f^T + \\frac{\\partial L}{\\partial \\tilde{i}_{t+1}} U_i^T + \\frac{\\partial L}{\\partial \\tilde{g}_{t+1}} U_c^T \\\\\n",
    "g_t = \\tanh (U_c \\cdot h_{t - 1} + W_c \\cdot x_t + b_c)\n",
    "$$\n",
    "&emsp;&emsp;} else { "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial \\tilde{i}_T} = i_T (1 - i_T) \\cdot \\frac{\\partial L}{\\partial c_T} \\odot g_T \\\\\n",
    "\\frac{\\partial L}{\\partial \\tilde{f}_T} = f_T (1 - f_T) \\cdot \\frac{\\partial L}{\\partial c_T} \\odot c_{T-1} \\\\\n",
    "\\frac{\\partial L}{\\partial \\tilde{o}_T} = o_T (1 - o_T) \\cdot \\frac{\\partial L}{\\partial h_T} \\odot \\tanh (c_T) \n",
    "$$\n",
    "&emsp;&emsp;&emsp;&emsp;其中\n",
    "$$\n",
    "\\frac{\\partial L}{\\partial c_T} = \\left ( \\frac{\\partial L}{\\partial h_T} \\odot o_T \\right) \\cdot \\text{diag} (\\boldsymbol{1} - \\tanh^2 c_T) \\\\\n",
    "\\frac{\\partial L}{\\partial h_T} = \\frac{\\partial L}{\\partial z_T} V^T \\\\\n",
    "g_T = \\tanh (U_c \\cdot h_{T - 1} + W_c \\cdot x_T + b_c)\n",
    "$$\n",
    "&emsp;&emsp;}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3. 进行参数更新"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "计算梯度"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial U_o} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{o}_t} \\cdot h_{t-1}^T \\\\\n",
    "\\frac{\\partial L}{\\partial U_f} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{f}_t} \\cdot h_{t-1}^T \\\\\n",
    "\\frac{\\partial L}{\\partial U_i} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{i}_t} \\cdot h_{t-1}^T \\\\\n",
    "\\frac{\\partial L}{\\partial U_c} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{g}_t} \\cdot h_{t-1}^T \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial V} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial z_t} \\cdot h_t^T \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial W_o} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{o}_t} \\cdot x_t^T \\\\\n",
    "\\frac{\\partial L}{\\partial W_f} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{f}_t} \\cdot x_t^T \\\\\n",
    "\\frac{\\partial L}{\\partial W_i} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{i}_t} \\cdot x_t^T \\\\\n",
    "\\frac{\\partial L}{\\partial W_c} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{g}_t} \\cdot x_t^T \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "\\frac{\\partial L}{\\partial b_i} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{i}_t} \\\\\n",
    "\\frac{\\partial L}{\\partial b_f} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{f}_t} \\\\\n",
    "\\frac{\\partial L}{\\partial b_o} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{o}_t} \\\\\n",
    "\\frac{\\partial L}{\\partial b_c} = \\sum_{t=1}^T \\frac{\\partial L}{\\partial \\tilde{g}_t} \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "根据梯度下降公式更新参数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "U_o \\leftarrow U_o - \\eta \\frac{\\partial L}{\\partial U_o} \\\\\n",
    "U_f \\leftarrow U_f - \\eta \\frac{\\partial L}{\\partial U_f} \\\\\n",
    "U_i \\leftarrow U_i - \\eta \\frac{\\partial L}{\\partial U_i} \\\\\n",
    "U_c \\leftarrow U_c - \\eta \\frac{\\partial L}{\\partial U_c}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "V \\leftarrow V - \\eta \\frac{\\partial L}{\\partial V}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "W_o \\leftarrow W_o - \\eta \\frac{\\partial L}{\\partial W_o} \\\\\n",
    "W_f \\leftarrow W_f - \\eta \\frac{\\partial L}{\\partial W_f} \\\\\n",
    "W_i \\leftarrow W_i - \\eta \\frac{\\partial L}{\\partial W_i} \\\\\n",
    "W_c \\leftarrow W_c - \\eta \\frac{\\partial L}{\\partial W_c}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\n",
    "b_o \\leftarrow b_o - \\eta \\frac{\\partial L}{\\partial b_o} \\\\\n",
    "b_f \\leftarrow b_f - \\eta \\frac{\\partial L}{\\partial b_f} \\\\\n",
    "b_i \\leftarrow b_i - \\eta \\frac{\\partial L}{\\partial b_i} \\\\\n",
    "b_c \\leftarrow b_c - \\eta \\frac{\\partial L}{\\partial b_c}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "4. 返回更新的参数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 习题25.5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;推导LSTM模型中记忆元的展开式(25.26)。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答：**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答思路：**  \n",
    "\n",
    "1. 给出LSTM模型的记忆元表达式\n",
    "2. 使用递归法推导记忆元的展开式"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答步骤：**   "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第1步：LSTM模型的记忆元表达式**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据书中第25.2.1节的LSTM的模型特点：\n",
    "\n",
    "> &emsp;&emsp;当输入门和遗忘门满足$i_t = \\boldsymbol{1}, f_t = 0$时，当前位置的记忆元$c_t$只依赖于当前位置的输入$x_t$和之前位置的状态$h_{t - 1}$，LSTM是S-RNN的近似。当输入门和遗忘门满足$i_t = 0, f_t = \\boldsymbol{1}$时，当前位置的记忆元$c_t$只依赖于之前位置的记忆元$c_{t - 1}$，LSTM将之前位置的记忆元复制到当前位置。  \n",
    "> &emsp;&emsp;当前位置的记忆元$c_t$可以展开成以下形式：\n",
    "> $$ c_t = i_t \\odot \\tilde{c}_t + f_t \\odot c_{t - 1} = \\sum_{i = 1}^t \\left( \\prod_{j = i + 1}^t f_j \\odot i_i \\right) \\odot \\tilde{c}_i = \\sum_{i = 1}^t w_i^t \\odot \\tilde{c}_i \\tag{25.26} $$\n",
    "> 其中，$w_i^t$表示计算得到的第$t$个位置的权重。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第2步：使用递归法推导记忆元的展开式**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据递推形式，在第$t$个位置时，可得当前位置的记忆元$c_t$：\n",
    "$$\n",
    "c_t = i_t \\odot \\tilde{c}_t + f_t \\odot c_{t - 1} \\tag{1}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;在第$t-1$个位置时，可得之前位置的记忆元$c_{t - 1}$：\n",
    "$$\n",
    "c_{t-1} = i_{t-1}\\odot\\tilde{c}_{t-1} + f_{t-1} \\odot c_{t-2} \\tag{2}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;将式(2)带入式(1)中，可得：\n",
    "$$\n",
    "\\begin{aligned}\n",
    "c_t \n",
    "&= i_t \\odot \\tilde{c}_t + f_t \\odot (i_{t-1} \\odot \\tilde{c}_{t-1} + f_{t-1}\\odot c_{t-2}) \\\\\n",
    "&= i_t \\odot \\tilde{c}_t + f_t \\odot i_{t-1} \\odot \\tilde{c}_{t-1} + f_t \\odot f_{t-1} \\odot c_{t-2}\n",
    "\\end{aligned} \\tag{3}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;逐步递归，可得：\n",
    "$$\n",
    "\\begin{aligned}\n",
    "c_t \n",
    "&= i_t \\odot \\tilde{c}_t + f_t \\odot i_{t-1} \\odot \\tilde{c}_{t-1} + f_t \\odot f_{t-1} \\odot c_{t-2} \\\\\n",
    "&= i_t \\odot \\tilde{c}_t + f_t \\odot i_{t-1} \\odot \\tilde{c}_{t-1} + f_t \\odot f_{t-1} \\odot (i_{t-2} \\odot \\tilde{c}_{t-2} + f_{t-2} \\odot c_{t-3}) \\\\\n",
    "&= i_t \\odot \\tilde{c}_t + f_t \\odot i_{t-1} \\odot \\tilde{c}_{t-1} + f_t \\odot f_{t-1} \\odot i_{t-2} \\odot \\tilde{c}_{t-2} + f_t \\odot f_{t-1} \\odot f_{t-2} \\odot c_{t-3} \\\\\n",
    "&= i_t \\odot \\tilde{c}_t + f_t \\odot i_{t-1} \\odot \\tilde{c}_{t-1} + f_t \\odot f_{t-1} \\odot i_{t-2} \\odot \\tilde{c}_{t-2} + \\cdots + f_t \\odot f_{t-1} \\odot \\cdots \\odot f_3 \\odot i_2 \\odot \\tilde{c}_2 + f_t \\odot f_{t-1} \\odot \\cdots \\odot f_2 \\odot c_1 \\\\\n",
    "&= \\sum_{i=2}^{t}(\\prod_{j=i+1}^{t} f_j \\odot i_i)\\odot \\tilde{c}_i + f_t \\odot f_{t-1} \\odot \\cdots \\odot f_2 \\odot c_1\n",
    "\\end{aligned}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;在第1个位置时，设初始记忆元$c_0 = \\boldsymbol{0}$，则$c_1 = i_1 \\odot \\tilde{c}_1 + f_1 \\odot c_0 = i_1 \\odot \\tilde{c}_1$。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;所以\n",
    "$$\n",
    "\\begin{aligned}\n",
    "c_t \n",
    "&= \\sum_{i=2}^{t} \\left( \\prod_{j=i+1}^{t} f_j \\odot i_i \\right) \\odot \\tilde{c}_i + f_t \\odot f_{t-1} \\odot \\cdots \\odot f_2 \\odot c_1 \\\\\n",
    "&= \\sum_{i=2}^{t} \\left( \\prod_{j=i+1}^{t} f_j \\odot i_i \\right) \\odot \\tilde{c}_i + f_t \\odot f_{t-1} \\odot \\cdots \\odot f_2 \\odot i_1 \\odot \\tilde{c}_1 \\\\\n",
    "&= \\sum_{i=1}^{t} \\left( \\prod_{j=i+1}^{t} f_j \\odot i_i \\right) \\odot \\tilde{c}_i\n",
    "\\end{aligned}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;令第$t$个位置上作用于$\\tilde{c}_i$的权重为\n",
    "$$\n",
    "w_i^t = \\prod_{j=i+1}^{t} f_j \\odot i_i\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "则\n",
    "$$\n",
    "c_t = i_t \\odot \\tilde{c}_t + f_t \\odot c_{t - 1} = \\sum_{i = 1}^t \\left( \\prod_{j = i + 1}^t f_j \\odot i_i \\right) \\odot \\tilde{c}_i\n",
    " = \\sum_{i = 1}^t w_i^t \\odot \\tilde{c}_i\n",
    "$$"
   ]
  },
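  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;下面用一段示意代码对上述展开式做数值验证：用随机生成的门控值与候选记忆元（非书中给定数据，仅作演示），检验按递推式逐步计算得到的 $c_t$ 与按展开式一次求和得到的结果一致。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# 随机生成各位置的门控值与候选记忆元（示意数据），设 c_0 = 0\n",
    "rng = np.random.default_rng(0)\n",
    "T, d = 6, 4\n",
    "i_gate = rng.uniform(0, 1, (T + 1, d))    # i_1, ..., i_T（第0行不使用）\n",
    "f_gate = rng.uniform(0, 1, (T + 1, d))    # f_1, ..., f_T\n",
    "c_tilde = rng.uniform(-1, 1, (T + 1, d))  # c~_1, ..., c~_T\n",
    "\n",
    "# 递推式：c_t = i_t ⊙ c~_t + f_t ⊙ c_{t-1}\n",
    "c = np.zeros(d)\n",
    "for t in range(1, T + 1):\n",
    "    c = i_gate[t] * c_tilde[t] + f_gate[t] * c\n",
    "\n",
    "# 展开式：c_t = Σ_{i=1}^{t} (Π_{j=i+1}^{t} f_j ⊙ i_i) ⊙ c~_i\n",
    "c_unrolled = np.zeros(d)\n",
    "for i in range(1, T + 1):\n",
    "    w = i_gate[i].copy()\n",
    "    for j in range(i + 1, T + 1):\n",
    "        w = w * f_gate[j]\n",
    "    c_unrolled += w * c_tilde[i]\n",
    "\n",
    "print(np.allclose(c, c_unrolled))  # True"
   ]
  },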
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 习题25.6"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;写出双向LSTM-CRF的模型公式。图25.16是双向LSTM-CRF的架构图。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![习题25.6图](../images/25-6.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答：**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答思路：**  \n",
    "\n",
    "1. 给出双向循环神经网络的模型公式\n",
    "2. 给出LSTM模型公式\n",
    "3. 给出CRF模型公式\n",
    "4. 根据双向LSTM-CRF架构图，写出模型公式"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**解答步骤：**  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第1步：双向循环神经网络的模型公式**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据书中第25.2.4节的双向循环神经网络：\n",
    "\n",
    "> &emsp;&emsp;前向的循环神经网络的隐层（状态）是：\n",
    "> $$ h_t^{(1)} = \\tanh (U^{(1)} \\cdot h_{t - 1}^{(1)} + W^{(1)} \\cdot x_t + b^{(1)}) \\tag{25.35} $$\n",
    "> 后向的循环神经网络的隐层（状态）是：\n",
    "> $$ h_t^{(2)} = \\tanh (U^{(2)} \\cdot h_{t - 1}^{(2)} + W^{(2)} \\cdot x_t + b^{(2)}) \\tag{25.36} $$\n",
    "> 两者的拼接是\n",
    "> $$ h_t = [h_t^{(1)}; h_t^{(2)}] \\tag{25.37} $$\n",
    "> 其中，；表示两个向量的拼接。\n",
    "> 输出层的输出是\n",
    "> $$ p_t = \\text{softmax}(V \\cdot h_t + c) $$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第2步：LSTM模型公式**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据书中第25.2.1节的长短期记忆网络（LSTM）的模型公式：  \n",
    "\n",
    "> $$ i_t = \\sigma(U_i \\cdot h_{t - 1} + W_i \\cdot x_t + b_i) \\\\\n",
    "f_t = \\sigma(U_f \\cdot h_{t - 1} + W_f \\cdot x_t + b_f) \\\\\n",
    "o_t = \\sigma(U_o \\cdot h_{t - 1} + W_o \\cdot x_t + b_o) \\\\\n",
    "\\tilde{c}_t = \\tanh (U_c \\cdot h_{t - 1} + W_c \\cdot x_t + b_c) \\\\ \n",
    "c_t = i_t \\odot \\tilde{c}_t + f_t \\odot c_{t - 1} \\\\\n",
    "h_t = o_t \\odot \\tanh(c_t) $$\n",
    "> 这里$i_t$是输入门，$f_t$是遗忘门，$o_t$是输出门，$\\tilde{c}_t$是中间结果。状态$h_t$、记忆元$c_t$、输入门$i_t$、遗忘门$f_t$、输出门$o_t$都是向量，其维度相同。"
   ]
  },
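  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;上述公式可以逐行写成代码。下面是单步LSTM计算的示意实现（权重为随机初始化，函数名与变量名为演示而设，非书中给定）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "def lstm_step(x_t, h_prev, c_prev, p):\n",
    "    # 按上面引用的公式逐行计算一步LSTM，p为各门参数的字典\n",
    "    i_t = sigmoid(p['U_i'] @ h_prev + p['W_i'] @ x_t + p['b_i'])      # 输入门\n",
    "    f_t = sigmoid(p['U_f'] @ h_prev + p['W_f'] @ x_t + p['b_f'])      # 遗忘门\n",
    "    o_t = sigmoid(p['U_o'] @ h_prev + p['W_o'] @ x_t + p['b_o'])      # 输出门\n",
    "    c_tilde = np.tanh(p['U_c'] @ h_prev + p['W_c'] @ x_t + p['b_c'])  # 中间结果\n",
    "    c_t = i_t * c_tilde + f_t * c_prev  # 记忆元\n",
    "    h_t = o_t * np.tanh(c_t)            # 状态\n",
    "    return h_t, c_t\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "n, m = 3, 5  # 输入维度n，状态维度m\n",
    "p = {}\n",
    "for g in ['i', 'f', 'o', 'c']:\n",
    "    p['U_' + g] = rng.normal(size=(m, m))\n",
    "    p['W_' + g] = rng.normal(size=(m, n))\n",
    "    p['b_' + g] = np.zeros(m)\n",
    "h_t, c_t = lstm_step(rng.normal(size=n), np.zeros(m), np.zeros(m), p)\n",
    "print(h_t.shape, c_t.shape)  # (5,) (5,)"
   ]
  },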
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第3步：CRF模型公式**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;根据书中第11.2.3节的条件随机场（CRF）的简化形式：\n",
    "\n",
    "> 条件随机场可以写成向量$w$与$F(y,x)$的内积的形式：\n",
    "> $$ P_w(y|x) = \\frac{\\exp (w \\cdot F(y, x))}{Z_w(x)} \\tag{11.19} $$\n",
    "> 其中，\n",
    "> $$ Z_w(x) = \\sum_y \\exp (w \\cdot F(y, x)) \\tag{11.20} $$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**第4步：根据双向LSTM-CRF架构图，写出模型公式**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;双向LSTM-CRF的基本架构是在双向LSTM的输出层之上引入CRF层，是序列标注的有代表性的方法。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;前向的LSTM的隐层（状态）是\n",
    "$$\n",
    "h_t^f = \\text{LSTM}_f (x_t, h_{t-1}^f)\n",
    "$$\n",
    "后向的LSTM的隐层（状态）是\n",
    "$$\n",
    "h_t^b = \\text{LSTM}_b (x_t, h_{t+1}^b)\n",
    "$$\n",
    "其中，$\\text{LSTM}_f$ 和 $\\text{LSTM}_b$ 分别表示前向LSTM和后向LSTM，$h_{t-1}^f$ 表示前向LSTM在前一个位置的状态，$h_{t+1}^b$ 表示后向LSTM在后一个位置的状态。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "两者的拼接是\n",
    "$$\n",
    "h_t = [h_t^f; h_t^b]\n",
    "$$\n",
    "其中，分号表示两个向量的拼接。"
   ]
  },
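  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;前向、后向两个LSTM及其拼接的计算过程可以用如下示意代码表示（把四个门的参数按行堆叠在一起；权重随机初始化，仅演示数据流与维度，非书中给定实现）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "def lstm_step(x, h, c, U, W, b):\n",
    "    # U、W、b把输入门、遗忘门、输出门、中间结果的参数按行堆叠：[i; f; o; c~]\n",
    "    z = U @ h + W @ x + b\n",
    "    m = h.shape[0]\n",
    "    i_t = sigmoid(z[:m])\n",
    "    f_t = sigmoid(z[m:2 * m])\n",
    "    o_t = sigmoid(z[2 * m:3 * m])\n",
    "    c_tilde = np.tanh(z[3 * m:])\n",
    "    c_t = i_t * c_tilde + f_t * c\n",
    "    return o_t * np.tanh(c_t), c_t\n",
    "\n",
    "def run_lstm(xs, U, W, b, m):\n",
    "    # 按给定顺序扫一遍序列，返回各位置的状态\n",
    "    h, c = np.zeros(m), np.zeros(m)\n",
    "    states = []\n",
    "    for x in xs:\n",
    "        h, c = lstm_step(x, h, c, U, W, b)\n",
    "        states.append(h)\n",
    "    return states\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "T, n, m = 4, 3, 5  # 序列长度T，输入维度n，状态维度m\n",
    "xs = [rng.normal(size=n) for _ in range(T)]\n",
    "Uf, Wf, bf = rng.normal(size=(4 * m, m)), rng.normal(size=(4 * m, n)), np.zeros(4 * m)\n",
    "Ub, Wb, bb = rng.normal(size=(4 * m, m)), rng.normal(size=(4 * m, n)), np.zeros(4 * m)\n",
    "\n",
    "h_f = run_lstm(xs, Uf, Wf, bf, m)              # 前向：x_1, ..., x_T\n",
    "h_b = run_lstm(xs[::-1], Ub, Wb, bb, m)[::-1]  # 后向：x_T, ..., x_1，再反转对齐位置\n",
    "h = [np.concatenate([hf, hb]) for hf, hb in zip(h_f, h_b)]\n",
    "print(len(h), h[0].shape)  # 4 (10,)"
   ]
  },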
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;将拼接后的状态 $h_t$ 输入CRF层，由CRF层计算整个标签序列 $y = (y_1, y_2, \\cdots, y_n)$ 的条件概率 $p(y|x)$：\n",
    "\n",
    "$$\n",
    "p(y|x) = \\frac{\\exp(\\text{score}(x, y))}{\\displaystyle \\sum_{y'} \\exp(\\text{score}(x, y'))}\n",
    "$$\n",
    "\n",
    "其中，$\\text{score}(x, y)$ 表示输入序列 $x$ 对应标签序列 $y$ 的得分，由转移得分与发射得分相加得到：\n",
    "\n",
    "$$\n",
    "\\text{score}(x, y) = \\sum_{t=1}^n A_{y_t, y_{t-1}} + \\sum_{t=1}^n B_{t, y_t}\n",
    "$$\n",
    "\n",
    "其中，$A$ 是状态转移矩阵，$B$ 是发射得分矩阵（由双向LSTM各位置的输出经线性变换得到），$A_{y_t, y_{t-1}}$ 表示从标签 $y_{t-1}$ 转移到标签 $y_t$ 的得分，$B_{t, y_t}$ 表示第$t$个位置上标签为 $y_t$ 的发射得分，$y_0$ 为引入的起始标记。"
   ]
  }
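  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "&emsp;&emsp;最后用一段示意代码演示CRF层的计算：随机给定转移矩阵 $A$ 与发射矩阵 $B$（实际中 $B$ 来自双向LSTM的输出，这里仅用随机数代替），按上式计算得分，并在小规模下通过枚举全部标签序列做归一化，得到 $p(y|x)$（实际应用中归一化因子用前向算法计算）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from itertools import product\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "n, k = 4, 3  # 序列长度n，标签个数k\n",
    "y0 = 0       # 引入的起始标记（示意，取0号标签）\n",
    "A = rng.normal(size=(k, k))  # A[y_t, y_{t-1}]：从y_{t-1}转移到y_t的得分\n",
    "B = rng.normal(size=(n, k))  # B[t, y_t]：第t个位置的发射得分（实际由双向LSTM输出给出）\n",
    "\n",
    "def score(y):\n",
    "    # score(x, y) = sum_t A[y_t, y_{t-1}] + sum_t B[t, y_t]\n",
    "    s, prev = 0.0, y0\n",
    "    for t, yt in enumerate(y):\n",
    "        s += A[yt, prev] + B[t, yt]\n",
    "        prev = yt\n",
    "    return s\n",
    "\n",
    "# 小规模下可枚举全部 k^n 个标签序列求归一化因子Z\n",
    "Z = sum(np.exp(score(y)) for y in product(range(k), repeat=n))\n",
    "probs = {y: np.exp(score(y)) / Z for y in product(range(k), repeat=n)}\n",
    "print(abs(sum(probs.values()) - 1.0) < 1e-9)  # True：概率归一化\n",
    "print(max(probs, key=probs.get))              # 条件概率最大的标签序列"
   ]
  }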
 ],
 "metadata": {
  "interpreter": {
   "hash": "c50c7698ff25eb1d7dce4c52e3d5cc14b3b23d6a97d4fb70aad8c815eba6f63e"
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.5"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": false,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {
    "height": "calc(100% - 180px)",
    "left": "10px",
    "top": "150px",
    "width": "273.188px"
   },
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
