{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* Forward propagation computes and stores the intermediate variables of the computational graph defined by the neural network in order, from the input layer to the output layer\n",
    "* Backpropagation computes and stores the gradients of the network's intermediate variables and parameters in the reverse order, from the output layer to the input layer\n",
    "* When training deep learning models, forward propagation and backpropagation depend on each other\n",
    "* Training requires more memory than prediction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "# Allow duplicate OpenMP runtimes to coexist (a common MKL/libiomp workaround)\n",
    "os.environ[\"KMP_DUPLICATE_LIB_OK\"] = \"TRUE\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 4.7.1 Forward Propagation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Forward propagation computes and stores the results of each layer of a neural network in order, from the input layer to the output layer."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Suppose the input example is $x \\in \\mathbb{R}^{d}$ and the hidden layer contains no bias term. The intermediate variable is:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$z = W ^{(1)}x$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "where $W^{(1)} \\in \\mathbb{R}^{h \\times d}$ is the weight parameter of the hidden layer. Passing the intermediate variable $z \\in \\mathbb{R}^{h}$ through the activation function $\\phi$ yields the hidden-layer activation vector of length $h$:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$h = \\phi(z)$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The hidden-layer activation vector $h$ is also an intermediate variable. Assuming the only parameter of the output layer is the weight $W^{(2)} \\in \\mathbb{R}^{q \\times h}$, we obtain the output-layer variable, a vector of length $q$:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$o = W ^ {(2)}h$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let the loss function be $l$ and the example label be $y$; we can then compute the loss term for a single data example:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$L = l(o,y)$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By the definition of $L_2$ regularization, given the hyperparameter $\\lambda$, the regularization term is:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$s = \\frac{\\lambda}{2}(||W^{(1)}||_F^2 + ||W^{(2)}||_F^2)$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "where the Frobenius norm of a matrix is the $L_2$ norm obtained after flattening the matrix into a vector. Finally, the model's regularized loss on a given data example is:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$J = L + s$$"
   ]
  },
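  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The forward pass above can be sketched directly in NumPy. This is a minimal illustration rather than the text's implementation: the dimensions $d$, $h$, $q$, the ReLU activation, and the squared-error loss are assumptions chosen for concreteness."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "d, h, q = 4, 5, 3           # input, hidden, and output sizes (assumed)\n",
    "lam = 0.1                   # regularization hyperparameter lambda\n",
    "\n",
    "x = rng.standard_normal(d)          # input example\n",
    "y = rng.standard_normal(q)          # label (assumed real-valued)\n",
    "W1 = rng.standard_normal((h, d))    # hidden-layer weights W^(1)\n",
    "W2 = rng.standard_normal((q, h))    # output-layer weights W^(2)\n",
    "\n",
    "z = W1 @ x                          # intermediate variable z = W^(1) x\n",
    "h_act = np.maximum(z, 0)            # h = phi(z), with phi = ReLU (assumed)\n",
    "o = W2 @ h_act                      # output-layer variable o = W^(2) h\n",
    "L = 0.5 * np.sum((o - y) ** 2)      # loss term (assumed squared error)\n",
    "s = lam / 2 * ((W1**2).sum() + (W2**2).sum())  # L2 regularization term\n",
    "J = L + s                           # regularized objective\n",
    "print(J)"
   ]
  },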
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 4.7.3 Backpropagation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Backpropagation refers to the method for computing the gradients of neural network parameters."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The first step is to compute the gradients of the objective function $J = L + s$ with respect to the loss term $L$ and the regularization term $s$:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\frac{\\partial J}{\\partial L} = 1,\\frac{\\partial J}{\\partial s} = 1$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we apply the chain rule to compute the gradient of the objective function with respect to the output-layer variable $o$:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\frac{\\partial J}{\\partial o} = \\text{prod}\\left(\\frac{\\partial J}{\\partial L}, \\frac{\\partial L}{\\partial o}\\right) = \\frac{\\partial L}{\\partial o} \\in \\mathbb{R}^q$$"
   ]
  },
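  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a concrete choice of loss, $\\partial L / \\partial o$ has a closed form. As an assumed example (not specified in the text), with the squared-error loss $l(o, y) = \\frac{1}{2}\\|o - y\\|^2$ this gradient is simply $o - y$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "o = np.array([0.5, -1.0, 2.0])   # example output-layer variable\n",
    "y = np.array([1.0, 0.0, 1.0])    # example label\n",
    "\n",
    "# For l(o, y) = 0.5 * ||o - y||^2 (an assumed loss), dL/do = o - y,\n",
    "# a vector in R^q that seeds the rest of the backward pass\n",
    "dJ_do = o - y\n",
    "print(dJ_do)"
   ]
  },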
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we compute the gradients of the regularization term with respect to both parameters:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\frac{\\partial s}{\\partial W ^ {(1)}} = \\lambda W ^ {(1)},\\frac{\\partial s}{\\partial W ^ {(2)}} = \\lambda W ^ {(2)}$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now compute the gradient $\\partial J / \\partial W^{(2)} \\in \\mathbb{R}^{q \\times h}$ of the model parameters closest to the output layer:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\frac{\\partial J}{\\partial W^{(2)}} = \\text{prod}\\left(\\frac{\\partial J}{\\partial o}, \\frac{\\partial o}{\\partial W^{(2)}}\\right) + \\text{prod}\\left(\\frac{\\partial J}{\\partial s}, \\frac{\\partial s}{\\partial W^{(2)}}\\right) = \\frac{\\partial J}{\\partial o} h^T + \\lambda W^{(2)}$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To obtain the gradient with respect to $W^{(1)}$, we continue backpropagating from the output layer toward the hidden layer. The gradient with respect to the hidden-layer output, $\\partial J / \\partial h \\in \\mathbb{R}^{h}$, is given by:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\frac{\\partial J}{\\partial h} = \\text{prod}\\left(\\frac{\\partial J}{\\partial o}, \\frac{\\partial o}{\\partial h}\\right) = {W^{(2)}}^T\\frac{\\partial J}{\\partial o}$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the activation function $\\phi$ is applied elementwise, computing the gradient $\\partial J / \\partial z \\in \\mathbb{R}^{h}$ of the intermediate variable $z$ requires the elementwise multiplication operator, which we denote by $\\odot$:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\frac{\\partial J}{\\partial z} = \\text{prod}\\left(\\frac{\\partial J}{\\partial h}, \\frac{\\partial h}{\\partial z}\\right) = \\frac{\\partial J}{\\partial h} \\odot \\phi'(z)$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, we obtain the gradient $\\partial J / \\partial W^{(1)} \\in \\mathbb{R}^{h \\times d}$ of the model parameters closest to the input layer:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\frac{\\partial J}{\\partial W^{(1)}} = \\text{prod}\\left(\\frac{\\partial J}{\\partial z}, \\frac{\\partial z}{\\partial W^{(1)}}\\right) + \\text{prod}\\left(\\frac{\\partial J}{\\partial s}, \\frac{\\partial s}{\\partial W^{(1)}}\\right) = \\frac{\\partial J}{\\partial z} x^T + \\lambda W^{(1)}$$"
   ]
  },
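  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The chain of gradients derived above can be verified numerically. The sketch below is an illustration under the same assumptions as before (hypothetical dimensions, ReLU activation, squared-error loss): it evaluates each formula in turn and checks $\\partial J / \\partial W^{(1)}$ against a finite-difference estimate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "d, h, q = 4, 5, 3                  # assumed dimensions\n",
    "lam = 0.1                          # regularization hyperparameter lambda\n",
    "x = rng.standard_normal(d)\n",
    "y = rng.standard_normal(q)\n",
    "W1 = rng.standard_normal((h, d))\n",
    "W2 = rng.standard_normal((q, h))\n",
    "\n",
    "def forward(W1, W2):\n",
    "    z = W1 @ x\n",
    "    h_act = np.maximum(z, 0)            # phi = ReLU (assumed)\n",
    "    o = W2 @ h_act\n",
    "    L = 0.5 * np.sum((o - y) ** 2)      # assumed squared-error loss\n",
    "    s = lam / 2 * ((W1**2).sum() + (W2**2).sum())\n",
    "    return z, h_act, o, L + s\n",
    "\n",
    "z, h_act, o, J = forward(W1, W2)\n",
    "\n",
    "# Backward pass, one line per formula in the text\n",
    "dJ_do = o - y                               # dL/do for squared error\n",
    "dJ_dW2 = np.outer(dJ_do, h_act) + lam * W2  # dJ/do h^T + lambda W^(2)\n",
    "dJ_dh = W2.T @ dJ_do                        # W^(2)^T dJ/do\n",
    "dJ_dz = dJ_dh * (z > 0)                     # dJ/dh elementwise phi'(z)\n",
    "dJ_dW1 = np.outer(dJ_dz, x) + lam * W1      # dJ/dz x^T + lambda W^(1)\n",
    "\n",
    "# Finite-difference check on one entry of W^(1)\n",
    "eps = 1e-6\n",
    "W1p = W1.copy()\n",
    "W1p[0, 0] += eps\n",
    "numeric = (forward(W1p, W2)[3] - J) / eps\n",
    "print(abs(numeric - dJ_dW1[0, 0]))"
   ]
  },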
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 4.7.4 Training Neural Networks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When training neural networks, forward propagation and backpropagation depend on each other. For forward propagation, we traverse the computational graph in the direction of its dependencies and compute all the variables on its path. These are then used for backpropagation, in which the computation order is the reverse of that of the graph."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "test",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
