{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Gradient Descent Derivation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The error function is a function of the weights. Suppose the initial weight is $w_A$; the gradient of the error function is its partial derivative with respect to $w_A$. With several weights, the gradient is the vector formed by the partial derivatives with respect to each weight.  \n",
    "* On a descending stretch of the error curve, the gradient is negative. To decrease the error we must increase $w_A$, so the weight change is the negative of the gradient, i.e. positive.  \n",
    "* On an ascending stretch, the gradient is positive. To decrease the error we must decrease the weight, so the weight change is again the negative of the gradient, i.e. negative.  \n",
    "\n",
    "Start from the prediction function $ \\hat y = \\sigma(Wx+b) $, whose initial error is large. The gradient of the error function is the vector formed by its partial derivatives with respect to each weight and the bias:\n",
    "\n",
    "$$\\nabla E = (\\dfrac{\\partial E}{\\partial w_1},...,\\dfrac{\\partial E}{\\partial w_n},\\dfrac{\\partial E}{\\partial b})$$\n",
    "\n",
    "To make a small change in the direction opposite to the gradient, introduce a small learning rate $\\alpha$ and update the weights and bias:\n",
    "\n",
    "$$w_i' \\leftarrow w_i - \\alpha \\dfrac{\\partial E}{\\partial w_i}$$\n",
    "\n",
    "$$b' \\leftarrow b - \\alpha \\dfrac{\\partial E}{\\partial b}$$\n",
    "\n",
    "The new weights and bias then yield a prediction with smaller error: $ \\hat y = \\sigma(W'x+b') $\n",
    "\n",
    "Repeat this process until the error is small, or for a fixed number of iterations (epochs); the number of update steps equals the number of epochs.  "
   ]
  },
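  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The update rule above can be sketched on a toy problem. This is a minimal illustration, assuming a made-up one-dimensional error function $E(w) = (w-3)^2$ (not the cross-entropy used later), whose gradient is $\\dfrac{dE}{dw} = 2(w-3)$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: gradient descent on the toy error E(w) = (w - 3)**2.\n",
    "# The minimum is at w = 3, so w should converge toward 3.\n",
    "w = 0.0        # initial weight\n",
    "alpha = 0.1    # learning rate\n",
    "for epoch in range(100):\n",
    "    grad = 2 * (w - 3)    # dE/dw at the current weight\n",
    "    w = w - alpha * grad  # step against the gradient\n",
    "print(w)"
   ]
  },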
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Gradient Descent for Logistic Regression\n",
    "\n",
    "<img src=\"src/743561.jpg\" width=200px>\n",
    "\n",
    "Note: the weight matrix has dimensions $(n_{output}, n_{input}) = (1,n)$, and the input x is a column vector of shape $(n,1)$.\n",
    "\n",
    "The **forward propagation** equations are:\n",
    "\\begin{align}\n",
    "z &= W x+b \\\\ \n",
    "\\hat y &= \\sigma(z) = \\dfrac{1}{1+e^{-z}} \\\\\n",
    "L(y,\\hat y) &= -(y\\ln(\\hat {y}) + (1-y)\\ln(1-\\hat {y})) \\\\\n",
    "E(w,b) &= \\dfrac{1}{m} \\sum_{i=1}^m L(y^{(i)},\\hat y^{(i)}) \\\\\n",
    "\\end{align}\n",
    "\n",
    "Now, given m training samples $x^{(1)},x^{(2)},…,x^{(m)}$, the error is:\n",
    "\n",
    "$$\\hat {y^{(i)}} = \\sigma(Wx^{(i)} + b)$$\n",
    "$$E = -\\dfrac{1}{m} \\sum_{i=1}^m (y^{(i)}\\ln(\\hat {y^{(i)}}) + (1-y^{(i)})\\ln(1-\\hat {y^{(i)}}))$$\n",
    "\n",
    "Compute the gradient (the partial derivatives) of $E$ at a single sample x, where $x^{(i)}$ has n features, i.e. $x=(x_1,…,x_n)$:\n",
    "\n",
    "$$\\nabla E = (\\dfrac{\\partial E}{\\partial w_1},...,\\dfrac{\\partial E}{\\partial w_n},\\dfrac{\\partial E}{\\partial b})$$\n",
    "\n",
    "1. The derivative of $L$ with respect to $\\hat y$:\n",
    "$$\\dfrac{\\partial L}{\\partial \\hat y} = -\\dfrac{y}{\\hat y} + \\dfrac{1-y}{1-\\hat y}$$\n",
    "\n",
    "2. The derivative of $\\hat y$ with respect to $z$:\n",
    "$$\\dfrac{\\partial \\hat y}{\\partial z} = \\sigma'(z) = \\sigma(z) (1-\\sigma(z)) = \\hat y(1-\\hat y)$$\n",
    "\n",
    "3. Hence the derivative of $L$ with respect to $z$:\n",
    "$$\\dfrac{\\partial L}{\\partial z} = \\dfrac{\\partial L}{\\partial \\hat y} \\cdot \\dfrac{\\partial \\hat y}{\\partial z} = -(y - \\hat y)$$\n",
    "\n",
    "4. The derivatives of $L$ with respect to $w_j$ and $b$:\n",
    "$$\\dfrac{\\partial L}{\\partial w_j} = \\dfrac{\\partial L}{\\partial z} \\cdot \\dfrac{\\partial z}{\\partial w_j} = -(y - \\hat y)x_j\n",
    "\\longrightarrow \\dfrac{\\partial L}{\\partial W} = \\dfrac{\\partial L}{\\partial z} \\cdot \\dfrac{\\partial z}{\\partial W} = -(y - \\hat y)X^T$$\n",
    "\n",
    "$$\\dfrac{\\partial L}{\\partial b} = \\dfrac{\\partial L}{\\partial z} \\cdot \\dfrac{\\partial z}{\\partial b} = -(y - \\hat y)$$\n",
    "\n",
    "5. Finally, the derivatives of $E$ with respect to $w_j$ and $b$:\n",
    "$$\\dfrac{\\partial E}{\\partial w_j} = \\dfrac{1}{m} \\sum_{i=1}^m \\dfrac{\\partial L}{\\partial w_j} = -\\dfrac{1}{m} \\sum_{i=1}^m (y^{(i)} - \\hat {y^{(i)}}) x_j^{(i)}\n",
    "\\longrightarrow \\dfrac{\\partial E}{\\partial W} = \\dfrac{1}{m} \\sum_{i=1}^m \\dfrac{\\partial L}{\\partial W} = -\\dfrac{1}{m} \\sum_{i=1}^m (y^{(i)} - \\hat {y^{(i)}}) {X^{(i)}}^T$$\n",
    "\n",
    "$$\\dfrac{\\partial E}{\\partial b} = \\dfrac{1}{m} \\sum_{i=1}^m \\dfrac{\\partial L}{\\partial b} = -\\dfrac {1}{m} \\sum_{i=1}^m(y^{(i)} - \\hat {y^{(i)}})$$"
   ]
  },
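  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The formulas in step 5 can be sketched directly in NumPy. This is a minimal illustration with made-up random data; the shapes follow the note above (W is $(1,n)$, each sample is a column of X):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1 / (1 + np.exp(-z))\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "n, m = 3, 5                          # features, samples (made up)\n",
    "X = rng.normal(size=(n, m))          # samples as columns, shape (n, m)\n",
    "y = rng.integers(0, 2, size=(1, m))  # binary labels, shape (1, m)\n",
    "W = np.zeros((1, n))                 # weights, shape (1, n)\n",
    "b = 0.0\n",
    "alpha = 0.1\n",
    "\n",
    "y_hat = sigmoid(W @ X + b)           # predictions, shape (1, m)\n",
    "dW = -(y - y_hat) @ X.T / m          # dE/dW from step 5, shape (1, n)\n",
    "db = -np.sum(y - y_hat) / m          # dE/db from step 5\n",
    "W, b = W - alpha * dW, b - alpha * db"
   ]
  },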
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So for a point with coordinates $(x_1,…,x_n)$, label $y$, and prediction $\\hat y$, the gradient of the error function at that point is:\n",
    "\n",
    "$$ (-(y-\\hat y)x_1,...,-(y-\\hat y)x_n,-(y-\\hat y)) $$\n",
    "That is:\n",
    "$$ \\nabla E(W,b) = -(y-\\hat y) (x_1,...,x_n,1) $$\n",
    "\n",
    "The weight step points opposite to the gradient, scaled by the learning rate:\n",
    "$$\\Delta w_i = - \\alpha \\dfrac{\\partial E}{\\partial w_i}\\ \\ or\\ \\ \\Delta W = - \\alpha \\dfrac{\\partial E}{\\partial W}$$\n",
    "\n",
    "The gradient is simply a scalar times the point's coordinates! The scalar is the difference between the label and the prediction, i.e. the error. This means that when the label is close to the prediction (the point is classified correctly), the gradient is small, and when they differ greatly (the point is misclassified), the gradient is large. Remember: a small gradient changes the weights slightly, while a large gradient changes them substantially.  \n",
    "\n",
    "\n",
    "This gives the update rules for the weights and bias:  \n",
    "$$w_i = w_i + \\Delta w_i = w_i + \\alpha (y - \\hat y)x_i\\ \\ or\\ \\ W = W + \\Delta W = W + \\alpha (y - \\hat y)X^T$$\n",
    "$$b = b + \\Delta b = b + \\alpha (y - \\hat y)$$\n",
    "\n",
    "This looks very similar to the perceptron algorithm: each point adds a multiple of itself to the weights of the line, so as to pull the line closer to itself.  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Gradient Descent for Linear Regression\n",
    "\n",
    "The error function is the squared error:\n",
    "$$E = \\dfrac{1}{2} (y-\\hat y)^2$$\n",
    "\n",
    "The derivative of the error with respect to $z$:  \n",
    "$$\\dfrac{\\partial E}{\\partial z} = \\dfrac{\\partial E}{\\partial \\hat y} \\cdot \\dfrac{\\partial \\hat y}{\\partial z} = -(y-\\hat y) \\cdot \\sigma'(z)$$\n",
    "\n",
    "The derivative of the error with respect to $w_i$:\n",
    "$$\\dfrac{\\partial E}{\\partial w_i} = \\dfrac{\\partial E}{\\partial z} \\cdot \\dfrac{\\partial z}{\\partial w_i} = -(y-\\hat y) \\sigma'(z) x_i$$\n",
    "\n",
    "Let $\\delta$ denote the output node's error term:\n",
    "\n",
    "$$\\delta = -\\dfrac{\\partial E}{\\partial z} = (y-\\hat y) \\sigma'(z)$$\n",
    "\n",
    "So the weight step is:\n",
    "$$\\Delta w_i = - \\alpha \\dfrac{\\partial E}{\\partial w_i} = \\alpha (y-\\hat y) \\sigma'(z) x_i = \\alpha \\delta x_i$$\n",
    "\n",
    "And the weight update can be written as:\n",
    "$$w_i = w_i+ \\Delta w_i = w_i + \\alpha \\delta x_i$$\n",
    "\n",
    "Now suppose there is a single output unit and a single data point, and turn the above into code, using the sigmoid as the activation function f(h)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[1.]\n",
      " [2.]\n",
      " [3.]\n",
      " [4.]]\n",
      "[[ 0.5 -0.5  0.3  0.1]]\n",
      "[[ 0.47968131 -0.54063738  0.23904392  0.01872523]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Defining the sigmoid function for activations \n",
    "def sigmoid(x):\n",
    "    return 1 / (1 + np.exp(-x))\n",
    "\n",
    "# Derivative of the sigmoid function\n",
    "def sigmoid_prime(x):\n",
    "    return sigmoid(x) * (1 - sigmoid(x))\n",
    "\n",
    "# Input data, target and weights\n",
    "x = np.array([1, 2., 3, 4])\n",
    "y = np.array(0.5)\n",
    "weights = np.array([0.5, -0.5, 0.3, 0.1])\n",
    "x = x[:, None]\n",
    "print(x)\n",
    "weights = weights.reshape((1,4))\n",
    "print(weights)\n",
    "\n",
    "# The learning rate, eta in the weight step equation\n",
    "learnrate = 0.5\n",
    "\n",
    "# the node's linear combination of inputs and weights\n",
    "h = np.dot(weights, x)\n",
    "# The neural network output (y-hat)\n",
    "nn_output = sigmoid(h)\n",
    "# output error (y - y-hat)\n",
    "error = y - nn_output\n",
    "\n",
    "# output gradient (f'(h))\n",
    "output_grad = sigmoid_prime(h)\n",
    "# error term (lowercase delta)\n",
    "error_term = error * output_grad\n",
    "error_term = error * nn_output * (1 - nn_output) # equivalent, but avoids recomputing the sigmoid\n",
    "\n",
    "# Gradient descent step\n",
    "del_w = learnrate * error_term * x.T\n",
    "weights = weights + del_w\n",
    "print(weights)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Backpropagation in a Neural Network with a Hidden Layer\n",
    "\n",
    "How does a multi-layer neural network learn? We already know how to update weights with gradient descent; backpropagation is an extension of it. Taking a two-layer network as an example, apply the chain rule to compute the error for the input-to-hidden weights.\n",
    "\n",
    "To update the hidden-layer weights with gradient descent, we need to know how much each hidden node contributes to the final output error. The output of each layer is determined by the weights between layers, so the error produced between two layers is scaled by those weights as it propagates forward through the network. Since the output error is known, the weights can be used to propagate it backwards to the hidden layer.\n",
    "\n",
    "The output node's error term is already known, and gradient descent uses it to train the hidden-to-output weights. To train the input-to-hidden weights, we need the error terms of the hidden units. Applying the chain rule shows that a hidden layer's error term is proportional to the output layer's error term, with the proportionality determined by the weights between the two layers.\n",
    "\n",
    "<img src=\"src/6f1e18.jpg\" width=300px>\n",
    "\n",
    "<img src=\"src/twolayers.png\" width=800px>\n",
    "\n",
    "Note: $\\hat y = a^{[2]}$; the input X is a column vector of shape $(n^{[0]},1)$; the layer sizes are $n^{[0]},n^{[1]},n^{[2]}=1$; the weight matrices have dimensions $W^{[1]}=(n^{[1]},n^{[0]})$ and $W^{[2]}=(n^{[2]},n^{[1]})$; the bias vectors have dimensions $b^{[1]}=(n^{[1]},1)$ and $b^{[2]}=(n^{[2]},1)$. The error function is the log loss.\n",
    "\n",
    "1. The derivative of $L$ with respect to $a^{[2]}$:\n",
    "$$\\dfrac{\\partial L}{\\partial a^{[2]}} = -\\dfrac{y}{a^{[2]}} + \\dfrac{1-y}{1-a^{[2]}}$$\n",
    "\n",
    "2. The derivative of $a^{[2]}$ with respect to $z^{[2]}$ ($a^{[2]}$ and $z^{[2]}$ have the same shape, $(n^{[2]},1)$):\n",
    "\n",
    "$$\\dfrac{\\partial a^{[2]}}{\\partial z^{[2]}} = \\sigma'(z^{[2]}) = \\sigma(z^{[2]}) (1-\\sigma(z^{[2]})) = a^{[2]}(1-a^{[2]})$$\n",
    "\n",
    "3. Hence the derivative of $L$ with respect to $z^{[2]}$:\n",
    "$$\\dfrac{\\partial L}{\\partial z^{[2]}} = \\dfrac{\\partial L}{\\partial a^{[2]}} \\cdot \\dfrac{\\partial a^{[2]}}{\\partial z^{[2]}} = -(y - a^{[2]})$$\n",
    "\n",
    "4. The derivatives of $L$ with respect to $W^{[2]}$ and $b^{[2]}$ ($a^{[1]}$, the input to the output layer, has shape $(n^{[1]},1)$):\n",
    "\n",
    "$$\\dfrac{\\partial L}{\\partial W^{[2]}} = \\dfrac{\\partial L}{\\partial z^{[2]}} \\cdot \\dfrac{\\partial z^{[2]}}{\\partial W^{[2]}} = \\dfrac{\\partial L}{\\partial z^{[2]}} {a^{[1]}}^T \\ \\ \\text{(shape: } (n^{[2]},n^{[1]})\\text{)}$$\n",
    "\n",
    "$$\\dfrac{\\partial L}{\\partial b^{[2]}} = \\dfrac{\\partial L}{\\partial z^{[2]}} \\cdot \\dfrac{\\partial z^{[2]}}{\\partial b^{[2]}} = \\dfrac{\\partial L}{\\partial z^{[2]}} \\ \\ \\text{(shape: } (n^{[2]},1)\\text{)}$$\n",
    "\n",
    "5. The derivative of $L$ with respect to $a^{[1]}$:\n",
    "$$\\dfrac{\\partial L}{\\partial a^{[1]}} = \\dfrac{\\partial L}{\\partial z^{[2]}} \\cdot \\dfrac{\\partial z^{[2]}}{\\partial a^{[1]}} = {W^{[2]}}^T\\dfrac{\\partial L}{\\partial z^{[2]}} \\ \\ \\text{(shape: } (n^{[1]},1)\\text{)}$$\n",
    "\n",
    "6. The derivative of $L$ with respect to $z^{[1]}$ ($z^{[1]}$ has shape $(n^{[1]},1)$):\n",
    "\n",
    "$$\\dfrac{\\partial L}{\\partial z^{[1]}} = \\dfrac{\\partial L}{\\partial a^{[1]}} \\cdot \\dfrac{\\partial a^{[1]}}{\\partial z^{[1]}} = {W^{[2]}}^T\\dfrac{\\partial L}{\\partial z^{[2]}} \\sigma'(z^{[1]}) = {W^{[2]}}^T\\dfrac{\\partial L}{\\partial z^{[2]}} \\cdot a^{[1]}(1-a^{[1]})$$\n",
    "\n",
    "7. The derivatives of $L$ with respect to $W^{[1]}$ and $b^{[1]}$:\n",
    "\n",
    "$$\\dfrac{\\partial L}{\\partial W^{[1]}} = \\dfrac{\\partial L}{\\partial z^{[1]}} \\cdot \\dfrac{\\partial z^{[1]}}{\\partial W^{[1]}} = \\dfrac{\\partial L}{\\partial z^{[1]}} X^T$$\n",
    "\n",
    "$$\\dfrac{\\partial L}{\\partial b^{[1]}} = \\dfrac{\\partial L}{\\partial z^{[1]}} \\cdot \\dfrac{\\partial z^{[1]}}{\\partial b^{[1]}} = \\dfrac{\\partial L}{\\partial z^{[1]}}$$\n",
    "\n",
    "In practice only the six formulas in steps 3, 4, 6 and 7 need to be computed."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Introduce the error terms:\n",
    "$$\\delta^{[2]} = -\\dfrac{\\partial L}{\\partial z^{[2]}} = y - a^{[2]}$$\n",
    "$$\\delta^{[1]} = -\\dfrac{\\partial L}{\\partial z^{[1]}} = {W^{[2]}}^T \\delta^{[2]} \\sigma'(z^{[1]})$$\n",
    "\n",
    "Update the weights and biases:\n",
    "$$W^{[2]} = W^{[2]} + \\Delta W^{[2]} = W^{[2]} + \\alpha\\ \\delta^{[2]}\\ {a^{[1]}}^T$$\n",
    "$$W^{[1]} = W^{[1]} + \\Delta W^{[1]} = W^{[1]} + \\alpha\\ \\delta^{[1]}\\ X^T$$\n",
    "$$b^{[2]} = b^{[2]} + \\alpha\\ \\delta^{[2]}$$\n",
    "$$b^{[1]} = b^{[1]} + \\alpha\\ \\delta^{[1]}$$\n",
    "\n",
    "\n",
    "Forward propagation multiplies the input values by the weights; backpropagation multiplies the error terms by the weights, i.e. in backpropagation the error term plays the role of the input. Even as more layers are added, the error terms are simply passed back layer by layer.  \n",
    "<img src=\"src/58841.jpg\" width=300px>\n",
    "\n",
    "### Vectorized Implementation\n",
    "Vectorization processes multiple samples at once. The dimensions of the weights W and biases b stay the same, but those of Z, A and X change. Suppose the training set has m samples:  \n",
    "$X$ has shape $(n^{[0]},m)$  \n",
    "$A^{[1]}$ and $Z^{[1]}$ have shape $(n^{[1]},m)$"
   ]
  },
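  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The error-term equations and updates above can be sketched for a single sample. This is a minimal illustration with made-up layer sizes $n^{[0]}=2$, $n^{[1]}=3$, $n^{[2]}=1$ and random initial weights:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1 / (1 + np.exp(-z))\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "n0, n1, n2 = 2, 3, 1                 # made-up layer sizes\n",
    "X = rng.normal(size=(n0, 1))         # one input sample (column vector)\n",
    "y = np.array([[1.0]])                # target\n",
    "W1 = rng.normal(size=(n1, n0)); b1 = np.zeros((n1, 1))\n",
    "W2 = rng.normal(size=(n2, n1)); b2 = np.zeros((n2, 1))\n",
    "alpha = 0.5\n",
    "\n",
    "# Forward pass\n",
    "z1 = W1 @ X + b1; a1 = sigmoid(z1)\n",
    "z2 = W2 @ a1 + b2; a2 = sigmoid(z2)\n",
    "\n",
    "# Error terms (backward pass)\n",
    "delta2 = y - a2                           # output error term\n",
    "delta1 = (W2.T @ delta2) * a1 * (1 - a1)  # hidden error term\n",
    "\n",
    "# Updates\n",
    "W2 = W2 + alpha * delta2 @ a1.T\n",
    "b2 = b2 + alpha * delta2\n",
    "W1 = W1 + alpha * delta1 @ X.T\n",
    "b1 = b1 + alpha * delta1"
   ]
  },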
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Worked Example\n",
    "Take a simple two-layer neural network and compute one weight update. The network has two input values, one hidden node and one output node, with sigmoid activations in both the hidden and output layers, as shown below. (Note: the input layer is not counted, which is why this architecture is called a two-layer network.)\n",
    "\n",
    "<img src=\"src/10_30_57.png\" width=120px>\n",
    "\n",
    "Suppose we are training on some binary data with target y=1. Starting with the forward pass, first compute the input to the hidden node:\n",
    "\n",
    "$$h=\\sum_i(w_i x_i)=0.1×0.4−0.2×0.3=−0.02$$\n",
    "\n",
    "and the hidden node's output:  \n",
    "$$a=f(h)=sigmoid(−0.02)=0.495$$\n",
    "\n",
    "This becomes the input to the output node, so the network's output is:\n",
    "\n",
    "$$\\hat y=f(W⋅a)=sigmoid(0.1×0.495)=0.512$$\n",
    "\n",
    "With the network's output, backpropagation can now update the weights of each layer. The derivative of the sigmoid is:\n",
    "\n",
    "$$f'(W \\cdot a) = f(W \\cdot a) (1 - f(W \\cdot a))$$\n",
    "\n",
    "The output node's error term is:\n",
    "\n",
    "$$\\delta^o = (y - \\hat y) \\cdot f'(W \\cdot a) = (1 - 0.512) \\times 0.512 \\times (1 - 0.512) = 0.122$$\n",
    "\n",
    "Now backpropagate to find the hidden node's error term by multiplying the output error term by the hidden-to-output weight W. In general the hidden error term is $\\delta^h_j = \\sum_k(W_{jk} \\delta^o_k f'(h_j))$, but with only one hidden node this simplifies:\n",
    "\n",
    "$$\\delta^h = W \\delta^o f'(h) = 0.1×0.122×0.495×(1−0.495)=0.003$$\n",
    "\n",
    "With the error terms, the gradient descent steps follow. The hidden-to-output weight step is the learning rate times the output error term times the hidden node's activation:\n",
    "\n",
    "$$\\Delta W = \\eta \\delta^o a = 0.5×0.122×0.495=0.0302$$\n",
    "\n",
    "Then the input-to-hidden weight steps are the learning rate times the hidden error term times the input values:\n",
    "\n",
    "$$\\Delta w_i = \\eta \\delta^h x_i = (0.5×0.003×0.1,\\ 0.5×0.003×0.3)=(0.00015,\\ 0.00045)$$\n",
    "\n",
    "This example shows one downside of using the sigmoid as the activation function. The maximum of the sigmoid's derivative is 0.25, so the error at the output layer is reduced by at least 75%, and the error at the hidden layer by at least 93.75%! In a network with many layers, sigmoid activations quickly shrink the weight steps near the input layer to tiny values; this is known as the **vanishing gradient** problem. Later lessons introduce other activation functions that perform better in this respect and are widely used in modern neural networks."
   ]
  },
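  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The hand calculation above can be re-checked numerically. The inputs (0.1, 0.3), the input-to-hidden weights (0.4, −0.2), the hidden-to-output weight 0.1 and η = 0.5 are read from the figure:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1 / (1 + np.exp(-z))\n",
    "\n",
    "x = np.array([0.1, 0.3])      # inputs (from the figure)\n",
    "w_in = np.array([0.4, -0.2])  # input-to-hidden weights\n",
    "W_out = 0.1                   # hidden-to-output weight\n",
    "y, eta = 1, 0.5\n",
    "\n",
    "h = np.dot(w_in, x)                          # -0.02\n",
    "a = sigmoid(h)                               # ~0.495\n",
    "y_hat = sigmoid(W_out * a)                   # ~0.512\n",
    "delta_o = (y - y_hat) * y_hat * (1 - y_hat)  # ~0.122\n",
    "delta_h = W_out * delta_o * a * (1 - a)      # ~0.003\n",
    "dW = eta * delta_o * a                       # ~0.0302\n",
    "dw = eta * delta_h * x                       # ~(0.00015, 0.00045)\n",
    "print(h, a, y_hat, delta_o, delta_h, dW, dw)"
   ]
  },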
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Understanding the Chain Rule\n",
    "\n",
    "In the neural network below,\n",
    "<img src='src/20180728160204.png' width=350>\n",
    "\n",
    "the gradient of the weight matrix $W_1$ is:\n",
    "\n",
    "$$\\dfrac{\\partial y}{\\partial W_1} = \\dfrac{\\partial y}{\\partial h_3} \\dfrac{\\partial h_3}{\\partial h_1} \\dfrac{\\partial h_1}{\\partial W_1} + \\dfrac{\\partial y}{\\partial h_4} \\dfrac{\\partial h_4}{\\partial h_1} \\dfrac{\\partial h_1}{\\partial W_1}$$"
   ]
  },
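  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two-path sum in this formula can be checked numerically on a hypothetical scalar stand-in for the diagram (the functions $h_3=\\sin(h_1)$ and $h_4=h_1^2$ are made up for illustration only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical scalar stand-in: h1 = W1 * x feeds two paths, h3 = sin(h1)\n",
    "# and h4 = h1**2, which combine as y = h3 + h4. The two-path chain rule\n",
    "# then gives dy/dW1 = (cos(h1) + 2*h1) * x.\n",
    "x, W1 = 0.7, 1.3\n",
    "h1 = W1 * x\n",
    "analytic = (np.cos(h1) + 2 * h1) * x\n",
    "\n",
    "# Central finite difference as an independent check of the chain rule\n",
    "def forward(W):\n",
    "    h = W * x\n",
    "    return np.sin(h) + h ** 2\n",
    "\n",
    "eps = 1e-6\n",
    "numeric = (forward(W1 + eps) - forward(W1 - eps)) / (2 * eps)\n",
    "print(analytic, numeric)"
   ]
  },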
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
