{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Before We Begin\n",
    "Before we formally start, let's outline what we are going to do. We need to split the dataset and optimize the gradient descent algorithm, so we will do the following:\n",
    "  1. Split the dataset into mini-batches\n",
    "  2. Optimize gradient descent:\n",
    "     2.1 Plain gradient descent, with no optimization\n",
    "     2.2 Mini-batch gradient descent\n",
    "     2.3 Gradient descent with momentum\n",
    "     2.4 The Adam algorithm\n",
    "\n",
    "So far we have always trained with plain gradient descent. In this notebook we will use some more advanced optimization algorithms, which usually speed up convergence and can even reach a better final value of the cost function. For the same quality of result, a good optimization algorithm can be the difference between waiting days and waiting hours.\n",
    "Picture the cost function $J$: minimizing the cost is like finding the lowest point in a hilly landscape. At each training step the parameters are updated in some direction, trying to get as close as possible to that lowest point. It is like finding the fastest way down the hill, as shown below:\n",
    "\n",
    "![Descending the hill](https://img-blog.csdn.net/20180412093927807?watermark/2/text/aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3UwMTM3MzMzMjY=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import the Libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "D:\\deeplearning_ai_programe_homework\\第2课 - 第2周 - 优化算法\\opt_utils.py:76: SyntaxWarning: assertion is always true, perhaps remove parentheses?\n",
      "  assert(parameters['W' + str(l)].shape == layer_dims[l], layer_dims[l-1])\n",
      "D:\\deeplearning_ai_programe_homework\\第2课 - 第2周 - 优化算法\\opt_utils.py:77: SyntaxWarning: assertion is always true, perhaps remove parentheses?\n",
      "  assert(parameters['W' + str(l)].shape == layer_dims[l], 1)\n"
     ]
    }
   ],
   "source": [
    "# -*- coding: utf-8 -*-\n",
    "\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import scipy.io\n",
    "import math\n",
    "import sklearn\n",
    "import sklearn.datasets\n",
    "\n",
    "import opt_utils #see the accompanying data package, or copy it from the bottom of the original post\n",
    "import testCase  #see the accompanying data package, or copy it from the bottom of the original post\n",
    "\n",
    "#%matplotlib inline #uncomment this line if you are running in a Jupyter Notebook\n",
    "plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots\n",
    "plt.rcParams['image.interpolation'] = 'nearest'\n",
    "plt.rcParams['image.cmap'] = 'gray'\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Gradient Descent\n",
    "The simplest case: plain gradient descent (GD), with no optimization.\n",
    "Parameter update rule:\n",
    "$$ W^{[l]} = W^{[l]} - \\alpha \\, dW^{[l]} $$\n",
    "$$ b^{[l]} = b^{[l]} - \\alpha \\, db^{[l]} $$\n",
    "\n",
    "> $l$: the current layer\n",
    "\n",
    "> $\\alpha$: the learning rate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def update_parameters_with_gd(parameters,grads,learning_rate):\n",
    "    \"\"\"\n",
    "    Update parameters with one step of gradient descent\n",
    "    \n",
    "    Arguments:\n",
    "        parameters - dictionary containing the parameters to update:\n",
    "            parameters['W' + str(l)] = Wl\n",
    "            parameters['b' + str(l)] = bl\n",
    "        grads - dictionary containing the gradients used to update the parameters:\n",
    "            grads['dW' + str(l)] = dWl\n",
    "            grads['db' + str(l)] = dbl\n",
    "        learning_rate - the learning rate\n",
    "        \n",
    "    Returns:\n",
    "        parameters - dictionary containing the updated parameters\n",
    "    \"\"\"\n",
    "    \n",
    "    L = len(parameters) // 2 #number of layers in the neural network\n",
    "    \n",
    "    #update each parameter\n",
    "    for l in range(L):\n",
    "        parameters[\"W\" + str(l + 1)] = parameters[\"W\" + str(l + 1)] - learning_rate * grads[\"dW\" + str(l + 1)]\n",
    "        parameters[\"b\" + str(l + 1)] = parameters[\"b\" + str(l + 1)] - learning_rate * grads[\"db\" + str(l + 1)]\n",
    "    \n",
    "    return parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Test code"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#test update_parameters_with_gd\n",
    "print(\"-------------Testing update_parameters_with_gd-------------\")\n",
    "parameters , grads , learning_rate = testCase.update_parameters_with_gd_test_case()\n",
    "parameters = update_parameters_with_gd(parameters,grads,learning_rate)\n",
    "print(\"W1 = \" + str(parameters[\"W1\"]))\n",
    "print(\"b1 = \" + str(parameters[\"b1\"]))\n",
    "print(\"W2 = \" + str(parameters[\"W2\"]))\n",
    "print(\"b2 = \" + str(parameters[\"b2\"]))"
   ]
  },
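  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an extra sanity check (a minimal standalone sketch; the weight and gradient values below are made up for illustration), one gradient-descent step on a single weight matrix can be verified directly with NumPy:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "W = np.array([[1.0, 2.0], [3.0, 4.0]])    #made-up weights\n",
    "dW = np.array([[0.5, -0.5], [1.0, 0.0]])  #made-up gradients\n",
    "learning_rate = 0.1\n",
    "\n",
    "W_new = W - learning_rate * dW  #the update rule W = W - alpha * dW\n",
    "assert np.allclose(W_new, np.array([[0.95, 2.05], [2.9, 4.0]]))\n",
    "print(W_new)"
   ]
  },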
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Mini-batch Gradient Descent\n",
    "Building mini-batches takes two steps:\n",
    "  1. Shuffle the training set, keeping X and Y in sync: after shuffling, the i-th column of X still corresponds to the i-th label in Y. The shuffling step ensures that samples are assigned to the mini-batches at random. In the figure below, each column of X and Y represents one sample.\n",
    "  ![1](https://img-blog.csdn.net/20180412102421401?watermark/2/text/aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3UwMTM3MzMzMjY=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)\n",
    "  2. Partition: once the training set is shuffled, we can slice it into mini-batches. Here the mini-batch size is 64, as shown below.\n",
    "  ![2](https://img-blog.csdn.net/20180412102559746?watermark/2/text/aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3UwMTM3MzMzMjY=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Mini-batch\n",
    "If the dataset is too large to fit in memory at once, or a single update over the whole set takes too long, we split the dataset and train on one part at a time. Once every mini-batch has been processed, the whole dataset has been seen once; that is called one epoch."
   ]
  },
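  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the partition sizes concrete (a small standalone sketch; the numbers m = 148 and mini_batch_size = 64 are made up for illustration), we can count the mini-batches in one epoch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "m = 148               #made-up training set size\n",
    "mini_batch_size = 64\n",
    "\n",
    "num_complete = math.floor(m / mini_batch_size)  #full batches of 64\n",
    "remainder = m % mini_batch_size                 #samples left over for the final, smaller batch\n",
    "total = math.ceil(m / mini_batch_size)          #batches per epoch, counting the partial one\n",
    "\n",
    "assert (num_complete, remainder, total) == (2, 20, 3)\n",
    "print(num_complete, remainder, total)"
   ]
  },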
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def random_mini_batches(X,Y,mini_batch_size=64,seed=0):\n",
    "    \"\"\"\n",
    "    Create a list of random mini-batches from (X, Y)\n",
    "    \n",
    "    Arguments:\n",
    "        X - input data, of shape (number of input features, number of samples)\n",
    "        Y - labels for X, 1 | 0 (blue | red), of shape (1, number of samples)\n",
    "        mini_batch_size - number of samples in each mini-batch\n",
    "        seed - random seed, to make the shuffling reproducible\n",
    "        \n",
    "    Returns:\n",
    "        mini_batches - a list of tuples (mini_batch_X, mini_batch_Y)\n",
    "        \n",
    "    \"\"\"\n",
    "    \n",
    "    np.random.seed(seed) #fix the random seed\n",
    "    m = X.shape[1]\n",
    "    mini_batches = []\n",
    "    \n",
    "    #Step 1: shuffle\n",
    "    permutation = list(np.random.permutation(m)) #a random permutation of the integers 0 to m-1, of length m\n",
    "    shuffled_X = X[:,permutation]   #reorder the columns according to permutation\n",
    "    shuffled_Y = Y[:,permutation].reshape((1,m))\n",
    "    \n",
    "    \"\"\"\n",
    "    #Author's note:\n",
    "    #If this is hard to follow, the sketch below shows how X and Y are shuffled according to permutation.\n",
    "    x = np.array([[1,2,3,4,5,6,7,8,9],\n",
    "                  [9,8,7,6,5,4,3,2,1]])\n",
    "    y = np.array([[1,0,1,0,1,0,1,0,1]])\n",
    "    \n",
    "    random_mini_batches(x,y)\n",
    "    permutation= [7, 2, 1, 4, 8, 6, 3, 0, 5]\n",
    "    shuffled_X= [[8 3 2 5 9 7 4 1 6]\n",
    "                 [2 7 8 5 1 3 6 9 4]]\n",
    "    shuffled_Y= [[0 1 0 1 1 1 0 1 0]]\n",
    "    \"\"\"\n",
    "    \n",
    "    #Step 2: partition\n",
    "    num_complete_minibatches = math.floor(m / mini_batch_size) #number of complete mini-batches; note that math.floor(99.99) is 99 -- the leftover 0.99 is handled separately below\n",
    "    for k in range(0,num_complete_minibatches):\n",
    "        mini_batch_X = shuffled_X[:,k * mini_batch_size:(k+1)*mini_batch_size]\n",
    "        mini_batch_Y = shuffled_Y[:,k * mini_batch_size:(k+1)*mini_batch_size]\n",
    "        \"\"\"\n",
    "        #Author's note:\n",
    "        #If the slicing is hard to follow, run the code below on its own; it should help.\n",
    "        a = np.array([[1,2,3,4,5,6,7,8,9],\n",
    "                      [9,8,7,6,5,4,3,2,1],\n",
    "                      [1,2,3,4,5,6,7,8,9]])\n",
    "        k=1\n",
    "        mini_batch_size=3\n",
    "        print(a[:,1*3:(1+1)*3]) #columns 4 through 6\n",
    "        '''\n",
    "        [[4 5 6]\n",
    "         [6 5 4]\n",
    "         [4 5 6]]\n",
    "        '''\n",
    "        k=2\n",
    "        print(a[:,2*3:(2+1)*3]) #columns 7 through 9\n",
    "        '''\n",
    "        [[7 8 9]\n",
    "         [3 2 1]\n",
    "         [7 8 9]]\n",
    "        '''\n",
    "\n",
    "        #Looking at the columns of each slice should make it clearer\n",
    "        \"\"\"\n",
    "        mini_batch = (mini_batch_X,mini_batch_Y)\n",
    "        mini_batches.append(mini_batch)\n",
    "    \n",
    "    # If the training set size is an exact multiple of mini_batch_size, we are done here.\n",
    "    # Otherwise some samples are left over, and we handle them now\n",
    "    # (simply discarding them would also be a defensible choice).\n",
    "    if m % mini_batch_size != 0:\n",
    "        #grab the remaining samples\n",
    "        mini_batch_X = shuffled_X[:,mini_batch_size * num_complete_minibatches:]\n",
    "        mini_batch_Y = shuffled_Y[:,mini_batch_size * num_complete_minibatches:]\n",
    "        \n",
    "        mini_batch = (mini_batch_X,mini_batch_Y)\n",
    "        mini_batches.append(mini_batch)\n",
    "        \n",
    "    return mini_batches\n"
   ]
  },
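  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shuffle-then-partition logic can also be illustrated with a compact standalone sketch (the data dimensions and batch size below are made up; the slicing mirrors `random_mini_batches`, relying on NumPy clipping an out-of-range slice end to the array boundary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)\n",
    "m, n_x, size = 20, 3, 8  #made-up: 20 samples of 3 features, batch size 8\n",
    "X = np.random.randn(n_x, m)\n",
    "Y = (np.random.rand(1, m) > 0.5).astype(int)\n",
    "\n",
    "#Step 1: shuffle the columns, keeping X and Y in sync\n",
    "perm = list(np.random.permutation(m))\n",
    "sx, sy = X[:, perm], Y[:, perm].reshape(1, m)\n",
    "\n",
    "#Step 2: partition; the last slice is clipped to the remainder automatically\n",
    "batches = [(sx[:, k*size:(k+1)*size], sy[:, k*size:(k+1)*size])\n",
    "           for k in range(math.ceil(m / size))]\n",
    "\n",
    "sizes = [bx.shape[1] for bx, _ in batches]\n",
    "assert sizes == [8, 8, 4]  #two full batches plus the 4-sample remainder\n",
    "#every sample appears exactly once across the mini-batches\n",
    "assert np.allclose(np.sort(np.concatenate([bx for bx, _ in batches], axis=1), axis=1), np.sort(X, axis=1))\n",
    "print(sizes)"
   ]
  },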
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Gradient Descent with Momentum\n",
    "Because mini-batch gradient descent computes each update after seeing only a subset of the samples, the update direction varies from step to step, so the optimization path \"oscillates\" on its way to convergence. Using momentum reduces these oscillations: momentum takes past gradients into account to smooth the updates. We store the direction of the previous gradients in a variable $v$; formally, $v$ is an exponentially weighted average of the previous gradients. You can also think of $v$ as the velocity of a ball rolling downhill, building up momentum according to the slope of the hill. Consider the figure below:\n",
    "   ![Gradient descent with momentum](https://img-blog.csdn.net/20180412104630539?watermark/2/text/aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3UwMTM3MzMzMjY=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)\n",
    "   **The red arrows show the direction taken by one step of mini-batch gradient descent with momentum.\n",
    "The blue dots show the gradient direction at each step (with respect to the current mini-batch).\n",
    "Rather than just following the gradient, we let $v$ influence the gradient and then take a step in the direction of $v$, keeping each step pointed toward the minimum as much as possible.**\n",
    "\n",
    ">In the lectures the instructor used temperature data as the example, which is very intuitive and easy to understand.\n",
    "\n",
    "Since we want to influence the direction of the gradients, and the gradients are dW and db, we need variables with the same structure as dW and db to influence them. Let's initialize them now:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def initialize_velocity(parameters):\n",
    "    \"\"\"\n",
    "    Initialize the velocity; velocity is a dictionary:\n",
    "        - keys: \"dW1\", \"db1\", ..., \"dWL\", \"dbL\" \n",
    "        - values: zero matrices with the same shape as the corresponding gradients/parameters.\n",
    "    Arguments:\n",
    "        parameters - a dictionary containing the following parameters:\n",
    "            parameters[\"W\" + str(l)] = Wl\n",
    "            parameters[\"b\" + str(l)] = bl\n",
    "    Returns:\n",
    "        v - a dictionary containing the following values:\n",
    "            v[\"dW\" + str(l)] = velocity for dWl\n",
    "            v[\"db\" + str(l)] = velocity for dbl\n",
    "    \n",
    "    \"\"\"\n",
    "    L = len(parameters) // 2 #number of layers in the neural network\n",
    "    v = {}\n",
    "    \n",
    "    for l in range(L):\n",
    "        v[\"dW\" + str(l + 1)] = np.zeros_like(parameters[\"W\" + str(l + 1)])\n",
    "        v[\"db\" + str(l + 1)] = np.zeros_like(parameters[\"b\" + str(l + 1)])\n",
    "    \n",
    "    return v\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
