{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 7. 优化算法\n",
    "\n",
    "如果读者一直按照本书的顺序读到这里，那么一定已经使用了优化算法来训练深度学习模型。具体来说，在训练模型时，我们会使用优化算法不断迭代模型参数以降低模型损失函数的值。当迭代终止时，模型的训练随之终止，此时的模型参数就是模型通过训练所学习到的参数。\n",
    "\n",
    "优化算法对于深度学习十分重要。一方面，训练一个复杂的深度学习模型可能需要数小时、数日，甚至数周时间，而优化算法的表现直接影响模型的训练效率；另一方面，理解各种优化算法的原理以及其中超参数的意义将有助于我们更有针对性地调参，从而使深度学习模型表现更好。\n",
    "\n",
    "本章将详细介绍深度学习中常用的优化算法。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 7.1. 优化与深度学习\n",
    "\n",
    "本节将讨论优化与深度学习的关系，以及优化在深度学习中的挑战。在一个深度学习问题中，我们通常会预先定义一个损失函数。有了损失函数以后，我们就可以使用优化算法试图将其最小化。在优化中，这样的损失函数通常被称作优化问题的目标函数（objective function）。依据惯例，优化算法通常只考虑最小化目标函数。其实，任何最大化问题都可以很容易地转化为最小化问题，只需令目标函数的相反数为新的目标函数即可。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.1.1. 优化与深度学习的关系\n",
    "\n",
    "虽然优化为深度学习提供了最小化损失函数的方法，但本质上，优化与深度学习的目标是有区别的。 在“模型选择、欠拟合和过拟合”一节中，我们区分了训练误差和泛化误差。 由于优化算法的目标函数通常是一个基于训练数据集的损失函数，优化的目标在于降低训练误差。 而深度学习的目标在于降低泛化误差。为了降低泛化误差，除了使用优化算法降低训练误差以外，还需要注意应对过拟合。\n",
    "\n",
    "本章中，我们只关注优化算法在最小化目标函数上的表现，而不关注模型的泛化误差。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.1.2. 优化在深度学习中的挑战\n",
    "\n",
    "我们在“线性回归”一节中对优化问题的解析解和数值解做了区分。深度学习中绝大多数目标函数都很复杂。因此，很多优化问题并不存在解析解，而需要使用基于数值方法的优化算法找到近似解，即数值解。本书中讨论的优化算法都是这类基于数值方法的算法。为了求得最小化目标函数的数值解，我们将通过优化算法有限次迭代模型参数来尽可能降低损失函数的值。\n",
    "\n",
    "优化在深度学习中有很多挑战。下面描述了其中的两个挑战，即局部最小值和鞍点。为了更好地描述问题，我们先导入本节中实验需要的包或模块。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from mpl_toolkits import mplot3d\r\n",
    "import matplotlib.pyplot as plt\r\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "#### 7.1.2.1. 局部最小值\n",
    "\n",
    "对于目标函数 f(x) ，如果 f(x) 在 x 上的值比在 x 邻近的其他点的值更小，那么 f(x) 可能是一个局部最小值（local minimum）。如果 f(x) 在 x 上的值是目标函数在整个定义域上的最小值，那么 f(x) 是全局最小值（global minimum）。\n",
    "\n",
    "举个例子，给定函数：\n",
    "<center>f(x)=x⋅cos(πx),−1.0≤x≤2.0,</center>\n",
    "\n",
    "我们可以大致找出该函数的局部最小值和全局最小值的位置。需要注意的是，图中箭头所指示的只是大致位置。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def f(x):\r\n",
    "    return x * np.cos(np.pi * x)\r\n",
    "\r\n",
    "fig = plt.figure(figsize=(4.5, 2.5))\r\n",
    "x = np.arange(-1.0, 2.0, 0.1)\r\n",
    "fig, = plt.plot(x, f(x))  # 逗号表示只取返回列表中的第一个元素\r\n",
    "fig.axes.annotate('local minimum', xy=(-0.3, -0.25), xytext=(-0.77, -1.0),\r\n",
    "                  arrowprops=dict(arrowstyle='->'))\r\n",
    "fig.axes.annotate('global minimum', xy=(1.1, -0.95), xytext=(0.6, 0.8),\r\n",
    "                  arrowprops=dict(arrowstyle='->'))\r\n",
    "plt.xlabel('x')\r\n",
    "plt.ylabel('f(x)')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/384ea6722f624505a4f1d76073c311bcfac1a0703fe644b6826488631fa0748e\" alt=\"\" />\n",
    "\n",
    "深度学习模型的目标函数可能有若干局部最优值。当一个优化问题的数值解在局部最优解附近时，由于目标函数有关解的梯度接近或变成0，最终迭代求得的数值解可能只令目标函数局部最小化而非全局最小化。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "#### 7.1.2.2. 鞍点\n",
    "\n",
    "刚刚我们提到，梯度接近或变成0可能是由于当前解在局部最优解附近造成的。事实上，另一种可能性是当前解在鞍点（saddle point）附近。\n",
    "\n",
    "举个例子，给定函数\n",
    "\n",
    "<center>f(x)=x3,</center>\n",
    "\n",
    "我们可以找出该函数的鞍点位置。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "x = np.arange(-2.0, 2.0, 0.1)\r\n",
    "fig, = plt.plot(x, x**3)\r\n",
    "fig.axes.annotate('saddle point', xy=(0, -0.2), xytext=(-0.52, -5.0),\r\n",
    "                  arrowprops=dict(arrowstyle='->'))\r\n",
    "plt.xlabel('x')\r\n",
    "plt.ylabel('f(x)')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/1dd5b208df664c01a5302b0088fe4664133679f1905d4864acb7aa7ad5c7beaa\" alt=\"\" />\n",
    "\n",
    "再举个定义在二维空间的函数的例子，例如：\n",
    "\n",
    "<center>f(x,y)=x2−y2.</center>\n",
    "\n",
    "我们可以找出该函数的鞍点位置。也许你已经发现了，该函数看起来像一个马鞍，而鞍点恰好是马鞍上可坐区域的中心。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "x, y = np.mgrid[-1: 1: 31j, -1: 1: 31j]\r\n",
    "z = x**2 - y**2\r\n",
    "\r\n",
    "ax = plt.figure().add_subplot(111, projection='3d')\r\n",
    "ax.plot_wireframe(x, y, z, **{'rstride': 2, 'cstride': 2})\r\n",
    "ax.plot([0], [0], [0], 'rx')\r\n",
    "ticks = [-1,  0, 1]\r\n",
    "plt.xticks(ticks)\r\n",
    "plt.yticks(ticks)\r\n",
    "ax.set_zticks(ticks)\r\n",
    "plt.xlabel('x')\r\n",
    "plt.ylabel('y')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/7b59c048d3ba4b7eb9ed5a8ee62eefe1f8027a49867e4dc9bdb02bfa8c3f4400\" alt=\"\" />\n",
    "\n",
    "在图的鞍点位置，目标函数在 x 轴方向上是局部最小值，但在 y 轴方向上是局部最大值。\n",
    "\n",
    "假设一个函数的输入为 k 维向量，输出为标量，那么它的海森矩阵（Hessian matrix）有 k 个特征值（参见附录中“数学基础”一节）。该函数在梯度为0的位置上可能是局部最小值、局部最大值或者鞍点：\n",
    "\n",
    "当函数的海森矩阵在梯度为0的位置上的特征值全为正时，该函数得到局部最小值。\n",
    "当函数的海森矩阵在梯度为0的位置上的特征值全为负时，该函数得到局部最大值。\n",
    "当函数的海森矩阵在梯度为0的位置上的特征值有正有负时，该函数得到鞍点。\n",
    "随机矩阵理论告诉我们，对于一个大的高斯随机矩阵来说，任一特征值是正或者是负的概率都是0.5 [1]。那么，以上第一种情况的概率为  0.5k 。由于深度学习模型参数通常都是高维的（ k 很大），目标函数的鞍点通常比局部最小值更常见。\n",
    "\n",
    "在深度学习中，虽然找到目标函数的全局最优解很难，但这并非必要。我们将在本章接下来的几节中逐一介绍深度学习中常用的优化算法，它们在很多实际问题中都能够训练出十分有效的深度学习模型。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.1.3. 小结\n",
    "\n",
    "- 由于优化算法的目标函数通常是一个基于训练数据集的损失函数，优化的目标在于降低训练误差。\n",
    "- 由于深度学习模型参数通常都是高维的，目标函数的鞍点通常比局部最小值更常见。\n",
    "\n",
    "### 7.1.4. 练习\n",
    "对于深度学习中的优化问题，你还能想到哪些其他的挑战？\n",
    "\n",
    "### 7.1.5. 参考文献\n",
    "[1] Wigner, E. P. (1958). On the distribution of the roots of certain symmetric matrices. Annals of Mathematics, 325-327."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 7.2. 梯度下降和随机梯度下降\n",
    "\n",
    "在本节中，我们将介绍梯度下降（gradient descent）的工作原理。虽然梯度下降在深度学习中很少被直接使用，但理解梯度的意义以及沿着梯度反方向更新自变量可能降低目标函数值的原因是学习后续优化算法的基础。随后，我们将引出随机梯度下降（stochastic gradient descent）。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.2.1. 一维梯度下降\n",
    "\n",
    "先以简单的一维梯度下降为例，解释梯度下降算法可能降低目标函数值的原因。假设连续可导的函数f:R→R的输入和输出都是标量。给定绝对值足够小的数ϵ，根据泰勒展开公式（参见附录中“数学基础”一节），我们得到以下的近似：\n",
    "\n",
    "<center>f(x+ϵ)≈f(x)+ϵf′(x).</center>\n",
    "\n",
    "这里 f′(x) 是函数 f 在 x 处的梯度。一维函数的梯度是一个标量，也称导数。\n",
    "\n",
    "接下来，找到一个常数 η>0 ，使得 |ηf′(x)| 足够小，那么可以将 ϵ 替换为 −ηf′(x) 并得到\n",
    "\n",
    "<center>f(x−ηf′(x))≈f(x)−ηf′(x)2.</center>\n",
    "\n",
    "如果导数 f′(x)≠0 ，那么 ηf′(x)2>0 ，所以\n",
    "\n",
    "<center>f(x−ηf′(x))≲f(x).</center>\n",
    "\n",
    "这意味着，如果通过\n",
    "\n",
    "<center>x←x−ηf′(x)</center>\n",
    "\n",
    "来迭代 x ，函数 f(x) 的值可能会降低。因此在梯度下降中，我们先选取一个初始值 x 和常数 η>0 ，然后不断通过上式来迭代 x ，直到达到停止条件，例如 f′(x)2 的值已足够小或迭代次数已达到某个值。\n",
    "\n",
    "下面我们以目标函数 f(x)=x2 为例来看一看梯度下降是如何工作的。虽然我们知道最小化 f(x) 的解为 x=0 ，这里依然使用这个简单函数来观察 x 是如何被迭代的。首先，导入本节实验所需的包或模块。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "%matplotlib inline\r\n",
    "import math\r\n",
    "import numpy as np\r\n",
    "import matplotlib.pyplot as plt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "接下来使用 x=10 作为初始值，并设 η=0.2 。使用梯度下降对 x 迭代10次，可见最终 x 的值较接近最优解。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 10, x: 0.06046617599999997\n"
     ]
    }
   ],
   "source": [
    "def gd(eta):\r\n",
    "    x = 10\r\n",
    "    results = [x]\r\n",
    "    for i in range(10):\r\n",
    "        x -= eta * 2 * x  # f(x) = x * x的导数为f'(x) = 2 * x\r\n",
    "        results.append(x)\r\n",
    "    print('epoch 10, x:', x)\r\n",
    "    return results\r\n",
    "\r\n",
    "res = gd(0.2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "下面将绘制出自变量 x 的迭代轨迹。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def show_trace(res):\r\n",
    "    n = max(abs(min(res)), abs(max(res)), 10)\r\n",
    "    f_line = np.arange(-n, n, 0.1)\r\n",
    "    plt.figure(figsize=(4.5, 2.5))\r\n",
    "    plt.plot(f_line, [x * x for x in f_line])\r\n",
    "    plt.plot(res, [x * x for x in res], '-o')\r\n",
    "    plt.xlabel('x')\r\n",
    "    plt.ylabel('f(x)')\r\n",
    "\r\n",
    "show_trace(res)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/96c12e3df44a4a159a038bccaf95436cc39283d75da24339b3d9d2f12afd061c\" alt=\"\" />"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.2.2. 学习率\n",
    "\n",
    "上述梯度下降算法中的正数 η 通常叫作学习率。这是一个超参数，需要人工设定。如果使用过小的学习率，会导致 x 更新缓慢从而需要更多的迭代才能得到较好的解。\n",
    "\n",
    "下面展示使用学习率 η=0.05 时自变量 x 的迭代轨迹。可见，同样迭代10次后，当学习率过小时，最终 x 的值依然与最优解存在较大偏差。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "show_trace(gd(0.05))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/3319ce52c79345319dadbff8fc2436646497d52b7a324159a7551959d8a66955\" alt=\"\" />"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "如果使用过大的学习率， |ηf′(x)| 可能会过大从而使前面提到的一阶泰勒展开公式不再成立：这时我们无法保证迭代 x 会降低 f(x) 的值。\n",
    "\n",
    "举个例子，当设学习率 η=1.1 时，可以看到 x 不断越过（overshoot）最优解 x=0 并逐渐发散。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "show_trace(gd(1.1))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/0e5c1338b59f4a33888c5b7afa97877eb556c1629d0c4fe4879bb5ee15cfc3eb\" alt=\"\" />"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.2.3. 多维梯度下降\n",
    "\n",
    "在了解了一维梯度下降之后，我们再考虑一种更广义的情况：目标函数的输入为向量，输出为标量。假设目标函数 f:Rd→R 的输入是一个 d 维向量 $x=[x1,x2,…,xd]^⊤$ 。目标函数 f(x) 有关 x 的梯度是一个由 d 个偏导数组成的向量：\n",
    "\n",
    "$$\n",
    "\\nabla_{x} f(\\boldsymbol{x})=\\left[\\frac{\\partial f(\\boldsymbol{x})}{\\partial x_{1}}, \\frac{\\partial f(\\boldsymbol{x})}{\\partial x_{2}}, \\ldots, \\frac{\\partial f(\\boldsymbol{x})}{\\partial x_{d}}\\right]^{\\top}\n",
    "$$\n",
    "\n",
    "为表示简洁，我们用 ∇f(x) 代替 ∇xf(x) 。梯度中每个偏导数元素 ∂f(x)/∂xi 代表着 f 在 x 有关输入 xi 的变化率。为了测量 f 沿着单位向量 u （即 $∥u∥=1$ ）方向上的变化率，在多元微积分中，我们定义 f 在 x 上沿着 u 方向的方向导数为\n",
    "\n",
    "$$\n",
    "\\mathrm{D}_{u} f(\\boldsymbol{x})=\\lim _{h \\rightarrow 0} \\frac{f(\\boldsymbol{x}+h \\boldsymbol{u})-f(\\boldsymbol{x})}{h}\n",
    "$$\n",
    "\n",
    "依据方向导数性质 [1，14.6节定理三]，以上方向导数可以改写为\n",
    "\n",
    "$$D_uf(x)=∇f(x)⋅u$$\n",
    "\n",
    "方向导数 Duf(x) 给出了 f 在 x 上沿着所有可能方向的变化率。为了最小化 f ，我们希望找到 f 能被降低最快的方向。因此，我们可以通过单位向量 u 来最小化方向导数 Duf(x) 。\n",
    "\n",
    "由于 $D_uf(x)=∥∇f(x)∥⋅∥u∥⋅cos(θ)=∥∇f(x)∥⋅cos(θ)$， 其中 θ 为梯度 ∇f(x) 和单位向量 u 之间的夹角，当 θ=π 时， cos(θ) 取得最小值 −1 。因此，当 u 在梯度方向 ∇f(x) 的相反方向时，方向导数 $D_uf(x)$ 被最小化。因此，我们可能通过梯度下降算法来不断降低目标函数 f 的值：\n",
    "\n",
    "$$x←x−η∇f(x)$$\n",
    "\n",
    "同样，其中 η （取正数）称作学习率。\n",
    "\n",
    "下面我们构造一个输入为二维向量 $x=[x1,x2]^⊤$ 和输出为标量的目标函数 $f(x)={x_1}^2+2{x_2}^2$ 。那么，梯度 $∇f(x)=[2x1,4x2]^⊤$ 。我们将观察梯度下降从初始位置 [−5,−2] 开始对自变量 x 的迭代轨迹。我们先定义两个辅助函数，第一个函数使用给定的自变量更新函数，从初始位置 [−5,−2] 开始迭代自变量 x 共20次，第二个函数对自变量 x 的迭代轨迹进行可视化。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def train_2d(trainer): \r\n",
    "    x1, x2, s1, s2 = -5, -2, 0, 0  # s1和s2是自变量状态，本章后续几节会使用\r\n",
    "    results = [(x1, x2)]\r\n",
    "    for i in range(20):\r\n",
    "        x1, x2, s1, s2 = trainer(x1, x2, s1, s2)\r\n",
    "        results.append((x1, x2))\r\n",
    "    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))\r\n",
    "    return results\r\n",
    "\r\n",
    "def show_trace_2d(f, results): \r\n",
    "    plt.plot(*zip(*results), '-o', color='#ff7f0e')\r\n",
    "    x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1), np.arange(-3.0, 1.0, 0.1))\r\n",
    "    plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')\r\n",
    "    plt.xlabel('x1')\r\n",
    "    plt.ylabel('x2')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "然后，观察学习率为 0.1 时自变量的迭代轨迹。使用梯度下降对自变量 x 迭代20次后，可见最终 x 的值较接近最优解 [0,0] 。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "eta = 0.1\r\n",
    "\r\n",
    "def f_2d(x1, x2):  # 目标函数\r\n",
    "    return x1 ** 2 + 2 * x2 ** 2\r\n",
    "\r\n",
    "def gd_2d(x1, x2, s1, s2):\r\n",
    "    return (x1 - eta * 2 * x1, x2 - eta * 4 * x2, 0, 0)\r\n",
    "\r\n",
    "show_trace_2d(f_2d, train_2d(gd_2d))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/802b30f27b7f419288956d79b5a37085851d8a203f8e440f9fcb86f87ad4e48d\" alt=\"\" />"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.2.4. 随机梯度下降\n",
    "\n",
    "在深度学习里，目标函数通常是训练数据集中有关各个样本的损失函数的平均。设 fi(x) 是有关索引为 i 的训练数据样本的损失函数， n 是训练数据样本数， x 是模型的参数向量，那么目标函数定义为\n",
    "\n",
    "$$\n",
    "f(\\boldsymbol{x})=\\frac{1}{n} \\sum_{i=1}^{n} f_{i}(\\boldsymbol{x})\n",
    "$$\n",
    "\n",
    "目标函数在 x 处的梯度计算为\n",
    "\n",
    "$$\n",
    "∇f(\\boldsymbol{x})=\\frac{1}{n} \\sum_{i=1}^{n} ∇f_{i}(\\boldsymbol{x})\n",
    "$$\n",
    "\n",
    "如果使用梯度下降，每次自变量迭代的计算开销为 O(n) ，它随着 n 线性增长。因此，当训练数据样本数很大时，梯度下降每次迭代的计算开销很高。\n",
    "\n",
    "随机梯度下降（stochastic gradient descent，SGD）减少了每次迭代的计算开销。在随机梯度下降的每次迭代中，我们随机均匀采样的一个样本索引 i∈{1,…,n} ，并计算梯度 ∇fi(x) 来迭代 x ：\n",
    "\n",
    "$$x←x−η∇fi(x).$$\n",
    "\n",
    "这里 η 同样是学习率。可以看到，每次迭代的计算开销从梯度下降的 O(n) 降到了常数 O(1) 。值得强调的是，随机梯度 ∇fi(x) 是对梯度 ∇f(x) 的无偏估计：\n",
    "\n",
    "$$\n",
    "E_{i} \\nabla f_{i}(\\boldsymbol{x})=\\frac{1}{n} \\sum_{i=1}^{n} \\nabla f_{i}(\\boldsymbol{x})=\\nabla f(\\boldsymbol{x})\n",
    "$$\n",
    "\n",
    "这意味着，平均来说，随机梯度是对梯度的一个良好的估计。\n",
    "\n",
    "下面我们通过在梯度中添加均值为0的随机噪声来模拟随机梯度下降，以此来比较它与梯度下降的区别。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def sgd_2d(x1, x2, s1, s2):\r\n",
    "    return (x1 - eta * (2 * x1 + np.random.normal(0.1)),\r\n",
    "            x2 - eta * (4 * x2 + np.random.normal(0.1)), 0, 0)\r\n",
    "\r\n",
    "show_trace_2d(f_2d, train_2d(sgd_2d))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/220bb4a24be54deca4b1ea9bc92f655a18d8ab74959f4591acac9fdbf12fc949\" alt=\"\" />"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "可以看到，随机梯度下降中自变量的迭代轨迹相对于梯度下降中的来说更为曲折。这是由于实验所添加的噪声使模拟的随机梯度的准确度下降。在实际中，这些噪声通常指训练数据集中的无意义的干扰。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.2.5. 小结\n",
    "\n",
    "- 使用适当的学习率，沿着梯度反方向更新自变量可能降低目标函数值。梯度下降重复这一更新过程直到得到满足要求的解。\n",
    "- 学习率过大或过小都有问题。一个合适的学习率通常是需要通过多次实验找到的。\n",
    "- 当训练数据集的样本较多时，梯度下降每次迭代的计算开销较大，因而随机梯度下降通常更受青睐。\n",
    "\n",
    "### 7.2.6. 练习\n",
    "- 使用一个不同的目标函数，观察梯度下降和随机梯度下降中自变量的迭代轨迹。\n",
    "- 在二维梯度下降的实验中尝试使用不同的学习率，观察并分析实验现象。\n",
    "\n",
    "### 7.2.7. 参考文献\n",
    "[1] Stewart, J. (2010). Calculus: early transcendentals. 7th ed. Cengage Learning."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 7.3. 小批量随机梯度下降\n",
    "\n",
    "在每一次迭代中，梯度下降使用整个训练数据集来计算梯度，因此它有时也被称为批量梯度下降（batch gradient descent）。而随机梯度下降在每次迭代中只随机采样一个样本来计算梯度。正如我们在前几章中所看到的，我们还可以在每轮迭代中随机均匀采样多个样本来组成一个小批量，然后使用这个小批量来计算梯度。下面就来描述小批量随机梯度下降。\n",
    "\n",
    "设目标函数 $f(x):Rd→R$ 。在迭代开始前的时间步设为0。该时间步的自变量记为 $x_0∈R_d$ ，通常由随机初始化得到。在接下来的每一个时间步 $t>0$ 中，小批量随机梯度下降随机均匀采样一个由训练数据样本索引组成的小批量 $B_t$ 。我们可以通过重复采样（sampling with replacement）或者不重复采样（sampling without replacement）得到一个小批量中的各个样本。前者允许同一个小批量中出现重复的样本，后者则不允许如此，且更常见。对于这两者间的任一种方式，都可以使用\n",
    "\n",
    "$$\n",
    "\\boldsymbol{g}_{t} \\leftarrow \\nabla f_{\\mathcal{B}_{t}}\\left(\\boldsymbol{x}_{t-1}\\right)=\\frac{1}{|\\mathcal{B}|} \\sum_{i \\in \\mathcal{B}_{t}} \\nabla f_{i}\\left(\\boldsymbol{x}_{t-1}\\right)\n",
    "$$\n",
    "\n",
    "来计算时间步 t 的小批量 $B_t$ 上目标函数位于 $x_t−1$ 处的梯度 $g_t$ 。这里 $|B|$ 代表批量大小，即小批量中样本的个数，是一个超参数。同随机梯度一样，重复采样所得的小批量随机梯度 $g_t$ 也是对梯度 $∇f(x_t−1)$ 的无偏估计。给定学习率 $η_t$ （取正数），小批量随机梯度下降对自变量的迭代如下：\n",
    "\n",
    "$$x_t←x_t−1−η_tg_t.$$\n",
    "\n",
    "基于随机采样得到的梯度的方差在迭代过程中无法减小，因此在实际中，（小批量）随机梯度下降的学习率可以在迭代过程中自我衰减，例如 $η_t=η_tα$ （通常 α=−1 或者 −0.5 ）、 $η_t=ηα^t$ （如 α=0.95 ）或者每迭代若干次后将学习率衰减一次。如此一来，学习率和（小批量）随机梯度乘积的方差会减小。而梯度下降在迭代过程中一直使用目标函数的真实梯度，无须自我衰减学习率。\n",
    "\n",
    "小批量随机梯度下降中每次迭代的计算开销为 $O(|B|)$ 。当批量大小为1时，该算法即随机梯度下降；当批量大小等于训练数据样本数时，该算法即梯度下降。当批量较小时，每次迭代中使用的样本少，这会导致并行处理和内存使用效率变低。这使得在计算同样数目样本的情况下比使用更大批量时所花时间更多。当批量较大时，每个小批量梯度里可能含有更多的冗余信息。为了得到较好的解，批量较大时比批量较小时需要计算的样本数目可能更多，例如增大迭代周期数。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.3.1. 读取数据集\n",
    "\n",
    "本章里我们将使用一个来自NASA的测试不同飞机机翼噪音的数据集来比较各个优化算法 [1]。我们使用该数据集的前1,500个样本和5个特征，并使用标准化对数据进行预处理。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "%matplotlib inline\r\n",
    "import numpy as np\r\n",
    "import pandas as pd\r\n",
    "import time\r\n",
    "import paddle\r\n",
    "import paddle.fluid as fluid\r\n",
    "import paddle.nn.functional as F\r\n",
    "import matplotlib.pyplot as plt\r\n",
    "from paddle.io import Dataset\r\n",
    "paddle.set_default_dtype(\"float64\")\r\n",
    "\r\n",
    "class AirfoilSelfNoise(Dataset):\r\n",
    "    def __init__(self, mode='train'):\r\n",
    "        super(AirfoilSelfNoise, self).__init__()\r\n",
    "        data = pd.read_csv('airfoil_self_noise.dat', header=None, delimiter=\"\\t\")\r\n",
    "        data = (data - data.mean(axis=0)) / data.std(axis=0)\r\n",
    "        features = np.array(data.values[:1500, :-1])\r\n",
    "        labels = np.array(data.values[:1500, -1])\r\n",
    "        datas = []\r\n",
    "        for item in range(len(features)):\r\n",
    "            datas.append([features[item], labels[item]])\r\n",
    "\r\n",
    "        if mode == 'train':\r\n",
    "            self.data = datas\r\n",
    "        else:\r\n",
    "            self.data = datas\r\n",
    "\r\n",
    "    def __getitem__(self, index):\r\n",
    "        data = self.data[index][0]\r\n",
    "        label = self.data[index][1]\r\n",
    "\r\n",
    "        # 在这里对训练数据进行应用\r\n",
    "        return data, label\r\n",
    "\r\n",
    "    def __len__(self):\r\n",
    "        return len(self.data)\r\n",
    "\r\n",
    "# 用高层API定义数据集，无需进行数据处理等，高层API为您一条龙搞定\r\n",
    "train_dataset = AirfoilSelfNoise(mode='train')\r\n",
    "eval_dataset = AirfoilSelfNoise(mode='test')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.3.2. 从零开始实现\n",
    "\n",
    "“线性回归的从零开始实现”一节中已经实现过小批量随机梯度下降算法。我们在这里将它的输入参数变得更加通用，主要是为了方便本章后面介绍的其他优化算法也可以使用同样的输入。具体来说，我们添加了一个状态输入states并将超参数放在字典hyperparams里。此外，我们将在训练函数里对各个小批量样本的损失求平均，因此优化算法里的梯度不需要除以批量大小。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "线性回归就是一个从输入到输出的简单的全连接层。对于飞机机翼噪音数据集，假设属性和声音之间的关系可以被属性间的线性组合描述。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "class Regressor(paddle.nn.Layer):\r\n",
    "    def __init__(self):\r\n",
    "        super(Regressor, self).__init__()\r\n",
    "        self.fc = paddle.nn.Linear(5, 1,)\r\n",
    "\r\n",
    "    def forward(self, inputs):\r\n",
    "        pred = self.fc(inputs)\r\n",
    "        return pred\r\n",
    "\r\n",
    "model = paddle.Model(Regressor())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "在本示例当中，我们用最简单的均方误差(mean square error)作为损失函数(paddle.nn.MSELoss)；和最常见的优化算法SGD（stocastic gradient descent)作为优化算法（传给paddle.optimizer.SGD的参数learning_rate，你可以理解为控制每次调整的步子大小的参数）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "mse_loss = paddle.nn.MSELoss()\r\n",
    "sgd_optimizer = fluid.optimizer.SGD(learning_rate = 0.001, parameter_list = model.parameters())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "下面实现一个通用的训练函数，以方便本章后面介绍的其他优化算法使用。它初始化一个线性回归模型，然后可以使用小批量随机梯度下降以及后续小节介绍的其他算法来训练模型。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# 调用飞桨框架的VisualDL模块，保存信息到目录中。\r\n",
    "callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir')\r\n",
    "model.prepare(fluid.optimizer.SGD(learning_rate=0.001, parameter_list = model.parameters()),\r\n",
    "              paddle.nn.MSELoss())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "当批量大小为8时，优化使用的是小批量随机梯度下降。它在每个迭代周期的耗时介于梯度下降和随机梯度下降的耗时之间。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/10\n",
      "step 188/188 [==============================] - loss: 114999.1358 - 2ms/step        \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 437874.9208 - 827us/step        \n",
      "Eval samples: 1500\n",
      "Epoch 2/10\n",
      "step 188/188 [==============================] - loss: 19334.1910 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 89796.3029 - 823us/step        \n",
      "Eval samples: 1500\n",
      "Epoch 3/10\n",
      "step 188/188 [==============================] - loss: 3376.1870 - 2ms/step        \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 18261.8242 - 790us/step        \n",
      "Eval samples: 1500\n",
      "Epoch 4/10\n",
      "step 188/188 [==============================] - loss: 1656.1266 - 2ms/step        \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 3637.6765 - 849us/step        \n",
      "Eval samples: 1500\n",
      "Epoch 5/10\n",
      "step 188/188 [==============================] - loss: 222.5377 - 2ms/step        \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 705.5349 - 853us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 6/10\n",
      "step 188/188 [==============================] - loss: 20.1735 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 125.6608 - 883us/step       \n",
      "Eval samples: 1500\n",
      "Epoch 7/10\n",
      "step 188/188 [==============================] - loss: 16.9396 - 2ms/step        \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 18.2450 - 901us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 8/10\n",
      "step 188/188 [==============================] - loss: 2.1606 - 2ms/step        \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 1.3088 - 893us/step        \n",
      "Eval samples: 1500\n",
      "Epoch 9/10\n",
      "step 188/188 [==============================] - loss: 1.4851 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 0.1879 - 927us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 10/10\n",
      "step 188/188 [==============================] - loss: 0.3186 - 2ms/step        \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 188/188 [==============================] - loss: 1.0148 - 865us/step         \n",
      "Eval samples: 1500\n"
     ]
    }
   ],
   "source": [
    "model.fit(train_dataset, eval_dataset, epochs=10, batch_size=8, verbose=1, callbacks=callback)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:50%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/052939b72b704ed4843e61453bcbec20455ded8f5cab44f3ba3ead6996e7cf4d\" alt=\"\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "当批量大小为1时，优化使用的是随机梯度下降。为了简化实现，有关（小批量）随机梯度下降的实验中，我们未对学习率进行自我衰减，而是直接采用较小的常数学习率。随机梯度下降中，每处理一个样本会更新一次自变量（模型参数），一个迭代周期里会对自变量进行1,500次更新。可以看到，目标函数值的下降在1个迭代周期后就变得较为平缓。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/10\n",
      "step 1500/1500 [==============================] - loss: 5.5747e-04 - 1ms/step     \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.7043 - 779us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 2/10\n",
      "step 1500/1500 [==============================] - loss: 0.0341 - 2ms/step           \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.5321 - 861us/step        \n",
      "Eval samples: 1500\n",
      "Epoch 3/10\n",
      "step 1500/1500 [==============================] - loss: 0.0148 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.5759 - 770us/step        \n",
      "Eval samples: 1500\n",
      "Epoch 4/10\n",
      "step 1500/1500 [==============================] - loss: 0.2910 - 1ms/step           \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.5873 - 772us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 5/10\n",
      "step 1500/1500 [==============================] - loss: 7.5816e-06 - 2ms/step      \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.5236 - 778us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 6/10\n",
      "step 1500/1500 [==============================] - loss: 0.3710 - 1ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.5748 - 813us/step        \n",
      "Eval samples: 1500\n",
      "Epoch 7/10\n",
      "step 1500/1500 [==============================] - loss: 0.1127 - 1ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.6587 - 826us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 8/10\n",
      "step 1500/1500 [==============================] - loss: 0.2969 - 2ms/step        \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.6718 - 919us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 9/10\n",
      "step 1500/1500 [==============================] - loss: 0.5623 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.6614 - 901us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 10/10\n",
      "step 1500/1500 [==============================] - loss: 0.0550 - 1ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 1500/1500 [==============================] - loss: 0.7413 - 819us/step        \n",
      "Eval samples: 1500\n"
     ]
    }
   ],
   "source": [
    "model.fit(train_dataset, eval_dataset, epochs=10, batch_size=1, verbose=1, callbacks=callback)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:50%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/4a5a36a64d5e48b3844488d6f77155a84955703036b04315be3ed75a084ed758\" alt=\"\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "虽然随机梯度下降和梯度下降在一个迭代周期里都处理了1,500个样本，但实验中随机梯度下降的一个迭代周期耗时更多。这是因为随机梯度下降在一个迭代周期里做了更多次的自变量迭代，而且单样本的梯度计算难以有效利用矢量计算。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.3.3. 小结\n",
    "- 小批量随机梯度每次随机均匀采样一个小批量的训练样本来计算梯度。\n",
    "- 在实际中，（小批量）随机梯度下降的学习率可以在迭代过程中自我衰减。\n",
    "- 通常，小批量随机梯度在每个迭代周期的耗时介于梯度下降和随机梯度下降的耗时之间。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.3.4. 参考文献\n",
    "[1] 飞机机翼噪音数据集。[https://archive.ics.uci.edu/ml/datasets/Airfoil+Self-Noise](https://archive.ics.uci.edu/ml/datasets/Airfoil+Self-Noise)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 7.4. 动量法\n",
    "\n",
    "在“梯度下降和随机梯度下降”一节中我们提到，目标函数有关自变量的梯度代表了目标函数在自变量当前位置下降最快的方向。因此，梯度下降也叫作最陡下降（steepest descent）。在每次迭代中，梯度下降根据自变量当前位置，沿着当前位置的梯度更新自变量。然而，如果自变量的迭代方向仅仅取决于自变量当前位置，这可能会带来一些问题。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.4.1. 梯度下降的问题\n",
    "\n",
    "让我们考虑一个输入和输出分别为二维向量$\\boldsymbol{x} = [x_1, x_2]^\\top$和标量的目标函数$f(\\boldsymbol{x})=0.1x_1^2+2x_2^2$。与“梯度下降和随机梯度下降”一节中不同，这里将$x_1^2$系数从1减小到了0.1。下面实现基于这个目标函数的梯度下降，并演示使用学习率为0.4时自变量的迭代轨迹。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "%matplotlib inline\r\n",
    "import numpy as np\r\n",
    "import matplotlib.pyplot as plt\r\n",
    "\r\n",
    "eta = 0.4\r\n",
    "\r\n",
    "def f_2d(x1, x2):\r\n",
    "    return 0.1 * x1 ** 2 + 2 * x2 ** 2\r\n",
    "\r\n",
    "def gd_2d(x1, x2, s1, s2):\r\n",
    "    return (x1 - eta * 0.2 * x1, x2 - eta * 4 * x2, 0, 0)\r\n",
    "\r\n",
    "show_trace_2d(f_2d, train_2d(gd_2d))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:100%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/3540112bd4e340e1a2d1b2736374588942160503ba1f4b8faafa1b4ecde6e38a\" alt=\"\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "可以看到，同一位置上，目标函数在竖直方向（$x_2$轴方向）比在水平方向（x_1轴方向）的斜率的绝对值更大。因此，给定学习率，梯度下降迭代自变量时会使自变量在竖直方向比在水平方向移动幅度更大。那么，我们需要一个较小的学习率从而避免自变量在竖直方向上越过目标函数最优解。然而，这会造成自变量在水平方向上朝最优解移动变慢。\n",
    "\n",
    "下面我们试着将学习率调得稍大一点，此时自变量在竖直方向不断越过最优解并逐渐发散。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "eta = 0.6\r\n",
    "show_trace_2d(f_2d, train_2d(gd_2d))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:100%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/939caf16f3754825935a510a969ee3fa9326992b10d148a6982dfdb209778bad\" alt=\"\"/>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.4.2. 动量法\n",
    "\n",
    "动量法的提出是为了解决梯度下降的上述问题。由于小批量随机梯度下降比梯度下降更为广义，本章后续讨论将沿用“小批量随机梯度下降”一节中时间步t的小批量随机梯度$\\boldsymbol{g}_t$的定义。设时间步t的自变量为$\\boldsymbol{x}_t$，学习率为$\\eta_t$。 在时间步0，动量法创建速度变量$\\boldsymbol{v}_0$，并将其元素初始化成0。在时间步$t>0$，动量法对每次迭代的步骤做如下修改：\n",
    "\n",
    "$$\n",
    "\\begin{array}{l}\n",
    "\\boldsymbol{v}_{t} \\leftarrow \\gamma \\boldsymbol{v}_{t-1}+\\eta_{t} \\boldsymbol{g}_{t} \\\\\n",
    "\\boldsymbol{x}_{t} \\leftarrow \\boldsymbol{x}_{t-1}-\\boldsymbol{v}_{t}\n",
    "\\end{array}\n",
    "$$\n",
    "\n",
    "其中，动量超参数$\\gamma$满足$0 \\leq \\gamma < 1$。当$\\gamma=0$时，动量法等价于小批量随机梯度下降。\n",
    "\n",
    "在解释动量法的数学原理前，让我们先从实验中观察梯度下降在使用动量法后的迭代轨迹。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def momentum_2d(x1, x2, v1, v2):\r\n",
    "    v1 = gamma * v1 + eta * 0.2 * x1\r\n",
    "    v2 = gamma * v2 + eta * 4 * x2\r\n",
    "    return x1 - v1, x2 - v2, v1, v2\r\n",
    "\r\n",
    "eta, gamma = 0.4, 0.5\r\n",
    "show_trace_2d(f_2d, train_2d(momentum_2d))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:100%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/949a60bf3ec744c9b4457f4fb0670759ab25f4be185345ad8eeca3f10e7800b9\" alt=\"\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "可以看到使用较小的学习率 $η=0.4$ 和动量超参数 $γ=0.5$ 时，动量法在竖直方向上的移动更加平滑，且在水平方向上更快逼近最优解。下面使用较大的学习率 $η=0.6$ ，此时自变量也不再发散。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "eta = 0.6\r\n",
    "show_trace_2d(f_2d, train_2d(momentum_2d))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:100%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/595615297b154076aea557d416ade0424835bdcd1f954996aa39d5404768533a\" alt=\"\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "#### 7.4.2.1. 指数加权移动平均\n",
    "\n",
    "为了从数学上理解动量法，让我们先解释一下指数加权移动平均（exponentially weighted moving average）。给定超参数 $0≤γ<1$ ，当前时间步 t 的变量 $y_t$ 是上一时间步 $t−1$ 的变量 $y_{t−1}$ 和当前时间步另一变量 $x_t$ 的线性组合：\n",
    "\n",
    "$$\n",
    "y_{t}=\\gamma y_{t-1}+(1-\\gamma) x_{t}\n",
    "$$\n",
    "\n",
    "我们可以对 $yt$ 展开：\n",
    "\n",
    "$$\n",
    "\\begin{aligned}\n",
    "y_{t} &=(1-\\gamma) x_{t}+\\gamma y_{t-1} \\\\\n",
    "&=(1-\\gamma) x_{t}+(1-\\gamma) \\cdot \\gamma x_{t-1}+\\gamma^{2} y_{t-2} \\\\\n",
    "&=(1-\\gamma) x_{t}+(1-\\gamma) \\cdot \\gamma x_{t-1}+(1-\\gamma) \\cdot \\gamma^{2} x_{t-2}+\\gamma^{3} y_{t-3}\n",
    "\\end{aligned}\n",
    "$$\n",
    "\n",
    "令 $n=1/(1−γ)$ ，那么  $(1−1/n)^n=γ^{1/(1−γ)}$ 。因为\n",
    "\n",
    "$$\n",
    "\\lim _{n \\rightarrow \\infty}\\left(1-\\frac{1}{n}\\right)^{n}=\\exp (-1) \\approx 0.3679\n",
    "$$\n",
    "\n",
    "所以当 $γ→1$ 时， $γ^{1/(1−γ)}=exp(−1)$ ，如 $0.9520≈exp(−1)$ 。如果把 $exp(−1)$ 当作一个比较小的数，我们可以在近似中忽略所有含 $γ^{1/(1−γ)}$ 和比 $γ^{1/(1−γ)} 更高阶的系数的项。例如，当 $γ=0.95$ 时，\n",
    "\n",
    "$$\n",
    "y_{t} \\approx 0.05 \\sum_{i=0}^{19} 0.95^{i} x_{t-i}\n",
    "$$\n",
    "\n",
    "因此，在实际中，我们常常将 $y_t$ 看作是对最近 $1/(1−γ)$ 个时间步的 $x_t$ 值的加权平均。例如，当 $γ=0.95$ 时， $y_t$ 可以被看作对最近20个时间步的 xt 值的加权平均；当 $γ=0.9$ 时， $y_t$ 可以看作是对最近10个时间步的 $x_t$ 值的加权平均。而且，离当前时间步 t 越近的 $x_t$ 值获得的权重越大（越接近1）。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "#### 7.4.2.2. 由指数加权移动平均理解动量法\n",
    "\n",
    "现在，我们对动量法的速度变量做变形：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{v}_{t} \\leftarrow \\gamma \\boldsymbol{v}_{t-1}+(1-\\gamma)\\left(\\frac{\\eta_{t}}{1-\\gamma} \\boldsymbol{g}_{t}\\right)\n",
    "$$\n",
    "\n",
    "由指数加权移动平均的形式可得，速度变量 $v_t$ 实际上对序列 ${ηt−igt−i/(1−γ):i=0,…,1/(1−γ)−1}$ 做了指数加权移动平均。换句话说，相比于小批量随机梯度下降，动量法在每个时间步的自变量更新量近似于将前者对应的最近 $1/(1−γ)$ 个时间步的更新量做了指数加权移动平均后再除以 $1−γ$ 。所以，在动量法中，自变量在各个方向上的移动幅度不仅取决于当前梯度，还取决于过去的各个梯度在各个方向上是否一致。在本节之前示例的优化问题中，所有梯度在水平方向上为正（向右），而在竖直方向上时正（向上）时负（向下）。这样，我们就可以使用较大的学习率，从而使自变量向最优解更快移动。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.4.3. 从零开始实现\n",
    "\n",
    "相对于小批量随机梯度下降，动量法需要对每一个自变量维护一个同它一样形状的速度变量，且超参数里多了动量超参数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# 调用飞桨框架的VisualDL模块，保存信息到目录中。\r\n",
    "callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir')\r\n",
    "model.prepare(paddle.optimizer.Momentum(learning_rate = 0.1,parameters = model.parameters(), weight_decay = 0.5),\r\n",
    "              paddle.nn.MSELoss())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "我们先将动量超参数momentum设0.5，这时可以看成是特殊的小批量随机梯度下降：其小批量随机梯度为最近2个时间步的2倍小批量梯度的加权平均。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/10\n",
      "step 94/94 [==============================] - loss: 1.0932 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 3.9757 - 842us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 2/10\n",
      "step 94/94 [==============================] - loss: 1.0222 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.9030 - 911us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 3/10\n",
      "step 94/94 [==============================] - loss: 0.8240 - 2ms/step        \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.9852 - 877us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 4/10\n",
      "step 94/94 [==============================] - loss: 1.3451 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.6854 - 814us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 5/10\n",
      "step 94/94 [==============================] - loss: 1.4065 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.2402 - 835us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 6/10\n",
      "step 94/94 [==============================] - loss: 0.7980 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.7074 - 861us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 7/10\n",
      "step 94/94 [==============================] - loss: 0.8983 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 2.5167 - 867us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 8/10\n",
      "step 94/94 [==============================] - loss: 0.5585 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.1962 - 852us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 9/10\n",
      "step 94/94 [==============================] - loss: 1.3017 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 0.5946 - 812us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 10/10\n",
      "step 94/94 [==============================] - loss: 0.8352 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 3.1420 - 821us/step         \n",
      "Eval samples: 1500\n"
     ]
    }
   ],
   "source": [
    "model.fit(train_dataset, eval_dataset, epochs=10, batch_size=16, verbose=1, callbacks=callback)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:50%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/280b235c151b4e75986752a34d4e7ce0c611df84266e4005b32b826242969c23\" alt=\"\"/>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "将动量超参数momentum增大到0.9，这时依然可以看成是特殊的小批量随机梯度下降：其小批量随机梯度为最近10个时间步的10倍小批量梯度的加权平均。我们先保持学习率0.1不变。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/10\n",
      "step 94/94 [==============================] - loss: 1.1381 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 2.4198 - 900us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 2/10\n",
      "step 94/94 [==============================] - loss: 1.0109 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.7299 - 852us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 3/10\n",
      "step 94/94 [==============================] - loss: 0.8026 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 2.0533 - 870us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 4/10\n",
      "step 94/94 [==============================] - loss: 1.3794 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 2.2581 - 864us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 5/10\n",
      "step 94/94 [==============================] - loss: 1.2991 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.7729 - 889us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 6/10\n",
      "step 94/94 [==============================] - loss: 0.8383 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.6676 - 889us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 7/10\n",
      "step 94/94 [==============================] - loss: 0.8299 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 2.2903 - 875us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 8/10\n",
      "step 94/94 [==============================] - loss: 0.5519 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.2792 - 815us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 9/10\n",
      "step 94/94 [==============================] - loss: 1.1472 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 0.9990 - 835us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 10/10\n",
      "step 94/94 [==============================] - loss: 0.8818 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 2.3936 - 840us/step         \n",
      "Eval samples: 1500\n"
     ]
    }
   ],
   "source": [
    "# 调用飞桨框架的VisualDL模块，保存信息到目录中。\r\n",
    "callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir')\r\n",
    "model.prepare(paddle.optimizer.Momentum(learning_rate = 0.1,parameters = model.parameters(), weight_decay = 0.9),\r\n",
    "              paddle.nn.MSELoss())\r\n",
    "model.fit(train_dataset, eval_dataset, epochs=10, batch_size=16, verbose=1, callbacks=callback)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:50%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/0b87f983bc9c48f7902c5d81003387fac59df575c9604c2eadbc6e7be72c967d\" alt=\"\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "可见目标函数值在后期迭代过程中的变化不够平滑。直觉上，10倍小批量梯度比2倍小批量梯度大了5倍，我们可以试着将学习率减小到原来的1/5。此时目标函数值在下降了一段时间后变化更加平滑。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/10\n",
      "step 94/94 [==============================] - loss: 1.5174 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.7236 - 842us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 2/10\n",
      "step 94/94 [==============================] - loss: 1.0508 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.9967 - 817us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 3/10\n",
      "step 94/94 [==============================] - loss: 0.7473 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 2.0587 - 882us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 4/10\n",
      "step 94/94 [==============================] - loss: 1.3919 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.9061 - 836us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 5/10\n",
      "step 94/94 [==============================] - loss: 1.2686 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.5313 - 876us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 6/10\n",
      "step 94/94 [==============================] - loss: 0.8115 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.6710 - 865us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 7/10\n",
      "step 94/94 [==============================] - loss: 0.8038 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.5001 - 869us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 8/10\n",
      "step 94/94 [==============================] - loss: 0.5754 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 2.0068 - 826us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 9/10\n",
      "step 94/94 [==============================] - loss: 1.1413 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 1.4764 - 895us/step         \n",
      "Eval samples: 1500\n",
      "Epoch 10/10\n",
      "step 94/94 [==============================] - loss: 0.8550 - 2ms/step         \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 94/94 [==============================] - loss: 2.0062 - 868us/step         \n",
      "Eval samples: 1500\n"
     ]
    }
   ],
   "source": [
    "# 调用飞桨框架的VisualDL模块，保存信息到目录中。\r\n",
    "callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir')\r\n",
    "model.prepare(paddle.optimizer.Momentum(learning_rate = 0.02,parameters = model.parameters(), weight_decay = 0.9),\r\n",
    "              paddle.nn.MSELoss())\r\n",
    "model.fit(train_dataset, eval_dataset, epochs=10, batch_size=16, verbose=1, callbacks=callback)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:50%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/fd75720123ae4d189896d2e797c54a14d6a22ddb678949ff95a2657e66153b1c\" alt=\"\"/>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.4.4. 简洁实现\n",
    "\n",
    "在PaddlePaddle中，只需要指定paddle.optimizer.Momentum()即可使用动量法。\n",
    "\n",
    "代码示例：\n",
    "\n",
    "```\n",
    "import paddle\n",
    "\n",
    "inp = paddle.uniform(min=-0.1, max=0.1, shape=[10, 10], dtype='float32')\n",
    "linear = paddle.nn.Linear(10, 10)\n",
    "out = linear(inp)\n",
    "loss = paddle.mean(out)\n",
    "momentum = paddle.optimizer.Momentum(learning_rate=0.1, parameters=linear.parameters(), weight_decay=0.01)\n",
    "out.backward()\n",
    "momentum.step()\n",
    "momentum.clear_grad()\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.4.5. 小结\n",
    "- 动量法使用了指数加权移动平均的思想。它将过去时间步的梯度做了加权平均，且权重按时间步指数衰减。\n",
    "- 动量法使得相邻时间步的自变量更新在方向上更加一致。\n",
    "### 7.4.6. 练习\n",
    "使用其他动量超参数和学习率的组合，观察并分析实验结果。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 7.5. AdaGrad算法\n",
    "\n",
    "在之前介绍过的优化算法中，目标函数自变量的每一个元素在相同时间步都使用同一个学习率来自我迭代。举个例子，假设目标函数为 f ，自变量为一个二维向量 ${[x1,x2]}^⊤$ ，该向量中每一个元素在迭代时都使用相同的学习率。例如，在学习率为 $η$ 的梯度下降中，元素 $x_1$ 和 $x_2$ 都使用相同的学习率 $η$ 来自我迭代：\n",
    "\n",
    "$$\n",
    "x_{1} \\leftarrow x_{1}-\\eta \\frac{\\partial f}{\\partial x_{1}}, \\quad x_{2} \\leftarrow x_{2}-\\eta \\frac{\\partial f}{\\partial x_{2}}\n",
    "$$\n",
    "\n",
    "在“动量法”一节里我们看到，当 $x_1$ 和 $x_2$ 的梯度值有较大差别时，需要选择足够小的学习率使得自变量在梯度值较大的维度上不发散。但这样会导致自变量在梯度值较小的维度上迭代过慢。动量法依赖指数加权移动平均使得自变量的更新方向更加一致，从而降低发散的可能。本节我们介绍AdaGrad算法，它根据自变量在每个维度的梯度值的大小来调整各个维度上的学习率，从而避免统一的学习率难以适应所有维度的问题 [1]。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.5.1. 算法\n",
    "\n",
    "AdaGrad算法会使用一个小批量随机梯度 $g_t$ 按元素平方的累加变量 $s_t$ 。在时间步0，AdaGrad将 $s_0$ 中每个元素初始化为0。在时间步 t ，首先将小批量随机梯度 $g_t$ 按元素平方后累加到变量 $s_t$ ：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{s}_{t} \\leftarrow \\boldsymbol{s}_{t-1}+\\boldsymbol{g}_{t} \\odot \\boldsymbol{g}_{t}\n",
    "$$\n",
    "\n",
    "其中 $⊙$ 是按元素相乘。接着，我们将目标函数自变量中每个元素的学习率通过按元素运算重新调整一下：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{x}_{t} \\leftarrow \\boldsymbol{x}_{t-1}-\\frac{\\eta}{\\sqrt{\\boldsymbol{s}_{t}+\\epsilon}} \\odot \\boldsymbol{g}_{t}\n",
    "$$\n",
    "\n",
    "其中 $η$ 是学习率， $ϵ$ 是为了维持数值稳定性而添加的常数，如 $10^{−6}$ 。这里开方、除法和乘法的运算都是按元素运算的。这些按元素运算使得目标函数自变量中每个元素都分别拥有自己的学习率。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.5.2. 特点\n",
    "\n",
    "需要强调的是，小批量随机梯度按元素平方的累加变量 $s_t$ 出现在学习率的分母项中。因此，如果目标函数有关自变量中某个元素的偏导数一直都较大，那么该元素的学习率将下降较快；反之，如果目标函数有关自变量中某个元素的偏导数一直都较小，那么该元素的学习率将下降较慢。然而，由于 $s_t$ 一直在累加按元素平方的梯度，自变量中每个元素的学习率在迭代过程中一直在降低（或不变）。所以，当学习率在迭代早期降得较快且当前解依然不佳时，AdaGrad算法在迭代后期由于学习率过小，可能较难找到一个有用的解。\n",
    "\n",
    "下面我们仍然以目标函数 $f(x)=0.1x_1^2+2x_2^2$ 为例观察AdaGrad算法对自变量的迭代轨迹。我们实现AdaGrad算法并使用和上一节实验中相同的学习率0.4。可以看到，自变量的迭代轨迹较平滑。但由于 $s_t$ 的累加效果使学习率不断衰减，自变量在迭代后期的移动幅度较小。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "%matplotlib inline\r\n",
    "import math\r\n",
    "import numpy as np\r\n",
    "import matplotlib.pyplot as plt\r\n",
    "\r\n",
    "def adagrad_2d(x1, x2, s1, s2):\r\n",
    "    g1, g2, eps = 0.2 * x1, 4 * x2, 1e-6  # 前两项为自变量梯度\r\n",
    "    s1 += g1 ** 2\r\n",
    "    s2 += g2 ** 2\r\n",
    "    x1 -= eta / math.sqrt(s1 + eps) * g1\r\n",
    "    x2 -= eta / math.sqrt(s2 + eps) * g2\r\n",
    "    return x1, x2, s1, s2\r\n",
    "\r\n",
    "def f_2d(x1, x2):\r\n",
    "    return 0.1 * x1 ** 2 + 2 * x2 ** 2\r\n",
    "\r\n",
    "eta = 0.4\r\n",
    "show_trace_2d(f_2d, train_2d(adagrad_2d))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:100%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/262fba2c5d524820b5fc813fb5caade3a0a6760f520b4ea2950754db10693e2f\" alt=\"\"/>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "下面将学习率增大到2。可以看到自变量更为迅速地逼近了最优解。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "eta = 2\r\n",
    "show_trace_2d(f_2d, train_2d(adagrad_2d))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:100%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/c7857b2a9f3b4630b361363c935ff5726ed9633b2bbf47629ac4dd3f6ec0b0bc\" alt=\"\"/>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.5.3. 简洁实现\n",
    "\n",
    "使用以下API即可使用Adagrad算法进行训练模型。\n",
    "\n",
    "```\n",
    "paddle.optimizer.Adagrad()\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.5.4. 小结\n",
    "- AdaGrad算法在迭代过程中不断调整学习率，并让目标函数自变量中每个元素都分别拥有自己的学习率。\n",
    "- 使用AdaGrad算法时，自变量中每个元素的学习率在迭代过程中一直在降低（或不变）。\n",
    "\n",
    "### 7.5.5. 练习\n",
    "- 在介绍AdaGrad算法的特点时，我们提到了它可能存在的问题。你能想到什么办法来解决这个问题？\n",
    "- 在实验中尝试使用其他的初始学习率，结果有什么变化？\n",
    "### 7.5.6. 参考文献\n",
    "[1] Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul), 2121-2159."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 7.6. RMSProp算法\n",
    "我们在“AdaGrad算法”一节中提到，因为调整学习率时分母上的变量$\\boldsymbol{s}_t$一直在累加按元素平方的小批量随机梯度，所以目标函数自变量每个元素的学习率在迭代过程中一直在降低（或不变）。因此，当学习率在迭代早期降得较快且当前解依然不佳时，AdaGrad算法在迭代后期由于学习率过小，可能较难找到一个有用的解。为了解决这一问题，RMSProp算法对AdaGrad算法做了一点小小的修改。该算法源自Coursera上的一门课，即“机器学习的神经网络” [1]。\n",
    "\n",
    "### 7.6.1. 算法\n",
    "我们在“动量法”一节里介绍过指数加权移动平均。不同于AdaGrad算法里状态变量$\\boldsymbol{s}_t$是截至时间步t所有小批量随机梯度$\\boldsymbol{g}_t$按元素平方和，RMSProp算法将这些梯度按元素平方做指数加权移动平均。具体来说，给定超参数$0 \\leq \\gamma < 1$，RMSProp算法在时间步$t>0$计算\n",
    "\n",
    "$$\n",
    "\\boldsymbol{s}_t \\leftarrow \\gamma \\boldsymbol{s}_{t-1} + (1 - \\gamma) \\boldsymbol{g}_t \\odot \\boldsymbol{g}_t\n",
    "$$\n",
    "\n",
    "和AdaGrad算法一样，RMSProp算法将目标函数自变量中每个元素的学习率通过按元素运算重新调整，然后更新自变量\n",
    "\n",
    "$$\n",
    "\\boldsymbol{x}_t \\leftarrow \\boldsymbol{x}_{t-1} - \\frac{\\eta}{\\sqrt{\\boldsymbol{s}_t + \\epsilon}} \\odot \\boldsymbol{g}_t\n",
    "$$\n",
    "\n",
    "其中$\\eta$是学习率，$\\epsilon$是为了维持数值稳定性而添加的常数，如$10^{-6}$。因为RMSProp算法的状态变量$\\boldsymbol{s}_t$是对平方项$\\boldsymbol{g}_t \\odot \\boldsymbol{g}_t$的指数加权移动平均，所以可以看作最近$1/(1-\\gamma)$个时间步的小批量随机梯度平方项的加权平均。如此一来，自变量每个元素的学习率在迭代过程中就不再一直降低（或不变）。\n",
    "\n",
    "照例，让我们先观察RMSProp算法对目标函数$f(\\boldsymbol{x})=0.1x_1^2+2x_2^2$中自变量的迭代轨迹。回忆在“AdaGrad算法”一节使用的学习率为0.4的AdaGrad算法，自变量在迭代后期的移动幅度较小。但在同样的学习率下，RMSProp算法可以更快逼近最优解。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def rmsprop_2d(x1, x2, s1, s2):\r\n",
    "    g1, g2, eps = 0.2 * x1, 4 * x2, 1e-6\r\n",
    "    s1 = gamma * s1 + (1 - gamma) * g1 ** 2\r\n",
    "    s2 = gamma * s2 + (1 - gamma) * g2 ** 2\r\n",
    "    x1 -= eta / math.sqrt(s1 + eps) * g1\r\n",
    "    x2 -= eta / math.sqrt(s2 + eps) * g2\r\n",
    "    return x1, x2, s1, s2\r\n",
    "\r\n",
    "def f_2d(x1, x2):\r\n",
    "    return 0.1 * x1 ** 2 + 2 * x2 ** 2\r\n",
    "\r\n",
    "eta, gamma = 0.4, 0.9\r\n",
    "show_trace_2d(f_2d, train_2d(rmsprop_2d))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "<img style=\"display: block; margin: 0 auto;zoom:100%;\" src=\"https://ai-studio-static-online.cdn.bcebos.com/29ffd30bf537427395a86b6f0258f2844c87279d842d448c989736819c7d08e7\" alt=\"\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.6.2. 小结\n",
    "RMSProp算法和AdaGrad算法的不同在于，RMSProp算法使用了小批量随机梯度按元素平方的指数加权移动平均来调整学习率。\n",
    "\n",
    "### 7.6.3. 练习\n",
    "- 把$\\gamma$的值设为1，实验结果有什么变化？为什么？\n",
    "- 试着使用其他的初始学习率和\\gamma超参数的组合，观察并分析实验结果。\n",
    "\n",
    "### 7.6.4. 参考文献\n",
    "[1] Tieleman, T., & Hinton, G. (2012). Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 26-31."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 7.7. AdaDelta算法\n",
    "除了RMSProp算法以外，另一个常用优化算法AdaDelta算法也针对AdaGrad算法在迭代后期可能较难找到有用解的问题做了改进 [1]。有意思的是，AdaDelta算法没有学习率这一超参数。\n",
    "\n",
    "### 7.7.1. 算法\n",
    "AdaDelta算法也像RMSProp算法一样，使用了小批量随机梯度$\\boldsymbol{g}_t$按元素平方的指数加权移动平均变量$\\boldsymbol{s}_t$。在时间步0，它的所有元素被初始化为0。给定超参数0$ \\leq \\rho < 1$（对应RMSProp算法中的$\\gamma$），在时间步$t>0$，同RMSProp算法一样计算\n",
    "\n",
    "$$\n",
    "\\boldsymbol{s}_t \\leftarrow \\rho \\boldsymbol{s}_{t-1} + (1 - \\rho) \\boldsymbol{g}_t \\odot \\boldsymbol{g}_t\n",
    "$$\n",
    "\n",
    "与RMSProp算法不同的是，AdaDelta算法还维护一个额外的状态变量$\\Delta\\boldsymbol{x}_t$，其元素同样在时间步0时被初始化为0。我们使用$\\Delta\\boldsymbol{x}_{t-1}$来计算自变量的变化量：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{g}_t' \\leftarrow \\sqrt{\\frac{\\Delta\\boldsymbol{x}_{t-1} + \\epsilon}{\\boldsymbol{s}_t + \\epsilon}} \\odot \\boldsymbol{g}_t\n",
    "$$\n",
    "\n",
    "其中$\\epsilon$是为了维持数值稳定性而添加的常数，如$10^{-5}$。接着更新自变量：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{x}_t \\leftarrow \\boldsymbol{x}_{t-1} - \\boldsymbol{g}'_t\n",
    "$$\n",
    "\n",
    "最后，我们使用$\\Delta\\boldsymbol{x}_t$来记录自变量变化量$\\boldsymbol{g}'_t$按元素平方的指数加权移动平均：\n",
    "\n",
    "$$\n",
    "\\Delta\\boldsymbol{x}_t \\leftarrow \\rho \\Delta\\boldsymbol{x}_{t-1} + (1 - \\rho) \\boldsymbol{g}'_t \\odot \\boldsymbol{g}'_t\n",
    "$$\n",
    "\n",
    "可以看到，如不考虑$\\epsilon$的影响，AdaDelta算法与RMSProp算法的不同之处在于使用$\\sqrt{\\Delta\\boldsymbol{x}_{t-1}}$来替代超参数$\\eta$。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.7.2. 从零开始实现\n",
    "\n",
    "AdaDelta算法需要对每个自变量维护两个状态变量，即$\\boldsymbol{s}_t$和$\\Delta\\boldsymbol{x}_t$。我们按AdaDelta算法中的公式实现该算法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "class Adadelta(Optimizer):\r\n",
    "    def __init__(self,\r\n",
    "                 learning_rate=0.001,\r\n",
    "                 epsilon=1.0e-6,\r\n",
    "                 rho=0.95,\r\n",
    "                 parameters=None,\r\n",
    "                 weight_decay=None,\r\n",
    "                 grad_clip=None,\r\n",
    "                 name=None):\r\n",
    "        if learning_rate is None:\r\n",
    "            raise ValueError(\"learning_rate is not set.\")\r\n",
    "        if epsilon is None:\r\n",
    "            raise ValueError(\"epsilon is not set.\")\r\n",
    "        if rho is None:\r\n",
    "            raise ValueError(\"rho is not set.\")\r\n",
    "        super(Adadelta, self).__init__(\r\n",
    "            learning_rate=learning_rate,\r\n",
    "            parameters=parameters,\r\n",
    "            weight_decay=weight_decay,\r\n",
    "            grad_clip=grad_clip,\r\n",
    "            name=name)\r\n",
    "        self.type = \"adadelta\"\r\n",
    "        self._epsilon = epsilon\r\n",
    "        self._rho = rho\r\n",
    "\r\n",
    "    def _create_accumulators(self, block, parameters):\r\n",
    "        if not isinstance(block, framework.Block):\r\n",
    "            raise TypeError(\"block is not instance of framework.Block.\")\r\n",
    "\r\n",
    "        for p in parameters:\r\n",
    "            self._add_accumulator(self._avg_squared_grad_acc_str, p)\r\n",
    "            self._add_accumulator(self._avg_squared_update_acc_str, p)\r\n",
    "\r\n",
    "    def _append_optimize_op(self, block, param_and_grad):\r\n",
    "        if not isinstance(block, framework.Block):\r\n",
    "            raise TypeError(\"block is not instance of framework.Block.\")\r\n",
    "\r\n",
    "        avg_squared_grad_acc = self._get_accumulator(\r\n",
    "            self._avg_squared_grad_acc_str, param_and_grad[0])\r\n",
    "        avg_squared_update_acc = self._get_accumulator(\r\n",
    "            self._avg_squared_update_acc_str, param_and_grad[0])\r\n",
    "\r\n",
    "        # Create the adadelta optimizer op\r\n",
    "        adadelta_op = block.append_op(\r\n",
    "            type=self.type,\r\n",
    "            inputs={\r\n",
    "                \"Param\": param_and_grad[0],\r\n",
    "                \"Grad\": param_and_grad[1],\r\n",
    "                \"AvgSquaredGrad\": avg_squared_grad_acc,\r\n",
    "                \"AvgSquaredUpdate\": avg_squared_update_acc\r\n",
    "            },\r\n",
    "            outputs={\r\n",
    "                \"ParamOut\": param_and_grad[0],\r\n",
    "                \"AvgSquaredGradOut\": avg_squared_grad_acc,\r\n",
    "                \"AvgSquaredUpdateOut\": avg_squared_update_acc\r\n",
    "            },\r\n",
    "            attrs={\"epsilon\": self._epsilon,\r\n",
    "                   \"rho\": self._rho},\r\n",
    "            stop_gradient=True)\r\n",
    "\r\n",
    "        return adadelta_op"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.7.3. 简洁实现"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import paddle\r\n",
    "\r\n",
    "inp = paddle.uniform(min=-0.1, max=0.1, shape=[10, 10], dtype='float32')\r\n",
    "linear = paddle.nn.Linear(10, 10)\r\n",
    "out = linear(inp)\r\n",
    "loss = paddle.mean(out)\r\n",
    "adadelta = paddle.optimizer.Adadelta(learning_rate=0.0003, epsilon=1.0e-6, rho=0.95,\r\n",
    "        parameters=linear.parameters())\r\n",
    "out.backward()\r\n",
    "adadelta.step()\r\n",
    "adadelta.clear_grad()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.7.4. 小结\n",
    "AdaDelta算法没有学习率超参数，它通过使用有关自变量更新量平方的指数加权移动平均的项来替代RMSProp算法中的学习率。\n",
    "### 7.7.5. 练习\n",
    "调节AdaDelta算法中超参数\\rho的值，观察实验结果。\n",
    "### 7.7.6. 参考文献\n",
    "[1] Zeiler, M. D. (2012). ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## 7.8. Adam算法\n",
    "Adam算法在RMSProp算法基础上对小批量随机梯度也做了指数加权移动平均 [1]。下面我们来介绍这个算法。\n",
    "\n",
    "### 7.8.1. 算法\n",
    "Adam算法使用了动量变量$\\boldsymbol{v}_t$和RMSProp算法中小批量随机梯度按元素平方的指数加权移动平均变量$\\boldsymbol{s}_t$，并在时间步0将它们中每个元素初始化为0。给定超参数$0 \\leq \\beta_1 < 1$（算法作者建议设为0.9），时间步$t$的动量变量$\\boldsymbol{v}_t$即小批量随机梯度$\\boldsymbol{g}_t$的指数加权移动平均：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{v}_t \\leftarrow \\beta_1 \\boldsymbol{v}_{t-1} + (1 - \\beta_1) \\boldsymbol{g}_t\n",
    "$$\n",
    "\n",
    "和RMSProp算法中一样，给定超参数$0 \\leq \\beta_2 < 1$（算法作者建议设为0.999）， 将小批量随机梯度按元素平方后的项$\\boldsymbol{g}_t \\odot \\boldsymbol{g}_t$做指数加权移动平均得到$\\boldsymbol{s}_t$：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{s}_t \\leftarrow \\beta_2 \\boldsymbol{s}_{t-1} + (1 - \\beta_2) \\boldsymbol{g}_t \\odot \\boldsymbol{g}_t\n",
    "$$\n",
    "\n",
    "由于我们将$\\boldsymbol{v}_0$和$\\boldsymbol{s}_0$中的元素都初始化为0， 在时间步t我们得到$\\boldsymbol{v}_t = (1-\\beta_1) \\sum_{i=1}^t \\beta_1^{t-i} \\boldsymbol{g}_i$。将过去各时间步小批量随机梯度的权值相加，得到 $(1-\\beta_1) \\sum_{i=1}^t \\beta_1^{t-i} = 1 - \\beta_1^t$。需要注意的是，当$t$较小时，过去各时间步小批量随机梯度权值之和会较小。例如，当$\\beta_1 = 0.9$时，$\\boldsymbol{v}_1 = 0.1\\boldsymbol{g}_1$。为了消除这样的影响，对于任意时间步$t$，我们可以将$\\boldsymbol{v}_t$再除以$1 - \\beta_1^t$，从而使过去各时间步小批量随机梯度权值之和为1。这也叫作偏差修正。在Adam算法中，我们对变量$\\boldsymbol{v}_t和\\boldsymbol{s}_t$均作偏差修正：\n",
    "\n",
    "$$\n",
    "\\hat{\\boldsymbol{v}}_t \\leftarrow \\frac{\\boldsymbol{v}_t}{1 - \\beta_1^t}\n",
    "$$\n",
    "\n",
    "$$\n",
    "\\hat{\\boldsymbol{s}}_t \\leftarrow \\frac{\\boldsymbol{s}_t}{1 - \\beta_2^t}\n",
    "$$\n",
    "\n",
    "接下来，Adam算法使用以上偏差修正后的变量$\\hat{\\boldsymbol{v}}_t$和$\\hat{\\boldsymbol{s}}_t$，将模型参数中每个元素的学习率通过按元素运算重新调整：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{g}_t' \\leftarrow \\frac{\\eta \\hat{\\boldsymbol{v}}_t}{\\sqrt{\\hat{\\boldsymbol{s}}_t} + \\epsilon}\n",
    "$$\n",
    "\n",
    "其中$\\eta$是学习率，$\\epsilon$是为了维持数值稳定性而添加的常数，如$10^{-8}$。和AdaGrad算法、RMSProp算法以及AdaDelta算法一样，目标函数自变量中每个元素都分别拥有自己的学习率。最后，使用$\\boldsymbol{g}_t'$迭代自变量：\n",
    "\n",
    "$$\n",
    "\\boldsymbol{x}_t \\leftarrow \\boldsymbol{x}_{t-1} - \\boldsymbol{g}_t'\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.8.2. 从零开始实现\n",
    "\n",
    "我们按照Adam算法中的公式实现该算法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "class Adam(Optimizer):\r\n",
    "    _moment1_acc_str = \"moment1\"\r\n",
    "    _moment2_acc_str = \"moment2\"\r\n",
    "    _beta1_pow_acc_str = \"beta1_pow_acc\"\r\n",
    "    _beta2_pow_acc_str = \"beta2_pow_acc\"\r\n",
    "\r\n",
    "    def __init__(self,\r\n",
    "                 learning_rate=0.001,\r\n",
    "                 beta1=0.9,\r\n",
    "                 beta2=0.999,\r\n",
    "                 epsilon=1e-8,\r\n",
    "                 parameters=None,\r\n",
    "                 weight_decay=None,\r\n",
    "                 grad_clip=None,\r\n",
    "                 lazy_mode=False,\r\n",
    "                 multi_precision=False,\r\n",
    "                 name=None):\r\n",
    "        assert learning_rate is not None\r\n",
    "        assert beta1 is not None\r\n",
    "        assert beta2 is not None\r\n",
    "        assert epsilon is not None\r\n",
    "        if not 0 <= beta1 < 1:\r\n",
    "            raise ValueError(\"Invaild value of beta1, expect beta1 in [0,1).\")\r\n",
    "        if not 0 <= beta2 < 1:\r\n",
    "            raise ValueError(\"Invaild value of beta2, expect beta2 in [0,1).\")\r\n",
    "        if not 0 <= epsilon:\r\n",
    "            raise ValueError(\"Invaild value of epsilon, expect epsilon >= 0.\")\r\n",
    "        super(Adam, self).__init__(\r\n",
    "            learning_rate=learning_rate,\r\n",
    "            parameters=parameters,\r\n",
    "            weight_decay=weight_decay,\r\n",
    "            grad_clip=grad_clip,\r\n",
    "            name=name)\r\n",
    "        self.type = \"adam\"\r\n",
    "        self._beta1 = beta1\r\n",
    "        self._beta2 = beta2\r\n",
    "        self._epsilon = epsilon\r\n",
    "        self._lazy_mode = lazy_mode\r\n",
    "        self._multi_precision = multi_precision\r\n",
    "        self._master_weights = {}\r\n",
    "\r\n",
    "    def _create_master_weight(self, param):\r\n",
    "        assert isinstance(self.helper, LayerHelper)\r\n",
    "\r\n",
    "        var_name = param.name + \"_fp32_master\"\r\n",
    "        var_name = unique_name.generate(var_name)\r\n",
    "        var = layers.create_global_var(\r\n",
    "            name=var_name,\r\n",
    "            shape=param.shape,\r\n",
    "            value=0,\r\n",
    "            dtype='float32',\r\n",
    "            persistable=True)\r\n",
    "        block = self.helper.startup_program.global_block()\r\n",
    "        block.append_op(\r\n",
    "            type=\"cast\",\r\n",
    "            inputs={\"X\": [param]},\r\n",
    "            outputs={\"Out\": [var]},\r\n",
    "            attrs={\r\n",
    "                \"in_dtype\": param.dtype,\r\n",
    "                \"out_dtype\": core.VarDesc.VarType.FP32\r\n",
    "            })\r\n",
    "        self._master_weights[param.name] = var\r\n",
    "        return var\r\n",
    "\r\n",
    "    def _get_accumulator(self, name, param):\r\n",
    "        \"\"\"Utility function to fetch an accumulator for a parameter\r\n",
    "        Args:\r\n",
    "            name: name of the accumulator\r\n",
    "            param: parameter variable for which accumulator is to be fetched\r\n",
    "        Returns:\r\n",
    "            accumulator variable for the parameter\r\n",
    "        \"\"\"\r\n",
    "        if self._name is not None:\r\n",
    "            name = self._name + \"_\" + name\r\n",
    "        find_master = self._multi_precision and param.dtype == core.VarDesc.VarType.FP16\r\n",
    "        target_param = self._master_weights[\r\n",
    "            param.name] if find_master else param\r\n",
    "        target_name = target_param.name\r\n",
    "        if (name not in self._accumulators or\r\n",
    "                target_name not in self._accumulators[name]):\r\n",
    "            raise Exception(\"Accumulator {} does not exist for parameter {}\".\r\n",
    "                            format(name, target_name))\r\n",
    "        return self._accumulators[name][target_name]\r\n",
    "\r\n",
    "    def _add_moments_pows(self, p):\r\n",
    "        acc_dtype = p.dtype\r\n",
    "        if acc_dtype == core.VarDesc.VarType.FP16:\r\n",
    "            acc_dtype = core.VarDesc.VarType.FP32\r\n",
    "        self._add_accumulator(self._moment1_acc_str, p, dtype=acc_dtype)\r\n",
    "        self._add_accumulator(self._moment2_acc_str, p, dtype=acc_dtype)\r\n",
    "        self._add_accumulator(\r\n",
    "            name=self._beta1_pow_acc_str,\r\n",
    "            param=p,\r\n",
    "            dtype=acc_dtype,\r\n",
    "            fill_value=0.9 if isinstance(self._beta1, Variable) \\\r\n",
    "                    else self._beta1,\r\n",
    "            shape=[1],\r\n",
    "            type=core.VarDesc.VarType.LOD_TENSOR, device='cpu')\r\n",
    "        self._add_accumulator(\r\n",
    "            name=self._beta2_pow_acc_str,\r\n",
    "            param=p,\r\n",
    "            dtype=acc_dtype,\r\n",
    "            fill_value=0.999 if isinstance(self._beta2, Variable) \\\r\n",
    "                    else self._beta2,\r\n",
    "            shape=[1],\r\n",
    "            type=core.VarDesc.VarType.LOD_TENSOR, device='cpu')\r\n",
    "\r\n",
    "    def _create_accumulators(self, block, parameters):\r\n",
    "        assert isinstance(block, framework.Block)\r\n",
    "\r\n",
    "        # Create accumulator tensors for first and second moments\r\n",
    "        for p in parameters:\r\n",
    "            if self._multi_precision and p.dtype == core.VarDesc.VarType.FP16:\r\n",
    "                master_p = self._create_master_weight(p)\r\n",
    "                self._add_moments_pows(master_p)\r\n",
    "                continue\r\n",
    "            if p.dtype == core.VarDesc.VarType.FP16 and not self._multi_precision:\r\n",
    "                warnings.warn(\r\n",
    "                    \"Accumulating with FP16 in optimizer can lead to poor accuracy or slow convergence.\"\r\n",
    "                    \"Consider using multi_precision=True option of the Momentum optimizer.\"\r\n",
    "                )\r\n",
    "            self._add_moments_pows(p)\r\n",
    "\r\n",
    "    def _append_optimize_op(self, block, param_and_grad):\r\n",
    "        assert isinstance(block, framework.Block)\r\n",
    "\r\n",
    "        moment1 = self._get_accumulator(self._moment1_acc_str,\r\n",
    "                                        param_and_grad[0])\r\n",
    "        moment2 = self._get_accumulator(self._moment2_acc_str,\r\n",
    "                                        param_and_grad[0])\r\n",
    "        beta1_pow_acc = self._get_accumulator(self._beta1_pow_acc_str,\r\n",
    "                                              param_and_grad[0])\r\n",
    "        beta2_pow_acc = self._get_accumulator(self._beta2_pow_acc_str,\r\n",
    "                                              param_and_grad[0])\r\n",
    "        find_master = self._multi_precision and param_and_grad[\r\n",
    "            0].dtype == core.VarDesc.VarType.FP16\r\n",
    "        master_weight = (self._master_weights[param_and_grad[0].name]\r\n",
    "                         if find_master else None)\r\n",
    "        lr = self._create_param_lr(param_and_grad)\r\n",
    "        # create the adam optimize op\r\n",
    "\r\n",
    "        if framework.in_dygraph_mode():\r\n",
    "            _beta1 = self._beta1 if not isinstance(\r\n",
    "                self._beta1, Variable) else self._beta1.numpy().item(0)\r\n",
    "            _beta2 = self._beta2 if not isinstance(\r\n",
    "                self._beta2, Variable) else self._beta2.numpy().item(0)\r\n",
    "            _, _, _, _, _ = core.ops.adam(\r\n",
    "                param_and_grad[0], param_and_grad[1], lr, moment1, moment2,\r\n",
    "                beta1_pow_acc, beta2_pow_acc, param_and_grad[0], moment1,\r\n",
    "                moment2, beta1_pow_acc, beta2_pow_acc, 'epsilon', self._epsilon,\r\n",
    "                'lazy_mode', self._lazy_mode, 'min_row_size_to_use_multithread',\r\n",
    "                1000, 'beta1', _beta1, 'beta2', _beta2)\r\n",
    "\r\n",
    "            return None\r\n",
    "\r\n",
    "        inputs = {\r\n",
    "            \"Param\": [param_and_grad[0]],\r\n",
    "            \"Grad\": [param_and_grad[1]],\r\n",
    "            \"LearningRate\": [lr],\r\n",
    "            \"Moment1\": [moment1],\r\n",
    "            \"Moment2\": [moment2],\r\n",
    "            \"Beta1Pow\": [beta1_pow_acc],\r\n",
    "            \"Beta2Pow\": [beta2_pow_acc]\r\n",
    "        }\r\n",
    "        outputs = {\r\n",
    "            \"ParamOut\": [param_and_grad[0]],\r\n",
    "            \"Moment1Out\": [moment1],\r\n",
    "            \"Moment2Out\": [moment2],\r\n",
    "            \"Beta1PowOut\": [beta1_pow_acc],\r\n",
    "            \"Beta2PowOut\": [beta2_pow_acc],\r\n",
    "        }\r\n",
    "        attrs = {\r\n",
    "            \"epsilon\": self._epsilon,\r\n",
    "            \"lazy_mode\": self._lazy_mode,\r\n",
    "            \"min_row_size_to_use_multithread\": 1000,\r\n",
    "            \"multi_precision\": find_master\r\n",
    "        }\r\n",
    "\r\n",
    "        if isinstance(self._beta1, Variable):\r\n",
    "            inputs['Beta1Tensor'] = self._beta1\r\n",
    "        else:\r\n",
    "            attrs['beta1'] = self._beta1\r\n",
    "        if isinstance(self._beta2, Variable):\r\n",
    "            inputs['Beta2Tensor'] = self._beta2\r\n",
    "        else:\r\n",
    "            attrs['beta2'] = self._beta2\r\n",
    "\r\n",
    "        if find_master:\r\n",
    "            inputs[\"MasterParam\"] = master_weight\r\n",
    "            outputs[\"MasterParamOut\"] = master_weight\r\n",
    "\r\n",
    "        adam_op = block.append_op(\r\n",
    "            type=self.type,\r\n",
    "            inputs=inputs,\r\n",
    "            outputs=outputs,\r\n",
    "            attrs=attrs,\r\n",
    "            stop_gradient=True)\r\n",
    "\r\n",
    "        return adam_op\r\n",
    "\r\n",
    "    @imperative_base.no_grad\r\n",
    "    @framework.dygraph_only\r\n",
    "    def step(self):\r\n",
    "        \"\"\"\r\n",
    "        Execute the optimizer and update parameters once.\r\n",
    "        Returns:\r\n",
    "            None\r\n",
    "        Examples:\r\n",
    "            .. code-block:: python\r\n",
    "                import paddle\r\n",
    "                \r\n",
    "                a = paddle.rand([2,13], dtype=\"float32\")\r\n",
    "                linear = paddle.nn.Linear(13, 5)\r\n",
    "                # This can be any optimizer supported by dygraph.\r\n",
    "                adam = paddle.optimizer.Adam(learning_rate = 0.01,\r\n",
    "                                            parameters = linear.parameters())\r\n",
    "                out = linear(a)\r\n",
    "                out.backward()\r\n",
    "                adam.step()\r\n",
    "                adam.clear_grad()\r\n",
    "        \"\"\"\r\n",
    "        self._dtype = None\r\n",
    "        params_grads = []\r\n",
    "        for param in self._parameter_list:\r\n",
    "            if not param.trainable:\r\n",
    "                continue\r\n",
    "            if param._grad_ivar() is not None:\r\n",
    "                grad_var = param._grad_ivar()\r\n",
    "                if hasattr(grad_var, \"_is_sparse\") and grad_var._is_sparse(\r\n",
    "                ) and self.regularization is not None:\r\n",
    "                    raise RuntimeError(\r\n",
    "                        \"Adam don't support weight_decay with sparse parameters, please set it to None.\"\r\n",
    "                    )\r\n",
    "                params_grads.append((param, grad_var))\r\n",
    "\r\n",
    "        optimize_ops = self._apply_optimize(\r\n",
    "            loss=None, startup_program=None, params_grads=params_grads)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.8.3. 简洁实现"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import paddle\r\n",
    "import numpy as np\r\n",
    "\r\n",
    "inp = np.random.uniform(-0.1, 0.1, [10, 10]).astype(\"float32\")\r\n",
    "linear = paddle.nn.Linear(10, 10)\r\n",
    "inp = paddle.to_tensor(inp)\r\n",
    "out = linear(inp)\r\n",
    "loss = paddle.mean(out)\r\n",
    "adam = paddle.optimizer.Adam(learning_rate=0.1,\r\n",
    "        parameters=linear.parameters())\r\n",
    "out.backward()\r\n",
    "adam.step()\r\n",
    "adam.clear_grad()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7.8.4. 小结\n",
    "Adam算法在RMSProp算法的基础上对小批量随机梯度也做了指数加权移动平均。\n",
    "Adam算法使用了偏差修正。\n",
    "\n",
    "### 7.8.5. 练习\n",
    "- 调节学习率，观察并分析实验结果。\n",
    "- 有人说Adam算法是RMSProp算法与动量法的结合。想一想，这是为什么？\n",
    "\n",
    "###  7.8.6. 参考文献\n",
    "[1] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PaddlePaddle 2.0.0b0 (Python 3.5)",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
