{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Momentum\n",
    ":label:`sec_momentum`\n",
    "\n",
    "In :numref:`sec_sgd` we reviewed what happens when performing stochastic gradient descent, i.e., when performing optimization where only a noisy variant of the gradient is available.\n",
    "With noisy gradients we need to be extra careful when choosing the learning rate.\n",
    "If we decrease it too rapidly, convergence stalls.\n",
    "If we are too lenient, we may fail to converge to a good solution because noise keeps driving us away from optimality.\n",
    "\n",
    "## Basics\n",
    "\n",
    "In this section, we will explore more effective optimization algorithms, especially for certain types of optimization problems that are common in practice.\n",
    "\n",
    "### Leaky Averages\n",
    "\n",
    "The previous section discussed minibatch stochastic gradient descent as a means of accelerating computation.\n",
    "It also had the nice side effect that averaging gradients reduces the variance.\n",
    "Minibatch stochastic gradient descent can be calculated by:\n",
    "\n",
    "$$\\mathbf{g}_{t, t-1} = \\partial_{\\mathbf{w}} \\frac{1}{|\\mathcal{B}_t|} \\sum_{i \\in \\mathcal{B}_t} f(\\mathbf{x}_{i}, \\mathbf{w}_{t-1}) = \\frac{1}{|\\mathcal{B}_t|} \\sum_{i \\in \\mathcal{B}_t} \\mathbf{h}_{i, t-1}.\n",
    "$$\n",
    "\n",
    "To keep the notation simple, here we use $\\mathbf{h}_{i, t-1} = \\partial_{\\mathbf{w}} f(\\mathbf{x}_i, \\mathbf{w}_{t-1})$ for the stochastic gradient of sample $i$, computed with the weights updated at time $t-1$.\n",
    "It would be nice if we could benefit from the variance-reducing effect even beyond averaging gradients over a minibatch.\n",
    "One option to accomplish this task is to replace the gradient computation by a *leaky average*:\n",
    "\n",
    "$$\\mathbf{v}_t = \\beta \\mathbf{v}_{t-1} + \\mathbf{g}_{t, t-1}$$\n",
    "\n",
    "for some $\\beta \\in (0, 1)$.\n",
    "This effectively replaces the instantaneous gradient by one that has been averaged over multiple *past* gradients.\n",
    "$\\mathbf{v}$ is called *momentum*;\n",
    "it accumulates past gradients.\n",
    "To see this in more detail, let us expand $\\mathbf{v}_t$ recursively into\n",
    "\n",
    "$$\\begin{aligned}\n",
    "\\mathbf{v}_t = \\beta^2 \\mathbf{v}_{t-2} + \\beta \\mathbf{g}_{t-1, t-2} + \\mathbf{g}_{t, t-1}\n",
    "= \\ldots = \\sum_{\\tau = 0}^{t-1} \\beta^{\\tau} \\mathbf{g}_{t-\\tau, t-\\tau-1}.\n",
    "\\end{aligned}$$\n",
    "\n",
    "Large $\\beta$ amounts to a long-range average, whereas small $\\beta$ amounts to only a slight correction relative to a plain gradient method.\n",
    "The new gradient replacement no longer points in the direction of steepest descent on a particular instance, but rather in the direction of a weighted average of past gradients.\n",
    "This lets us realize most of the benefits of averaging over a batch without the cost of actually computing the gradients on it.\n",
    "\n",
    "The above reasoning forms the basis for what are now known as *accelerated* gradient methods, such as gradients with momentum.\n",
    "They enjoy the additional benefit of being much more effective in cases where the optimization problem is ill-conditioned (i.e., where there are some directions in which progress is much slower than in others, resembling a narrow canyon).\n",
    "Furthermore, they allow us to average over subsequent gradients to obtain more stable directions of descent.\n",
    "Indeed, this aspect of acceleration, even for noise-free convex problems, is one of the key reasons why momentum works so well.\n",
    "\n",
    "As one would expect, due to its efficacy momentum is a well-studied subject in optimization for deep learning and beyond.\n",
    "See, e.g., the [expository article](https://distill.pub/2017/momentum/) by :cite:`Goh.2017` for an in-depth analysis and interactive animations.\n",
    "Momentum was proposed by :cite:`Polyak.1964`.\n",
    ":cite:`Nesterov.2018` has a detailed theoretical discussion in the context of convex optimization.\n",
    "Momentum in deep learning has been known to be beneficial for a long time.\n",
    "See, e.g., the discussion by :cite:`Sutskever.Martens.Dahl.ea.2013` for details.\n",
    "\n",
    "### An Ill-conditioned Problem\n",
    "\n",
    "To get a better understanding of the geometric properties of the momentum method, let us revisit gradient descent.\n",
    "Recall that in :numref:`sec_gd` we used $f(\\mathbf{x}) = x_1^2 + 2 x_2^2$, i.e., a moderately distorted ellipsoid objective.\n",
    "We distort this function further by stretching it out in the $x_1$ direction via\n",
    "\n",
    "$$f(\\mathbf{x}) = 0.1 x_1^2 + 2 x_2^2.$$\n",
    "\n",
    "As before, $f$ has its minimum at $(0, 0)$.\n",
    "This function is *very* flat in the direction of $x_1$.\n",
    "Let us see what happens when we perform gradient descent on this new function.\n"
   ]
  },
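  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the expansion above, the leaky average can be unrolled in a few lines of plain Java (the scalar gradient values here are made up for illustration): the recursive form $\\mathbf{v}_t = \\beta \\mathbf{v}_{t-1} + \\mathbf{g}_{t, t-1}$ and the closed-form sum $\\sum_{\\tau=0}^{t-1} \\beta^{\\tau} \\mathbf{g}_{t-\\tau, t-\\tau-1}$ agree exactly.\n",
    "\n",
    "```java\n",
    "// Hypothetical scalar gradients g_1, ..., g_5; beta is the leaky-average weight.\n",
    "double beta = 0.9;\n",
    "double[] g = {1.0, -0.5, 2.0, 0.25, -1.0};\n",
    "\n",
    "// Recursive form: v_t = beta * v_{t-1} + g_t, starting from v_0 = 0.\n",
    "double v = 0.0;\n",
    "for (double grad : g) {\n",
    "    v = beta * v + grad;\n",
    "}\n",
    "\n",
    "// Closed form: v_t = sum over tau of beta^tau * g_{t - tau}.\n",
    "double vClosed = 0.0;\n",
    "for (int tau = 0; tau < g.length; tau++) {\n",
    "    vClosed += Math.pow(beta, tau) * g[g.length - 1 - tau];\n",
    "}\n",
    "\n",
    "System.out.println(Math.abs(v - vClosed) < 1e-12);  // true\n",
    "```\n"
   ]
  },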
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%load ../utils/djl-imports\n",
    "%load ../utils/plot-utils\n",
    "%load ../utils/Functions.java\n",
    "%load ../utils/GradDescUtils.java\n",
    "%load ../utils/Accumulator.java\n",
    "%load ../utils/StopWatch.java\n",
    "%load ../utils/Training.java\n",
    "%load ../utils/TrainingChapter11.java"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import org.apache.commons.lang3.ArrayUtils;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "float eta = 0.4f;\n",
    "BiFunction<Float, Float, Float> f2d = (x1, x2) -> 0.1f * x1 * x1 + 2 * x2 * x2;\n",
    "\n",
    "Function<Float[], Float[]> gd2d = (state) -> {\n",
    "    Float x1 = state[0], x2 = state[1], s1 = state[2], s2 = state[3];\n",
    "    return new Float[]{x1 - eta * 0.2f * x1, x2 - eta * 4 * x2, 0f, 0f};\n",
    "};\n",
    "\n",
    "GradDescUtils.showTrace2d(f2d, GradDescUtils.train2d(gd2d, 20));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Ellipsoid with Flat x1.](https://d2l-java-resources.s3.amazonaws.com/img/gd_flatx1.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By construction, the gradient in the $x_2$ direction is much larger and changes much more rapidly than in the horizontal $x_1$ direction.\n",
    "Thus we are stuck between two undesirable choices: if we pick a small learning rate, the solution does not diverge in the $x_2$ direction, but we are saddled with slow convergence in the $x_1$ direction. Conversely, with a large learning rate we progress rapidly in the $x_1$ direction but diverge in $x_2$.\n",
    "The example below illustrates what happens even after a slight increase in learning rate from $0.4$ to $0.6$.\n",
    "Convergence in the $x_1$ direction improves, but the overall solution quality is much worse.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "float eta = 0.6f;\n",
    "GradDescUtils.showTrace2d(f2d, GradDescUtils.train2d(gd2d, 20));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Ellipsoid with Flat x1 with Large Learning Rate.](https://d2l-java-resources.s3.amazonaws.com/img/gd_flatx1_large_lr.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The Momentum Method\n",
    "\n",
    "The *momentum method* allows us to solve the gradient descent problem described above.\n",
    "Looking at the optimization trace above, we might intuit that averaging gradients over the past would work well.\n",
    "After all, in the $x_1$ direction this aggregates well-aligned gradients, thus increasing the distance we cover with every step.\n",
    "Conversely, in the $x_2$ direction, where gradients oscillate, an aggregate gradient reduces the step size because the oscillations cancel each other out.\n",
    "Using $\\mathbf{v}_t$ instead of the gradient $\\mathbf{g}_t$ yields the following update equations:\n",
    "\n",
    "$$\n",
    "\\begin{aligned}\n",
    "\\mathbf{v}_t &\\leftarrow \\beta \\mathbf{v}_{t-1} + \\mathbf{g}_{t, t-1}, \\\\\n",
    "\\mathbf{x}_t &\\leftarrow \\mathbf{x}_{t-1} - \\eta_t \\mathbf{v}_t.\n",
    "\\end{aligned}\n",
    "$$\n",
    "\n",
    "Note that for $\\beta = 0$ we recover regular gradient descent.\n",
    "Before delving deeper into the mathematical properties, let us have a quick look at how the algorithm behaves in practice.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "float eta = 0.6f;\n",
    "float beta = 0.5f;\n",
    "\n",
    "Function<Float[], Float[]> momentum2d = (state) -> {\n",
    "    Float x1 = state[0], x2 = state[1], v1 = state[2], v2 = state[3];\n",
    "    v1 = beta * v1 + 0.2f * x1;\n",
    "    v2 = beta * v2 + 4 * x2;\n",
    "    return new Float[]{x1 - eta * v1, x2 - eta * v2, v1, v2};\n",
    "};\n",
    "\n",
    "GradDescUtils.showTrace2d(f2d, GradDescUtils.train2d(momentum2d, 20));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Contour Momentum.](https://d2l-java-resources.s3.amazonaws.com/img/contour_gd_mom.svg)"
   ]
  },
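  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same experiment can be sketched in a few lines of plain Java, without the plotting utilities. This assumes the starting point $(-5, -2)$ conventionally used by `train2d`; after 20 momentum steps both coordinates end up close to the minimum at $(0, 0)$.\n",
    "\n",
    "```java\n",
    "// Momentum on f(x) = 0.1 * x1^2 + 2 * x2^2 with eta = 0.6, beta = 0.5.\n",
    "double eta = 0.6, beta = 0.5;\n",
    "double x1 = -5, x2 = -2, v1 = 0, v2 = 0;\n",
    "for (int t = 0; t < 20; t++) {\n",
    "    v1 = beta * v1 + 0.2 * x1;  // gradient of 0.1 * x1^2 is 0.2 * x1\n",
    "    v2 = beta * v2 + 4.0 * x2;  // gradient of 2 * x2^2 is 4 * x2\n",
    "    x1 -= eta * v1;\n",
    "    x2 -= eta * v2;\n",
    "}\n",
    "System.out.println(Math.abs(x1) < 0.5 && Math.abs(x2) < 0.5);  // true\n",
    "```\n"
   ]
  },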
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, even with the same learning rate as before, momentum still converges well.\n",
    "Let us see what happens when we decrease the momentum parameter.\n",
    "Halving it to $\\beta = 0.25$ leads to a trajectory that barely converges at all.\n",
    "Nonetheless, it is a lot better than without momentum, where the solution diverges.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "eta = 0.6f;\n",
    "beta = 0.25f;\n",
    "GradDescUtils.showTrace2d(f2d, GradDescUtils.train2d(momentum2d, 20));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Contour Momentum Less.](https://d2l-java-resources.s3.amazonaws.com/img/contour_gd_mom_less.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that we can combine the momentum method with stochastic gradient descent and, in particular, minibatch stochastic gradient descent.\n",
    "The only change is that in that case we replace the gradients $\\mathbf{g}_{t, t-1}$ with $\\mathbf{g}_t$.\n",
    "For convenience we initialize $\\mathbf{v}_0 = 0$ at time $t=0$.\n",
    "\n",
    "### Effective Sample Weight\n",
    "\n",
    "Recall that $\\mathbf{v}_t = \\sum_{\\tau = 0}^{t-1} \\beta^{\\tau} \\mathbf{g}_{t-\\tau, t-\\tau-1}$.\n",
    "In the limit the terms add up to $\\sum_{\\tau=0}^\\infty \\beta^\\tau = \\frac{1}{1-\\beta}$.\n",
    "In other words, rather than taking a step of size $\\eta$, we effectively take a step of size $\\frac{\\eta}{1-\\beta}$ while, at the same time, dealing with a potentially much better behaved descent direction.\n",
    "To illustrate how the weighting behaves for different choices of $\\beta$, consider the diagram below.\n"
   ]
  },
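  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick numerical check of this limit (a sketch in plain Java): the partial sums of $\\beta^\\tau$ approach $\\frac{1}{1-\\beta}$, so $\\beta = 0.9$ corresponds to aggregating roughly 10 past gradients.\n",
    "\n",
    "```java\n",
    "// Partial sum of the geometric series 1 + beta + beta^2 + ...\n",
    "double beta = 0.9;\n",
    "double partial = 0.0;\n",
    "for (int tau = 0; tau < 200; tau++) {\n",
    "    partial += Math.pow(beta, tau);\n",
    "}\n",
    "System.out.println(Math.abs(partial - 1.0 / (1.0 - beta)) < 1e-6);  // true\n",
    "```\n"
   ]
  },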
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "/* Saved in GradDescUtils.java */\n",
    "public static Figure plotGammas(float[] time, float[] gammas,\n",
    "                              int width, int height) {\n",
    "    double[] gamma1 = new double[time.length];\n",
    "    double[] gamma2 = new double[time.length];\n",
    "    double[] gamma3 = new double[time.length];\n",
    "    double[] gamma4 = new double[time.length];\n",
    "\n",
    "    // Calculate all gammas over time\n",
    "    for (int i = 0; i < time.length; i++) {\n",
    "        gamma1[i] = Math.pow(gammas[0], i);\n",
    "        gamma2[i] = Math.pow(gammas[1], i);\n",
    "        gamma3[i] = Math.pow(gammas[2], i);\n",
    "        gamma4[i] = Math.pow(gammas[3], i);\n",
    "    }\n",
    "\n",
    "    // Gamma 1 Line\n",
    "    ScatterTrace gamma1trace = ScatterTrace.builder(Functions.floatToDoubleArray(time),\n",
    "            gamma1)\n",
    "            .mode(ScatterTrace.Mode.LINE)\n",
    "            .name(String.format(\"gamma = %.2f\", gammas[0]))\n",
    "            .build();\n",
    "\n",
    "    // Gamma 2 Line\n",
    "    ScatterTrace gamma2trace = ScatterTrace.builder(Functions.floatToDoubleArray(time),\n",
    "            gamma2)\n",
    "            .mode(ScatterTrace.Mode.LINE)\n",
    "            .name(String.format(\"gamma = %.2f\", gammas[1]))\n",
    "            .build();\n",
    "\n",
    "    // Gamma 3 Line\n",
    "    ScatterTrace gamma3trace = ScatterTrace.builder(Functions.floatToDoubleArray(time),\n",
    "            gamma3)\n",
    "            .mode(ScatterTrace.Mode.LINE)\n",
    "            .name(String.format(\"gamma = %.2f\", gammas[2]))\n",
    "            .build();\n",
    "\n",
    "    // Gamma 4 Line\n",
    "    ScatterTrace gamma4trace = ScatterTrace.builder(Functions.floatToDoubleArray(time),\n",
    "            gamma4)\n",
    "            .mode(ScatterTrace.Mode.LINE)\n",
    "            .name(String.format(\"gamma = %.2f\", gammas[3]))\n",
    "            .build();\n",
    "\n",
    "    Axis xAxis = Axis.builder()\n",
    "            .title(\"time\")\n",
    "            .build();\n",
    "\n",
    "    Layout layout = Layout.builder()\n",
    "            .height(height)\n",
    "            .width(width)\n",
    "            .xAxis(xAxis)\n",
    "            .build();\n",
    "\n",
    "    return new Figure(layout, gamma1trace, gamma2trace, gamma3trace, gamma4trace);\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "NDManager manager = NDManager.newBaseManager();\n",
    "\n",
    "float[] gammas = new float[]{0.95f, 0.9f, 0.6f, 0f};\n",
    "\n",
    "NDArray timesND = manager.arange(40f);\n",
    "float[] times = timesND.toFloatArray();\n",
    "\n",
    "plotGammas(times, gammas, 600, 400)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Practical Experiments\n",
    "\n",
    "Let us see how momentum works in practice.\n",
    "For this we need a somewhat more scalable implementation.\n",
    "\n",
    "### Implementation from Scratch\n",
    "\n",
    "Compared with minibatch stochastic gradient descent, the momentum method needs to maintain a set of auxiliary variables, i.e., velocities.\n",
    "They have the same shape as the gradients and the variables of the optimization problem.\n",
    "In the implementation below we call these variables `states`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "NDList initMomentumStates(int featureDim) {\n",
    "    NDManager manager = NDManager.newBaseManager();\n",
    "    NDArray vW = manager.zeros(new Shape(featureDim, 1));\n",
    "    NDArray vB = manager.zeros(new Shape(1));\n",
    "    return new NDList(vW, vB);\n",
    "}\n",
    "\n",
    "public class Optimization {\n",
    "    public static void sgdMomentum(NDList params, NDList states, Map<String, Float> hyperparams) {\n",
    "        for (int i = 0; i < params.size(); i++) {\n",
    "            NDArray param = params.get(i);\n",
    "            NDArray velocity = states.get(i);\n",
    "            // Update param\n",
    "            velocity.muli(hyperparams.get(\"momentum\")).addi(param.getGradient());\n",
    "            param.subi(velocity.mul(hyperparams.get(\"lr\")));\n",
    "        }\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let us see how this works in practice.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "AirfoilRandomAccess airfoil = TrainingChapter11.getDataCh11(10, 1500);\n",
    "\n",
    "public TrainingChapter11.LossTime trainMomentum(float lr, float momentum, int numEpochs) \n",
    "    throws IOException, TranslateException {\n",
    "    int featureDim = airfoil.getColumnNames().size();\n",
    "    Map<String, Float> hyperparams = new HashMap<>();\n",
    "    hyperparams.put(\"lr\", lr);\n",
    "    hyperparams.put(\"momentum\", momentum);\n",
    "    return TrainingChapter11.trainCh11(Optimization::sgdMomentum, initMomentumStates(featureDim), hyperparams, airfoil, featureDim, numEpochs);\n",
    "}\n",
    "\n",
    "trainMomentum(0.02f, 0.5f, 2);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When we increase the momentum hyperparameter `momentum` to 0.9, it amounts to an effective sample size of $\\frac{1}{1 - 0.9} = 10$.\n",
    "We reduce the learning rate slightly to $0.01$ to keep matters under control.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "trainMomentum(0.01f, 0.9f, 2);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Reducing the learning rate further addresses any issue of non-smooth optimization problems. Setting it to $0.005$ yields good convergence properties.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "trainMomentum(0.005f, 0.9f, 2);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Concise Implementation\n",
    "\n",
    "Since the optimizers built into deep learning frameworks have long included momentum, setting matching parameters yields a very similar trajectory.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Tracker lrt = Tracker.fixed(0.005f);\n",
    "Optimizer sgd = Optimizer.sgd().setLearningRateTracker(lrt).optMomentum(0.9f).build();\n",
    "\n",
    "TrainingChapter11.trainConciseCh11(sgd, airfoil, 2);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Theoretical Analysis\n",
    "\n",
    "So far the 2D example of $f(x) = 0.1 x_1^2 + 2 x_2^2$ might seem rather contrived.\n",
    "We will now see that it is actually quite representative of the problems one encounters in practice, at least in the case of minimizing convex quadratic objective functions.\n",
    "\n",
    "### Quadratic Convex Functions\n",
    "\n",
    "Consider the function\n",
    "\n",
    "$$h(\\mathbf{x}) = \\frac{1}{2} \\mathbf{x}^\\top \\mathbf{Q} \\mathbf{x} + \\mathbf{x}^\\top \\mathbf{c} + b.$$\n",
    "\n",
    "This is a general quadratic function.\n",
    "For positive definite matrices $\\mathbf{Q} \\succ 0$, i.e., for matrices with positive eigenvalues, this has a minimizer at $\\mathbf{x}^* = -\\mathbf{Q}^{-1} \\mathbf{c}$ with minimum value $b - \\frac{1}{2} \\mathbf{c}^\\top \\mathbf{Q}^{-1} \\mathbf{c}$.\n",
    "Hence we can rewrite $h$ as\n",
    "\n",
    "$$h(\\mathbf{x}) = \\frac{1}{2} (\\mathbf{x} - \\mathbf{Q}^{-1} \\mathbf{c})^\\top \\mathbf{Q} (\\mathbf{x} - \\mathbf{Q}^{-1} \\mathbf{c}) + b - \\frac{1}{2} \\mathbf{c}^\\top \\mathbf{Q}^{-1} \\mathbf{c}.$$\n",
    "\n",
    "The gradient is given by $\\partial_{\\mathbf{x}} h(\\mathbf{x}) = \\mathbf{Q} (\\mathbf{x} - \\mathbf{Q}^{-1} \\mathbf{c})$.\n",
    "That is, it is given by the distance between $\\mathbf{x}$ and the minimizer, multiplied by $\\mathbf{Q}$.\n",
    "Consequently the momentum is also a linear combination of terms $\\mathbf{Q} (\\mathbf{x}_t - \\mathbf{Q}^{-1} \\mathbf{c})$.\n",
    "\n",
    "Since $\\mathbf{Q}$ is positive definite, it can be decomposed into its eigensystem via $\\mathbf{Q} = \\mathbf{O}^\\top \\boldsymbol{\\Lambda} \\mathbf{O}$ for an orthogonal (rotation) matrix $\\mathbf{O}$ and a diagonal matrix $\\boldsymbol{\\Lambda}$ of positive eigenvalues.\n",
    "This allows us to change variables from $\\mathbf{x}$ to $\\mathbf{z} := \\mathbf{O} (\\mathbf{x} - \\mathbf{Q}^{-1} \\mathbf{c})$ to obtain a much simplified expression:\n",
    "\n",
    "$$h(\\mathbf{z}) = \\frac{1}{2} \\mathbf{z}^\\top \\boldsymbol{\\Lambda} \\mathbf{z} + b'.$$\n",
    "\n",
    "Here $b' = b - \\frac{1}{2} \\mathbf{c}^\\top \\mathbf{Q}^{-1} \\mathbf{c}$.\n",
    "Since $\\mathbf{O}$ is only an orthogonal matrix, it does not perturb the gradients in a meaningful way.\n",
    "Expressed in terms of $\\mathbf{z}$, gradient descent becomes\n",
    "\n",
    "$$\\mathbf{z}_t = \\mathbf{z}_{t-1} - \\eta \\boldsymbol{\\Lambda} \\mathbf{z}_{t-1} = (\\mathbf{I} - \\eta \\boldsymbol{\\Lambda}) \\mathbf{z}_{t-1}.$$\n",
    "\n",
    "The important fact in this expression is that gradient descent *does not mix* between different eigenspaces.\n",
    "That is, when expressed in terms of the eigensystem of $\\mathbf{Q}$, the optimization problem proceeds in a coordinate-wise manner.\n",
    "This also holds for momentum:\n",
    "\n",
    "$$\\begin{aligned}\n",
    "\\mathbf{v}_t & = \\beta \\mathbf{v}_{t-1} + \\boldsymbol{\\Lambda} \\mathbf{z}_{t-1} \\\\\n",
    "\\mathbf{z}_t & = \\mathbf{z}_{t-1} - \\eta \\left(\\beta \\mathbf{v}_{t-1} + \\boldsymbol{\\Lambda} \\mathbf{z}_{t-1}\\right) \\\\\n",
    "    & = (\\mathbf{I} - \\eta \\boldsymbol{\\Lambda}) \\mathbf{z}_{t-1} - \\eta \\beta \\mathbf{v}_{t-1}.\n",
    "\\end{aligned}$$\n",
    "\n",
    "In doing so we just proved the following theorem: gradient descent with and without momentum for a convex quadratic function decomposes into coordinate-wise optimization in the directions of the eigenvectors of the quadratic matrix.\n",
    "\n",
    "### Scalar Functions\n",
    "\n",
    "Given the above result, let us see what happens when we minimize the function $f(x) = \\frac{\\lambda}{2} x^2$.\n",
    "For gradient descent we have\n",
    "\n",
    "$$x_{t+1} = x_t - \\eta \\lambda x_t = (1 - \\eta \\lambda) x_t.$$\n",
    "\n",
    "Whenever $|1 - \\eta \\lambda| < 1$, this optimization converges at an exponential rate, since after $t$ steps we have $x_t = (1 - \\eta \\lambda)^t x_0$.\n",
    "This shows how the rate of convergence improves initially as we increase the learning rate $\\eta$ until $\\eta \\lambda = 1$.\n",
    "Beyond that value the iterates begin to oscillate, and for $\\eta \\lambda > 2$ the optimization problem diverges.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "float[] lambdas = new float[]{0.1f, 1f, 10f, 19f};\n",
    "float eta = 0.1f;\n",
    "\n",
    "float[] time = new float[0];\n",
    "float[] convergence = new float[0];\n",
    "String[] lambda = new String[0]; \n",
    "for (float lam : lambdas) {\n",
    "    float[] timeTemp = new float[20];\n",
    "    float[] convergenceTemp = new float[20];\n",
    "    String[] lambdaTemp = new String[20];\n",
    "    for (int i = 0; i < timeTemp.length; i++) {\n",
    "        timeTemp[i] = i;\n",
    "        convergenceTemp[i] = (float) Math.pow(1 - eta * lam, i);\n",
    "        lambdaTemp[i] = String.format(\"lambda = %.2f\", lam);\n",
    "    }\n",
    "    time = ArrayUtils.addAll(time, timeTemp);\n",
    "    convergence = ArrayUtils.addAll(convergence, convergenceTemp);\n",
    "    lambda = ArrayUtils.addAll(lambda, lambdaTemp);\n",
    "}\n",
    "\n",
    "Table data = Table.create(\"data\")\n",
    "    .addColumns(\n",
    "        DoubleColumn.create(\"time\", Functions.floatToDoubleArray(time)),\n",
    "        DoubleColumn.create(\"convergence\", Functions.floatToDoubleArray(convergence)),\n",
    "        StringColumn.create(\"lambda\", lambda)\n",
    ");\n",
    "\n",
    "LinePlot.create(\"convergence vs. time\", data, \"time\", \"convergence\", \"lambda\");"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To analyze convergence in the case of momentum, we begin by rewriting the update equations in terms of two scalars: one for $x$ and one for the momentum $v$. This yields:\n",
    "\n",
    "$$\n",
    "\\begin{bmatrix} v_{t+1} \\\\ x_{t+1} \\end{bmatrix} =\n",
    "\\begin{bmatrix} \\beta & \\lambda \\\\ -\\eta \\beta & (1 - \\eta \\lambda) \\end{bmatrix}\n",
    "\\begin{bmatrix} v_{t} \\\\ x_{t} \\end{bmatrix} = \\mathbf{R}(\\beta, \\eta, \\lambda) \\begin{bmatrix} v_{t} \\\\ x_{t} \\end{bmatrix}.\n",
    "$$\n",
    "\n",
    "We use $\\mathbf{R}$ to denote the $2 \\times 2$ matrix governing the convergence behavior.\n",
    "After $t$ steps the initial choice $[v_0, x_0]$ becomes $\\mathbf{R}(\\beta, \\eta, \\lambda)^t [v_0, x_0]$.\n",
    "Hence, the eigenvalues of $\\mathbf{R}$ determine the speed of convergence.\n",
    "See the [article](https://distill.pub/2017/momentum/) of :cite:`Goh.2017` for a great animation.\n",
    "See :cite:`Flammarion.Bach.2015` for a detailed analysis.\n",
    "In short, momentum converges whenever $0 < \\eta \\lambda < 2 + 2 \\beta$.\n",
    "This is a larger range of feasible parameters compared with $0 < \\eta \\lambda < 2$ for gradient descent.\n",
    "It also suggests that, in general, large values of $\\beta$ are desirable.\n",
    "\n",
    "## Summary\n",
    "\n",
    "* Momentum replaces gradients with a leaky average over past gradients. This accelerates convergence significantly.\n",
    "* It is desirable both for noise-free gradient descent and for noisy stochastic gradient descent.\n",
    "* Momentum prevents the stalling of the optimization process that is much more likely to occur for stochastic gradient descent.\n",
    "* The effective number of gradients is given by $\\frac{1}{1-\\beta}$ due to exponential downweighting of past data.\n",
    "* In the case of convex quadratic problems this can be analyzed explicitly in detail.\n",
    "* Implementation is quite straightforward, but it requires us to store an additional state vector (the momentum $\\mathbf{v}$).\n",
    "\n",
    "## Exercises\n",
    "\n",
    "1. Use other combinations of momentum hyperparameters and learning rates, and observe and analyze the different experimental results.\n",
    "1. Try out gradient descent and momentum for a quadratic problem with multiple eigenvalues, i.e., $f(x) = \\frac{1}{2} \\sum_i \\lambda_i x_i^2$, e.g., $\\lambda_i = 2^{-i}$. Plot how the values of $x$ decrease for the initialization $x_i = 1$.\n",
    "1. Derive the minimum value and minimizer for $h(\\mathbf{x}) = \\frac{1}{2} \\mathbf{x}^\\top \\mathbf{Q} \\mathbf{x} + \\mathbf{x}^\\top \\mathbf{c} + b$.\n",
    "1. What changes when we perform stochastic gradient descent with momentum? What happens when we use minibatch stochastic gradient descent with momentum? Experiment with the parameters.\n"
   ]
  }
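,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a closing numerical check on the convergence condition $0 < \\eta \\lambda < 2 + 2 \\beta$ derived above, the following plain-Java sketch computes the spectral radius of $\\mathbf{R}(\\beta, \\eta, \\lambda)$ from its trace and determinant (for this $2 \\times 2$ matrix the determinant simplifies to $\\beta$) and confirms that a choice satisfying the condition contracts.\n",
    "\n",
    "```java\n",
    "// R = [[beta, lambda], [-eta * beta, 1 - eta * lambda]].\n",
    "double beta = 0.5, eta = 0.6, lambda = 4.0;  // eta * lambda = 2.4 < 2 + 2 * beta = 3\n",
    "double trace = beta + (1 - eta * lambda);\n",
    "double det = beta * (1 - eta * lambda) + eta * beta * lambda;  // simplifies to beta\n",
    "double disc = trace * trace - 4 * det;\n",
    "double radius = disc >= 0\n",
    "        ? (Math.abs(trace) + Math.sqrt(disc)) / 2  // real eigenvalues\n",
    "        : Math.sqrt(det);                          // complex pair: modulus sqrt(det)\n",
    "System.out.println(radius < 1);  // true: the iteration contracts\n",
    "```\n"
   ]
  }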
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Java",
   "language": "java",
   "name": "java"
  },
  "language_info": {
   "codemirror_mode": "java",
   "file_extension": ".jshell",
   "mimetype": "text/x-java-source",
   "name": "Java",
   "pygments_lexer": "java",
   "version": "14.0.2+12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
