{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8903aeb9",
   "metadata": {},
   "source": [
    "# 优化器\n",
    "在训练模型店过程中，使用优化器来计算梯度并更新网络参数，合适的优化器可以有效减少训练时间，提高模型性能。\n",
    "\n",
    "在学习优化器之前，有一个很重要的概念——**学习率**，学习率指参数更新的速率，学习率的设置非常讲究，学习率过大会导致目标函数无法收敛，过小会导致训练耗时过长。\n",
    "\n",
    "一般来讲，静态的学习率能满足大多数场景。MindSpore还支持动态学习率，`mindspore.nn`提供了动态学习率的模块，分为Dynamic LR函数和LearningRateSchedule类。具体使用方法可查看官方文档，这里不做赘述，主要探讨优化器部分。"
   ]
  },
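  {
   "cell_type": "markdown",
   "id": "a1f2c3d4",
   "metadata": {},
   "source": [
    "As a plain-Python sketch of the idea behind such schedules (this is an illustration, not the MindSpore API; the function name and signature are assumptions), exponential decay multiplies the learning rate by a fixed factor every `decay_steps` steps:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f2c3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Plain-Python sketch of an exponentially decaying learning rate schedule.\n",
    "# The learning rate shrinks by decay_rate once every decay_steps steps.\n",
    "def exponential_decay_lr(base_lr, decay_rate, total_steps, decay_steps):\n",
    "    return [base_lr * decay_rate ** (step // decay_steps) for step in range(total_steps)]\n",
    "\n",
    "print(exponential_decay_lr(0.1, 0.9, 6, 2))"
   ]
  },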
  {
   "cell_type": "markdown",
   "id": "ef6a0d43",
   "metadata": {},
   "source": [
    "## 一、内置优化器\n",
    "MindSpore中的nn模块提供了常用的优化器。主要参数有待优化的网络参数`params`，学习率`learning_rate`和一些特定优化器需要的参数。\n",
    "\n",
    "在为优化器配置 `params` 入参时，可使用`net.trainable_params()`方法来指定需要优化和更新的网络参数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "809de315",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 定义网络\n",
    "import numpy as np\n",
    "import mindspore.ops as ops\n",
    "from mindspore import nn\n",
    "import mindspore as ms\n",
    "\n",
    "class Net(nn.Cell):\n",
    "    def __init__(self):\n",
    "        super(Net, self).__init__()\n",
    "        self.matmul = ops.MatMul()\n",
    "        self.conv = nn.Conv2d(1, 6, 5, pad_mode=\"valid\")\n",
    "        self.param = ms.Parameter(ms.Tensor(np.array([1.0], np.float32)))\n",
    "\n",
    "    def construct(self, x):\n",
    "        x = self.conv(x)\n",
    "        x = x * self.param\n",
    "        out = self.matmul(x, x)\n",
    "        return out\n",
    "    \n",
    "net = Net()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "a42b2d9d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Parameter (name=param, shape=(1,), dtype=Float32, requires_grad=True), Parameter (name=conv.weight, shape=(6, 1, 5, 5), dtype=Float32, requires_grad=True)]\n"
     ]
    }
   ],
   "source": [
    "# 配置优化器，以Adam为例\n",
    "optim = nn.Adam(params=net.trainable_params())\n",
    "\n",
    "# 查看可训练参数\n",
    "print(net.trainable_params())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1bc335c9",
   "metadata": {},
   "source": [
    "我们可以看到在这个网络中，可训练的参数有两个，我们可以手动设置网络权重Paramter的`requires_grad`属性来指定其是否能更新。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "3c7723c9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Parameter (name=conv.weight, shape=(6, 1, 5, 5), dtype=Float32, requires_grad=True)]\n"
     ]
    }
   ],
   "source": [
    "class Net(nn.Cell):\n",
    "    def __init__(self):\n",
    "        super(Net, self).__init__()\n",
    "        self.matmul = ops.MatMul()\n",
    "        self.conv = nn.Conv2d(1, 6, 5, pad_mode=\"valid\")\n",
    "        self.param = ms.Parameter(ms.Tensor(np.array([1.0], np.float32)), requires_grad=False)\n",
    "        \n",
    "    def construct(self, x):\n",
    "        x = self.conv(x)\n",
    "        x = x * self.param\n",
    "        out = self.matmul(x, x)\n",
    "        return out\n",
    "\n",
    "net = Net()\n",
    "# 查看可训练参数\n",
    "print(net.trainable_params())"
   ]
  },
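  {
   "cell_type": "markdown",
   "id": "b2e3f4a5",
   "metadata": {},
   "source": [
    "Conceptually, `trainable_params()` just filters the network's parameters by their `requires_grad` flag. The sketch below illustrates this in plain Python (the `Param` class and `trainable_params` helper are hypothetical stand-ins, not MindSpore code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2e3f4a6",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Param:\n",
    "    # Hypothetical stand-in for mindspore.Parameter\n",
    "    def __init__(self, name, requires_grad=True):\n",
    "        self.name = name\n",
    "        self.requires_grad = requires_grad\n",
    "\n",
    "def trainable_params(params):\n",
    "    # Keep only the parameters that are allowed to be updated\n",
    "    return [p for p in params if p.requires_grad]\n",
    "\n",
    "params = [Param('conv.weight'), Param('param', requires_grad=False)]\n",
    "print([p.name for p in trainable_params(params)])  # only 'conv.weight' remains"
   ]
  },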
  {
   "cell_type": "markdown",
   "id": "a06fa9e1",
   "metadata": {},
   "source": [
    "## 二、自定义优化器\n",
    "我们可以根据需要自定义优化器，自定义优化器时需继承`nn.Optimizer`基类，并重写`__init__`和`construct`方法,`construct`方法的输入为梯度，在训练中会自动传入梯度gradients。其中有`get_lr()`方法可以直接获得学习率，`ops`中的`Assign()`方法可以将参数输入网络模型。\n",
    "\n",
    "\n",
    "下面以Momentum优化器为例：\n",
    "$$ v_{t+1} = v_t×u+grad \\tag{1} $$\n",
    "\n",
    "$$p_{t+1} = p_t - lr*v_{t+1} \\tag{2} $$\n",
    "\n",
    "其中，$grad$ 、$lr$ 、$p$ 、$v$ 和 $u$ 分别表示梯度、学习率、权重参数、动量参数（Momentum）和初始速度。"
   ]
  },
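  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "Equations (1) and (2) can be checked with a small NumPy sketch (a standalone illustration of one update step, not the MindSpore implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3d4e5f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def momentum_step(p, v, grad, lr=0.01, u=0.9):\n",
    "    # Equation (1): accumulate the moment\n",
    "    v = v * u + grad\n",
    "    # Equation (2): update the parameter\n",
    "    p = p - lr * v\n",
    "    return p, v\n",
    "\n",
    "p = np.array([1.0, 2.0])\n",
    "v = np.zeros(2)          # the moment starts at zero\n",
    "grad = np.array([0.5, -0.5])\n",
    "p, v = momentum_step(p, v, grad)\n",
    "print(p, v)"
   ]
  },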
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "6de57e45",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Momentum(nn.Optimizer):\n",
    "    \"\"\"自定义优化器\"\"\"\n",
    "    def __init__(self, params, learning_rate, momentum=0.9):\n",
    "        # 初始化参数和算子\n",
    "        super(Momentum, self).__init__(learning_rate, params) # 先learning_rate,再params\n",
    "        self.momentum = ms.Parameter(ms.Tensor(momentum, ms.float32), name='momentum')\n",
    "        self.moments = self.parameters.clone(prefix='moments', init=\"zeros\")\n",
    "        self.assign = ops.Assign()\n",
    "        \n",
    "    def construct(self, gradients):\n",
    "        # 构造momentum算法\n",
    "        lr = self.get_lr()\n",
    "        params = self.parameters # 待更新的网络参数\n",
    "        \n",
    "        for i in range(len(params)):\n",
    "            self.assign(self.moments[i], self.moments[i] * self.momentum + gradients[i])\n",
    "            update = params[i] - self.moments[i] * lr\n",
    "            self.assign(params[i], update)\n",
    "        \n",
    "        return params\n",
    "\n",
    "net = Net()\n",
    "optm = Momentum(net.trainable_params(), 0.01)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "mindspore",
   "language": "python",
   "name": "mindvision"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
