{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "1f78d74d",
   "metadata": {},
   "source": [
    "# Loss Functions\n",
    "In this section we look at loss functions. In deep learning, training a model is the process of iteratively shrinking the value of its loss function, so the choice of loss function matters: a well-chosen loss function can noticeably improve model performance.\n",
    "\n",
    "We can use the loss functions that come built in, or define our own to suit a particular scenario."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "abbca4d9",
   "metadata": {},
   "source": [
    "## 1. Built-in Loss Functions\n",
    "The `mindspore.nn` module provides the common general-purpose loss functions. Take the mean absolute error `nn.L1Loss` as an example:\n",
    "$$\\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad \\text{with } l_n = \\left| x_n - y_n \\right|$$\n",
    "\n",
    "where N is the `batch_size` of the data.\n",
    "\n",
    "$$\\ell(x, y) =\n",
    "        \\begin{cases}\n",
    "            \\operatorname{mean}(L), & \\text{if reduction} = \\text{'mean';}\\\\\n",
    "            \\operatorname{sum}(L),  & \\text{if reduction} = \\text{'sum'.}\n",
    "        \\end{cases}$$\n",
    "\n",
    "The `reduction` parameter can be `mean`, `sum`, or `none` (default: `mean`):\n",
    "- `mean`: output the mean of the element-wise losses\n",
    "- `sum`: output their sum\n",
    "- `none`: output the unreduced losses"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "912a58f1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "mean loss: 1.0\n",
      "sum loss: 3.0\n",
      "none loss: [[1. 1. 1.]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import mindspore as ms\n",
    "import mindspore.nn as nn\n",
    "\n",
    "input = ms.Tensor(np.array([[1, 2, 3]]), ms.float32)\n",
    "output = ms.Tensor(np.array([[2, 3, 4]]), ms.float32)\n",
    "\n",
    "loss_mean = nn.L1Loss()\n",
    "loss_sum = nn.L1Loss(reduction='sum')\n",
    "loss_none = nn.L1Loss(reduction='none')\n",
    "\n",
    "print(\"mean loss:\", loss_mean(input, output))\n",
    "print(\"sum loss:\", loss_sum(input, output))\n",
    "print(\"none loss:\", loss_none(input, output))"
   ]
  },
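  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "The other built-in losses in `mindspore.nn` are used the same way. As a minimal sketch (reusing `input`, `output`, and the imports from the cell above), the mean squared error `nn.MSELoss` is constructed and called just like `nn.L1Loss`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: nn.MSELoss follows the same pattern as nn.L1Loss above\n",
    "loss_mse = nn.MSELoss()\n",
    "print(\"mse loss:\", loss_mse(input, output))"
   ]
  },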
  {
   "cell_type": "markdown",
   "id": "a8f9d398",
   "metadata": {},
   "source": [
    "## 2. Custom Loss Functions\n",
    "When the built-in loss functions do not meet the needs of a particular scenario, or when we need to combine two loss functions, we can define a loss function ourselves.\n",
    "\n",
    "There are two ways to do so: inherit from `nn.Cell`, or inherit from `nn.LossBase`. We use the mean absolute error (MAE) as the example:\n",
    "$$ loss= \\frac{1}{m}\\sum_{i=1}^m\\lvert y_i-f(x_i) \\rvert$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c395687c",
   "metadata": {},
   "source": [
    "### Building a loss function on `nn.Cell`\n",
    "`nn.Cell` is the base class of MindSpore: it is used not only to build networks but also to define loss functions. Defining a loss function with `nn.Cell` is much like defining an ordinary network; the difference is that its execution logic computes the error between the forward network's output and the ground truth."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "8eabf0a0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "mean loss: 1.0\n"
     ]
    }
   ],
   "source": [
    "import mindspore.ops as ops\n",
    "\n",
    "class MAELoss(nn.Cell):\n",
    "    \"\"\"Custom loss function MAELoss\"\"\"\n",
    "    def __init__(self):\n",
    "        # Initialize the operators\n",
    "        super(MAELoss, self).__init__()\n",
    "        # Absolute value\n",
    "        self.abs = ops.Abs()\n",
    "        # Mean\n",
    "        self.mean = ops.ReduceMean()\n",
    "\n",
    "    def construct(self, target, predict):\n",
    "        # Compute the loss\n",
    "        x = self.abs(target - predict)\n",
    "        return self.mean(x)\n",
    "\n",
    "loss = MAELoss()\n",
    "\n",
    "output_loss = loss(input, output)\n",
    "print(\"mean loss:\", output_loss)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c0e34d82",
   "metadata": {},
   "source": [
    "### Building a loss function on `nn.LossBase`\n",
    "Building a loss function on `nn.LossBase` is similar to building one on `nn.Cell`: both override the `__init__` and `construct` methods. The difference is that `nn.LossBase` provides the built-in `get_loss` method, which applies `reduction` to the computed losses for us, so we only need to focus on computing the loss values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "04a7684d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "mean loss: 1.0\n"
     ]
    }
   ],
   "source": [
    "class MAELoss(nn.LossBase):\n",
    "    \"\"\"Custom loss function MAELoss\"\"\"\n",
    "    def __init__(self, reduction=\"mean\"):\n",
    "        # Initialize the operators\n",
    "        super(MAELoss, self).__init__(reduction)\n",
    "        self.abs = ops.Abs()\n",
    "\n",
    "    def construct(self, target, predict):\n",
    "        # Compute the loss\n",
    "        x = self.abs(target - predict)\n",
    "        return self.get_loss(x)\n",
    "\n",
    "loss = MAELoss()\n",
    "\n",
    "output_loss = loss(input, output)\n",
    "print(\"mean loss:\", output_loss)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "755238a3",
   "metadata": {},
   "source": [
    "The `reduction` argument controls whether the mean loss, the summed loss, or the unreduced losses are returned."
   ]
  },
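  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "For example (a minimal sketch reusing the `MAELoss`, `input`, and `output` defined above), constructing the loss with a different `reduction` changes what `get_loss` returns:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: reuses the MAELoss class, input and output from the cells above\n",
    "loss_sum = MAELoss(reduction='sum')\n",
    "loss_none = MAELoss(reduction='none')\n",
    "\n",
    "print(\"sum loss:\", loss_sum(input, output))\n",
    "print(\"none loss:\", loss_none(input, output))"
   ]
  },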
  {
   "cell_type": "markdown",
   "id": "0a4de46c",
   "metadata": {},
   "source": [
    "## 3. Loss Functions and Model Training\n",
    "Once a loss function is defined, we can usually train with the `train` method of MindSpore's `Model` API. When constructing a `Model` we pass in the forward network, the loss function, and the optimizer; `Model` wires them together internally into a trainable network.\n",
    "\n",
    "Below we use a simple linear regression as an example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "e0e8dc47",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[WARNING] ME(69700:68992,MainProcess):2022-11-05-16:19:45.271.853 [mindspore\\dataset\\engine\\datasets_user_defined.py:656] Python multiprocessing is not supported on Windows platform.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch:[  0/  1], step:[    1/   10], loss:[9.228/9.228], time:296.091 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    2/   10], loss:[9.104/9.166], time:1.042 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    3/   10], loss:[11.419/9.917], time:1.007 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    4/   10], loss:[9.266/9.754], time:0.994 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    5/   10], loss:[10.831/9.970], time:2.013 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    6/   10], loss:[11.125/10.162], time:1.947 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    7/   10], loss:[9.856/10.119], time:2.002 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    8/   10], loss:[7.334/9.771], time:1.999 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    9/   10], loss:[7.636/9.533], time:1.004 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[   10/   10], loss:[6.139/9.194], time:1.999 ms, lr:0.00500\n",
      "Epoch time: 331.094 ms, per step time: 33.109 ms, avg loss: 9.194\n"
     ]
    }
   ],
   "source": [
    "# Largely the same as the earlier introductory example\n",
    "import mindspore as ms\n",
    "from mindspore import dataset as ds\n",
    "from mindspore.common.initializer import Normal\n",
    "from mindvision.engine.callback import LossMonitor\n",
    "\n",
    "def get_data(num, w=2.0, b=3.0):\n",
    "    \"\"\"Generate data and the corresponding labels\"\"\"\n",
    "    for _ in range(num):\n",
    "        x = np.random.uniform(-10.0, 10.0)\n",
    "        noise = np.random.normal(0, 1)\n",
    "        y = x * w + b + noise\n",
    "        yield np.array([x]).astype(np.float32), np.array([y]).astype(np.float32)\n",
    "\n",
    "def create_dataset(num_data, batch_size=16):\n",
    "    \"\"\"Load the dataset\"\"\"\n",
    "    dataset = ds.GeneratorDataset(list(get_data(num_data)), column_names=['data', 'label'])\n",
    "    dataset = dataset.batch(batch_size)\n",
    "    return dataset\n",
    "\n",
    "class LinearNet(nn.Cell):\n",
    "    \"\"\"Define the linear regression network\"\"\"\n",
    "    def __init__(self):\n",
    "        super(LinearNet, self).__init__()\n",
    "        self.fc = nn.Dense(1, 1, Normal(0.02), Normal(0.02))\n",
    "\n",
    "    def construct(self, x):\n",
    "        return self.fc(x)\n",
    "\n",
    "# Define the loss function\n",
    "class MAELoss(nn.LossBase):\n",
    "    \"\"\"Custom loss function MAELoss\"\"\"\n",
    "    def __init__(self, reduction=\"mean\"):\n",
    "        # Initialize the operators\n",
    "        super(MAELoss, self).__init__(reduction)\n",
    "        self.abs = ops.Abs()\n",
    "\n",
    "    def construct(self, target, predict):\n",
    "        # Compute the loss\n",
    "        x = self.abs(target - predict)\n",
    "        return self.get_loss(x)\n",
    "\n",
    "dataset = create_dataset(num_data=160)\n",
    "net = LinearNet()\n",
    "loss = MAELoss()\n",
    "opt = nn.Momentum(net.trainable_params(), learning_rate=0.005, momentum=0.9)\n",
    "\n",
    "# Use the high-level Model API\n",
    "model = ms.Model(net, loss, opt)\n",
    "model.train(epoch=1, train_dataset=dataset, callbacks=[LossMonitor(0.005)])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4673c090",
   "metadata": {},
   "source": [
    "The `Model` API greatly reduces the amount of code we write, but it does not fit every case: it only supports two inputs, `data` and `label`. In the multi-label case, where one `data` sample corresponds to several labels, `Model` cannot be used directly. Instead, we define a network ourselves that does what `Model` does internally: connect the network, the loss function, and the optimizer."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80b227f5",
   "metadata": {},
   "source": [
    "## 4. Multi-label Loss Functions and Model Training\n",
    "A multi-label loss function is essentially the same as a single-label one; the only difference is that it returns the average of several losses. Since the `Model` API cannot be used directly here, we define a network ourselves to connect the neural network, the loss function, and the optimizer. This network is called a `loss network`, a concept specific to MindSpore."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0216297b",
   "metadata": {},
   "source": [
    "We again use the linear-fitting code from above as the example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "4ad5c44f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Largely the same as the earlier example\n",
    "import numpy as np\n",
    "from mindspore import dataset as ds\n",
    "\n",
    "def get_multilabel_data(num, w=2.0, b=3.0):\n",
    "    for _ in range(num):\n",
    "        x = np.random.uniform(-10.0, 10.0)\n",
    "        noise1 = np.random.normal(0, 1)\n",
    "        noise2 = np.random.normal(-1, 1)\n",
    "        y1 = x * w + b + noise1\n",
    "        y2 = x * w + b + noise2\n",
    "        yield np.array([x]).astype(np.float32), np.array([y1]).astype(np.float32), np.array([y2]).astype(np.float32)\n",
    "\n",
    "def create_multilabel_dataset(num_data, batch_size=16):\n",
    "    dataset = ds.GeneratorDataset(list(get_multilabel_data(num_data)), column_names=['data', 'label1', 'label2'])\n",
    "    dataset = dataset.batch(batch_size)  # each batch holds 16 samples\n",
    "    return dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "69c003d7",
   "metadata": {},
   "source": [
    "### Defining the multi-label loss function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "c6d45164",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the multi-label loss function\n",
    "class MAELossForMultiLabel(nn.LossBase):\n",
    "    \"\"\"Multi-label loss function MAELossForMultiLabel\"\"\"\n",
    "    def __init__(self, reduction='mean'):\n",
    "        # Initialize the operators\n",
    "        super(MAELossForMultiLabel, self).__init__(reduction)\n",
    "        self.abs = ops.Abs()\n",
    "\n",
    "    def construct(self, predict, target1, target2):\n",
    "        # Compute the MAE against each label, then average the two losses\n",
    "        x1 = self.abs(predict - target1)\n",
    "        x2 = self.abs(predict - target2)\n",
    "        return (self.get_loss(x1) + self.get_loss(x2)) / 2"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f8ef9728",
   "metadata": {},
   "source": [
    "### Defining the loss network\n",
    "We define the loss network `CustomWithLossCell`. The two parameters of its `__init__` method, `backbone` and `loss_fn`, are the forward network and the loss function. Its `construct` method takes the sample input `data` and the ground-truth labels `label1` and `label2`; it passes `data` through the forward network `backbone`, then passes the prediction and the two labels to the loss function `loss_fn`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "ba0cee78",
   "metadata": {},
   "outputs": [],
   "source": [
    "class CustomWithLossCell(nn.Cell):\n",
    "    \"\"\"Loss network connecting the forward network and the loss function\"\"\"\n",
    "    def __init__(self, backbone, loss_fn):\n",
    "        # Store the backbone network and the loss function\n",
    "        super(CustomWithLossCell, self).__init__()\n",
    "        self.backbone = backbone\n",
    "        self.loss_fn = loss_fn\n",
    "\n",
    "    def construct(self, data, label1, label2):\n",
    "        # Forward pass, then compute the loss against both labels\n",
    "        x = self.backbone(data)\n",
    "        return self.loss_fn(x, label1, label2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7ca3b9bb",
   "metadata": {},
   "source": [
    "Run the training:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "ba4e6355",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[WARNING] ME(69700:68992,MainProcess):2022-11-05-16:50:00.311.501 [mindspore\\dataset\\engine\\datasets_user_defined.py:656] Python multiprocessing is not supported on Windows platform.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch:[  0/  1], step:[    1/   10], loss:[11.163/11.163], time:219.099 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    2/   10], loss:[9.076/10.120], time:3.001 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    3/   10], loss:[8.545/9.595], time:10.999 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    4/   10], loss:[5.997/8.695], time:16.000 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    5/   10], loss:[9.254/8.807], time:31.996 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    6/   10], loss:[6.755/8.465], time:6.003 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    7/   10], loss:[10.579/8.767], time:7.998 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    8/   10], loss:[6.442/8.476], time:5.000 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[    9/   10], loss:[7.992/8.423], time:1.001 ms, lr:0.00500\n",
      "Epoch:[  0/  1], step:[   10/   10], loss:[4.923/8.073], time:5.996 ms, lr:0.00500\n",
      "Epoch time: 324.070 ms, per step time: 32.407 ms, avg loss: 8.073\n"
     ]
    }
   ],
   "source": [
    "dataset = create_multilabel_dataset(num_data=160)\n",
    "net = LinearNet()\n",
    "\n",
    "loss = MAELossForMultiLabel()\n",
    "loss_net = CustomWithLossCell(net, loss)\n",
    "opt = nn.Momentum(net.trainable_params(), learning_rate=0.005, momentum=0.9)\n",
    "\n",
    "model = ms.Model(network=loss_net, optimizer=opt)\n",
    "model.train(epoch=1, train_dataset=dataset, callbacks=[LossMonitor(0.005)])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "mindspore",
   "language": "python",
   "name": "mindvision"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
