{
 "nbformat": 4,
 "nbformat_minor": 2,
 "metadata": {
  "language_info": {
   "name": "python",
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "version": "3.7.5-final"
  },
  "orig_nbformat": 2,
  "file_extension": ".py",
  "mimetype": "text/x-python",
  "name": "python",
  "npconvert_exporter": "python",
  "pygments_lexer": "ipython3",
  "version": 3,
  "kernelspec": {
   "name": "python37564bitpy37conda1e3fc6a3d3c5471085c207e6a65f8f51",
   "display_name": "Python 3.7.5 64-bit ('py37': conda)"
  }
 },
 "cells": [
  {
   "cell_type": "markdown",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## GHM Loss 测试\n",
    "\n",
    "用自己的方式实现了GHM，并与`mmdetection`的实现方式做了一个对比。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# import sys\n",
    "# sys.path.append('.')\n",
    "\n",
    "import torch\n",
    "\n",
    "import ghm_loss_me as ghme # 自已实现的ghm loss\n",
    "import ghm_loss_mmlab as ghmm # mmdetection 实现的ghm loss"
   ]
  },
  {
   "cell_type": "markdown",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "### `GHMR`简单实现\n",
    "\n",
    "这是一个最为便的实现代码，但是没有继承`nn.Module`，只能用于测试。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def py_ghmr(pred, target, bins=10, mu=0.02):\n",
    "    # calculate the loss\n",
    "    diff = pred - target\n",
    "    loss = torch.sqrt(diff * diff + mu * mu) - mu\n",
    "\n",
    "    # static the histogram of gradient norm\n",
    "    g = torch.abs(diff / torch.sqrt(mu * mu + diff * diff)).detach()\n",
    "    g_hist = torch.histc(g, bins=bins, min=0, max=1)\n",
    "    g_index = (g * bins).long()\n",
    "    valid_mask = g_hist > 0\n",
    "\n",
    "    # calculate the weight by above histogram\n",
    "    weight = torch.zeros_like(g_hist, dtype=torch.float)\n",
    "    weight[valid_mask] = g_hist[valid_mask].reciprocal()\n",
    "    weight = weight[g_index]\n",
    "    weight = weight / weight.sum()\n",
    "    return (loss * weight).sum()"
   ]
  },
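  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the weighting scheme above, on hypothetical data (seven easy samples plus one outlier). Because every populated bin receives the same total weight, the lone outlier ends up carrying as much weight as all seven easy samples combined:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "pred = torch.zeros(8)\n",
    "target = torch.tensor([0.01] * 7 + [5.0])  # seven easy samples, one outlier\n",
    "mu, bins = 0.02, 10\n",
    "\n",
    "diff = pred - target\n",
    "g = torch.abs(diff / torch.sqrt(mu * mu + diff * diff))  # gradient norm, in [0, 1]\n",
    "g_hist = torch.histc(g, bins=bins, min=0, max=1)\n",
    "g_index = (g * bins).long().clamp(max=bins - 1)\n",
    "\n",
    "weight = torch.zeros_like(g_hist)\n",
    "mask = g_hist > 0\n",
    "weight[mask] = g_hist[mask].reciprocal()\n",
    "weight = weight[g_index]\n",
    "weight = weight / weight.sum()\n",
    "\n",
    "print(weight[-1] / weight[0])  # ratio is about 7: one outlier vs seven easy samples\n",
    "```"
   ]
  },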
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.manual_seed(2020)\n",
    "# 分类数据\n",
    "x_c = torch.randn((256, 5), requires_grad=True)\n",
    "t_c = torch.randint(0, 5, (256,))\n",
    "\n",
    "# 回归数据\n",
    "x_r = torch.randn((256, 1), requires_grad=True)\n",
    "t_r = torch.randn((256, 1))"
   ]
  },
  {
   "cell_type": "markdown",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "### 评估时间2种`GHMC`的实现\n",
    "\n",
    "#### 我的`GHMC`实现"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "ghmc = ghme.GHMCLoss()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": "244 µs ± 4.35 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
    }
   ],
   "source": [
    "%%timeit\n",
    "\n",
    "loss1 = ghmc(x_c, t_c)\n",
    "# print(loss1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": "tensor(0.9331, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)"
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ghmc(x_c, t_c)"
   ]
  },
  {
   "cell_type": "markdown",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#### `mmdetection`的`GHMC`实现"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "ghmc = ghmm.GHMC()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": "1.05 ms ± 9.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
    }
   ],
   "source": [
    "%%timeit\n",
    "\n",
    "loss2 = ghmc(x_c, t_c, torch.tensor([1.0]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": "tensor(0.9344, grad_fn=<MulBackward0>)"
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ghmc(x_c, t_c, torch.tensor([1.0]))"
   ]
  },
  {
   "cell_type": "markdown",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "1. 从结果上来看，我的实现和`mmdetection`的实现结果差不多，这是由于其中计算顺序的不同，会带来结果的一些小小的差异，但这并不影响网络的训练。\n",
    "2. 从运行的时间上来看，我的实现是比`mmdetection`快了4倍左右，这是由于`mmdetection`的实现中有原生的python循环，而在我的实现中，全部用pytorch自带的函数实现。"
   ]
  },
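  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the second point, here is a schematic sketch (not the actual `mmdetection` code) of the per-bin weighting done two ways. Both produce identical weights, but the second avoids the Python-level loop entirely:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "g = torch.rand(256)  # stand-in for per-sample gradient norms\n",
    "bins = 10\n",
    "edges = torch.linspace(0, 1, bins + 1)\n",
    "\n",
    "# loop style: one boolean mask per bin\n",
    "w_loop = torch.zeros_like(g)\n",
    "for i in range(bins):\n",
    "    in_bin = (g >= edges[i]) & (g < edges[i + 1])\n",
    "    n = in_bin.sum().item()\n",
    "    if n > 0:\n",
    "        w_loop[in_bin] = 1.0 / n\n",
    "\n",
    "# vectorized style: one histc plus one gather\n",
    "hist = torch.histc(g, bins=bins, min=0, max=1)\n",
    "idx = (g * bins).long().clamp(max=bins - 1)\n",
    "w_vec = torch.where(hist > 0, hist.reciprocal(), torch.zeros_like(hist))[idx]\n",
    "\n",
    "print(torch.allclose(w_loop, w_vec))  # True\n",
    "```"
   ]
  },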
  {
   "cell_type": "markdown",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "### 评估时间2种GHMC的实现"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "ghmr = ghme.GHMRLoss()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": "220 µs ± 797 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
    }
   ],
   "source": [
    "%%timeit\n",
    "loss_r1 = ghmr(x_r, t_r)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": "tensor(0.1574, grad_fn=<SumBackward0>)"
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ghmr(x_r, t_r)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "ghmr = ghmm.GHMR()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": "777 µs ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
    }
   ],
   "source": [
    "%%timeit\n",
    "loss_r2 = ghmr(x_r, t_r, torch.tensor(1.0))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": "tensor(0.1574, grad_fn=<MulBackward0>)"
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ghmr(x_r, t_r, torch.tensor(1.0))"
   ]
  },
  {
   "cell_type": "markdown",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "1. 从结果上来看，我的实现和`mmdetection`的实现结果相同。\n",
    "\n",
    "2. 从运行的时间上来看，我的实现的`GHMR`比`mmdetection`快了3倍左右，这是由于`mmdetection`的实现中有原生的python循环，而在我的实现中，全部用pytorch自带的函数实现。"
   ]
  },
  {
   "cell_type": "markdown",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "此外，还写了一个最终的[实现版本](./ghm_loss.py)，使结构更加清晰。"
   ]
  },
  {
   "cell_type": "markdown",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "### lightgbm实现版本"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "import ghm_loss_lightgbm as ghml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": "0.9331270216372425 0.15742306722658794\n"
    }
   ],
   "source": [
    "loss_c = ghml.py_ghmc_loss(x_c.detach().numpy(), t_c.numpy())\n",
    "loss_r = ghml.py_ghmr_loss(x_r.detach().numpy(), t_r.numpy())\n",
    "\n",
    "print(loss_c, loss_r)"
   ]
  }
 ]
}