{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# PyTorch – 深度学习全栈工程师进阶案例实战（第十期）第6课书面作业\n",
    "学号：115539\n",
    "\n",
    "**作业内容：**  \n",
    "Insightface: https://github.com/deepinsight/insightface\n",
    "\n",
    "Dataset: https://pan.baidu.com/s/1S6LJZGdqcZRle1vlcMzHOQ\n",
    "\n",
    "Model Zoo: https://github.com/deepinsight/insightface/wiki/Model-Zoo\n",
    "\n",
    "考虑到数据量实在太大，同时insightface上有开源相应的程序，所以本次作业不提交最后训练结果。\n",
    "\n",
    "理解程序：除了网络，其他代码从头码一遍，最好按自己的风格写一遍。（面对新程序的时候，这种办法能够快速理解）；\n",
    "\n",
    "要上交的附件：把你修改的程序提交，保证或多或少有一定的修改；"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**答：**   \n",
    "* 将程序从头到脚读了一遍；   \n",
    "* 下载了训练集，并让程序能够运行。重试了多种配置组合，最后如下：  \n",
    "  * numpy==1.16.6  \n",
    "  * scikit-learn==0.19.2  \n",
    "  * scipy==1.0.0  \n",
    "  * tensorboard==2.8.0  \n",
    "  * torch==1.10.2  \n",
    "  * torchaudio==0.10.2  \n",
    "  * torchvision==0.11.3  \n",
    "  * cuda==11.3  \n",
    "  * mxnet==1.7.0.post2  \n",
    "* 按自己的方式改了一下，重点改了如下几点：  \n",
    "  1. 修改了原代码中两处错误，一是实例化insightface模型时的入参传了个具体的数，这个数是错的，会导致cuda error: device-side assert triggered；二是验证集测试时，label要转化为int64类型的，不然后面的scatter_函数会报错；  \n",
    "  2. 将程序中.cuda()调用，全部替换为.to(device)调用，方便切换gpu与cpu运行，也方便调试，前面cuda error在cpu模式下就会很好定位解决；  \n",
    "  3. 将训练中train过程拆分了每个epoch一个函数调用的方式；   \n",
    "  4. 引入samplerate，方便调试，将samplerate设置小后，可以不必运行完整个训练集就能观察程序运行效果；   \n",
    "  5. 在tqdm bar中引入验证集测试结果。\n",
    "\n",
    "代码链接：https://gitee.com/dotzhen/pytorch_for_deeplearning/blob/master/class06/face_recognition/face_rec.py"
   ]
  },
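  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal standalone sketch (not part of the submitted code) of the `scatter_` dtype issue from point 1 above: the index tensor passed to `scatter_` must be `int64` (Long), so labels loaded with another integer dtype need an explicit cast before building one-hot targets.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "num_classes = 5\n",
    "labels = torch.tensor([1, 3], dtype=torch.int32)  # e.g. labels decoded as int32\n",
    "one_hot = torch.zeros(len(labels), num_classes)\n",
    "# scatter_ requires an int64 index tensor, hence the cast:\n",
    "one_hot.scatter_(1, labels.to(torch.int64).unsqueeze(1), 1.0)\n",
    "print(one_hot)\n",
    "```"
   ]
  },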
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# -*- coding:utf-8 -*-\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "from torch.utils.data import DataLoader\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "import torch.optim as optim\n",
    "from tqdm import tqdm\n",
    "import sys\n",
    "import torch.backends.cudnn as cudnn\n",
    "\n",
    "sys.path.append('E:/workspace/face_recognition')\n",
    "\n",
    "from networks import net\n",
    "from datasets.train_dataset import Ms1mDataset\n",
    "from datasets.verification_dataset import VerificationDataset\n",
    "from eval.Metric import AccMetric, LossMetric\n",
    "from shutil import rmtree\n",
    "from eval.verification import calc_acc\n",
    "\n",
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
    "# device = 'cpu'\n",
    "\n",
    "def calc_acc_loss(data, target):\n",
    "    # Acc metric\n",
    "    output, output_real, _ = model(data, target)\n",
    "    loss = F.nll_loss(output, target.flatten())\n",
    "    pred = torch.argmax(output_real, dim=1, keepdim=True)\n",
    "    acc = (pred == target).cpu().float().mean()\n",
    "    return acc, loss\n",
    "\n",
    "\n",
    "def adjust_lr(epoch):\n",
    "    lr_t = {13: 0.1, 19: 0.01, 22: 0.001}\n",
    "    _lr = lr_t[epoch] if epoch in lr_t.keys() else 0.001\n",
    "\n",
    "    for param_group in optimizer.param_groups:\n",
    "        param_group['lr'] = _lr\n",
    "    print('lr is changed to ', _lr)\n",
    "\n",
    "\n",
    "def save_checkpoint(path='torch_face_params'):\n",
    "    state = {\n",
    "        'model': model.state_dict(),\n",
    "        'optimizer': optimizer.state_dict()\n",
    "    }\n",
    "    filepath = f'{path}/resnet50.pt'\n",
    "    torch.save(state, filepath)\n",
    "\n",
    "\n",
    "def verification():\n",
    "    model.eval()\n",
    "    acc_final = 0.0\n",
    "    iters = len(veri_loader)\n",
    "    with torch.no_grad():\n",
    "        for i, (data, label) in enumerate(veri_loader):\n",
    "            if i > samplerate * iters:\n",
    "                break\n",
    "            label = label.type(torch.int64) #不加这句话会报错\n",
    "            data, label = data.to(device), label.to(device)\n",
    "            _, _, embedding = model(data, label)\n",
    "            acc = calc_acc(embedding.cpu(), label.cpu())\n",
    "            acc_final += acc\n",
    "\n",
    "    return acc_final\n",
    "\n",
    "\n",
    "def train_on_epoch(epoch, top_acc, acc_metrics, loss_metrics):\n",
    "    progressbar = tqdm(train_loader)\n",
    "    iters = len(train_loader)\n",
    "    for iter_idx, (data, target) in enumerate(progressbar):\n",
    "        if iter_idx > samplerate * iters:\n",
    "            break\n",
    "        global_step = iter_idx + iters * epoch\n",
    "        data, target = data.to(device), target.to(device)\n",
    "        optimizer.zero_grad()\n",
    "        acc, loss = calc_acc_loss(data, target)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        loss_metrics.update(loss)\n",
    "        acc_metrics.update(acc)\n",
    "\n",
    "        progressbar.set_description(f'Epoch: {epoch} Loss: {loss:.4f} Acc: {acc_metrics.avg * 100:.2f}%')\n",
    "        writer.add_scalars('training', {'loss': loss_metrics.cur,\n",
    "                                        'acc': acc_metrics.cur, }, global_step=global_step)\n",
    "\n",
    "        if acc_metrics.avg > top_acc:\n",
    "            top_acc = acc_metrics.avg\n",
    "            if top_acc > 0.75 and global_step % 500 == 0:\n",
    "                save_checkpoint()\n",
    "    veri_acc = verification()\n",
    "    progressbar.set_description(f'Epoch: {epoch} Loss: {loss:.4f} Acc: {acc_metrics.avg * 100:.2f}% Valid acc: {veri_acc * 100:.2f}%')\n",
    "    progressbar.close()\n",
    "    return top_acc, veri_acc\n",
    "\n",
    "\n",
    "def train(epochs=30):\n",
    "    adj_lr_num = [13, 19, 22]\n",
    "    top_acc = 0.0\n",
    "    for epoch in range(epochs):\n",
    "        # training\n",
    "        model.train()\n",
    "        train_acc = AccMetric('train_acc')\n",
    "        train_loss = LossMetric('train_loss')\n",
    "        if epoch in adj_lr_num:\n",
    "            adjust_lr(epoch)\n",
    "        top_acc, veri_acc = train_on_epoch(epoch, top_acc, train_acc, train_loss)\n",
    "        writer.add_scalar('validation/acc', veri_acc, epoch)\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    # auto-tuner to find the best algorithm to use for your hardware.\n",
    "    if device != 'cpu':\n",
    "        cudnn.benchmark = True\n",
    "\n",
    "    print('Training......')\n",
    "    tensorboard_path = './tb'\n",
    "    rmtree(tensorboard_path, ignore_errors=True)\n",
    "    writer = SummaryWriter(tensorboard_path)\n",
    "\n",
    "    per_batch_size = 128\n",
    "\n",
    "    train_dataset = Ms1mDataset('E:/datasets/faces_emore')\n",
    "    train_loader = DataLoader(dataset=train_dataset, batch_size=per_batch_size, shuffle=True, num_workers=1)\n",
    "\n",
    "    veri_dataset = VerificationDataset('E:/datasets/faces_emore')\n",
    "    veri_loader = DataLoader(dataset=veri_dataset, batch_size=per_batch_size, shuffle=False, num_workers=1)\n",
    "\n",
    "    print('total people: ', train_dataset.num_people)\n",
    "    model = net.InsightFace(train_dataset.num_people).to(device) #这里原来填的数是错的\n",
    "    optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)\n",
    "\n",
    "    samplerate = 0.0002 #正式运行时，这里改成1.0\n",
    "    train(10)\n",
    "\n",
    "    writer.close()\n"
   ]
  },
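  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small standalone sketch of the `.to(device)` pattern from point 2 above: pick the device once, then move both the model and the data with `.to(device)`. Setting `device` to `'cpu'` turns opaque CUDA failures (such as the device-side assert mentioned earlier) into ordinary Python stack traces.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "# One switch for the whole script; change to torch.device('cpu') when debugging.\n",
    "device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n",
    "\n",
    "model = torch.nn.Linear(3, 4).to(device)\n",
    "x = torch.randn(2, 3).to(device)\n",
    "y = model(x)  # runs on whichever device was selected\n",
    "print(y.shape, y.device)\n",
    "```"
   ]
  },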
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "运行效果截图："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://gitee.com/dotzhen/cloud-notes/raw/master/torch06-03.png)"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
