{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Tutorial 3. ResNet50 on CIFAR10\n",
    "\n",
    "\n",
     "In this tutorial, we will show:\n",
     "\n",
     "- How to train and compress a ResNet50 on CIFAR10 end-to-end from scratch to reproduce the results reported in the paper.\n",
     "- That properly setting `weight_decay` is important for retaining the reported accuracy, since it affects which local optimum on the objective landscape training converges to.\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 1. Create OTO instance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "graph constructor\n",
      "grow_non_stem_connected_components\n",
      "group_individual_nodes\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from only_train_once import OTO\n",
    "from backends import resnet50_cifar10\n",
    "\n",
    "model = resnet50_cifar10().cuda()\n",
    "dummy_input = torch.zeros(1, 3, 32, 32).cuda()\n",
    "oto = OTO(model=model, dummy_input=dummy_input)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### (Optional) Visualize the dependency graph of the DNN for ZIG partitions\n",
     "\n",
     "- Set `view` to `False` if no browser is accessible.\n",
     "- Open the generated PDF file instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
     "# A ResNet_zig.gv.pdf file will be generated to display the dependency graph.\n",
    "oto.visualize_zigs(view=False)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 2. Dataset Preparation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Files already downloaded and verified\n",
      "Files already downloaded and verified\n"
     ]
    }
   ],
   "source": [
    "from torchvision.datasets import CIFAR10\n",
    "import torchvision.transforms as transforms\n",
    "\n",
    "trainset = CIFAR10(root='cifar10', train=True, download=True, transform=transforms.Compose([\n",
    "            transforms.RandomHorizontalFlip(),\n",
    "            transforms.RandomCrop(32, 4),\n",
    "            transforms.ToTensor(),\n",
    "            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]))\n",
    "testset = CIFAR10(root='cifar10', train=False, download=True, transform=transforms.Compose([\n",
    "            transforms.ToTensor(),\n",
    "            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]))\n",
    "\n",
    "trainloader =  torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True, num_workers=4)\n",
    "testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False, num_workers=4)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 3. Setup DHSPG optimizer\n",
    "\n",
     "The following main hyperparameters need attention:\n",
     "\n",
     "- `variant`: The optimizer variant used for training the baseline full model.\n",
     "- `lr`: The initial learning rate.\n",
     "- `weight_decay`: Weight decay, as in standard DNN optimization.\n",
     "- `target_group_sparsity`: The target group sparsity. Higher group sparsity typically yields greater FLOPs and model-size reduction, but may also degrade model performance more.\n",
     "- `start_pruning_steps`: The number of optimizer steps after which pruning starts.\n",
     "- `epsilon`: A coefficient in [0, 1) that controls the aggressiveness of group-sparsity exploration; a higher value means more aggressive exploration."
   ]
  },
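   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Note that `start_pruning_steps` is counted in optimizer steps, not epochs, so an epoch count must be converted via the number of mini-batches per epoch. A minimal sketch of that conversion (the helper name `epochs_to_steps` is ours for illustration, not part of the OTO API; the dataset and batch sizes match the loaders above):\n",
     "\n",
     "```python\n",
     "import math\n",
     "\n",
     "def epochs_to_steps(num_epochs, dataset_size, batch_size):\n",
     "    # One optimizer step per mini-batch; the final, smaller batch still counts.\n",
     "    steps_per_epoch = math.ceil(dataset_size / batch_size)\n",
     "    return num_epochs * steps_per_epoch\n",
     "\n",
     "# The CIFAR10 train split has 50,000 images; with batch_size=64 there are\n",
     "# 782 steps per epoch, so pruning starts at step 7820.\n",
     "print(epochs_to_steps(10, 50000, 64))\n",
     "```\n",
     "\n",
     "This is what `10 * len(trainloader)` computes in the cell below, since `len(trainloader)` equals the number of mini-batches per epoch."
    ]
   },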
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer = oto.dhspg(\n",
    "    variant='sgd', \n",
    "    lr=0.1, \n",
    "    target_group_sparsity=0.8,\n",
     "    weight_decay=1e-3, # Weight decay is important for ResNet50 on CIFAR10\n",
     "    start_pruning_steps=10 * len(trainloader), # Start pruning after 10 epochs of regular training.\n",
    "    epsilon=0.95)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Step 4. Train ResNet50 as usual"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Start the official training and compression.\n",
     "\n",
     "Under $80\\%$ group sparsity, the top-1 accuracy reached $94.4\\%$ in this particular run.\n",
     "\n",
     "Across our experiments with different random seeds and GPUs, the top-1 accuracy is $(94.3\\pm0.2)\\%$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 0, loss: 2.00, omega:17252.28, group_sparsity: 0.00, acc1: 0.3472\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 1, loss: 1.35, omega:15983.06, group_sparsity: 0.00, acc1: 0.4438\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 2, loss: 1.00, omega:14818.10, group_sparsity: 0.00, acc1: 0.5120\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 3, loss: 0.80, omega:13744.97, group_sparsity: 0.00, acc1: 0.6011\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 4, loss: 0.67, omega:12755.82, group_sparsity: 0.00, acc1: 0.7155\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 5, loss: 0.59, omega:11843.66, group_sparsity: 0.00, acc1: 0.5383\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 6, loss: 0.54, omega:11002.42, group_sparsity: 0.00, acc1: 0.7137\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 7, loss: 0.50, omega:10227.55, group_sparsity: 0.00, acc1: 0.7381\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 8, loss: 0.47, omega:9514.79, group_sparsity: 0.00, acc1: 0.6899\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 9, loss: 0.44, omega:8860.10, group_sparsity: 0.00, acc1: 0.6625\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "partition_groups\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 10, loss: 0.45, omega:7330.37, group_sparsity: 0.00, acc1: 0.5874\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 11, loss: 0.51, omega:5844.28, group_sparsity: 0.00, acc1: 0.6718\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 12, loss: 0.60, omega:4443.87, group_sparsity: 0.05, acc1: 0.4572\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 13, loss: 0.66, omega:3331.90, group_sparsity: 0.23, acc1: 0.6489\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 14, loss: 0.64, omega:2589.54, group_sparsity: 0.42, acc1: 0.7427\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 15, loss: 0.58, omega:2053.05, group_sparsity: 0.56, acc1: 0.7016\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 16, loss: 0.56, omega:1552.30, group_sparsity: 0.60, acc1: 0.6447\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 17, loss: 0.58, omega:1183.77, group_sparsity: 0.75, acc1: 0.5474\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 18, loss: 0.60, omega:1054.18, group_sparsity: 0.80, acc1: 0.7526\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 19, loss: 0.54, omega:1079.29, group_sparsity: 0.80, acc1: 0.6060\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "......\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 292, loss: 0.01, omega:733.84, group_sparsity: 0.80, acc1: 0.9434\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 293, loss: 0.01, omega:733.79, group_sparsity: 0.80, acc1: 0.9425\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 294, loss: 0.01, omega:733.73, group_sparsity: 0.80, acc1: 0.9421\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 295, loss: 0.01, omega:733.68, group_sparsity: 0.80, acc1: 0.9417\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 296, loss: 0.01, omega:733.62, group_sparsity: 0.80, acc1: 0.9435\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 297, loss: 0.01, omega:733.57, group_sparsity: 0.80, acc1: 0.9433\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 298, loss: 0.01, omega:733.51, group_sparsity: 0.80, acc1: 0.9441\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 299, loss: 0.01, omega:733.51, group_sparsity: 0.80, acc1: 0.9439\n"
     ]
    }
   ],
   "source": [
    "from tqdm import tqdm \n",
    "from utils.utils import check_accuracy\n",
    "\n",
    "max_epoch = 300\n",
    "model.cuda()\n",
    "criterion = torch.nn.CrossEntropyLoss()\n",
     "# Decay the learning rate by a factor of 10 every 75 epochs.\n",
     "lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=75, gamma=0.1)\n",
     "\n",
     "for epoch in range(max_epoch):\n",
     "    f_avg_val = 0.0\n",
     "    model.train()\n",
     "    for X, y in tqdm(trainloader):\n",
     "        X = X.cuda()\n",
     "        y = y.cuda()\n",
     "        y_pred = model(X)\n",
     "        f = criterion(y_pred, y)\n",
     "        optimizer.zero_grad()\n",
     "        f.backward()\n",
     "        f_avg_val += f.item() # Accumulate a detached scalar, not the graph-carrying tensor.\n",
     "        optimizer.step()\n",
     "    # Step the scheduler once per epoch, after the optimizer updates.\n",
     "    lr_scheduler.step()\n",
     "    group_sparsity, omega = optimizer.compute_group_sparsity_omega()\n",
     "    # Record the four metrics below after the pruning step to troubleshoot in depth.\n",
     "    # norm_redundant, norm_important, num_groups_redundant, num_groups_important = optimizer.compute_norm_group_partitions()\n",
     "    accuracy1, accuracy5 = check_accuracy(model, testloader)\n",
     "    f_avg_val /= len(trainloader)\n",
    "    print(\"Epoch: {ep}, loss: {f:.2f}, omega:{om:.2f}, group_sparsity: {gs:.2f}, acc1: {acc:.4f}\".\\\n",
    "          format(ep=epoch, f=f_avg_val, om=omega, gs=group_sparsity, acc=accuracy1))\n",
    "    # print(\"Epoch: {ep}, norm_redundant: {norm_redundant:.2f}, norm_important:{norm_important:.2f}, num_groups_redundant: {num_groups_redundant}, num_groups_important: {num_groups_important}\".\\\n",
    "    #       format(ep=epoch, norm_redundant=norm_redundant, norm_important=norm_important, num_groups_redundant=num_groups_redundant, num_groups_important=num_groups_important))\n",
    "\n",
    "    # Save model checkpoint\n",
    "    # torch.save(model, CKPT_PATH)"
   ]
  },
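   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "For reference, the `StepLR` schedule used above can be written in closed form. A sketch (the constants mirror `lr=0.1`, `step_size=75`, `gamma=0.1` from the training cell; `lr_at_epoch` is our illustrative name, not a PyTorch API):\n",
     "\n",
     "```python\n",
     "def lr_at_epoch(epoch, base_lr=0.1, step_size=75, gamma=0.1):\n",
     "    # StepLR: lr = base_lr * gamma ** (epoch // step_size)\n",
     "    return base_lr * gamma ** (epoch // step_size)\n",
     "\n",
     "# Over the 300-epoch run this yields four plateaus:\n",
     "# epochs 0-74 -> 0.1, 75-149 -> 0.01, 150-224 -> 0.001, 225-299 -> 0.0001\n",
     "for epoch in (0, 75, 150, 225):\n",
     "    print(epoch, lr_at_epoch(epoch))\n",
     "```"
    ]
   },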
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 5. Get compressed model in ONNX format\n",
    "\n",
     "By default, OTO compresses the last checkpoint.\n",
     "\n",
     "To compress a different checkpoint, reinitialize OTO with it and then compress:\n",
     "\n",
     "    oto = OTO(model=torch.load(ckpt_path), dummy_input=dummy_input)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A ResNet_compressed.onnx will be generated. \n",
    "oto.compress()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (Optional) Compute FLOPs and number of parameters before and after OTO training\n",
    "\n",
    "The compressed ResNet50 under 80% group sparsity reduces FLOPs by 90.0% and parameters by 96.4%."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Full FLOPs (M): 1297.83. Compressed FLOPs (M): 129.29. Reduction Ratio: 0.9004\n",
      "Full # Params: 23520842. Compressed # Params: 837108. Reduction Ratio: 0.9644\n"
     ]
    }
   ],
   "source": [
    "full_flops = oto.compute_flops()\n",
    "compressed_flops = oto.compute_flops(compressed=True)\n",
    "full_num_params = oto.compute_num_params()\n",
    "compressed_num_params = oto.compute_num_params(compressed=True)\n",
    "\n",
    "print(\"Full FLOPs (M): {f_flops:.2f}. Compressed FLOPs (M): {c_flops:.2f}. Reduction Ratio: {f_ratio:.4f}\"\\\n",
    "      .format(f_flops=full_flops, c_flops=compressed_flops, f_ratio=1 - compressed_flops/full_flops))\n",
    "print(\"Full # Params: {f_params}. Compressed # Params: {c_params}. Reduction Ratio: {f_ratio:.4f}\"\\\n",
    "      .format(f_params=full_num_params, c_params=compressed_num_params, f_ratio=1 - compressed_num_params/full_num_params))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (Optional) Check the compressed model accuracy\n",
    "\n",
     "#### Both the full and compressed models should return exactly the same accuracy.\n",
     "\n",
     "Compared to the [baseline full ResNet50 on CIFAR10 (93.62% top-1 accuracy)](https://github.com/kuangliu/pytorch-cifar), the compressed ResNet50 improves top-1 accuracy by about 1% while significantly reducing FLOPs and parameters."
   ]
  },
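   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "`check_accuracy` and `check_accuracy_onnx` are helpers from this repository's `utils`. For readers without the repository at hand, top-1/top-5 accuracy over raw logits can be computed as in the following standalone sketch (plain Python, not the repository's implementation):\n",
     "\n",
     "```python\n",
     "def top_k_accuracy(logits, labels, k):\n",
     "    # A sample counts as correct at top-k if its true label is among\n",
     "    # the k highest-scoring classes.\n",
     "    hits = 0\n",
     "    for scores, label in zip(logits, labels):\n",
     "        top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]\n",
     "        hits += label in top_k\n",
     "    return hits / len(labels)\n",
     "\n",
     "logits = [[0.1, 0.7, 0.2], [0.2, 0.5, 0.3], [0.05, 0.05, 0.9]]\n",
     "labels = [1, 2, 2]\n",
     "print(top_k_accuracy(logits, labels, 1))  # 2 of 3 correct at top-1\n",
     "print(top_k_accuracy(logits, labels, 2))  # all 3 correct at top-2\n",
     "```"
    ]
   },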
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Full model: Acc 1: 0.9439, Acc 5: 0.9976\n",
      "Compressed model: Acc 1: 0.9439, Acc 5: 0.9976\n"
     ]
    }
   ],
   "source": [
    "from utils.utils import check_accuracy_onnx\n",
    "testloader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False, num_workers=4)\n",
    "\n",
    "acc1_full, acc5_full = check_accuracy(model, testloader)\n",
    "print(\"Full model: Acc 1: {acc1}, Acc 5: {acc5}\".format(acc1=acc1_full, acc5=acc5_full))\n",
    "\n",
    "acc1_compressed, acc5_compressed = check_accuracy_onnx(oto.compressed_model_path, testloader)\n",
    "print(\"Compressed model: Acc 1: {acc1}, Acc 5: {acc5}\".format(acc1=acc1_compressed, acc5=acc5_compressed))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
