{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Tutorial 4. VGG16-BN on CIFAR10. \n",
    "\n",
    "\n",
    "In this tutorial, we will show \n",
    "\n",
    "- How to train and compress a VGG16-BN on CIFAR10 end-to-end to reproduce the results reported in the paper.\n",
    "- Please ensure the `only_train_once` version is `>=2.0.16`.\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 1. Create OTO instance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "graph constructor\n",
      "grow_non_stem_connected_components\n",
      "group_individual_nodes\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import random\n",
    "import numpy as np\n",
    "from only_train_once import OTO\n",
    "from backends import vgg16_bn\n",
    "\n",
    "# Set up the random seed; experimental results may vary across GPUs and CUDA versions.\n",
    "seed = 42\n",
    "random.seed(seed)\n",
    "np.random.seed(seed)\n",
    "torch.manual_seed(seed)\n",
    "\n",
    "model = vgg16_bn().cuda()\n",
    "dummy_input = torch.zeros(1, 3, 32, 32).cuda()\n",
    "oto = OTO(model=model, dummy_input=dummy_input)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### (Optional) Visualize the dependency graph of the DNN for ZIG partitions\n",
    "\n",
    "- Set `view` to `False` if no browser is accessible.\n",
    "- Open the generated PDF file instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A vgg16.gv.pdf will be generated to display the dependency graph.\n",
    "oto.visualize_zigs(view=False)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 2. Dataset Preparation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Files already downloaded and verified\n",
      "Files already downloaded and verified\n"
     ]
    }
   ],
   "source": [
    "from torchvision.datasets import CIFAR10\n",
    "import torchvision.transforms as transforms\n",
    "\n",
    "trainset = CIFAR10(root='cifar10', train=True, download=True, transform=transforms.Compose([\n",
    "            transforms.RandomHorizontalFlip(),\n",
    "            transforms.RandomCrop(32, 4),\n",
    "            transforms.ToTensor(),\n",
    "            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]))\n",
    "testset = CIFAR10(root='cifar10', train=False, download=True, transform=transforms.Compose([\n",
    "            transforms.ToTensor(),\n",
    "            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]))\n",
    "\n",
    "trainloader =  torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True, num_workers=4)\n",
    "testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False, num_workers=4)"
   ]
  },
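  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### (Optional) Sanity-check one batch\n",
    "\n",
    "A quick check (not part of the original tutorial) that the loaders yield CIFAR10 batches of the expected shape and label range."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fetch a single batch and verify its shape and label range.\n",
    "X_demo, y_demo = next(iter(trainloader))\n",
    "assert X_demo.shape == (64, 3, 32, 32)\n",
    "assert 0 <= y_demo.min() and y_demo.max() <= 9\n",
    "print(X_demo.shape, y_demo.shape)"
   ]
  },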
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 3. Setup DHSPG optimizer\n",
    "\n",
    "The following main hyperparameters need to be set carefully.\n",
    "\n",
    "- `variant`: The optimizer variant used for training the baseline full model.\n",
    "- `lr`: The initial learning rate.\n",
    "- `weight_decay`: Weight decay, as in standard DNN optimization.\n",
    "- `target_group_sparsity`: The target group sparsity. Higher group sparsity typically yields larger FLOPs and model-size reductions, but may regress model performance more.\n",
    "- `tolerance_group_sparsity`: The percentage of additional groups fed into the half-space projection (specific to the VGG16 experiments).\n",
    "- `start_pruning_steps`: The number of steps after which pruning starts.\n",
    "- `epsilon`: A coefficient in [0, 1) that controls the aggressiveness of group-sparsity exploration. A higher value means more aggressive exploration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer = oto.dhspg(\n",
    "    variant='sgd', \n",
    "    lr=0.1, \n",
    "    target_group_sparsity=0.79,\n",
    "    tolerance_group_sparsity=0.21, \n",
    "    lmbda=5e-4,\n",
    "    weight_decay=5e-4, \n",
    "    weight_decay_type='l1_norm', \n",
    "    start_pruning_steps=50 * len(trainloader), # Start pruning after 50 epochs of full-model training.\n",
    "    epsilon=0.9)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 4. Train VGG16 as usual."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define the helper functions used for `mixup` data augmentation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch.nn.functional as F\n",
    "\n",
    "def one_hot(y, num_classes, smoothing_eps=None):\n",
    "    if smoothing_eps is None:\n",
    "        one_hot_y = F.one_hot(y, num_classes).float()\n",
    "        return one_hot_y\n",
    "    else:\n",
    "        one_hot_y = F.one_hot(y, num_classes).float()\n",
    "        v1 = 1 - smoothing_eps + smoothing_eps / float(num_classes)\n",
    "        v0 = smoothing_eps / float(num_classes)\n",
    "        new_y = one_hot_y * (v1 - v0) + v0\n",
    "        return new_y\n",
    "\n",
    "def mixup_func(input, target, alpha=0.2):\n",
    "    # Sample the mixing coefficient from a Beta(alpha, alpha) distribution.\n",
    "    gamma = np.random.beta(alpha, alpha)\n",
    "    # target is expected in one-hot format.\n",
    "    perm = torch.randperm(input.size(0))\n",
    "    perm_input = input[perm]\n",
    "    perm_target = target[perm]\n",
    "    return input.mul_(gamma).add_(perm_input, alpha=1 - gamma), target.mul_(gamma).add_(perm_target, alpha=1 - gamma)"
   ]
  },
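  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration (not part of the original tutorial): with smoothing, the true class receives the value $1 - \\epsilon + \\epsilon / C$ and every other class receives $\\epsilon / C$, so each row of the smoothed target still sums to 1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative check of the label-smoothing arithmetic with eps=0.1 and C=10.\n",
    "smoothed = one_hot(torch.tensor([3]), num_classes=10, smoothing_eps=0.1)\n",
    "# True class: 1 - 0.1 + 0.1/10 = 0.91; every other class: 0.1/10 = 0.01.\n",
    "print(smoothed)\n",
    "print(smoothed.sum().item())  # approximately 1.0"
   ]
  },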
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Start official training and compression.\n",
    "\n",
    "The top-1 accuracy can reach $93.3\\%$ on this specific run.\n",
    "\n",
    "The VGG16 experiments fluctuate a bit more than the others, perhaps due to its plain (non-residual) architecture.\n",
    "\n",
    "Across our experiments, the top-1 accuracy is $93.1\\pm0.3\\%$ over different random seeds and GPUs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 0, loss: 1.79, omega:7314.65, group_sparsity: 0.00, acc1: 0.4082\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 1, loss: 1.44, omega:7061.52, group_sparsity: 0.00, acc1: 0.5302\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 2, loss: 1.26, omega:6818.39, group_sparsity: 0.00, acc1: 0.5962\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 3, loss: 1.14, omega:6585.05, group_sparsity: 0.00, acc1: 0.6300\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 4, loss: 1.06, omega:6360.30, group_sparsity: 0.00, acc1: 0.7180\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 5, loss: 1.02, omega:6144.53, group_sparsity: 0.00, acc1: 0.6150\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "......\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 295, loss: 0.49, omega:667.54, group_sparsity: 0.79, acc1: 0.9323\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 296, loss: 0.48, omega:667.58, group_sparsity: 0.79, acc1: 0.9332\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 297, loss: 0.50, omega:667.56, group_sparsity: 0.79, acc1: 0.9311\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 298, loss: 0.49, omega:667.54, group_sparsity: 0.79, acc1: 0.9321\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 299, loss: 0.49, omega:667.54, group_sparsity: 0.79, acc1: 0.9327\n"
     ]
    }
   ],
   "source": [
    "from tqdm import tqdm \n",
    "from utils.utils import check_accuracy\n",
    "\n",
    "num_classes = 10\n",
    "\n",
    "mix_up = True\n",
    "max_epoch = 300\n",
    "model.cuda()\n",
    "criterion = torch.nn.CrossEntropyLoss()\n",
    "# Every 75 epochs, decay lr by 10.0\n",
    "lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=75, gamma=0.1) \n",
    "\n",
    "for epoch in range(max_epoch):\n",
    "    f_avg_val = 0.0\n",
    "    model.train()\n",
    "    for X, y in trainloader:\n",
    "        X = X.cuda()\n",
    "        y = y.cuda()\n",
    "        with torch.no_grad():\n",
    "            if mix_up:\n",
    "                y = one_hot(y, num_classes=num_classes)\n",
    "                X, y = mixup_func(X, y)\n",
    "        y_pred = model.forward(X)\n",
    "        f = criterion(y_pred, y)\n",
    "        optimizer.zero_grad()\n",
    "        f.backward()\n",
    "        f_avg_val += f\n",
    "        optimizer.step()\n",
    "    # Step the LR scheduler after the optimizer updates (required order since PyTorch 1.1).\n",
    "    lr_scheduler.step()\n",
    "    group_sparsity, omega = optimizer.compute_group_sparsity_omega()\n",
    "    # Record the four metrics below after the pruning step if you want to troubleshoot in depth.\n",
    "    # norm_redundant, norm_important, num_groups_redundant, num_groups_important = optimizer.compute_norm_group_partitions()\n",
    "    accuracy1, accuracy5 = check_accuracy(model, testloader)\n",
    "    f_avg_val = f_avg_val.cpu().item() / len(trainloader)\n",
    "    print(\"Epoch: {ep}, loss: {f:.2f}, omega:{om:.2f}, group_sparsity: {gs:.2f}, acc1: {acc:.4f}\".\\\n",
    "          format(ep=epoch, f=f_avg_val, om=omega, gs=group_sparsity, acc=accuracy1))\n",
    "    # print(\"Epoch: {ep}, norm_redundant: {norm_redundant:.2f}, norm_important:{norm_important:.2f}, num_groups_redundant: {num_groups_redundant}, num_groups_important: {num_groups_important}\".\\\n",
    "    #       format(ep=epoch, norm_redundant=norm_redundant, norm_important=norm_important, num_groups_redundant=num_groups_redundant, num_groups_important=num_groups_important))\n",
    "\n",
    "    # Save model checkpoint\n",
    "    # torch.save(model, CKPT_PATH)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 5. Get compressed model in ONNX format\n",
    "\n",
    "By default, OTO compresses the latest checkpoint.\n",
    "\n",
    "To compress a different checkpoint, reinitialize OTO with it first, then compress:\n",
    "\n",
    "    oto = OTO(model=torch.load(ckpt_path), dummy_input=dummy_input)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A VGG16_compressed.onnx will be generated. \n",
    "oto.compress()"
   ]
  },
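  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The compressed model can then be loaded for standalone inference. Below is a minimal sketch (not part of the original tutorial, assuming `onnxruntime` is installed) that runs the compressed ONNX model on one dummy CIFAR10-sized input."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import onnxruntime as ort\n",
    "\n",
    "# Load the compressed ONNX model and run it on a dummy input.\n",
    "sess = ort.InferenceSession(oto.compressed_model_path)\n",
    "input_name = sess.get_inputs()[0].name\n",
    "x = np.zeros((1, 3, 32, 32), dtype=np.float32)\n",
    "logits = sess.run(None, {input_name: x})[0]\n",
    "pred = int(np.argmax(logits, axis=1)[0])  # top-1 class index\n",
    "print(logits.shape, pred)"
   ]
  },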
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (Optional) Compute FLOPs and number of parameters before and after OTO training\n",
    "\n",
    "At about 80% group sparsity, the compressed VGG16-BN reduces FLOPs by 73.4% and parameters by 95.0% on this specific run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Full FLOPs (M): 313.73. Compressed FLOPs (M): 83.44. Reduction Ratio: 0.7340\n",
      "Full # Params: 15253578. Compressed # Params: 766831. Reduction Ratio: 0.9497\n"
     ]
    }
   ],
   "source": [
    "full_flops = oto.compute_flops()\n",
    "compressed_flops = oto.compute_flops(compressed=True)\n",
    "full_num_params = oto.compute_num_params()\n",
    "compressed_num_params = oto.compute_num_params(compressed=True)\n",
    "\n",
    "print(\"Full FLOPs (M): {f_flops:.2f}. Compressed FLOPs (M): {c_flops:.2f}. Reduction Ratio: {f_ratio:.4f}\"\\\n",
    "      .format(f_flops=full_flops, c_flops=compressed_flops, f_ratio=1 - compressed_flops/full_flops))\n",
    "print(\"Full # Params: {f_params}. Compressed # Params: {c_params}. Reduction Ratio: {f_ratio:.4f}\"\\\n",
    "      .format(f_params=full_num_params, c_params=compressed_num_params, f_ratio=1 - compressed_num_params/full_num_params))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (Optional) Check the compressed model accuracy\n",
    "\n",
    "The full and compressed models should return exactly the same accuracy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Full model: Acc 1: 0.9327, Acc 5: 0.9965\n",
      "Compressed model: Acc 1: 0.9327, Acc 5: 0.9965\n"
     ]
    }
   ],
   "source": [
    "from utils.utils import check_accuracy_onnx\n",
    "testloader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False, num_workers=4)\n",
    "\n",
    "acc1_full, acc5_full = check_accuracy(model, testloader)\n",
    "print(\"Full model: Acc 1: {acc1}, Acc 5: {acc5}\".format(acc1=acc1_full, acc5=acc5_full))\n",
    "\n",
    "acc1_compressed, acc5_compressed = check_accuracy_onnx(oto.compressed_model_path, testloader)\n",
    "print(\"Compressed model: Acc 1: {acc1}, Acc 5: {acc5}\".format(acc1=acc1_compressed, acc5=acc5_compressed))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
