{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<style>\n",
    "\n",
    "    .rst-content blockquote {\n",
    "\n",
    "        margin-left: 0px;\n",
    "\n",
    "    }\n",
    "\n",
    "   \n",
    "\n",
    "    blockquote > div {\n",
    "\n",
    "        margin: 1.5625em auto;\n",
    "\n",
    "        padding: 20px 15px 1px;\n",
    "\n",
    "        border-left: 0.2rem solid rgb(59, 136, 219);  \n",
    "\n",
    "        border-radius: 0.2rem;\n",
    "\n",
    "        box-shadow: 0 0.2rem 0.5rem rgb(0 0 0 / 5%), 0 0 0.0625rem rgb(0 0 0 / 10%);\n",
    "\n",
    "    }\n",
    "\n",
     "</style>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Vision Transformer Examples for InferenceOptimizer\n",
    "\n",
     "Vision Transformers are becoming more and more popular. On one hand, researchers keep scaling up pre-training corpora and model sizes; on the other hand, how to deploy vision Transformers in industrial scenarios is also a much sought-after issue.\n",
    "\n",
     "Here we take several popular vision Transformer architectures as examples to demonstrate how to use InferenceOptimizer in BigDL-Nano to accelerate model inference."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Step 0: Prepare the environment\n",
    "We recommend you to use [Miniconda](https://docs.conda.io/en/latest/miniconda.html) to prepare the environment.\n",
    "\n",
     "**Note**: during installation, there may be some warnings or errors about versions; you can just ignore them.\n",
    "```bash\n",
     "conda create -n nano python=3.7 setuptools=58.0.4  # \"nano\" is the conda environment name; you can use any name you like\n",
     "conda activate nano\n",
     "pip install --pre --upgrade bigdl-nano[pytorch,inference]  # install the nightly-built version\n",
    "# install timm package to use pre-trained model\n",
    "pip install timm\n",
    "```\n",
    "\n",
     "Initialize environment variables with the `bigdl-nano-init` script installed along with bigdl-nano.\n",
    "\n",
    "```bash\n",
    "source bigdl-nano-init\n",
    "``` \n",
    "\n",
     "You should see environment variables set as follows:\n",
    "\n",
    "```\n",
    "conda dir found: /opt/anaconda3/envs/nano/bin/..\n",
    "OpenMP library found...\n",
    "Setting OMP_NUM_THREADS...\n",
    "Setting OMP_NUM_THREADS specified for pytorch...\n",
    "Setting KMP_AFFINITY...\n",
    "Setting KMP_BLOCKTIME...\n",
    "Setting MALLOC_CONF...\n",
    "Setting LD_PRELOAD...\n",
    "nano_vars.sh already exists\n",
    "+++++ Env Variables +++++\n",
    "LD_PRELOAD=/opt/anaconda3/envs/nano/bin/../lib/libiomp5.so /opt/anaconda3/envs/nano/lib/python3.7/site-packages/bigdl/nano//libs/libtcmalloc.so\n",
    "MALLOC_CONF=\n",
    "OMP_NUM_THREADS=112\n",
    "KMP_AFFINITY=granularity=fine\n",
    "KMP_BLOCKTIME=1\n",
    "TF_ENABLE_ONEDNN_OPTS=1\n",
    "ENABLE_TF_OPTS=1\n",
    "NANO_TF_INTER_OP=1\n",
    "+++++++++++++++++++++++++\n",
    "Complete.\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Step 1: Prepare the dataset\n",
    "\n",
     "As InferenceOptimizer needs validation data to calculate the accuracy metric, we need to download the [ImageNet validation dataset](https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar) and [development kit](https://image-net.org/data/ILSVRC/2012/ILSVRC2012_devkit_t12.tar.gz), and place them under the directory `./img_data`.\n",
     "\n",
     "Here we provide a helper function `create_imagenet_val_dataset` to help create a subset of the ImageNet validation dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
     "from torchvision.datasets import ImageNet\n",
     "from torch.utils.data import Subset\n",
     "import numpy as np\n",
     "\n",
     "def create_imagenet_val_dataset(limit_num_samples=None):\n",
     "    # load the ImageNet validation split placed under ./img_data\n",
     "    dataset = ImageNet(root=\"img_data\", split=\"val\")\n",
     "    if limit_num_samples is not None:\n",
     "        # randomly pick `limit_num_samples` samples to speed up evaluation\n",
     "        indices = np.random.permutation(len(dataset))[:limit_num_samples]\n",
     "        dataset = Subset(dataset, indices)\n",
     "    return dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Step 2: Import related packages\n",
    "\n",
     "[PyTorch Image Models (timm)](https://github.com/rwightman/pytorch-image-models) provides a collection of image models. Here we use several vision Transformer models with pre-trained weights provided by timm to demonstrate the acceleration offered by InferenceOptimizer in BigDL-Nano."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from bigdl.nano.pytorch import InferenceOptimizer\n",
    "import timm\n",
    "from torchmetrics.classification import MulticlassAccuracy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Step 3: Define the dataloader and model, then optimize and get the best model\n",
     "> 📝 **Note**\n",
     ">\n",
     "> We highly recommend passing a real training dataloader to `training_data` for the calibration of quantization. However, as the ImageNet training set is too large to download, we use the validation dataset as a fake training dataset in the cases below.\n",
     "> \n",
     "> If you want to get the real performance on the ImageNet validation set, you can simply set `limit_num_samples=None`. Here we choose a subset to make the inference pipeline faster, as we only need a rough metric to evaluate the effect of quantization.\n",
     "> \n",
     "> The results below were obtained on a Cooper Lake processor with 112 physical cores."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. MobileViT\n",
    "\n",
     "[MobileViT](https://arxiv.org/abs/2110.02178) is a lightweight, general-purpose, and mobile-friendly vision Transformer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from timm.data.loader import create_loader\n",
    "\n",
    "fake_train_dataset = create_imagenet_val_dataset()\n",
    "faked_train_dataloader = create_loader(fake_train_dataset,\n",
    "                                       input_size=256,\n",
     "                                       # set batch_size to 1 so that we can evaluate single-sample latency\n",
    "                                       batch_size=1,\n",
    "                                       use_prefetcher=False,\n",
    "                                       no_aug=True,\n",
    "                                       crop_pct=0.9,\n",
    "                                       interpolation=\"bicubic\",\n",
    "                                       mean=(0.0, 0.0, 0.0),\n",
    "                                       std=(1.0, 1.0, 1.0),\n",
    "                                       persistent_workers=False)\n",
    "\n",
    "val_dataset = create_imagenet_val_dataset(limit_num_samples=320)\n",
    "val_dataloader = create_loader(val_dataset,\n",
    "                               input_size=256,\n",
    "                               batch_size=32,\n",
    "                               use_prefetcher=False,\n",
    "                               no_aug=True,\n",
    "                               crop_pct=0.9,\n",
    "                               interpolation=\"bicubic\",\n",
    "                               mean=(0.0, 0.0, 0.0),\n",
    "                               std=(1.0, 1.0, 1.0),\n",
    "                               persistent_workers=False)\n",
    "val_dataloader.dataset.dataset.transform = val_dataloader.dataset.transform"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> 📝 **Note**\n",
    ">\n",
     "> Each model has its own data preprocessing; the dataloader parameters above are taken from timm.\n",
     "> \n",
     "> `val_dataloader.dataset.dataset.transform = val_dataloader.dataset.transform` is used to apply the transform to the dataset wrapped inside the subset."
   ]
  },
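  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This works because `Subset` keeps the wrapped dataset in its `.dataset` attribute, so `val_dataloader.dataset.dataset` reaches the underlying `ImageNet` dataset whose `transform` has to be set. A minimal sketch with a plain list standing in for the dataset (purely illustrative, not part of the pipeline above):\n",
    "\n",
    "```python\n",
    "from torch.utils.data import Subset\n",
    "\n",
    "base = list(range(10))           # stands in for the full validation dataset\n",
    "subset = Subset(base, [2, 5, 7])\n",
    "\n",
    "assert subset.dataset is base    # the wrapped dataset is reachable via .dataset\n",
    "assert subset[1] == 5            # subset indices are remapped to base indices\n",
    "```"
   ]
  },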
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Calculate latency using 1 thread"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "model = timm.create_model(\"mobilevit_xxs\", pretrained=True)\n",
    "\n",
    "optimizer = InferenceOptimizer()\n",
    "optimizer.optimize(model,\n",
    "                   training_data=faked_train_dataloader,\n",
    "                   validation_data=val_dataloader,\n",
    "                   metric=MulticlassAccuracy(num_classes=1000),\n",
    "                   direction=\"max\",\n",
    "                   thread_num=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By calling `optimizer.summary()`, you can see the complete optimization results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|             method             |        status        | latency(ms)  |       accuracy       |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|            original            |      successful      |    25.479    |        0.656         |\n",
      "|              bf16              |      successful      |    32.14     |        0.662         |\n",
      "|          static_int8           |      successful      |    25.881    |         0.0          |\n",
      "|         jit_fp32_ipex          |      successful      |    19.996    |        0.656*        |\n",
      "|  jit_fp32_ipex_channels_last   |      successful      |    15.217    |        0.656*        |\n",
      "|         jit_bf16_ipex          |      successful      |    11.728    |        0.656         |\n",
      "|  jit_bf16_ipex_channels_last   |      successful      |    12.321    |        0.656         |\n",
      "|         openvino_fp32          |      successful      |    11.542    |        0.656*        |\n",
      "|         openvino_int8          |      successful      |    12.73     |        0.634         |\n",
      "|        onnxruntime_fp32        |      successful      |    12.933    |        0.656*        |\n",
      "|    onnxruntime_int8_qlinear    |      successful      |    12.203    |         0.0          |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "* means we assume the precision of the traced model does not change, so we don't recompute accuracy to save time.\n",
      "Optimization cost 188.0s in total.\n"
     ]
    }
   ],
   "source": [
    "optimizer.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "After `optimizer.optimize`, you need to call `get_best_model()` to obtain an accelerated model that meets certain restrictions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "When the accuracy drop is less than 5%, the model with minimal latency is:  openvino\n"
     ]
    }
   ],
   "source": [
    "acc_model, option = optimizer.get_best_model(accuracy_criterion=0.05)\n",
     "print(\"When the accuracy drop is less than 5%, the model with minimal latency is: \", option)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Then you can use the accelerated model as a normal `nn.Module`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "with InferenceOptimizer.get_context(acc_model):\n",
    "    input_sample = next(iter(val_dataloader))[0]\n",
    "    target = acc_model(input_sample).argmax()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Calculate latency using 8 threads"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "model = timm.create_model(\"mobilevit_xxs\", pretrained=True)\n",
    "\n",
    "optimizer = InferenceOptimizer()\n",
    "optimizer.optimize(model,\n",
    "                   training_data=faked_train_dataloader,\n",
    "                   validation_data=val_dataloader,\n",
    "                   metric=MulticlassAccuracy(num_classes=1000),\n",
    "                   direction=\"max\",\n",
    "                   thread_num=8)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|             method             |        status        | latency(ms)  |       accuracy       |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|            original            |      successful      |    24.329    |        0.656         |\n",
      "|              bf16              |      successful      |    21.045    |        0.662         |\n",
      "|          static_int8           |      successful      |    24.936    |         0.0          |\n",
      "|         jit_fp32_ipex          |      successful      |    17.953    |        0.656*        |\n",
      "|  jit_fp32_ipex_channels_last   |      successful      |    13.622    |        0.656*        |\n",
      "|         jit_bf16_ipex          |      successful      |    8.525     |        0.656         |\n",
      "|  jit_bf16_ipex_channels_last   |      successful      |    7.073     |        0.656         |\n",
      "|         openvino_fp32          |      successful      |    3.839     |        0.656*        |\n",
      "|         openvino_int8          |      successful      |    4.272     |        0.619         |\n",
      "|        onnxruntime_fp32        |      successful      |    6.594     |        0.656*        |\n",
      "|    onnxruntime_int8_qlinear    |      successful      |    7.269     |         0.0          |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "* means we assume the precision of the traced model does not change, so we don't recompute accuracy to save time.\n",
      "Optimization cost 171.1s in total.\n"
     ]
    }
   ],
   "source": [
    "optimizer.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. PoolFormer\n",
    "\n",
     "[PoolFormer](https://arxiv.org/abs/2111.11418) verifies that the general architecture of Transformers, rather than the specific token mixer module, is more essential to the model's performance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "from timm.data.loader import create_loader\n",
    "\n",
    "fake_train_dataset = create_imagenet_val_dataset()\n",
    "faked_train_dataloader = create_loader(fake_train_dataset,\n",
    "                               input_size=224,\n",
    "                               batch_size=1,\n",
    "                               use_prefetcher=False,\n",
    "                               no_aug=True,\n",
    "                               crop_pct=0.9,\n",
    "                               interpolation=\"bicubic\",\n",
    "                               mean=(0.485, 0.456, 0.406),\n",
    "                               std=(0.229, 0.224, 0.225),\n",
    "                               persistent_workers=False)\n",
    "val_dataset = create_imagenet_val_dataset(limit_num_samples=320)\n",
    "val_dataloader = create_loader(val_dataset,\n",
    "                               input_size=224,\n",
    "                               batch_size=32,\n",
    "                               use_prefetcher=False,\n",
    "                               no_aug=True,\n",
    "                               crop_pct=0.9,\n",
    "                               interpolation=\"bicubic\",\n",
    "                               mean=(0.485, 0.456, 0.406),\n",
    "                               std=(0.229, 0.224, 0.225),\n",
    "                               persistent_workers=False)\n",
    "val_dataloader.dataset.dataset.transform = val_dataloader.dataset.transform"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Calculate latency using 1 thread"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "model = timm.create_model(\"poolformer_s12\", pretrained=True)\n",
    "\n",
    "optimizer = InferenceOptimizer()\n",
    "optimizer.optimize(model,\n",
    "                   training_data=faked_train_dataloader,\n",
    "                   validation_data=val_dataloader,\n",
    "                   metric=MulticlassAccuracy(num_classes=1000),\n",
    "                   direction=\"max\",\n",
    "                   thread_num=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|             method             |        status        | latency(ms)  |       accuracy       |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|            original            |      successful      |    54.142    |        0.781         |\n",
      "|              bf16              |      successful      |    50.384    |        0.784         |\n",
      "|          static_int8           |   fail to convert    |     None     |         None         |\n",
      "|         jit_fp32_ipex          |      successful      |    52.585    |        0.781*        |\n",
      "|  jit_fp32_ipex_channels_last   |      successful      |    30.379    |        0.781*        |\n",
      "|         jit_bf16_ipex          |      successful      |    20.488    |        0.772         |\n",
      "|  jit_bf16_ipex_channels_last   |      successful      |    19.908    |        0.772         |\n",
      "|         openvino_fp32          |      successful      |    31.903    |        0.781*        |\n",
      "|         openvino_int8          |      successful      |    15.23     |        0.722         |\n",
      "|        onnxruntime_fp32        |      successful      |    34.376    |        0.781*        |\n",
      "|    onnxruntime_int8_qlinear    |      successful      |    31.821    |        0.741         |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "* means we assume the precision of the traced model does not change, so we don't recompute accuracy to save time.\n",
      "Optimization cost 179.1s in total.\n"
     ]
    }
   ],
   "source": [
    "optimizer.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Calculate latency using 4 threads"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "model = timm.create_model(\"poolformer_s12\", pretrained=True)\n",
    "\n",
    "optimizer = InferenceOptimizer()\n",
    "optimizer.optimize(model,\n",
    "                   training_data=faked_train_dataloader,\n",
    "                   validation_data=val_dataloader,\n",
    "                   metric=MulticlassAccuracy(num_classes=1000),\n",
    "                   direction=\"max\",\n",
    "                   thread_num=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|             method             |        status        | latency(ms)  |       accuracy       |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|            original            |      successful      |    29.872    |        0.781         |\n",
      "|              bf16              |      successful      |    29.415    |        0.784         |\n",
      "|          static_int8           |   fail to convert    |     None     |         None         |\n",
      "|         jit_fp32_ipex          |      successful      |    29.843    |        0.781*        |\n",
      "|  jit_fp32_ipex_channels_last   |      successful      |    13.977    |        0.781*        |\n",
      "|         jit_bf16_ipex          |      successful      |    9.524     |        0.772         |\n",
      "|  jit_bf16_ipex_channels_last   |      successful      |    7.483     |        0.772         |\n",
      "|         openvino_fp32          |      successful      |    11.318    |        0.781*        |\n",
      "|         openvino_int8          |      successful      |     6.37     |        0.725         |\n",
      "|        onnxruntime_fp32        |      successful      |    15.237    |        0.781*        |\n",
      "|    onnxruntime_int8_qlinear    |      successful      |    17.06     |        0.741         |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "* means we assume the precision of the traced model does not change, so we don't recompute accuracy to save time.\n",
      "Optimization cost 149.9s in total.\n"
     ]
    }
   ],
   "source": [
    "optimizer.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Swin Transformer\n",
    "\n",
     "[Swin Transformer](https://arxiv.org/abs/2103.14030) proposes a hierarchical vision Transformer using shifted windows."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> ⚠️ **Warning**\n",
    ">\n",
     "> Swin does not support dynamic batch sizes, so the batch_size of faked_train_dataloader must be the same as that of val_dataloader.\n",
     ">\n",
     "> Otherwise, the accuracy will be very low."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "from timm.data.loader import create_loader\n",
    "\n",
    "fake_train_dataset = create_imagenet_val_dataset()\n",
    "faked_train_dataloader = create_loader(fake_train_dataset,\n",
    "                               input_size=224,\n",
    "                               batch_size=1,\n",
    "                               use_prefetcher=False,\n",
    "                               no_aug=True,\n",
    "                               crop_pct=0.9,\n",
    "                               interpolation=\"bicubic\",\n",
    "                               mean=(0.485, 0.456, 0.406),\n",
    "                               std=(0.229, 0.224, 0.225),\n",
    "                               persistent_workers=False)\n",
    "val_dataset = create_imagenet_val_dataset(limit_num_samples=20)\n",
    "val_dataloader = create_loader(val_dataset,\n",
    "                               input_size=224,\n",
    "                               batch_size=1,\n",
    "                               use_prefetcher=False,\n",
    "                               no_aug=True,\n",
    "                               crop_pct=0.9,\n",
    "                               interpolation=\"bicubic\",\n",
    "                               mean=(0.485, 0.456, 0.406),\n",
    "                               std=(0.229, 0.224, 0.225),\n",
    "                               persistent_workers=False)\n",
    "val_dataloader.dataset.dataset.transform = val_dataloader.dataset.transform"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Calculate latency using 1 thread"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "model = timm.create_model(\"swin_base_patch4_window7_224\", pretrained=True)\n",
    "\n",
    "optimizer = InferenceOptimizer()\n",
    "optimizer.optimize(model,\n",
    "                   training_data=faked_train_dataloader,\n",
    "                   validation_data=val_dataloader,\n",
    "                   metric=MulticlassAccuracy(num_classes=1000),\n",
    "                   direction=\"max\",\n",
    "                   thread_num=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|             method             |        status        | latency(ms)  |       accuracy       |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|            original            |      successful      |   311.535    |         0.8          |\n",
      "|              bf16              |      successful      |   177.442    |         0.8          |\n",
      "|          static_int8           |      successful      |   203.125    |         0.85         |\n",
      "|         jit_fp32_ipex          |      successful      |   270.109    |         0.8*         |\n",
      "|  jit_fp32_ipex_channels_last   |      successful      |   265.649    |         0.8*         |\n",
      "|         jit_bf16_ipex          |      successful      |   154.466    |         0.8          |\n",
      "|  jit_bf16_ipex_channels_last   |      successful      |   148.976    |         0.8          |\n",
      "|         openvino_fp32          |      successful      |   251.555    |         0.8*         |\n",
      "|         openvino_int8          |      successful      |   171.035    |         0.0          |\n",
      "|        onnxruntime_fp32        |      successful      |   267.994    |         0.8*         |\n",
      "|    onnxruntime_int8_qlinear    |      successful      |   142.155    |         0.5          |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "* means we assume the precision of the traced model does not change, so we don't recompute accuracy to save time.\n",
      "Optimization cost 772.6s in total.\n"
     ]
    }
   ],
   "source": [
    "optimizer.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Calculate latency using 8 threads"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "model = timm.create_model(\"swin_base_patch4_window7_224\", pretrained=True)\n",
    "\n",
    "optimizer = InferenceOptimizer()\n",
    "optimizer.optimize(model,\n",
    "                   training_data=faked_train_dataloader,\n",
    "                   validation_data=val_dataloader,\n",
    "                   metric=MulticlassAccuracy(num_classes=1000),\n",
    "                   direction=\"max\",\n",
    "                   thread_num=8)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|             method             |        status        | latency(ms)  |       accuracy       |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "|            original            |      successful      |   105.859    |         0.8          |\n",
      "|              bf16              |      successful      |    73.445    |         0.8          |\n",
      "|          static_int8           |      successful      |   105.891    |         0.85         |\n",
      "|         jit_fp32_ipex          |      successful      |    87.042    |         0.8*         |\n",
      "|  jit_fp32_ipex_channels_last   |      successful      |    87.928    |         0.8*         |\n",
      "|         jit_bf16_ipex          |      successful      |   190.623    |         0.8          |\n",
      "|  jit_bf16_ipex_channels_last   |      successful      |   170.537    |         0.8          |\n",
      "|         openvino_fp32          |      successful      |    47.405    |         0.8*         |\n",
      "|         openvino_int8          |      successful      |    37.394    |         0.0          |\n",
      "|        onnxruntime_fp32        |      successful      |    99.722    |         0.8*         |\n",
      "|    onnxruntime_int8_qlinear    |      successful      |    91.991    |         0.5          |\n",
      " -------------------------------- ---------------------- -------------- ----------------------\n",
      "* means we assume the precision of the traced model does not change, so we don't recompute accuracy to save time.\n",
      "Optimization cost 753.4s in total.\n"
     ]
    }
   ],
   "source": [
    "optimizer.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Step 4: Save and load the model (optional)\n",
     "After you get an accelerated model, you can save it as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "InferenceOptimizer.save(acc_model, path=\"ckpt\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Then load it with `InferenceOptimizer.load`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = InferenceOptimizer.load(\"ckpt\")"
   ]
  }
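  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The loaded model can then be used for inference just like the accelerated model above. As a side note, the raw logits returned by these classification models can be turned into class probabilities and top-5 predictions with plain `torch`; a minimal sketch (the random tensor below is a hypothetical stand-in for a real model output):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "logits = torch.randn(1, 1000)               # stands in for model(input_sample)\n",
    "probs = torch.softmax(logits, dim=-1)       # normalize logits to probabilities\n",
    "top_probs, top_idx = probs.topk(5, dim=-1)  # top-5 predicted classes\n",
    "\n",
    "assert top_idx.shape == (1, 5)\n",
    "assert abs(probs.sum().item() - 1.0) < 1e-5\n",
    "```"
   ]
  }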
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  },
  "vscode": {
   "interpreter": {
    "hash": "d347a5dca25745bedb029e46e41f7d6c8c9b5181ecb97033e2e81a7538459254"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
