{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "9d63df7c",
   "metadata": {},
   "source": [
    "## ⚠️ **DEPRECATED**\n",
    "\n",
    "This notebook is deprecated and may no longer be maintained.\n",
    "Please use it with caution or refer to updated resources.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6630f47-ca2e-4423-b862-778606165dcc",
   "metadata": {},
   "source": [
    "# Quick Get Started Notebook of Intel® Neural Compressor for ONNXRuntime\n",
    "\n",
    "\n",
     "This notebook provides an easy-to-follow guide for getting started with the [Intel® Neural Compressor](https://github.com/intel/neural-compressor) (INC) library for the [ONNX Runtime](https://github.com/microsoft/onnxruntime) framework.\n",
     "\n",
     "In the following sections, we use a DistilBERT model fine-tuned on SST-2 as an example to show how to apply post-training quantization to [ONNX](https://github.com/onnx/onnx) models with the INC library.\n",
    "\n",
    "\n",
    "The main objectives of this notebook are:\n",
    "\n",
    "1. Prerequisite: Prepare necessary environment, model and dataset.\n",
    "2. Quantization with INC: Walk through the step-by-step process of applying post-training quantization.\n",
    "3. Benchmark with INC: Evaluate and compare the performance of the FP32 and INT8 models.\n",
    "\n",
    "\n",
    "## 1. Prerequisite\n",
    "\n",
    "### 1.1 Environment\n",
    "\n",
     "If you already have Jupyter Notebook, you can run this notebook directly. We will use pip to install or upgrade [neural-compressor](https://github.com/intel/neural-compressor), [onnxruntime](https://github.com/microsoft/onnxruntime) and other required packages.\n",
     "\n",
     "Otherwise, you can set up a new environment. First, install [Anaconda](https://www.anaconda.com/distribution/). Then open an Anaconda prompt and run the following commands:\n",
    "\n",
    "```shell\n",
    "conda create -n inc_notebook python==3.8\n",
    "conda activate inc_notebook\n",
    "pip install jupyter\n",
    "jupyter notebook\n",
    "```\n",
     "The last command launches Jupyter Notebook; open this notebook in the browser to continue.\n",
    "\n",
    "Then, let's install necessary packages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bbae42a4-3b3f-40ed-b79f-893a9a82aa59",
   "metadata": {},
   "outputs": [],
   "source": [
    "# install neural-compressor from source\n",
    "import sys\n",
    "!git clone https://github.com/intel/neural-compressor.git\n",
    "%cd ./neural-compressor\n",
    "!{sys.executable} -m pip install -r requirements.txt\n",
    "!{sys.executable} setup.py install\n",
    "%cd ..\n",
     "# or install the latest stable release from PyPI instead:\n",
     "# !{sys.executable} -m pip install neural-compressor\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bc8c7671-503d-4a0c-a6dd-26e6b2147e41",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# install required packages\n",
     "!{sys.executable} -m pip install -r requirements.txt\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0e7173ce-a837-4aa6-a359-0a2a75e53e6c",
   "metadata": {},
   "source": [
    "### 1.2 Prepare model\n",
    "\n",
    "Export [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model to ONNX with [Optimum](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model) command-line.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "1e7390fa-ce4c-4997-ab5d-c9e83b146c44",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Framework not specified. Using pt to export to ONNX.\n",
      "Using the export variant default. Available variants are:\n",
      "\t- default: The default ONNX variant.\n",
      "Using framework PyTorch: 2.0.1+cu117\n",
      "/home/yuwenzho/miniconda3/envs/example/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py:223: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\n",
      "  mask, torch.tensor(torch.finfo(scores.dtype).min)\n",
      "============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============\n",
      "verbose: False, log level: Level.ERROR\n",
      "======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================\n",
      "\n",
      "Post-processing the exported models...\n",
      "Deduplicating shared (tied) weights...\n",
      "Validating models in subprocesses...\n",
      "Validating ONNX model onnx-model/model.onnx...\n",
      "\t-[✓] ONNX model output names match reference model (logits)\n",
      "\t- Validating ONNX Model output \"logits\":\n",
      "\t\t-[✓] (2, 2) matches (2, 2)\n",
      "\t\t-[✓] all values close (atol: 0.0001)\n",
      "The ONNX export succeeded and the exported model was saved at: onnx-model\n"
     ]
    }
   ],
   "source": [
    "!optimum-cli export onnx --model distilbert-base-uncased-finetuned-sst-2-english --task text-classification onnx-model/"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4ff3756a-cfae-4e63-89e0-ddba274e184a",
   "metadata": {},
   "source": [
    "### 1.3 Prepare dataset\n",
    "\n",
    "The General Language Understanding Evaluation (GLUE) benchmark is a group of nine classification tasks on sentences or pairs of sentences which are:\n",
    "\n",
    "- [CoLA](https://nyu-mll.github.io/CoLA/) (Corpus of Linguistic Acceptability) Determine if a sentence is grammatically correct or not.\n",
    "- [MNLI](https://arxiv.org/abs/1704.05426) (Multi-Genre Natural Language Inference) Determine if a sentence entails, contradicts or is unrelated to a given hypothesis. This dataset has two versions, one with the validation and test set coming from the same distribution, another called mismatched where the validation and test use out-of-domain data.\n",
    "- [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) (Microsoft Research Paraphrase Corpus) Determine if two sentences are paraphrases from one another or not.\n",
    "- [QNLI](https://rajpurkar.github.io/SQuAD-explorer/) (Question-answering Natural Language Inference) Determine if the answer to a question is in the second sentence or not. This dataset is built from the SQuAD dataset.\n",
    "- [QQP](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Quora Question Pairs2) Determine if two questions are semantically equivalent or not.\n",
    "- [RTE](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment) (Recognizing Textual Entailment) Determine if a sentence entails a given hypothesis or not.\n",
    "- [SST-2](https://nlp.stanford.edu/sentiment/index.html) (Stanford Sentiment Treebank) Determine if the sentence has a positive or negative sentiment.\n",
    "- [STS-B](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) (Semantic Textual Similarity Benchmark) Determine the similarity of two sentences with a score from 1 to 5.\n",
    "- [WNLI](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html) (Winograd Natural Language Inference) Determine if a sentence with an anonymous pronoun and a sentence with this pronoun replaced are entailed or not. This dataset is built from the Winograd Schema Challenge dataset.\n",
    "\n",
     "Here, we download the SST-2 dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "7b2b4d98-2af3-44e8-a602-475dbfed5436",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--2023-10-10 16:56:47--  https://raw.githubusercontent.com/Shimao-Zhang/Download_GLUE_Data/master/download_glue_data.py\n",
      "Resolving proxy-prc.intel.com (proxy-prc.intel.com)... 10.240.252.16\n",
      "Connecting to proxy-prc.intel.com (proxy-prc.intel.com)|10.240.252.16|:913... connected.\n",
      "Proxy request sent, awaiting response... 200 OK\n",
      "Length: 7045 (6.9K) [text/plain]\n",
      "Saving to: ‘download_glue_data.py’\n",
      "\n",
      "100%[======================================>] 7,045       --.-K/s   in 0.002s  \n",
      "\n",
      "2023-10-10 16:56:48 (4.21 MB/s) - ‘download_glue_data.py’ saved [7045/7045]\n",
      "\n",
      "Downloading and extracting SST...\n",
      "\tCompleted!\n"
     ]
    }
   ],
   "source": [
     "# Note: each `!` line runs in its own subshell, so `!export` would not persist;\n",
     "# we pass the data directory directly instead.\n",
     "!wget https://raw.githubusercontent.com/Shimao-Zhang/Download_GLUE_Data/master/download_glue_data.py\n",
     "!{sys.executable} download_glue_data.py --data_dir=./GLUE_DIR --tasks=SST\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ffbea123-c71f-4a86-aec3-31408888802d",
   "metadata": {},
   "source": [
    "## 2. Quantization with Intel® Neural Compressor\n",
    "\n",
    "Define the variables that will be used."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "7513ab6c-2e1b-40af-b492-6af146c4acae",
   "metadata": {},
   "outputs": [],
   "source": [
    "model_name_or_path = \"distilbert-base-uncased-finetuned-sst-2-english\"\n",
    "fp32_model_path = \"onnx-model/model.onnx\"\n",
    "int8_model_path = \"onnx-model/int8-model.onnx\"\n",
    "data_path = \"./GLUE_DIR/SST-2\"\n",
    "task = \"sst-2\"\n",
    "batch_size = 8\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "59f28a64-c8d9-424d-9efb-932501a63d0f",
   "metadata": {},
   "source": [
    "### 2.1 Define dataset and dataloader\n",
    "\n",
     "In this part, we define a GLUE dataset and build an INC dataloader from it.\n",
     "\n",
     "Refer to [dataset.md](https://github.com/intel/neural-compressor/blob/master/docs/source/dataset.md#user-specific-dataset) and [dataloader.md](https://github.com/intel/neural-compressor/blob/master/docs/source/dataloader.md#build-custom-dataloader-with-python-apiapi) to learn how to build your own dataset and dataloader.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "d8debb8b-42c9-412b-b055-37c57425f004",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import onnx\n",
    "import torch\n",
    "import logging\n",
    "import numpy as np\n",
    "import transformers\n",
    "from transformers.data import InputFeatures\n",
    "\n",
    "logger = logging.getLogger(__name__)\n",
    "logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s -   %(message)s',\n",
    "                    datefmt = '%m/%d/%Y %H:%M:%S',\n",
    "                    level = logging.WARN)\n",
    "\n",
    "class GLUEDataset:\n",
    "    \"\"\"Dataset used for GLUE.\"\"\"\n",
    "    def __init__(self, model, data_dir, model_name_or_path, max_seq_length=128,\\\n",
    "                do_lower_case=True, task='mrpc', model_type='bert', dynamic_length=False,\\\n",
    "                evaluate=True, transform=None, filter=None):\n",
    "        self.inputs = [inp.name for inp in onnx.load(model).graph.input]\n",
    "        task = task.lower()\n",
    "        model_type = model_type.lower()\n",
    "        assert task in ['mrpc', 'qqp', 'qnli', 'rte', 'sts-b', 'cola', 'mnli', 'wnli', 'sst-2'], 'Unsupported task type'\n",
    "        assert model_type in ['distilbert', 'bert', 'mobilebert', 'roberta'], 'Unsupported model type'\n",
    "\n",
    "        tokenizer = transformers.AutoTokenizer.from_pretrained(model_name_or_path, do_lower_case=do_lower_case)\n",
    "        self.dataset = load_and_cache_examples(data_dir, model_name_or_path, \\\n",
    "            max_seq_length, task, model_type, tokenizer, evaluate)\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.dataset)\n",
    "\n",
    "    def __getitem__(self, index):\n",
    "        batch = tuple(t.detach().cpu().numpy() if not isinstance(t, np.ndarray) else t for t in self.dataset[index])\n",
    "        return batch[:len(self.inputs)], batch[-1]\n",
    "\n",
    "def load_and_cache_examples(data_dir, model_name_or_path, max_seq_length, task, model_type, tokenizer, evaluate):\n",
    "    from torch.utils.data import TensorDataset\n",
    "\n",
    "    processor = transformers.glue_processors[task]()\n",
    "    output_mode = transformers.glue_output_modes[task]\n",
    "    # Load data features from cache or dataset file\n",
    "    if not os.path.exists(\"./dataset_cached\"):\n",
    "        os.makedirs(\"./dataset_cached\")\n",
    "    cached_features_file = os.path.join(\"./dataset_cached\", 'cached_{}_{}_{}_{}'.format(\n",
    "        'dev' if evaluate else 'train',\n",
    "        list(filter(None, model_name_or_path.split('/'))).pop(),\n",
    "        str(max_seq_length),\n",
    "        str(task)))\n",
    "    if os.path.exists(cached_features_file):\n",
    "        logger.info(\"Load features from cached file {}.\".format(cached_features_file))\n",
    "        features = torch.load(cached_features_file)\n",
    "    else:\n",
    "        logger.info(\"Create features from dataset file at {}.\".format(data_dir))\n",
    "        label_list = processor.get_labels()\n",
    "        examples = processor.get_dev_examples(data_dir) if evaluate else \\\n",
    "            processor.get_train_examples(data_dir)\n",
    "        features = convert_examples_to_features(examples,\n",
    "                                                tokenizer,\n",
    "                                                task=task,\n",
    "                                                label_list=label_list,\n",
    "                                                max_length=max_seq_length,\n",
    "                                                output_mode=output_mode,\n",
    "        )\n",
    "        logger.info(\"Save features into cached file {}.\".format(cached_features_file))\n",
    "        torch.save(features, cached_features_file)\n",
    "    # Convert to Tensors and build dataset\n",
    "    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)\n",
    "    all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)\n",
    "    all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)\n",
    "    # all_seq_lengths = torch.tensor([f.seq_length for f in features], dtype=torch.long)\n",
    "    if output_mode == \"classification\":\n",
    "        all_labels = torch.tensor([f.label for f in features], dtype=torch.long)\n",
    "    elif output_mode == \"regression\":\n",
    "        all_labels = torch.tensor([f.label for f in features], dtype=torch.float)\n",
    "    dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)\n",
    "    return dataset\n",
    "\n",
    "def convert_examples_to_features(examples, tokenizer, max_length=128, task=None, label_list=None, \n",
    "                                 output_mode=\"classification\", pad_token=0, pad_token_segment_id=0, \n",
    "                                 mask_padding_with_zero=True,):\n",
    "    processor = transformers.glue_processors[task]()\n",
    "    if label_list is None:\n",
    "        label_list = processor.get_labels()\n",
    "        logger.info(\"Use label list {} for task {}.\".format(label_list, task))\n",
    "    label_map = {label: i for i, label in enumerate(label_list)}\n",
    "    features = []\n",
    "    for (ex_index, example) in enumerate(examples):\n",
    "        inputs = tokenizer.encode_plus(\n",
    "            example.text_a,\n",
    "            example.text_b,\n",
    "            add_special_tokens=True,\n",
    "            max_length=max_length,\n",
    "            return_token_type_ids=True,\n",
    "            truncation=True,\n",
    "        )\n",
    "        input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\n",
    "        # The mask has 1 for real tokens and 0 for padding tokens. Only real\n",
    "        # tokens are attended to.\n",
    "        attention_mask = [1 if mask_padding_with_zero else 0] * len(input_ids)\n",
    "\n",
    "        # Zero-pad up to the sequence length.\n",
    "        seq_length = len(input_ids)\n",
    "        padding_length = max_length - len(input_ids)\n",
    "\n",
    "        input_ids = input_ids + ([pad_token] * padding_length)\n",
    "        attention_mask = attention_mask + ([0 if mask_padding_with_zero else 1] * padding_length)\n",
    "        token_type_ids = token_type_ids + ([pad_token_segment_id] * padding_length)\n",
    "\n",
    "        assert len(input_ids) == max_length, \\\n",
    "            \"Error with input_ids length {} vs {}\".format(len(input_ids), max_length)\n",
    "        assert len(attention_mask) == max_length, \\\n",
    "            \"Error with attention_mask length {} vs {}\".format(len(attention_mask), max_length)\n",
    "        assert len(token_type_ids) == max_length, \\\n",
    "            \"Error with token_type_ids length {} vs {}\".format(len(token_type_ids), max_length)\n",
    "        if output_mode == \"classification\":\n",
    "            label = label_map[example.label]\n",
    "        elif output_mode == \"regression\":\n",
    "            label = float(example.label)\n",
    "        else:\n",
    "            raise KeyError(output_mode)\n",
    "\n",
    "        feats = InputFeatures(\n",
    "            input_ids=input_ids,\n",
    "            attention_mask=attention_mask,\n",
    "            token_type_ids=token_type_ids,\n",
    "            label=label\n",
    "        )\n",
    "        features.append(feats)\n",
    "    return features\n"
   ]
  },
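  {
   "cell_type": "markdown",
   "id": "pad-logic-sketch-md",
   "metadata": {},
   "source": [
    "The zero-padding step in `convert_examples_to_features` can be checked in isolation: every sequence is right-padded to `max_length`, and the attention mask marks real tokens with 1 and padding with 0. A minimal sketch (the token ids below are hypothetical):\n",
    "\n",
    "```python\n",
    "def pad_to_max(input_ids, max_length=8, pad_token=0):\n",
    "    # Right-pad ids to max_length and build the matching attention mask.\n",
    "    attention_mask = [1] * len(input_ids)\n",
    "    padding_length = max_length - len(input_ids)\n",
    "    input_ids = input_ids + [pad_token] * padding_length\n",
    "    attention_mask = attention_mask + [0] * padding_length\n",
    "    return input_ids, attention_mask\n",
    "\n",
    "ids, mask = pad_to_max([101, 2023, 2003, 102])\n",
    "print(ids)   # [101, 2023, 2003, 102, 0, 0, 0, 0]\n",
    "print(mask)  # [1, 1, 1, 1, 0, 0, 0, 0]\n",
    "```\n"
   ]
  },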
  {
   "cell_type": "markdown",
   "id": "4536f530-485b-449d-aea4-8afc94f36004",
   "metadata": {},
   "source": [
     "INC provides a unified `DataLoader` API, which takes a dataset as input and loads data from it on demand."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "d7f4abb4-3cb2-469a-ae69-6f6f193ff9f7",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/yuwenzho/miniconda3/envs/example/lib/python3.8/site-packages/transformers/data/processors/glue.py:330: FutureWarning: This processor will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py\n",
      "  warnings.warn(DEPRECATION_WARNING.format(\"processor\"), FutureWarning)\n"
     ]
    }
   ],
   "source": [
    "from neural_compressor.data import DataLoader\n",
    "\n",
    "dataset = GLUEDataset(fp32_model_path,\n",
    "                      data_dir=data_path,\n",
    "                      model_name_or_path=model_name_or_path,\n",
    "                      model_type=\"distilbert\",\n",
    "                      task=task)\n",
    "dataloader = DataLoader(framework=\"onnxruntime\", dataset=dataset, batch_size=batch_size)\n"
   ]
  },
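  {
   "cell_type": "markdown",
   "id": "dataloader-batching-sketch-md",
   "metadata": {},
   "source": [
    "Conceptually, the `DataLoader` groups the `(inputs, label)` pairs returned by the dataset's `__getitem__` into batches, stacking each input along axis 0. A minimal pure-NumPy sketch of that batching behavior (an illustration only, not INC's actual implementation):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def batch(dataset, batch_size):\n",
    "    # Group (inputs, label) samples into batches, stacking along axis 0.\n",
    "    for start in range(0, len(dataset), batch_size):\n",
    "        chunk = dataset[start:start + batch_size]\n",
    "        inputs = [np.stack(col) for col in zip(*(s[0] for s in chunk))]\n",
    "        labels = np.stack([s[1] for s in chunk])\n",
    "        yield inputs, labels\n",
    "\n",
    "# Toy dataset: 5 samples, each with two model inputs of length 4 and a label.\n",
    "toy = [((np.arange(4) + i, np.ones(4, dtype=int)), i % 2) for i in range(5)]\n",
    "\n",
    "batches = list(batch(toy, batch_size=2))\n",
    "print(len(batches), batches[0][0][0].shape)  # 3 (2, 4)\n",
    "```\n"
   ]
  },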
  {
   "cell_type": "markdown",
   "id": "76788105-f3d6-43b1-bf46-715d6add725c",
   "metadata": {},
   "source": [
    "### 2.2 Define metric and evaluate function\n",
    "\n",
     "In this part, we define a GLUE metric and use it to build an evaluation function for INC.\n",
    "\n",
     "Refer to [metric.md](https://github.com/intel/neural-compressor/blob/master/docs/source/metric.md#build-custom-metric-with-python-api) to learn how to build your own metric."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "5fa64ce9-82a3-44fa-8136-8b09839c8439",
   "metadata": {},
   "outputs": [],
   "source": [
    "class GLUEMetric:\n",
    "    \"\"\"Computes GLUE score.\"\"\"\n",
    "    def __init__(self, task='mrpc'):\n",
    "        assert task in ['mrpc', 'qqp', 'qnli', 'rte', 'sts-b', 'cola', 'mnli', 'wnli', 'sst-2'], 'Unsupported task type'\n",
    "        self.pred_list = None\n",
    "        self.label_list = None\n",
    "        self.task = task\n",
    "        self.return_key = {\n",
    "            \"cola\": \"mcc\",\n",
    "            \"mrpc\": \"f1\",\n",
    "            \"sts-b\": \"corr\",\n",
    "            \"qqp\": \"acc\",\n",
    "            \"mnli\": \"mnli/acc\",\n",
    "            \"qnli\": \"acc\",\n",
    "            \"rte\": \"acc\",\n",
    "            \"wnli\": \"acc\",\n",
    "            \"sst-2\": \"acc\"\n",
    "        }\n",
    "\n",
    "    def update(self, preds, labels):\n",
    "        \"\"\"add preds and labels to storage\"\"\"\n",
    "        if isinstance(preds, list) and len(preds) == 1:\n",
    "            preds = preds[0]\n",
    "        if isinstance(labels, list) and len(labels) == 1:\n",
    "            labels = labels[0]\n",
    "        if self.pred_list is None:\n",
    "            self.pred_list = preds\n",
    "            self.label_list = labels\n",
    "        else:\n",
    "            self.pred_list = np.append(self.pred_list, preds, axis=0)\n",
    "            self.label_list = np.append(self.label_list, labels, axis=0)\n",
    "\n",
    "    def reset(self):\n",
    "        \"\"\"clear preds and labels storage\"\"\"\n",
    "        self.pred_list = None\n",
    "        self.label_list = None\n",
    "\n",
    "    def result(self):\n",
    "        \"\"\"calculate metric\"\"\"\n",
    "        assert self.pred_list is not None, \"Predict list in GLUE metric is None.\"\n",
    "        assert self.label_list is not None, \"Label list in GLUE metric is None.\"\n",
    "        \n",
    "        output_mode = transformers.glue_output_modes[self.task]\n",
    "\n",
    "        if output_mode == \"classification\":\n",
    "            processed_preds = np.argmax(self.pred_list, axis=1)\n",
    "        elif output_mode == \"regression\":\n",
    "            processed_preds = np.squeeze(self.pred_list)\n",
    "        result = transformers.glue_compute_metrics(self.task, processed_preds, self.label_list)\n",
    "        return result[self.return_key[self.task]]\n"
   ]
  },
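  {
   "cell_type": "markdown",
   "id": "glue-metric-sanity-check-md",
   "metadata": {},
   "source": [
    "As a sanity check, the core of what `GLUEMetric.result` computes for a classification task such as SST-2 is just accuracy over argmax predictions. A minimal sketch with toy logits (plain NumPy, independent of the `transformers` helpers):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy logits for 4 examples and 2 classes (SST-2 style).\n",
    "logits = np.array([[0.1, 0.9], [2.0, -1.0], [0.3, 0.4], [1.5, 0.2]])\n",
    "labels = np.array([1, 0, 0, 0])\n",
    "\n",
    "preds = np.argmax(logits, axis=1)        # [1, 0, 1, 0]\n",
    "accuracy = float((preds == labels).mean())\n",
    "print(accuracy)  # 0.75\n",
    "```\n"
   ]
  },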
  {
   "cell_type": "markdown",
   "id": "36a514d5-32a6-40bd-999c-d0ecacdd81e3",
   "metadata": {},
   "source": [
     "The evaluation function for INC takes a model as its parameter and returns a scalar accuracy value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "080c5432-da45-4456-a713-92884463dd2a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import onnxruntime as ort\n",
    "from onnx import ModelProto\n",
    "\n",
    "metric = GLUEMetric(task)\n",
    "\n",
    "def eval_func(model: ModelProto):\n",
    "    metric.reset()\n",
    "    session = ort.InferenceSession(model.SerializeToString(), \n",
    "                                   providers=ort.get_available_providers())\n",
    "    ort_inputs = {}\n",
    "    len_inputs = len(session.get_inputs())\n",
    "    inputs_names = [session.get_inputs()[i].name for i in range(len_inputs)]\n",
    "    for idx, (inputs, labels) in enumerate(dataloader):\n",
    "        if not isinstance(labels, list):\n",
    "            labels = [labels]\n",
    "        inputs = inputs[:len_inputs]\n",
    "        for i in range(len_inputs):\n",
    "            ort_inputs.update({inputs_names[i]: inputs[i]})\n",
    "        predictions = session.run(None, ort_inputs)\n",
    "        metric.update(predictions[0], labels)\n",
    "    return metric.result()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d724f61-661a-427a-8327-1f48446b6a4a",
   "metadata": {},
   "source": [
    "### 2.3 Optimize the model\n",
    "\n",
     "It is recommended to run the [ONNX Runtime Transformer Model Optimization Tool](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers) on the FP32 ONNX model first. It helps verify whether the model can be fully optimized and provides baseline performance numbers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "7d5f60be-6fee-4e39-91f4-90f950663d20",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "10/10/2023 17:07:30 - WARNING - onnx_model -   Failed to remove node input: \"/distilbert/transformer/layer.0/attention/Transpose_output_0\"\n",
      "input: \"/distilbert/transformer/layer.0/attention/Constant_11_output_0\"\n",
      "output: \"/distilbert/transformer/layer.0/attention/Div_output_0\"\n",
      "name: \"/distilbert/transformer/layer.0/attention/Div\"\n",
      "op_type: \"Div\"\n",
      "\n",
      "10/10/2023 17:07:30 - WARNING - onnx_model -   Failed to remove node input: \"/distilbert/transformer/layer.1/attention/Transpose_output_0\"\n",
      "input: \"/distilbert/transformer/layer.1/attention/Constant_11_output_0\"\n",
      "output: \"/distilbert/transformer/layer.1/attention/Div_output_0\"\n",
      "name: \"/distilbert/transformer/layer.1/attention/Div\"\n",
      "op_type: \"Div\"\n",
      "\n",
      "10/10/2023 17:07:30 - WARNING - onnx_model -   Failed to remove node input: \"/distilbert/transformer/layer.2/attention/Transpose_output_0\"\n",
      "input: \"/distilbert/transformer/layer.2/attention/Constant_11_output_0\"\n",
      "output: \"/distilbert/transformer/layer.2/attention/Div_output_0\"\n",
      "name: \"/distilbert/transformer/layer.2/attention/Div\"\n",
      "op_type: \"Div\"\n",
      "\n",
      "10/10/2023 17:07:30 - WARNING - onnx_model -   Failed to remove node input: \"/distilbert/transformer/layer.3/attention/Transpose_output_0\"\n",
      "input: \"/distilbert/transformer/layer.3/attention/Constant_11_output_0\"\n",
      "output: \"/distilbert/transformer/layer.3/attention/Div_output_0\"\n",
      "name: \"/distilbert/transformer/layer.3/attention/Div\"\n",
      "op_type: \"Div\"\n",
      "\n",
      "10/10/2023 17:07:30 - WARNING - onnx_model -   Failed to remove node input: \"/distilbert/transformer/layer.4/attention/Transpose_output_0\"\n",
      "input: \"/distilbert/transformer/layer.4/attention/Constant_11_output_0\"\n",
      "output: \"/distilbert/transformer/layer.4/attention/Div_output_0\"\n",
      "name: \"/distilbert/transformer/layer.4/attention/Div\"\n",
      "op_type: \"Div\"\n",
      "\n",
      "10/10/2023 17:07:30 - WARNING - onnx_model -   Failed to remove node input: \"/distilbert/transformer/layer.5/attention/Transpose_output_0\"\n",
      "input: \"/distilbert/transformer/layer.5/attention/Constant_11_output_0\"\n",
      "output: \"/distilbert/transformer/layer.5/attention/Div_output_0\"\n",
      "name: \"/distilbert/transformer/layer.5/attention/Div\"\n",
      "op_type: \"Div\"\n",
      "\n"
     ]
    }
   ],
   "source": [
    "from onnxruntime.transformers import optimizer\n",
    "from onnxruntime.transformers.fusion_options import FusionOptions\n",
    "\n",
    "model_type = 'bert'\n",
    "num_heads = 12\n",
    "hidden_size = 768\n",
    "\n",
    "opt_options = FusionOptions(model_type)\n",
    "opt_options.enable_embed_layer_norm = False\n",
    "\n",
    "model_optimizer = optimizer.optimize_model(\n",
    "    fp32_model_path,\n",
    "    model_type,\n",
    "    num_heads=num_heads,\n",
    "    hidden_size=hidden_size,\n",
    "    optimization_options=opt_options)\n",
    "model = model_optimizer.model\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "567f3d6d-c61d-43b0-99b6-d890d31df24f",
   "metadata": {},
   "source": [
    "### 2.4 Run quantization\n",
    "\n",
     "Now we are ready to quantize the model.\n",
     "\n",
     "First, we set up the post-training quantization configuration with the `PostTrainingQuantConfig` class. Then we call `quantization.fit()`, which runs quantization with accuracy-aware tuning and returns the best quantized model found."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "1c637f15-17ca-4bfc-bc04-e50c56b54f6d",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-10-10 17:07:33 [INFO] Start auto tuning.\n",
      "2023-10-10 17:07:33 [INFO] Execute the tuning process due to detect the evaluation function.\n",
      "2023-10-10 17:07:33 [INFO] Adaptor has 5 recipes.\n",
      "2023-10-10 17:07:33 [INFO] 0 recipes specified by user.\n",
      "2023-10-10 17:07:33 [INFO] 3 recipes require future tuning.\n",
      "2023-10-10 17:07:33 [INFO] *** Initialize auto tuning\n",
      "2023-10-10 17:07:33 [INFO] {\n",
      "2023-10-10 17:07:33 [INFO]     'PostTrainingQuantConfig': {\n",
      "2023-10-10 17:07:33 [INFO]         'AccuracyCriterion': {\n",
      "2023-10-10 17:07:33 [INFO]             'criterion': 'relative',\n",
      "2023-10-10 17:07:33 [INFO]             'higher_is_better': True,\n",
      "2023-10-10 17:07:33 [INFO]             'tolerable_loss': 0.01,\n",
      "2023-10-10 17:07:33 [INFO]             'absolute': None,\n",
      "2023-10-10 17:07:33 [INFO]             'keys': <bound method AccuracyCriterion.keys of <neural_compressor.config.AccuracyCriterion object at 0x7fd9e7e53ee0>>,\n",
      "2023-10-10 17:07:33 [INFO]             'relative': 0.01\n",
      "2023-10-10 17:07:33 [INFO]         },\n",
      "2023-10-10 17:07:33 [INFO]         'approach': 'post_training_static_quant',\n",
      "2023-10-10 17:07:33 [INFO]         'backend': 'default',\n",
      "2023-10-10 17:07:33 [INFO]         'calibration_sampling_size': [\n",
      "2023-10-10 17:07:33 [INFO]             100\n",
      "2023-10-10 17:07:33 [INFO]         ],\n",
      "2023-10-10 17:07:33 [INFO]         'device': 'cpu',\n",
      "2023-10-10 17:07:33 [INFO]         'diagnosis': False,\n",
      "2023-10-10 17:07:33 [INFO]         'domain': 'auto',\n",
      "2023-10-10 17:07:33 [INFO]         'example_inputs': None,\n",
      "2023-10-10 17:07:33 [INFO]         'excluded_precisions': [\n",
      "2023-10-10 17:07:33 [INFO]         ],\n",
      "2023-10-10 17:07:33 [INFO]         'framework': 'onnxruntime',\n",
      "2023-10-10 17:07:33 [INFO]         'inputs': [\n",
      "2023-10-10 17:07:33 [INFO]         ],\n",
      "2023-10-10 17:07:33 [INFO]         'model_name': '',\n",
      "2023-10-10 17:07:33 [INFO]         'ni_workload_name': 'quantization',\n",
      "2023-10-10 17:07:33 [INFO]         'op_name_dict': None,\n",
      "2023-10-10 17:07:33 [INFO]         'op_type_dict': None,\n",
      "2023-10-10 17:07:33 [INFO]         'outputs': [\n",
      "2023-10-10 17:07:33 [INFO]         ],\n",
      "2023-10-10 17:07:33 [INFO]         'quant_format': 'default',\n",
      "2023-10-10 17:07:33 [INFO]         'quant_level': 'auto',\n",
      "2023-10-10 17:07:33 [INFO]         'recipes': {\n",
      "2023-10-10 17:07:33 [INFO]             'smooth_quant': False,\n",
      "2023-10-10 17:07:33 [INFO]             'smooth_quant_args': {\n",
      "2023-10-10 17:07:33 [INFO]             },\n",
      "2023-10-10 17:07:33 [INFO]             'layer_wise_quant': False,\n",
      "2023-10-10 17:07:33 [INFO]             'layer_wise_quant_args': {\n",
      "2023-10-10 17:07:33 [INFO]             },\n",
      "2023-10-10 17:07:33 [INFO]             'fast_bias_correction': False,\n",
      "2023-10-10 17:07:33 [INFO]             'weight_correction': False,\n",
      "2023-10-10 17:07:33 [INFO]             'gemm_to_matmul': True,\n",
      "2023-10-10 17:07:33 [INFO]             'graph_optimization_level': None,\n",
      "2023-10-10 17:07:33 [INFO]             'first_conv_or_matmul_quantization': True,\n",
      "2023-10-10 17:07:33 [INFO]             'last_conv_or_matmul_quantization': True,\n",
      "2023-10-10 17:07:33 [INFO]             'pre_post_process_quantization': True,\n",
      "2023-10-10 17:07:33 [INFO]             'add_qdq_pair_to_weight': False,\n",
      "2023-10-10 17:07:33 [INFO]             'optypes_to_exclude_output_quant': [\n",
      "2023-10-10 17:07:33 [INFO]             ],\n",
      "2023-10-10 17:07:33 [INFO]             'dedicated_qdq_pair': False,\n",
      "2023-10-10 17:07:33 [INFO]             'rtn_args': {\n",
      "2023-10-10 17:07:33 [INFO]             },\n",
      "2023-10-10 17:07:33 [INFO]             'awq_args': {\n",
      "2023-10-10 17:07:33 [INFO]             },\n",
      "2023-10-10 17:07:33 [INFO]             'gptq_args': {\n",
      "2023-10-10 17:07:33 [INFO]             },\n",
      "2023-10-10 17:07:33 [INFO]             'teq_args': {\n",
      "2023-10-10 17:07:33 [INFO]             }\n",
      "2023-10-10 17:07:33 [INFO]         },\n",
      "2023-10-10 17:07:33 [INFO]         'reduce_range': None,\n",
      "2023-10-10 17:07:33 [INFO]         'TuningCriterion': {\n",
      "2023-10-10 17:07:33 [INFO]             'max_trials': 100,\n",
      "2023-10-10 17:07:33 [INFO]             'objective': [\n",
      "2023-10-10 17:07:33 [INFO]                 'performance'\n",
      "2023-10-10 17:07:33 [INFO]             ],\n",
      "2023-10-10 17:07:33 [INFO]             'strategy': 'basic',\n",
      "2023-10-10 17:07:33 [INFO]             'strategy_kwargs': None,\n",
      "2023-10-10 17:07:33 [INFO]             'timeout': 0\n",
      "2023-10-10 17:07:33 [INFO]         },\n",
      "2023-10-10 17:07:33 [INFO]         'use_bf16': True\n",
      "2023-10-10 17:07:33 [INFO]     }\n",
      "2023-10-10 17:07:33 [INFO] }\n",
      "2023-10-10 17:07:33 [WARNING] [Strategy] Please install `mpi4py` correctly if using distributed tuning; otherwise, ignore this warning.\n",
      "2023-10-10 17:07:33 [WARNING] The model is automatically detected as an NLP model. You can use 'domain' argument in 'PostTrainingQuantConfig' to overwrite it\n",
      "2023-10-10 17:07:33 [WARNING] Graph optimization level is automatically set to ENABLE_EXTENDED. You can use 'recipe' argument in 'PostTrainingQuantConfig'to overwrite it\n",
      "2023-10-10 17:07:47 [INFO] Get FP32 model baseline.\n",
      "/home/yuwenzho/miniconda3/envs/example/lib/python3.8/site-packages/transformers/data/metrics/__init__.py:61: FutureWarning: This metric will be removed from the library soon, metrics should be handled with the 🤗 Evaluate library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py\n",
      "  warnings.warn(DEPRECATION_WARNING, FutureWarning)\n",
      "/home/yuwenzho/miniconda3/envs/example/lib/python3.8/site-packages/transformers/data/metrics/__init__.py:31: FutureWarning: This metric will be removed from the library soon, metrics should be handled with the 🤗 Evaluate library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py\n",
      "  warnings.warn(DEPRECATION_WARNING, FutureWarning)\n",
      "2023-10-10 17:07:55 [INFO] Save tuning history to /home/yuwenzho/refine_notebook/neural-compressor/examples/notebook/onnxruntime/nc_workspace/2023-10-10_17-07-07/./history.snapshot.\n",
      "2023-10-10 17:07:55 [INFO] FP32 baseline is: [Accuracy: 0.9106, Duration (seconds): 8.2803]\n",
      "2023-10-10 17:07:55 [INFO] Quantize the model with default config.\n",
      "2023-10-10 17:07:55 [WARNING] Reset `calibration.dataloader.batch_size` field to 5 to make sure the sampling_size is divisible exactly by batch size\n",
      "2023-10-10 17:08:22 [INFO] |*******Mixed Precision Statistics*******|\n",
      "2023-10-10 17:08:22 [INFO] +------------------+-------+------+------+\n",
      "2023-10-10 17:08:22 [INFO] |     Op Type      | Total | INT8 | FP32 |\n",
      "2023-10-10 17:08:22 [INFO] +------------------+-------+------+------+\n",
      "2023-10-10 17:08:22 [INFO] |      Gather      |   4   |  2   |  2   |\n",
      "2023-10-10 17:08:22 [INFO] |      MatMul      |   19  |  19  |  0   |\n",
      "2023-10-10 17:08:22 [INFO] |    Attention     |   6   |  6   |  0   |\n",
      "2023-10-10 17:08:22 [INFO] |       Add        |   2   |  2   |  0   |\n",
      "2023-10-10 17:08:22 [INFO] |    Unsqueeze     |   1   |  0   |  1   |\n",
      "2023-10-10 17:08:22 [INFO] |      Slice       |   1   |  0   |  1   |\n",
      "2023-10-10 17:08:22 [INFO] |  QuantizeLinear  |   25  |  25  |  0   |\n",
      "2023-10-10 17:08:22 [INFO] | DequantizeLinear |   27  |  27  |  0   |\n",
      "2023-10-10 17:08:22 [INFO] +------------------+-------+------+------+\n",
      "2023-10-10 17:08:22 [INFO] Pass quantize model elapsed time: 27549.4 ms\n",
      "2023-10-10 17:08:30 [INFO] Tune 1 result is: [Accuracy (int8|fp32): 0.9071|0.9106, Duration (seconds) (int8|fp32): 7.1545|8.2803], Best tune result is: [Accuracy: 0.9071, Duration (seconds): 7.1545]\n",
      "2023-10-10 17:08:30 [INFO] |**********************Tune Result Statistics**********************|\n",
      "2023-10-10 17:08:30 [INFO] +--------------------+----------+---------------+------------------+\n",
      "2023-10-10 17:08:30 [INFO] |     Info Type      | Baseline | Tune 1 result | Best tune result |\n",
      "2023-10-10 17:08:30 [INFO] +--------------------+----------+---------------+------------------+\n",
      "2023-10-10 17:08:30 [INFO] |      Accuracy      | 0.9106   |    0.9071     |     0.9071       |\n",
      "2023-10-10 17:08:30 [INFO] | Duration (seconds) | 8.2803   |    7.1545     |     7.1545       |\n",
      "2023-10-10 17:08:30 [INFO] +--------------------+----------+---------------+------------------+\n",
      "2023-10-10 17:08:30 [INFO] [Strategy] Found a model that meets the accuracy requirements.\n",
      "2023-10-10 17:08:30 [INFO] Save tuning history to /home/yuwenzho/refine_notebook/neural-compressor/examples/notebook/onnxruntime/nc_workspace/2023-10-10_17-07-07/./history.snapshot.\n",
      "2023-10-10 17:08:30 [INFO] [Strategy] Found the model meets accuracy requirements, ending the tuning process.\n",
      "2023-10-10 17:08:30 [INFO] Specified timeout or max trials is reached! Found a quantized model which meet accuracy goal. Exit.\n",
      "2023-10-10 17:08:30 [INFO] Save deploy yaml to /home/yuwenzho/refine_notebook/neural-compressor/examples/notebook/onnxruntime/nc_workspace/2023-10-10_17-07-07/deploy.yaml\n"
     ]
    }
   ],
   "source": [
    "from neural_compressor import quantization, PostTrainingQuantConfig\n",
    "\n",
    "config = PostTrainingQuantConfig(approach='static')\n",
    "q_model = quantization.fit(model, \n",
    "                           config,\n",
    "                           eval_func=eval_func,\n",
    "                           calib_dataloader=dataloader)\n",
    "q_model.save(int8_model_path)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c10d5ba1-fa67-49dc-ad0b-7dab272983f6",
   "metadata": {},
   "source": [
    "## 3. Benchmark with Intel® Neural Compressor\n",
    "\n",
    "INC provides a benchmark feature to measure the model performance with the objective settings.\n",
    "Now we can see that we have two models under the `onnx-model` directory: the original fp32 model `model.onnx` and the quantized int8 model `int8-model.onnx`, and then we are going to do performance comparisons between them.\n",
    "\n",
    "To avoid the conflicts of jupyter notebook kernel to our benchmark process. We create a `benchmark.py` and run it directly to do the benchmarks."
   ]
  },
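  {
   "cell_type": "markdown",
   "id": "5f2c1d3e-7a6b-4c9d-8e2f-benchsketch1",
   "metadata": {},
   "source": [
    "For reference, the core timing logic inside a script like `benchmark.py` can be sketched as below. This is only a minimal illustration of how average latency and throughput are typically derived, not the actual script; `run_inference` is a hypothetical stand-in for a call to an ONNXRuntime inference session.\n",
    "\n",
    "```python\n",
    "import time\n",
    "\n",
    "def benchmark(run_inference, num_samples=100, warmup=10):\n",
    "    # Warm up first so one-time initialization costs are excluded\n",
    "    for _ in range(warmup):\n",
    "        run_inference()\n",
    "    start = time.perf_counter()\n",
    "    for _ in range(num_samples):\n",
    "        run_inference()\n",
    "    elapsed = time.perf_counter() - start\n",
    "    latency = elapsed / num_samples     # seconds per sample\n",
    "    throughput = num_samples / elapsed  # samples per second\n",
    "    return latency, throughput\n",
    "```"
   ]
  },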
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "34e4a9c7-f685-4d19-bb62-f90ff24ab14a",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
      "To disable this warning, you can either:\n",
      "\t- Avoid using `tokenizers` before the fork if possible\n",
      "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n",
      "/home/yuwenzho/miniconda3/envs/example/lib/python3.8/site-packages/transformers/data/processors/glue.py:330: FutureWarning: This processor will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py\n",
      "  warnings.warn(DEPRECATION_WARNING.format(\"processor\"), FutureWarning)\n",
      "You are using a model of type distilbert to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\n",
      "2023-10-10 17:08:36 [INFO] Start to run Benchmark.\n",
      "2023-10-10 17:08:36 [INFO] num of instance: 1\n",
      "2023-10-10 17:08:36 [INFO] cores per instance: 4\n",
      "2023-10-10 17:08:37 [INFO] Running command is\n",
      "OMP_NUM_THREADS=4 numactl --localalloc --physcpubind=0,1,2,3 /home/yuwenzho/miniconda3/envs/example/bin/python benchmark.py --input_model ./onnx-model/model.onnx 2>&1|tee 1_4_0.log & \\\n",
      "wait\n",
      "/home/yuwenzho/miniconda3/envs/example/lib/python3.8/site-packages/transformers/data/processors/glue.py:330: FutureWarning: This processor will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py\n",
      "  warnings.warn(DEPRECATION_WARNING.format(\"processor\"), FutureWarning)\n",
      "You are using a model of type distilbert to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\n",
      "2023-10-10 17:08:41 [INFO] Start to run Benchmark.\n",
      "2023-10-10 17:08:45 [INFO] \n",
      "benchmark result:\n",
      "2023-10-10 17:08:45 [INFO] Batch size = 1\n",
      "2023-10-10 17:08:45 [INFO] Latency: 24.599 ms\n",
      "2023-10-10 17:08:45 [INFO] Throughput: 40.652 images/sec\n",
      "2023-10-10 17:08:45 [INFO] ********************************************\n",
      "2023-10-10 17:08:45 [INFO] |****Multiple Instance Benchmark Summary*****|\n",
      "2023-10-10 17:08:45 [INFO] +---------------------------------+----------+\n",
      "2023-10-10 17:08:45 [INFO] |              Items              |  Result  |\n",
      "2023-10-10 17:08:45 [INFO] +---------------------------------+----------+\n",
      "2023-10-10 17:08:45 [INFO] | Latency average [second/sample] | 0.024599 |\n",
      "2023-10-10 17:08:45 [INFO] | Throughput sum [samples/second] |  40.652  |\n",
      "2023-10-10 17:08:45 [INFO] +---------------------------------+----------+\n"
     ]
    }
   ],
   "source": [
    "# FP32 benchmark\n",
    "!python benchmark.py --input_model ./onnx-model/model.onnx 2>&1|tee fp32_benchmark.log"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "a39b068e-0b31-45dc-84e8-18d3942028f5",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
      "To disable this warning, you can either:\n",
      "\t- Avoid using `tokenizers` before the fork if possible\n",
      "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n",
      "/home/yuwenzho/miniconda3/envs/example/lib/python3.8/site-packages/transformers/data/processors/glue.py:330: FutureWarning: This processor will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py\n",
      "  warnings.warn(DEPRECATION_WARNING.format(\"processor\"), FutureWarning)\n",
      "You are using a model of type distilbert to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\n",
      "2023-10-10 17:08:51 [INFO] Start to run Benchmark.\n",
      "2023-10-10 17:08:51 [INFO] num of instance: 1\n",
      "2023-10-10 17:08:51 [INFO] cores per instance: 4\n",
      "2023-10-10 17:08:51 [INFO] Running command is\n",
      "OMP_NUM_THREADS=4 numactl --localalloc --physcpubind=0,1,2,3 /home/yuwenzho/miniconda3/envs/example/bin/python benchmark.py --input_model ./onnx-model/int8-model.onnx 2>&1|tee 1_4_0.log & \\\n",
      "wait\n",
      "/home/yuwenzho/miniconda3/envs/example/lib/python3.8/site-packages/transformers/data/processors/glue.py:330: FutureWarning: This processor will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py\n",
      "  warnings.warn(DEPRECATION_WARNING.format(\"processor\"), FutureWarning)\n",
      "You are using a model of type distilbert to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\n",
      "2023-10-10 17:08:56 [INFO] Start to run Benchmark.\n",
      "2023-10-10 17:08:58 [INFO] \n",
      "benchmark result:\n",
      "2023-10-10 17:08:58 [INFO] Batch size = 1\n",
      "2023-10-10 17:08:58 [INFO] Latency: 10.204 ms\n",
      "2023-10-10 17:08:58 [INFO] Throughput: 98.000 images/sec\n",
      "2023-10-10 17:08:58 [INFO] ********************************************\n",
      "2023-10-10 17:08:58 [INFO] |****Multiple Instance Benchmark Summary*****|\n",
      "2023-10-10 17:08:58 [INFO] +---------------------------------+----------+\n",
      "2023-10-10 17:08:58 [INFO] |              Items              |  Result  |\n",
      "2023-10-10 17:08:58 [INFO] +---------------------------------+----------+\n",
      "2023-10-10 17:08:58 [INFO] | Latency average [second/sample] | 0.010204 |\n",
      "2023-10-10 17:08:58 [INFO] | Throughput sum [samples/second] |  98.000  |\n",
      "2023-10-10 17:08:58 [INFO] +---------------------------------+----------+\n"
     ]
    }
   ],
   "source": [
    "# INT8 benchmark\n",
    "!python benchmark.py --input_model ./onnx-model/int8-model.onnx 2>&1|tee int8_benchmark.log"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1d3aff90-798f-44f1-b430-6e3952cc6d65",
   "metadata": {},
   "source": [
    "As shown in the logs, the int8/fp32 performance gain is about 98.000/40.652 = 2.41x"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
