{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<h1>Quantization and benchmarking of deep learning models using ONNX Runtime and STM32Cube.AI Developer Cloud</h1>\n",
    "    \n",
    "    \n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>\n",
    "Quantization converts a model's original floating-point parameters and intermediate activations into lower-precision integer representations. This reduction in precision can significantly decrease the memory footprint and computational cost of the model, making it more efficient to deploy on an STM32 board using STM32Cube.AI, or on any other resource-constrained device.\n",
    "\n",
    "ONNX Runtime quantization is a feature of ONNX Runtime that allows efficient execution of quantized models. It provides tools and techniques to quantize models in the ONNX format, including methods for quantizing both weights and activations.\n",
    "\n",
    "\n",
    "**This notebook demonstrates static post-training quantization of deep learning models using ONNX Runtime. It covers quantizing the model with a calibration dataset or with fake data, evaluating both the full-precision and the quantized model, and finally using the STM32Cube.AI Developer Cloud to benchmark the models and generate the optimized C code to deploy on your STM32 board.** \n",
    "</p>"
   ]
  },
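  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the memory saving concrete, the cell below is a minimal, self-contained sketch (not part of the original workflow): it quantizes a hypothetical float32 weight tensor to int8 with a single symmetric per-tensor scale, dequantizes it back, and prints the 4x storage reduction together with the small reconstruction error that quantization trades for it.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical float32 weight tensor, for illustration only\n",
    "w = np.random.randn(4, 8).astype(np.float32)\n",
    "\n",
    "# Symmetric per-tensor quantization: one scale factor, zero_point = 0\n",
    "scale = np.abs(w).max() / 127.0\n",
    "q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)\n",
    "\n",
    "# Dequantize to measure the reconstruction error\n",
    "w_hat = q.astype(np.float32) * scale\n",
    "\n",
    "print(f'float32 storage: {w.nbytes} bytes, int8 storage: {q.nbytes} bytes')\n",
    "print(f'max reconstruction error: {np.abs(w - w_hat).max():.5f}')"
   ]
  },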
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## License of the Jupyter Notebook\n",
    "\n",
    "This software component is licensed by ST under BSD-3-Clause license,\n",
    "the \"License\"; \n",
    "\n",
    "You may not use this file except in compliance with the\n",
    "License. \n",
    "\n",
    "You may obtain a copy of the License at: https://opensource.org/licenses/BSD-3-Clause\n",
    "\n",
    "Copyright (c) 2023 STMicroelectronics. All rights reserved"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div style=\"border-bottom: 3px solid #273B5F\">\n",
    "<h2>Table of contents</h2>\n",
    "<ul style=\"list-style-type: none\">\n",
    "  <li><a href=\"#settings\">1. Settings</a>\n",
    "  <ul style=\"list-style-type: none\">\n",
    "    <li><a href=\"#install\">1.1 Install and import necessary packages</a></li>\n",
    "    <li><a href=\"#select\">1.2 Select input model filename and dataset folder</a></li>\n",
    "  </ul>\n",
    "</li>\n",
    "<li><a href=\"#quantization\">2. Quantization</a></li>\n",
    "      <ul style=\"list-style-type: none\">\n",
    "    <li><a href=\"#opset\">2.1 Opset conversion</a></li>\n",
    "    <li><a href=\"#dataset\">2.2 Creating calibration dataset</a></li>\n",
    "    <li><a href=\"#quantize\">2.3 Quantize the model using QDQ quantization to int8 weights and activations</a></li>\n",
    "  </ul>\n",
    "<li><a href=\"#validation\">3. Model validation</a></li>\n",
    "<li><a href=\"#benchmark_both\">4. Benchmarking the Models on the STM32Cube.AI Developer Cloud</a></li>\n",
    "      <ul style=\"list-style-type: none\">\n",
    "    <li><a href=\"#proxy\">4.1 Proxy setting and connection to the STM32Cube.AI Developer Cloud</a></li>\n",
    "    <li><a href=\"#Benchmark_both\">4.2 Benchmark the models on a STM32 target</a></li>\n",
    "    <li><a href=\"#generate\">4.3 Generate the model optimized C code for STM32</a></li>\n",
    "         \n",
    "\n",
    "  </ul>\n",
    "</ul>\n",
    "</div>\n",
    "\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "<div id=\"settings\">\n",
    "    <h2>1. Settings</h2>\n",
    "</div>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "<div id=\"install\">\n",
    "    <h3>1.1 Install and import necessary packages </h3>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "\n",
    "!{sys.executable} -m pip install numpy==1.23.5\n",
    "!{sys.executable} -m pip install onnx==1.15.0\n",
    "!{sys.executable} -m pip install onnxruntime==1.18.1\n",
    "!{sys.executable} -m pip install tensorflow==2.15.0\n",
    "!{sys.executable} -m pip install scikit-learn\n",
    "!{sys.executable} -m pip install Pillow==9.4.0\n",
    "!{sys.executable} -m pip install matplotlib\n",
    "!{sys.executable} -m pip install tqdm\n",
    "!{sys.executable} -m pip install marshmallow\n",
    "\n",
    "# for the cloud service\n",
    "!{sys.executable} -m pip install gitdir"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import glob\n",
    "import os\n",
    "import random\n",
    "import shutil\n",
    "\n",
    "import numpy as np \n",
    "import tensorflow as tf\n",
    "from datetime import datetime\n",
    "from tqdm import tqdm\n",
    "from typing import Tuple, Optional, List, Dict\n",
    "\n",
    "import onnx\n",
    "import onnxruntime\n",
    "from onnx import version_converter\n",
    "from onnxruntime import quantization\n",
    "from onnxruntime.quantization import (CalibrationDataReader, CalibrationMethod,\n",
    "                                      QuantFormat, QuantType, quantize_static)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "<div id=\"select\">\n",
    "    <h3>1.2 Select input model filename and dataset folder</h3>\n",
    "</div>\n",
    "\n",
    "The code section below sets the paths of the model and the dataset used throughout this notebook. The model is expected to be in Open Neural Network Exchange (ONNX) format. In this experiment, we use the mobilenet_v2_0.35_128 model as an example, with a modified version of the COCO2014 dataset. For more details, please visit this [link](https://pjreddie.com/projects/coco-mirror/). \n",
    "\n",
    "The quantization set is a directory containing one sub-directory per class. For instance:\n",
    "\n",
    "```bash\n",
    " quantization_set/\n",
    " ..class_a:person/\n",
    " ....a_image_1.jpg\n",
    " ....a_image_2.jpg\n",
    " ..class_b:not_person/\n",
    " ....b_image_1.jpg\n",
    " ....b_image_2.jpg\n",
    "\n",
    "```\n",
    "\n",
    "To ensure proper quantization, ``quantization_dataset_path`` must point to the quantization set or the training set to create the calibration dataset later.\n",
    "\n",
    "For fake quantization, set ``quantization_dataset_path`` to ``None``."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "input_model = \"models/mobilenet_v2_128_0.5.onnx\"\n",
    "quantization_dataset_path = os.path.join(\"path/to/quantization_set\")\n",
    "#quantization_dataset_path = None"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "<div id=\"quantization\">\n",
    "    <h2>2. Quantization</h2>\n",
    "</div>\n",
    "\n",
    "<div id=\"opset\">\n",
    "    <h3>2.1. Opset conversion  </h3>\n",
    "</div>\n",
    "\n",
    "In this section, we upgrade the model's opset to version 15 to take advantage of advanced optimizations such as batch normalization folding, and to ensure compatibility with the latest versions of ONNX and ONNX Runtime. To do this, we run the code below.\n",
    "\n",
    "To ensure compatibility between the ONNX runtime version and the opset number, please refer to [the official documentation of ONNX Runtime](https://onnxruntime.ai/docs/reference/compatibility.html)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def change_opset(input_model: str, new_opset: int) -> str:\n",
    "    \"\"\"\n",
    "    Converts the opset version of an ONNX model to a new opset version.\n",
    "\n",
    "    Args:\n",
    "        input_model (str): The path to the input ONNX model.\n",
    "        new_opset (int): The new opset version to convert the model to.\n",
    "\n",
    "    Returns:\n",
    "        str: The path to the converted ONNX model.\n",
    "    \"\"\"\n",
    "    if not input_model.endswith('.onnx'):\n",
    "        raise Exception(\"Error! The model must be in onnx format\")    \n",
    "    model = onnx.load(input_model)\n",
    "    # Check the current opset version\n",
    "    current_opset = model.opset_import[0].version\n",
    "    if current_opset == new_opset:\n",
    "        print(f\"The model is already using opset {new_opset}\")\n",
    "        return input_model\n",
    "\n",
    "    # Modify the opset version in the model\n",
    "    converted_model = version_converter.convert_version(model, new_opset)\n",
    "    temp_model_path = input_model+ '.temp'\n",
    "    onnx.save(converted_model, temp_model_path)\n",
    "\n",
    "    # Load the modified model with ONNX Runtime to check that it is valid\n",
    "    session = onnxruntime.InferenceSession(temp_model_path)\n",
    "    try:\n",
    "        session.get_inputs()\n",
    "    except Exception as e:\n",
    "        print(f\"An error occurred while loading the modified model: {e}\")\n",
    "        return\n",
    "\n",
    "    # Replace the original model file with the modified model\n",
    "    os.replace(temp_model_path, input_model)\n",
    "    print(f\"The model has been converted to opset {new_opset} and saved at the same location.\")\n",
    "    return input_model\n",
    "\n",
    "change_opset(input_model, new_opset=15)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div id=\"dataset\">\n",
    "    <h3> 2.2 Creating the calibration dataset </h3>\n",
    "</div>\n",
    "\n",
    "During ONNX Runtime quantization, the model is run on the calibration data to collect statistics about the dynamic range and characteristics of each input and output tensor. These statistics are then used to determine the main quantization parameters: a scale factor and a zero-point (offset) that map the floating-point values to integers.\n",
    "\n",
    "The next three code sections below contain:\n",
    "\n",
    "* The `create_calibration_dataset` function to create the calibration set from the original directory by taking a specific number of samples from each class, and the `preprocess_image_batch` function to load the batch and process it.\n",
    "* The `preprocess_random_images` function to generate random images for fake quantization and preprocess them.\n",
    "* The `ImageNetDataReader` class that inherits from the ONNX Runtime calibration data readers and implements the `get_next` method to generate and provide input data dictionaries for the calibration process.\n",
    "\n",
    "**Note:** Using a different normalization method during quantization than during training can affect the scale of the data and lead to a loss of accuracy in the quantized model. For example, if you used TensorFlow's normalization method during training, where the data is scaled by dividing each pixel value by 255.0, you should also use this method during quantization. Similarly, if you used Torch's normalization method during training, where the data is scaled by subtracting the mean and dividing by the standard deviation, you should also use this method during quantization.\n",
    "\n",
    "Using the same normalization method for both training and quantization ensures that the quantized model retains the accuracy achieved during training. Therefore, it is important to pay attention to the normalization method used during both training and quantization to ensure the best possible accuracy for your model.\n",
    "\n",
    "To align the preprocessing of the quantization dataset in the section below with the preprocessing of the trained model, adjust the arguments `color_mode`, `interpolation`, and `norm` for normalization."
   ]
  },
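  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustrative sketch (not part of the pipeline), the cell below computes the MinMax scale factor and zero-point for a hypothetical batch of float activations, which is essentially what `CalibrationMethod.MinMax` derives per tensor, and checks the quantize/dequantize round-trip error.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical activation values observed during calibration (illustration only)\n",
    "acts = np.array([-0.8, -0.1, 0.0, 0.4, 1.2], dtype=np.float32)\n",
    "\n",
    "# MinMax calibration for asymmetric int8: map [min, max] onto [-128, 127]\n",
    "qmin, qmax = -128, 127\n",
    "scale = float(acts.max() - acts.min()) / (qmax - qmin)\n",
    "zero_point = int(round(qmin - acts.min() / scale))\n",
    "\n",
    "# Quantize / dequantize, mirroring what QuantizeLinear / DequantizeLinear do\n",
    "q = np.clip(np.round(acts / scale) + zero_point, qmin, qmax).astype(np.int8)\n",
    "acts_hat = (q.astype(np.float32) - zero_point) * scale\n",
    "\n",
    "print(f'scale = {scale:.6f}, zero_point = {zero_point}')\n",
    "print(f'max abs error: {np.abs(acts - acts_hat).max():.6f}')"
   ]
  },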
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_calibration_dataset(dataset_path: str, samples_per_class: Optional[int] = 100) -> str:\n",
    "    \"\"\"\n",
    "    Creates a calibration dataset for use in quantizing a machine learning model.\n",
    "\n",
    "    Args:\n",
    "        dataset_path (str): The path to the original dataset.\n",
    "        samples_per_class (int, optional): The number of images to include per class in the calibration dataset. Defaults to 100.\n",
    "\n",
    "    Returns:\n",
    "        str: The path to the calibration dataset.\n",
    "    \"\"\"\n",
    "    # The calibration dataset will be created under the same directory as the original dataset\n",
    "    calibration_dataset_path = os.path.join(os.path.dirname(dataset_path), 'calibration_' + os.path.basename(dataset_path))\n",
    "    # List directories\n",
    "    dir_list = next(os.walk(dataset_path))[1]\n",
    "\n",
    "    # Create the target directory if it doesn't exist\n",
    "    if not os.path.exists(calibration_dataset_path):\n",
    "        os.makedirs(calibration_dataset_path)\n",
    "\n",
    "    # For each directory, create a new directory in the target directory\n",
    "    for dir_i in tqdm(dir_list):\n",
    "        img_list = glob.glob(os.path.join(dataset_path, dir_i, '*.jpg')) + \\\n",
    "                   glob.glob(os.path.join(dataset_path, dir_i, '*.png')) + \\\n",
    "                   glob.glob(os.path.join(dataset_path, dir_i, '*.jpeg'))\n",
    "\n",
    "        # Shuffle the data\n",
    "        random.shuffle(img_list)\n",
    "\n",
    "        # Copy a subset of images to the target directory\n",
    "        for j in range(min(samples_per_class, len(img_list))):\n",
    "            shutil.copy2(img_list[j], calibration_dataset_path)\n",
    "    now = datetime.now()\n",
    "    current_time = now.strftime(\"%H:%M:%S\")\n",
    "    print(current_time + ' - ' + f'Done creating calibration dataset.')\n",
    "    return calibration_dataset_path\n",
    "\n",
    "\n",
    "def preprocess_image_batch(images_folder: str, height: int, width: int, size_limit: int = 0) -> np.ndarray:\n",
    "    \"\"\"\n",
    "    Loads a batch of images and preprocesses them\n",
    "    :param images_folder: path to folder storing images\n",
    "    :param height: image height in pixels\n",
    "    :param width: image width in pixels\n",
    "    :param size_limit: number of images to load. Default is 0 which means all images are picked.\n",
    "    :return: list of matrices characterizing multiple images\n",
    "    \"\"\"\n",
    "    # Standard ImageNet mean/std used by Torch-style preprocessing\n",
    "    TORCH_MEANS = [0.485, 0.456, 0.406]\n",
    "    TORCH_STD = [0.229, 0.224, 0.225]\n",
    "\n",
    "    interpolation = 'nearest'\n",
    "    color_mode = 'rgb'\n",
    "    norm = 'tf'\n",
    "\n",
    "    image_names = os.listdir(images_folder)\n",
    "    if size_limit > 0 and len(image_names) >= size_limit:\n",
    "        batch_filenames = [image_names[i] for i in range(size_limit)]\n",
    "    else:\n",
    "        batch_filenames = image_names\n",
    "    unconcatenated_batch_data = []\n",
    "\n",
    "    for image_name in batch_filenames:\n",
    "        image_filepath = os.path.join(images_folder, image_name)\n",
    "        # Note: tf.keras.utils.load_img expects target_size as (height, width)\n",
    "        img = tf.keras.utils.load_img(image_filepath, color_mode=color_mode, target_size=(height, width), interpolation=interpolation)\n",
    "        img_array = np.array([tf.keras.utils.img_to_array(img)])\n",
    "        if norm.lower() == 'tf':\n",
    "            img_array = -1 + img_array / 127.5\n",
    "        elif norm.lower() == 'torch':\n",
    "            img_array = img_array / 255.0\n",
    "            img_array = img_array - TORCH_MEANS\n",
    "            img_array = img_array / TORCH_STD\n",
    "        # transpose the data (nhwc to nchw) to conform to the expected input layout\n",
    "        img_array = img_array.transpose((0, 3, 1, 2))\n",
    "        unconcatenated_batch_data.append(img_array)\n",
    "    batch_data = np.stack(unconcatenated_batch_data, axis=0)\n",
    "    return batch_data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def preprocess_random_images(height: int, width: int, channel: int, size_limit: int = 400) -> np.ndarray:\n",
    "    \"\"\"\n",
    "    Generates a batch of random images and preprocesses them\n",
    "    :param height: Image height in pixels.\n",
    "    :param width: Image width in pixels.\n",
    "    :param channel: Number of channels in the image.\n",
    "    :param size_limit: Number of images to generate. Default is 400.\n",
    "    :return: List of matrices characterizing multiple images.\n",
    "    \"\"\"\n",
    "    unconcatenated_batch_data = []\n",
    "    for i in range(size_limit):\n",
    "        random_vals = np.random.uniform(0, 1, channel * height * width).astype('float32')\n",
    "        random_image = random_vals.reshape(1, channel, height, width)\n",
    "        unconcatenated_batch_data.append(random_image)\n",
    "    # Stack once after the loop instead of re-concatenating on every iteration\n",
    "    batch_data = np.stack(unconcatenated_batch_data, axis=0)\n",
    "    now = datetime.now()\n",
    "    current_time = now.strftime(\"%H:%M:%S\")\n",
    "    print(current_time + ' - ' + 'Random dataset with {} random images.'.format(size_limit))\n",
    "    return batch_data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ImageNetDataReader(CalibrationDataReader):\n",
    "    \"\"\"\n",
    "    A class used to read calibration data for a given model.\n",
    "\n",
    "    Attributes\n",
    "    ----------\n",
    "    calibration_image_folder : str\n",
    "        The path to the folder containing calibration images\n",
    "    model_path : str\n",
    "        The path to the ONNX model file\n",
    "\n",
    "    Methods\n",
    "    -------\n",
    "    get_next() -> Dict[str, List[float]]\n",
    "        Returns the next item from the enumerator\n",
    "    rewind() -> None\n",
    "        Resets the enumeration of calibration data\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, calibration_image_folder: str, model_path: str) -> None:\n",
    "        \"\"\"\n",
    "        Initializes the ImageNetDataReader class.\n",
    "\n",
    "        Parameters\n",
    "        ----------\n",
    "        calibration_image_folder : str\n",
    "            The path to the folder containing calibration images\n",
    "        model_path : str\n",
    "            The path to the ONNX model file\n",
    "        \"\"\"\n",
    "\n",
    "        # Use inference session to get input shape\n",
    "        session = onnxruntime.InferenceSession(model_path, None)\n",
    "        (_, channel, height, width) = session.get_inputs()[0].shape\n",
    "\n",
    "        # Convert image to input data\n",
    "        # Set input normalization based on training normalization \n",
    "        if calibration_image_folder:\n",
    "            self.nhwc_data_list = preprocess_image_batch(\n",
    "                calibration_image_folder, height, width, size_limit=0\n",
    "            )\n",
    "        else:\n",
    "            self.nhwc_data_list = preprocess_random_images(\n",
    "                height, width, channel\n",
    "            )\n",
    "\n",
    "        self.input_name = session.get_inputs()[0].name\n",
    "        self.datasize = len(self.nhwc_data_list)\n",
    "\n",
    "        self.enum_data = None  # Enumerator for calibration data\n",
    "\n",
    "    def get_next(self) -> Dict[str, List[float]]:\n",
    "        \"\"\"\n",
    "        Returns the next item from the enumerator.\n",
    "\n",
    "        Returns\n",
    "        -------\n",
    "        Dict[str, List[float]]\n",
    "            A dictionary containing the input name and corresponding data\n",
    "        \"\"\"\n",
    "\n",
    "        if self.enum_data is None:\n",
    "            # Create an iterator that generates input dictionaries\n",
    "            # with input name and corresponding data\n",
    "            self.enum_data = iter(\n",
    "                [{self.input_name: nhwc_data} for nhwc_data in self.nhwc_data_list]\n",
    "            )\n",
    "        \n",
    "        return next(self.enum_data, None)  # Return next item from enumerator\n",
    "\n",
    "    def rewind(self) -> None:\n",
    "        \"\"\"\n",
    "        Resets the enumeration of calibration data.\n",
    "        \"\"\"\n",
    "\n",
    "        self.enum_data = None  # Reset the enumeration of calibration data"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div id=\"quantize\">\n",
    "    <h3> 2.3 Quantize the model using QDQ quantization to int8 weights and activations </h3>\n",
    "</div>\n",
    "\n",
    "The following section quantizes the float32 ONNX model into an int8 quantized ONNX model. After a preprocessing step that prepares the model for quantization, it uses the ``quantize_static`` function, which we recommend using with calibration data and with the following supported argument settings.\n",
    "\n",
    "\n",
    "<table>\n",
    "<tr>\n",
    "<th style=\"text-align: left\">Argument</th>\n",
    "<th style=\"text-align: left\">Description /  CUBE.AI recommendation</th>\n",
    "</tr>\n",
    "    \n",
    "<tr><td style=\"text-align: left\">quant_format </td>\n",
    "<td style=\"text-align: left\"> <p> QuantFormat.QDQ format: <strong>recommended</strong>, it quantizes the model by inserting QuantizeLinear/DeQuantizeLinear nodes on the tensors. QuantFormat.QOperator format: <strong>not recommended</strong>, it quantizes the model with quantized operators directly </p> </td></tr>\n",
    "<tr><td style=\"text-align: left\">activation_type</td> \n",
    "<td style=\"text-align: left\"> <p> QuantType.QInt8: <strong>recommended</strong>, it quantizes the activations to int8. QuantType.QUInt8: <strong>not recommended</strong>, it quantizes the activations to uint8 </p> </td></tr>  \n",
    "<tr><td style=\"text-align: left\">weight_type </td> \n",
    "<td style=\"text-align: left\"> <p> QuantType.QInt8: <strong>recommended</strong>, it quantizes the weights to int8. QuantType.QUInt8: <strong>not recommended</strong>, it quantizes the weights to uint8</p> </td></tr> \n",
    "<tr><td style=\"text-align: left\">per_channel</td>\n",
    "<td style=\"text-align: left\"> <p>True: <strong>recommended</strong>, the quantization is carried out individually for each channel based on the characteristics of the data within that channel. False: supported but <strong>not recommended</strong>, the quantization is carried out per tensor </p> </td>\n",
    "</tr>\n",
    "<tr><td style=\"text-align: left\">ActivationSymmetric</td>\n",
    "<td style=\"text-align: left\"> <p>False: <strong>recommended</strong>, the activations are quantized asymmetrically to the range [-128, +127] with a computed zero-point. True: supported, the activations are quantized symmetrically to the range [-128, +127] with zero_point=0 </p> </td>\n",
    "</tr>\n",
    "<tr>\n",
    "<td style=\"text-align: left\">WeightSymmetric</td>\n",
    "<td style=\"text-align: left\"> <p>True: <strong>Highly recommended</strong>, the weights are quantized to the range [-127, +127] with zero_point=0. False: supported but <strong>not recommended</strong>, the weights are quantized to the range [-128, +127]</p> </td>\n",
    "</tr>\n",
    "<tr><td style=\"text-align: left\">reduce_range</td>\n",
    "<td style=\"text-align: left\"> <p>True: <strong>Highly recommended</strong>, it quantizes the weights to 7 bits. It may improve the accuracy for some models, especially in per-channel mode</p> </td>\n",
    "</tr> \n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "if quantization_dataset_path is not None:\n",
    "    calibration_dataset_path = create_calibration_dataset(quantization_dataset_path, samples_per_class=200)\n",
    "else:\n",
    "    calibration_dataset_path = None\n",
    "\n",
    "# Set the data reader pointing to the representative dataset\n",
    "print('Prepare the data reader for the representative dataset...')\n",
    "dr = ImageNetDataReader(calibration_dataset_path, input_model) \n",
    "print('The data reader is ready.')\n",
    "\n",
    "# Preprocess the model to infer shapes of each tensor\n",
    "infer_model = os.path.splitext(input_model)[0] + '_infer' + os.path.splitext(input_model)[1]\n",
    "print('Infer for the model: {}...'.format(os.path.basename(input_model)))\n",
    "quantization.quant_pre_process(input_model_path=input_model, output_model_path=infer_model, skip_optimization=False)\n",
    "\n",
    "# Prepare quantized ONNX model filename\n",
    "if calibration_dataset_path is not None:\n",
    "    quant_model = os.path.splitext(input_model)[0] + '_QDQ_quant' + os.path.splitext(input_model)[1]\n",
    "else:\n",
    "    quant_model = os.path.splitext(input_model)[0] + '_QDQ_fakequant' + os.path.splitext(input_model)[1]\n",
    "print('Quantize the model {}, please wait...'.format(os.path.basename(input_model)))\n",
    "\n",
    "quantize_static(\n",
    "        infer_model,\n",
    "        quant_model,\n",
    "        dr,\n",
    "        calibrate_method=CalibrationMethod.MinMax, \n",
    "        quant_format=QuantFormat.QDQ,\n",
    "        per_channel=True,\n",
    "        weight_type=QuantType.QInt8, \n",
    "        activation_type=QuantType.QInt8, \n",
    "        reduce_range=True,\n",
    "        extra_options={'WeightSymmetric': True, 'ActivationSymmetric': False})\n",
    "\n",
    "now = datetime.now()\n",
    "current_time = now.strftime(\"%H:%M:%S\")\n",
    "print(current_time + ' - ' + '{} model has been created.'.format(os.path.basename(quant_model)))\n",
    "quantized_session = onnxruntime.InferenceSession(quant_model)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "<div id=\"validation\">\n",
    "        <h2> 3. Model validation </h2>\n",
    "</div>\n",
    "\n",
    "The following code section includes functions to evaluate the models on the validation dataset. It's important to note that the preprocessing of the evaluation dataset should match the preprocessing of the data during training and quantization. Therefore, make sure to adjust the arguments ``color_mode``, ``interpolation``, and ``norm`` to correspond to your preprocessing during the training scenario."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from onnx import ModelProto\n",
    "from sklearn.metrics import accuracy_score, confusion_matrix\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "def get_preprocessed_image(image_path: str, height: int, width: int, color_mode: str, interpolation: str, norm: str) -> np.ndarray:\n",
    "    \"\"\"\n",
    "    Preprocesses an image for input to a neural network.\n",
    "\n",
    "    Args:\n",
    "        image_path (str): The path to the image file.\n",
    "        height (int): The desired height of the image.\n",
    "        width (int): The desired width of the image.\n",
    "        color_mode (str): The color mode of the image ('rgb' or 'rgba').\n",
    "        interpolation (str): The interpolation method to use when resizing the image.\n",
    "        norm (str): The normalization method to use ('tf' or 'torch').\n",
    "\n",
    "    Returns:\n",
    "        np.ndarray: The preprocessed image as a numpy array.\n",
    "    \"\"\"\n",
    "    # Standard ImageNet mean/std used by Torch-style preprocessing\n",
    "    TORCH_MEANS = [0.485, 0.456, 0.406]\n",
    "    TORCH_STD = [0.229, 0.224, 0.225]\n",
    "\n",
    "    # Note: tf.keras.utils.load_img expects target_size as (height, width)\n",
    "    img = tf.keras.utils.load_img(image_path, color_mode=color_mode,\n",
    "                                  target_size=(height, width), interpolation=interpolation)\n",
    "    img_array = np.array([tf.keras.utils.img_to_array(img)])\n",
    "    if norm.lower() == 'tf':\n",
    "        img_array = -1 + img_array / 127.5\n",
    "    elif norm.lower() == 'torch':\n",
    "        img_array = img_array / 255.0\n",
    "        img_array = img_array - TORCH_MEANS\n",
    "        img_array = img_array / TORCH_STD\n",
    "    # transpose the data (nhwc to nchw) to conform to the expected input layout\n",
    "    img_array = img_array.transpose((0, 3, 1, 2))\n",
    "    return img_array\n",
    "\n",
    "def predict_onnx(sess: onnxruntime.InferenceSession, data: np.ndarray) -> np.ndarray:\n",
    "    \"\"\"\n",
    "    Runs inference on an ONNX model.\n",
    "\n",
    "    Args:\n",
    "        sess (onnxruntime.InferenceSession): The ONNX Runtime inference session.\n",
    "        data (np.ndarray): The input data for the model.\n",
    "\n",
    "    Returns:\n",
    "        np.ndarray: The model's predictions.\n",
    "    \"\"\"\n",
    "    input_name = sess.get_inputs()[0].name\n",
    "    label_name = sess.get_outputs()[0].name\n",
    "    onx_pred = sess.run([label_name], {input_name: data.astype(np.float32)})[0]\n",
    "    return onx_pred\n",
    "\n",
    "def plot_confusion_matrix(cm: np.ndarray, class_labels: List[str], model_name: str, val_accuracy: float = None) -> None:\n",
    "    \"\"\"\n",
    "    Plots a confusion matrix.\n",
    "\n",
    "    Args:\n",
    "        cm (np.ndarray): The confusion matrix.\n",
    "        class_labels (List[str]): The labels for the classes.\n",
    "        model_name (str): The name of the model.\n",
    "        val_accuracy (float, optional): The validation accuracy of the model. Defaults to None.\n",
    "    \"\"\"\n",
    "    cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n",
    "\n",
    "    fig, ax = plt.subplots(figsize=(6, 6))\n",
    "    im = ax.imshow(cm_normalized, interpolation='nearest', cmap=plt.cm.Blues)\n",
    "    cbar = ax.figure.colorbar(im, ax=ax, pad=0.1)\n",
    "\n",
    "    # Show all ticks\n",
    "    ax.set(xticks=np.arange(cm.shape[1]),\n",
    "           yticks=np.arange(cm.shape[0]),\n",
    "           xticklabels=class_labels, yticklabels=class_labels,\n",
    "           title=f'Model Accuracy: {val_accuracy} %',\n",
    "           ylabel='True label',\n",
    "           xlabel='Predicted label')\n",
    "\n",
    "    # Rotate the tick labels and set their alignment.\n",
    "    plt.setp(ax.get_xticklabels(), rotation=45, ha=\"right\",\n",
    "             rotation_mode=\"anchor\")\n",
    "\n",
    "    # Loop over data dimensions and create text annotations.\n",
    "    fmt = '.2f'\n",
    "    thresh = cm_normalized.max() / 2.\n",
    "    for i in range(cm_normalized.shape[0]):\n",
    "        for j in range(cm_normalized.shape[1]):\n",
    "            ax.text(j, i, format(cm_normalized[i, j], fmt),\n",
    "                    ha=\"center\", va=\"center\",\n",
    "                    color=\"white\" if cm_normalized[i, j] > thresh else \"black\")\n",
    "\n",
    "    fig.tight_layout()\n",
    "    plt.savefig(f'outputs/{model_name}_confusion-matrix.png')\n",
    "    plt.show()"
   ]
  },
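  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the `tf` and `torch` normalization schemes used in `get_preprocessed_image`, the standalone sketch below (plain NumPy, applied to a made-up 2x2 image) shows the value ranges each scheme produces:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Made-up 2x2 RGB image with pixel values spanning 0..255\n",
    "img = np.array([[[0, 128, 255]] * 2] * 2, dtype=np.float32)\n",
    "\n",
    "# 'tf' scheme: rescale to [-1, 1]\n",
    "tf_norm = -1 + img / 127.5\n",
    "\n",
    "# 'torch' scheme: rescale to [0, 1], then standardize per channel\n",
    "TORCH_MEANS = np.array([0.485, 0.456, 0.406], dtype=np.float32)\n",
    "TORCH_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)\n",
    "torch_norm = (img / 255.0 - TORCH_MEANS) / TORCH_STD\n",
    "\n",
    "print(float(tf_norm.min()), float(tf_norm.max()))  # -1.0 1.0\n",
    "```"
   ]
  },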
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def evaluate_onnx_model(onnx_model_path: str, val_dir: str, model_name: str, interpolation: str = 'bilinear') -> Tuple[float, np.ndarray]:\n",
    "    \"\"\"\n",
    "    Evaluates an ONNX model on a validation dataset.\n",
    "\n",
    "    Args:\n",
    "        onnx_model_path (str): The path to the ONNX model.\n",
    "        val_dir (str): The path to the validation dataset.\n",
    "        model_name (str): The name of the model.\n",
    "        interpolation (str, optional): The interpolation method to use when resizing images. Defaults to 'bilinear'.\n",
    "\n",
    "    Returns:\n",
    "        Tuple[float, np.ndarray]: The validation accuracy and confusion matrix.\n",
    "    \"\"\"\n",
    "    sess = onnxruntime.InferenceSession(onnx_model_path)\n",
    "    (_, _, img_height, img_width) = sess.get_inputs()[0].shape\n",
    "    gt_labels = []\n",
    "    prd_labels = np.empty((0))\n",
    "    class_labels = sorted(os.listdir(val_dir))\n",
    "    for i in range(len(class_labels)):\n",
    "        class_label = class_labels[i]\n",
    "        \n",
    "        for file in os.listdir(os.path.join(val_dir, class_label)):\n",
    "            gt_labels.append(i)\n",
    "            image_path = os.path.join(val_dir, class_label, file)\n",
    "            # don't forget to adapt the preprocessing schema\n",
    "            img = get_preprocessed_image(image_path, width=img_width, height=img_height, \n",
    "                                          color_mode='rgb',interpolation=interpolation, norm='tf')\n",
    "            # predicting the results on the batch\n",
    "            pred = predict_onnx(sess, img).argmax(axis=1)\n",
    "            prd_labels = np.concatenate((prd_labels, pred))\n",
    "\n",
    "    val_acc = round(accuracy_score(gt_labels, prd_labels) * 100, 2)\n",
    "    print(f'Evaluation Top 1 accuracy: {val_acc} %')\n",
    "    if not os.path.exists(\"outputs\"):\n",
    "        os.makedirs(\"outputs\")\n",
    "    log_file_name = \"outputs/\" + model_name + \".log\"\n",
    "    with open(log_file_name, 'a') as f:\n",
    "        f.write(\"Evaluation Top 1 accuracy: {} %\\n\".format(val_acc))\n",
    "    val_cm = confusion_matrix(gt_labels, prd_labels)\n",
    "    plot_confusion_matrix(val_cm, class_labels, model_name, val_accuracy=val_acc)\n",
    "    \n",
    "    return val_acc, val_cm"
   ]
  },
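  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small illustration of the metrics computed by `evaluate_onnx_model`, here are `accuracy_score` and `confusion_matrix` from scikit-learn applied to made-up label lists for a 2-class problem:\n",
    "\n",
    "```python\n",
    "from sklearn.metrics import accuracy_score, confusion_matrix\n",
    "\n",
    "# Made-up ground-truth and predicted labels\n",
    "gt_labels = [0, 0, 1, 1]\n",
    "prd_labels = [0, 1, 1, 1]\n",
    "\n",
    "print(round(accuracy_score(gt_labels, prd_labels) * 100, 2))  # 75.0\n",
    "print(confusion_matrix(gt_labels, prd_labels))  # rows = true labels, columns = predicted labels\n",
    "```"
   ]
  },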
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Float model validation:**\n",
    "\n",
    "We evaluate the full precision model to provide a baseline measure of the model's accuracy in its original form when weights, activations, and computations are represented as 32-bit floating-point numbers without any quantization applied.\n",
    "\n",
    "To evaluate the float model, set `val_set` to the path of the evaluation dataset, `input_model` to the path of the float model, and `model_name` to the name of the model. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "val_set = \"path/to/val_set\"\n",
    "input_model = \"models/mobilenet_v2_128_0.5.onnx\"\n",
    "model_name = 'mobilenet_v2_128_0.5'\n",
    "evaluate_onnx_model(input_model, val_set, model_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Quantized model validation:**\n",
    "\n",
    "To evaluate the quantized model, set `input_model` to the path of the quantized model and update `model_name` accordingly.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "input_model = \"models/mobilenet_v2_128_0.5_QDQ_quant.onnx\"\n",
    "model_name = 'mobilenet_v2_128_0.5_QDQ_quant'\n",
    "evaluate_onnx_model(input_model, val_set, model_name)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div id=\"benchmark\">\n",
    "        <h2> 4. Benchmarking the Models on the STM32Cube.AI Developer Cloud</h2>\n",
    "</div>\n",
    "\n",
    "In this section, we use the [STM32Cube.AI Developer Cloud](https://stedgeai-dc.st.com/home) to optimize and benchmark a quantized neural network on an **STM32** target and generate its code for deployment.\n",
    "\n",
    "<div id=\"proxy\">\n",
    "        <h3> 4.1 Proxy Settings and Connection to the STM32Cube.AI Developer Cloud</h3>\n",
    "</div>\n",
    "\n",
    "If you are behind a proxy, you can uncomment and fill in the following proxy settings.\n",
    "\n",
    "**Note:** If the password contains special characters such as `@`, `:`, etc., they need to be URL-encoded with their ASCII values."
   ]
  },
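  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, special characters in the password can be percent-encoded with the standard-library `urllib.parse.quote` (the password below is a made-up placeholder):\n",
    "\n",
    "```python\n",
    "from urllib.parse import quote\n",
    "\n",
    "# Made-up password containing '@' and ':'\n",
    "encoded = quote('p@ss:word', safe='')\n",
    "print(encoded)  # p%40ss%3Aword\n",
    "```\n",
    "\n",
    "The encoded string can then be used in place of the raw password in the proxy URL."
   ]
  },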
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# import os\n",
    "# os.environ['http_proxy'] = \"http://user:passwd@ip_address:port\"\n",
    "# os.environ['https_proxy'] = \"https://user:passwd@ip_address:port\"\n",
    "## And eventually disable SSL verification\n",
    "# os.environ['NO_SSL_VERIFY'] = \"1\"\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To successfully connect to the [STM32Cube.AI Developer Cloud](https://stedgeai-dc.st.com/home), you need to download the [`STM32AI Python interface`](https://github.com/STMicroelectronics/stm32ai-modelzoo_services/tree/main/common/stm32ai_dc) using the `gitdir` tool."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get STM32Cube.AI Developer Cloud\n",
    "!gitdir https://github.com/STMicroelectronics/stm32ai-modelzoo_services/tree/main/common/stm32ai_dc\n",
    "\n",
    "# Reorganize local folders\n",
    "if os.path.exists('./stm32ai_dc'):\n",
    "    shutil.rmtree('./stm32ai_dc')\n",
    "shutil.move('./common/stm32ai_dc', './stm32ai_dc')\n",
    "shutil.rmtree('./common')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys \n",
    "sys.path.append(os.path.abspath('.'))\n",
    "os.environ['STATS_TYPE'] = 'jupyter_devcloud'\n",
    "\n",
    "from stm32ai_dc import (CliLibraryIde, CliLibrarySerie, CliParameters,\n",
    "                        CloudBackend, Stm32Ai)\n",
    "from stm32ai_dc.errors import BenchmarkServerError"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create an account on **myST**, then sign in to [STM32Cube.AI Developer Cloud](https://stedgeai-dc.st.com/home) to be able to access the service. Then set the environment variables below with your credentials: the email address should be set as a string in `username`, and a prompt will appear to enter the password."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "\n",
    "username ='xxx.yyy@st.com'\n",
    "os.environ['stmai_username'] = username\n",
    "print('Enter your password')\n",
    "password = getpass.getpass()\n",
    "os.environ['stmai_password'] = password\n",
    "os.environ['NO_SSL_VERIFY'] = \"1\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Log in to STM32Cube.AI Developer Cloud\n",
    "try:\n",
    "    stmai = Stm32Ai(CloudBackend(str(username), str(password)))\n",
    "    print(\"Successfully Connected!\")\n",
    "except Exception as e:\n",
    "    print(\"ERROR: \", e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div id=\"benchmark_both\">\n",
    "        <h3> 4.2 Benchmark the models on a STM32 target</h3>\n",
    "</div>\n",
    "\n",
    "Then, run the code section below; it defines a helper function that will be used to report the benchmark results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def analyze_footprints_and_inference_time(report: object, model_name: str, board_name: str) -> None:\n",
    "    \"\"\"\n",
    "    Analyzes the memory footprint and inference time of an STM32Cube.AI model and saves the results in a log file.\n",
    "\n",
    "    Args:\n",
    "        report (object): The report object containing the inference time information.\n",
    "        model_name (str): The name of the model being analyzed.\n",
    "        board_name (str): The name of the board on which the model is being analyzed.\n",
    "\n",
    "    Returns:\n",
    "        None\n",
    "    \"\"\"\n",
    "    activations_ram = report.ram_size / 1024\n",
    "    weights_rom = report.rom_size / 1024\n",
    "    macc = report.macc / 1e6\n",
    "    cycles = report.cycles\n",
    "    inference_time = report.duration_ms\n",
    "    fps = 1000.0/inference_time\n",
    "\n",
    "    print(\"[INFO] : Benchmarking the model on the {} board\\n\".format(board_name))\n",
    "    print(\"[INFO] : MACCs : {} (M)\".format(macc))\n",
    "    print(\"[INFO] : Flash Weights  : {0:.1f} (KiB)\".format(weights_rom))\n",
    "    print(\"[INFO] : RAM Activations : {0:.1f} (KiB)\".format(activations_ram))\n",
    "    print(\"[INFO] : Number of cycles : {} \".format(cycles))\n",
    "    print(\"[INFO] : Inference Time : {0:.1f} (ms)\".format(inference_time))\n",
    "    print(\"[INFO] : FPS : {0:.1f}\".format(fps))\n",
    "\n",
    "    # Writing to log file\n",
    "    model_name_without_extension = model_name.replace(\".onnx\", \"\")\n",
    "    log_file_name = \"outputs/\" + model_name_without_extension + \".log\"\n",
    "    with open(log_file_name, 'a') as f:\n",
    "        f.write(\"[INFO] : Benchmarking the model on the {} board\\n\".format(board_name))\n",
    "        f.write(\"[INFO] : Model Name : {}\\n\".format(model_name))\n",
    "        f.write(\"[INFO] : MACCs : {} (M)\\n\".format(macc))\n",
    "        f.write(\"[INFO] : Flash Weights  : {0:.1f} (KiB)\\n\".format(weights_rom))\n",
    "        f.write(\"[INFO] : RAM Activations : {0:.1f} (KiB)\\n\".format(activations_ram))\n",
    "        f.write(\"[INFO] : Number of cycles : {}\\n\".format(cycles))\n",
    "        f.write(\"[INFO] : Inference Time : {0:.1f} (ms)\\n\".format(inference_time))\n",
    "        f.write(\"[INFO] : FPS : {0:.1f}\\n\".format(fps))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Benchmark the float model:** \n",
    "\n",
    "The next step is to upload the model to the STM32Cube.AI Developer Cloud and benchmark it on a specific STM32 board by running the code below. The `model_path` variable specifies the path to the ONNX model file, and the `board_name` variable specifies the name of the STM32 board on which the model will be benchmarked.\n",
    "\n",
    "The model is then benchmarked on the specified board, and a report of the inference time and memory footprint is generated. The following table lists the options available in version **8.1.0** of STM32Cube.AI and their descriptions for benchmarking on STM32 boards:\n",
    "<table>\n",
    "<tr>\n",
    "<th style=\"text-align: left\">Option</th>\n",
    "<th style=\"text-align: left\">Description /  CUBE.AI recommendation</th>\n",
    "\n",
    "</tr>\n",
    "<tr>\n",
    "    \n",
    "    \n",
    "<td style=\"text-align: left\">model</td>\n",
    "<td style=\"text-align: left\">model name corresponding to the file name uploaded</td>\n",
    "</tr>\n",
    "    \n",
    "<tr>\n",
    "<td style=\"text-align: left\">optimization</td>\n",
    "<td style=\"text-align: left\">optimization setting \"balanced\", \"time\" or \"ram\"</td>\n",
    "</tr>\n",
    "    \n",
    "<tr>\n",
    "<td style=\"text-align: left\">allocateInputs</td>\n",
    "<td style=\"text-align: left\"><strong>recommended</strong>, the activations buffer will also be used to handle the input buffers. True by default</td>\n",
    "</tr>\n",
    " \n",
    "<tr>\n",
    "<td style=\"text-align: left\">allocateOutputs</td>\n",
    "<td style=\"text-align: left\"><strong>recommended</strong>, the activations buffer will also be used to handle the output buffers. True by default</td>\n",
    "</tr>\n",
    "\n",
    "<tr>\n",
    "<td style=\"text-align: left\">relocatable</td>\n",
    "<td style=\"text-align: left\"><strong>recommended</strong>, generates a relocatable binary model. The '--binary' option can be used to produce a separate binary file containing only the data of the weight/bias tensors. True by default</td>\n",
    "</tr>\n",
    "\n",
    "<tr>\n",
    "<td style=\"text-align: left\">noOnnxOptimizer</td>\n",
    "<td style=\"text-align: left\"><strong>not recommended</strong>, disables the ONNX optimizer pass. \"False\" by default. Applies only to ONNX files; ignored otherwise</td>\n",
    "</tr>\n",
    "\n",
    "<tr>\n",
    "<td style=\"text-align: left\">noOnnxIoTranspose</td>\n",
    "<td style=\"text-align: left\"><strong>recommended only if</strong> the ONNX model already has IO transpose layers making it expect channel-last data; avoids adding a specific transpose layer during the import of an ONNX model. \"False\" by default. Applies only to ONNX files; ignored otherwise</td>\n",
    "</tr>\n",
    "    \n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_path = \"models/mobilenet_v2_128_0.5.onnx\"\n",
    "model_name = os.path.basename(model_path)\n",
    "try:\n",
    "    stmai.upload_model(model_path)\n",
    "    print(f'Model {model_name} is uploaded!\\n')\n",
    "except Exception as e:\n",
    "    print(\"ERROR: \", e)\n",
    "    \n",
    "board_name = 'STM32H747I-DISCO'\n",
    "result = stmai.benchmark(CliParameters(model=model_name,\n",
    "                                       optimization='balanced',\n",
    "                                       allocateInputs=True,\n",
    "                                       allocateOutputs=True,\n",
    "                                       noOnnxIoTranspose=False,\n",
    "                                       fromModel=model_name),\n",
    "                                       board_name=board_name, timeout=1500)\n",
    "\n",
    "\n",
    "analyze_footprints_and_inference_time(report=result, model_name=model_name, board_name=board_name)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Benchmark the int8 model:**\n",
    "\n",
    "Upload the model on STM32Cube.AI Developer Cloud and benchmark it by running the code below.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_path = \"models/mobilenet_v2_128_0.5_QDQ_quant.onnx\"\n",
    "model_name = os.path.basename(model_path)\n",
    "try:\n",
    "    stmai.upload_model(model_path)\n",
    "    print(f'Model {model_name} is uploaded!\\n')\n",
    "except Exception as e:\n",
    "    print(\"ERROR: \", e)\n",
    "    \n",
    "board_name = 'STM32H747I-DISCO'\n",
    "result = stmai.benchmark(CliParameters(model=model_name,\n",
    "                                       optimization='balanced',\n",
    "                                       allocateInputs=True,\n",
    "                                       allocateOutputs=True,\n",
    "                                       noOnnxIoTranspose=False,\n",
    "                                       fromModel=model_name),\n",
    "                                       board_name=board_name)\n",
    "\n",
    "analyze_footprints_and_inference_time(report=result, model_name=model_name, board_name=board_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Please run the next two code sections to compare the float model and the int8 model. The code will plot a figure that compares the two models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "\n",
    "def compare_models(log_file_float: str, log_file_int8: str) -> None:\n",
    "    \"\"\"\n",
    "    Generates a comparison graph of two models on various metrics.\n",
    "\n",
    "    Args:\n",
    "        log_file_float: The path to the log file of the first model.\n",
    "        log_file_int8: The path to the log file of the second model.\n",
    "\n",
    "    Returns:\n",
    "        None\n",
    "    \"\"\"\n",
    "\n",
    "    # Read the log files into strings\n",
    "    with open(log_file_float, 'r') as f:\n",
    "        log_float = f.read()\n",
    "    with open(log_file_int8, 'r') as f:\n",
    "        log_int8 = f.read()\n",
    "\n",
    "    # Get the metrics of interest\n",
    "    accuracy_float = float(re.search(r'Evaluation Top 1 accuracy: ([\\d.]+) %', log_float).group(1))\n",
    "    accuracy_int8 = float(re.search(r'Evaluation Top 1 accuracy: ([\\d.]+) %', log_int8).group(1))\n",
    "    flash_float = float(re.search(r'Flash\\s*Weights\\s*:\\s*([\\d.]+)\\s*\\(\\s*KiB\\s*\\)', log_float).group(1))\n",
    "    flash_int8 = float(re.search(r'Flash\\s*Weights\\s*:\\s*([\\d.]+)\\s*\\(\\s*KiB\\s*\\)', log_int8).group(1))\n",
    "    ram_float = float(re.search(r'RAM\\s*Activations\\s*:\\s*([\\d.]+)\\s*\\(\\s*KiB\\s*\\)', log_float).group(1))\n",
    "    ram_int8 = float(re.search(r'RAM\\s*Activations\\s*:\\s*([\\d.]+)\\s*\\(\\s*KiB\\s*\\)', log_int8).group(1))\n",
    "    inference_time_float = float(re.search(r'Inference\\s*Time\\s*:\\s*([\\d.]+)\\s*\\(\\s*ms\\s*\\)', log_float).group(1))\n",
    "    inference_time_int8 = float(re.search(r'Inference\\s*Time\\s*:\\s*([\\d.]+)\\s*\\(\\s*ms\\s*\\)', log_int8).group(1))\n",
    "\n",
    "    # Set the figure size and spacing between subplots\n",
    "    fig, axs = plt.subplots(2, 2, figsize=(10, 8), gridspec_kw={'wspace': 0.3, 'hspace': 0.4})\n",
    "\n",
    "    # Graph 1: Accuracy Comparison\n",
    "    axs[0, 0].bar(['Float model', 'Int8 model'], [accuracy_float, accuracy_int8], color='#03234B')\n",
    "    axs[0, 0].set_title('Accuracy')\n",
    "    axs[0, 0].set_xlabel('Model')\n",
    "    axs[0, 0].set_ylabel('Accuracy (%)')\n",
    "    axs[0, 0].set_ylim([0, 100])\n",
    "    axs[0, 0].text(0, accuracy_float+1, str(round(accuracy_float, 2))+'%')\n",
    "    axs[0, 0].text(1, accuracy_int8+1, str(round(accuracy_int8, 2))+'%')\n",
    "\n",
    "    # Graph 2: RAM Activation Comparison\n",
    "    axs[0, 1].bar(['Float model', 'Int8 model'], [ram_float, ram_int8], color='#03234B')\n",
    "    axs[0, 1].set_title('RAM activation')\n",
    "    axs[0, 1].set_xlabel('Model')\n",
    "    axs[0, 1].set_ylabel('RAM activation (KiB)')\n",
    "    axs[0, 1].set_ylim([0, 1500])\n",
    "    axs[0, 1].text(0, ram_float+20, str(round(ram_float, 2))+' KiB')\n",
    "    axs[0, 1].text(1, ram_int8+20, str(round(ram_int8, 2))+' KiB')\n",
    "\n",
    "    # Graph 3: Flash Weights Comparison\n",
    "    axs[1, 0].bar(['Float model', 'Int8 model'], [flash_float, flash_int8], color='#03234B')\n",
    "    axs[1, 0].set_title('Flash Weights')\n",
    "    axs[1, 0].set_xlabel('Model')\n",
    "    axs[1, 0].set_ylabel('Flash Weights (KiB)')\n",
    "    axs[1, 0].set_ylim([0, 1000])\n",
    "    axs[1, 0].text(0, flash_float+20, str(round(flash_float, 2))+' KiB')\n",
    "    axs[1, 0].text(1, flash_int8+20, str(round(flash_int8, 2))+' KiB')\n",
    "\n",
    "    # Graph 4: Inference Time Comparison\n",
    "    axs[1, 1].bar(['Float model', 'Int8 model'], [inference_time_float, inference_time_int8], color='#03234B')\n",
    "    axs[1, 1].set_title('Inference Time')\n",
    "    axs[1, 1].set_xlabel('Model')\n",
    "    axs[1, 1].set_ylabel('Inference Time (ms)')\n",
    "    axs[1, 1].set_ylim([0, 1000])\n",
    "    axs[1, 1].text(0, inference_time_float+20, str(round(inference_time_float, 2))+' ms')\n",
    "    axs[1, 1].text(1, inference_time_int8+20, str(round(inference_time_int8, 2))+' ms')\n",
    "\n",
    "    # Set the global title\n",
    "    fig.suptitle('Comparison of Two Models on Various Metrics', fontsize=14)\n",
    "\n",
    "    plt.tight_layout()\n",
    "\n",
    "    # Save the figure to a file\n",
    "    plt.savefig('comparison.png')\n",
    "\n",
    "    plt.show()"
   ]
  },
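  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate how `compare_models` extracts metrics from the log files, here is a simplified `re.search` call applied to a single made-up log line written in the same format as `analyze_footprints_and_inference_time` produces:\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "# Made-up sample line from a benchmark log\n",
    "line = '[INFO] : Inference Time : 152.3 (ms)'\n",
    "match = re.search('Inference Time : ([0-9.]+)', line)\n",
    "print(float(match.group(1)))  # 152.3\n",
    "```"
   ]
  },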
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "log_file_int8 = 'outputs/mobilenet_v2_128_0.5_QDQ_quant.log'\n",
    "log_file_float = 'outputs/mobilenet_v2_128_0.5.log'\n",
    "\n",
    "compare_models(log_file_float, log_file_int8)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div id=\"generate\">\n",
    "        <h3> 4.3 Generate the model optimized C code for STM32 </h3>\n",
    "</div>\n",
    "\n",
    "Here you generate the specialized network and data C-files so that the model is ready to be integrated into the **STM32** application."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "code_folder = os.path.join('outputs/code_outputs')\n",
    "os.makedirs(code_folder, exist_ok=True)\n",
    "\n",
    "board_name = 'STM32H7'\n",
    "IDE = 'gcc'\n",
    "print(f'{model_name}\\ngenerating code for {board_name}')\n",
    "\n",
    "# Generate model .c/.h code + Lib/Inc on STM32Cube.AI Developer Cloud\n",
    "result = stmai.generate(CliParameters(model=model_name,\n",
    "                                      output=code_folder,\n",
    "                                      optimization='balanced',\n",
    "                                      allocateInputs=True,\n",
    "                                      allocateOutputs=True,\n",
    "                                      noOnnxIoTranspose=False,\n",
    "                                      includeLibraryForSerie=CliLibrarySerie(board_name),\n",
    "                                      includeLibraryForIde=CliLibraryIde(IDE),\n",
    "                                      fromModel=model_name))\n",
    "\n",
    "print(os.listdir(code_folder))\n",
    "\n",
    "# Print the first 20 lines of the report (stops early if the report is shorter)\n",
    "from itertools import islice\n",
    "with open(os.path.join(code_folder, 'network_generate_report.txt'), 'r') as f:\n",
    "    for line in islice(f, 20):\n",
    "        print(line, end='')\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
