{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bNkqsp45yZBp"
      },
      "source": [
        "# ST Edge AI Developer Cloud Notebook\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5S5mg3FzBI1M"
      },
      "source": [
        "This notebook shows how to access the ST Edge AI Developer Cloud through ST Python APIs (based on a REST API) instead of the web application https://stedgeai-dc.st.com.\n",
        "\n",
        "It allows seamless integration into your MLOps flow through a simple Python interface, without the need to install STM32Cube.AI.\n",
        "\n",
        "Users can either upload their own model or select one of the models from the STM32 model zoo, also accessible on GitHub: https://github.com/STMicroelectronics/stm32ai-modelzoo-services\n",
        "\n",
        "Then, thanks to the ST Edge AI Developer Cloud, the model can be analyzed and benchmarked on a broad range of STM32 boards.\n",
        "\n",
        "Finally, the optimized C code corresponding to the model can be downloaded and integrated into the final application."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rKiHGR6AxxQU"
      },
      "source": [
        "## License of the Jupyter Notebook\n",
        "This software component is licensed by ST under the BSD-3-Clause license, the \"License\".\n",
        "\n",
        "You may not use this file except in compliance with the License.\n",
        "\n",
        "You may obtain a copy of the License at: https://opensource.org/licenses/BSD-3-Clause\n",
        "\n",
        "Copyright (c) 2023 STMicroelectronics. All rights reserved."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "owNhpM9RoZrL"
      },
      "source": [
        "## Proxy setting\n",
        "\n",
        "If you are behind a proxy, you can uncomment and fill in the following proxy settings.\n",
        "\n",
        "**NOTE**: If the password contains special characters such as `@` or `:`, they need to be URL-encoded."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Xyx2h1YVaY8e"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "# os.environ['http_proxy'] = \"http://user:passwd@ip_address:port\"\n",
        "# os.environ['https_proxy'] = \"https://user:passwd@ip_address:port\"\n",
        "# And, if needed, disable SSL verification (environment variables must be strings)\n",
        "# os.environ['NO_SSL_VERIFY'] = '1'"
      ]
    },
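    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal sketch of the URL-encoding mentioned above, the standard library's `urllib.parse.quote` can encode the special characters before building the proxy URL (the password below is a made-up example):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: percent-encode every character that is not alphanumeric (safe='')\n",
        "# so that '@' or ':' in the password do not break the proxy URL.\n",
        "from urllib.parse import quote\n",
        "\n",
        "encoded_passwd = quote('p@ss:word', safe='')  # made-up example -> 'p%40ss%3Aword'\n",
        "# os.environ['http_proxy'] = f\"http://user:{encoded_passwd}@ip_address:port\"\n"
      ]
    },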
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lVMsPaWzUzkI"
      },
      "source": [
        "## Install packages"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "QPn4Lw7PUysl"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "!{sys.executable} -m pip install pycurl seaborn numpy matplotlib\n",
        "!{sys.executable} -m pip install ipywidgets\n",
        "!{sys.executable} -m pip install gitdir\n",
        "!{sys.executable} -m pip install marshmallow"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lTFGePih6xdp"
      },
      "source": [
        "## Download the ST Edge AI Developer Cloud Python API package\n",
        "\n",
        "This Python package allows you to access the ST Edge AI Developer Cloud through its REST API."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sH1QixCXsLDP"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "import shutil\n",
        "# Get the ST Edge AI Developer Cloud Python API package\n",
        "!gitdir https://github.com/STMicroelectronics/stm32ai-modelzoo-services/tree/main/common/stm32ai_dc\n",
        "\n",
        "# Reorganize local folders\n",
        "if os.path.exists('./stm32ai_dc'):\n",
        "    shutil.rmtree('./stm32ai_dc')\n",
        "shutil.move('./common/stm32ai_dc', './stm32ai_dc')\n",
        "shutil.rmtree('./common')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wib9sirMT3rk"
      },
      "source": [
        "## Import, helper and UI functions"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "kR6BYrPKS4Go"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "import sys\n",
        "\n",
        "import matplotlib.pyplot as plt\n",
        "import seaborn as sns\n",
        "import ipywidgets as widgets\n",
        "from stm32ai_dc import (CliLibraryIde, CliLibrarySerie, CliParameters, MpuParameters, MpuEngine,\n",
        "                        CloudBackend, Stm32Ai, AtonParameters)\n",
        "\n",
        "sys.path.append(os.path.abspath('stm32ai'))\n",
        "os.environ['STATS_TYPE'] = 'jupyter_devcloud'\n",
        "\n",
        "os.makedirs('models', exist_ok=True)\n",
        "os.makedirs('outputs', exist_ok=True)\n",
        "\n",
        "\n",
        "def get_mpu_options(board_name: str = None) -> tuple:\n",
        "    \"\"\"\n",
        "    Get MPU benchmark options depending on the selected MPU board.\n",
        "    Each MPU board has different settings,\n",
        "    i.e. a different number of CPU cores and a different engine (CPU only, or HW accelerator as well).\n",
        "\n",
        "    Args:\n",
        "        board_name: str, name of the MPU board\n",
        "\n",
        "    Returns:\n",
        "        tuple: (engine_used, num_cpu_cores)\n",
        "    \"\"\"\n",
        "\n",
        "    # configuration per MPU board: engine and number of CPU cores\n",
        "    board_configs = {\n",
        "        \"STM32MP257F-EV1\": {\"engine\": MpuEngine.HW_ACCELERATOR, \"cpu_cores\": 2},\n",
        "        \"STM32MP157F-DK2\": {\"engine\": MpuEngine.CPU, \"cpu_cores\": 2},\n",
        "        \"STM32MP135F-DK\": {\"engine\": MpuEngine.CPU, \"cpu_cores\": 1},\n",
        "    }\n",
        "\n",
        "    # recover parameters based on the board name, defaulting to CPU with 1 core\n",
        "    config = board_configs.get(board_name, {\"engine\": MpuEngine.CPU, \"cpu_cores\": 1})\n",
        "    return config[\"engine\"], config[\"cpu_cores\"]\n",
        "\n",
        "def analyze_footprints(series: str = \"stm32h7\", report: object = None) -> None:\n",
        "    \"\"\"\n",
        "    Analyzes the memory footprint of a model.\n",
        "\n",
        "    Args:\n",
        "        series: The targeted STM32 series.\n",
        "        report: A report object containing information about the model.\n",
        "\n",
        "    Returns:\n",
        "        None\n",
        "    \"\"\"\n",
        "    activations_ram: float = report.ram_size / 1024\n",
        "    runtime_ram: float = report.estimated_library_ram_size / 1024\n",
        "    total_ram: float = activations_ram + runtime_ram\n",
        "    weights_rom: float = report.rom_size / 1024\n",
        "    code_rom: float = report.estimated_library_flash_size / 1024\n",
        "    total_flash: float = weights_rom + code_rom\n",
        "    macc: float = report.macc / 1e6\n",
        "    print(\"[INFO] : STM32Cube.AI model memory footprint\")\n",
        "    if series != \"stm32n6\":\n",
        "      print(\"[INFO] : MACCs : {} (M)\".format(macc))\n",
        "    print(\"[INFO] : Total Flash : {0:.1f} (KiB)\".format(total_flash))\n",
        "    print(\"[INFO] :     Flash Weights  : {0:.1f} (KiB)\".format(weights_rom))\n",
        "    print(\"[INFO] :     Estimated Flash Code : {0:.1f} (KiB)\".format(code_rom))\n",
        "    print(\"[INFO] : Total RAM : {0:.1f} (KiB)\".format(total_ram))\n",
        "    print(\"[INFO] :     RAM Activations : {0:.1f} (KiB)\".format(activations_ram))\n",
        "    print(\"[INFO] :     RAM Runtime : {0:.1f} (KiB)\".format(runtime_ram))\n",
        "\n",
        "def benchmark_model(stmai: object,\n",
        "                    model_path: str,\n",
        "                    model_name: str,\n",
        "                    optimization: str,\n",
        "                    from_model: str,\n",
        "                    board_name: str) -> float:\n",
        "    \"\"\"\n",
        "    Benchmarks the given model on an STM32 target board.\n",
        "\n",
        "    Args:\n",
        "        stmai: object, an instance of the stm32ai_dc client\n",
        "        model_path: str, path to the model file\n",
        "        model_name: str, name of the model file\n",
        "        optimization: str, optimization setting; available options are ['balanced', 'time', 'ram']\n",
        "        from_model: str, whether the model comes from the model zoo or is a custom user model\n",
        "        board_name: str, target board name, one of the boards available on the Developer Cloud\n",
        "\n",
        "    Returns:\n",
        "        fps: frames per second (1 / inference time)\n",
        "    \"\"\"\n",
        "    print(f\"Benchmarking on: {board_name}\")\n",
        "    if \"mp\" in board_name.lower():\n",
        "        # if mpu is selected as the target\n",
        "        model_extension = os.path.splitext(model_path)[1]\n",
        "        # only supported options are quantized tflite or onnx models\n",
        "        if model_extension in ['.onnx', '.tflite']:\n",
        "            if \"stm32mp2\" in board_name.lower(): # if mp2 is selected as the target board optimize the model to generate a .nbg file\n",
        "                optimized_model_path = os.path.dirname(model_path) + \"/\"\n",
        "                try:\n",
        "                    stmai.upload_model(model_path)\n",
        "                    model = model_name\n",
        "                    res = stmai.generate_nbg(model)\n",
        "                    stmai.download_model(res, optimized_model_path + res)\n",
        "                    model_path=os.path.join(optimized_model_path,res)\n",
        "                    nb_model_name = os.path.splitext(os.path.basename(model_path))[0] + \".nb\"\n",
        "                    rename_model_path=os.path.join(optimized_model_path,nb_model_name)\n",
        "                    os.rename(model_path, rename_model_path)\n",
        "                    model_path = rename_model_path\n",
        "                    model_name = nb_model_name\n",
        "                    print(\"[INFO] : Optimized Model Name:\", model_name)\n",
        "                    print(\"[INFO] : Optimization done ! Model available at :\", optimized_model_path)\n",
        "                except Exception as e:\n",
        "                    print(f\"[FAIL] : Model optimization via Cloud failed : {e}.\")\n",
        "                    print(\"[INFO] : Use default model instead of optimized ...\")\n",
        "        else:\n",
        "            print(\"[ERROR]: Only .tflite or .onnx models can be benchmarked for MPU\")\n",
        "            fps = 0\n",
        "            return fps\n",
        "\n",
        "        engine, nbCores = get_mpu_options(board_name)\n",
        "        stmai_params = MpuParameters(model=model_name,\n",
        "                                     nbCores=nbCores,\n",
        "                                     engine=engine)\n",
        "\n",
        "    elif board_name == \"STM32N6570-DK\":\n",
        "        # target board in mcu, prepare stm32ai parameters\n",
        "        stmai_params = CliParameters(model=model_name,\n",
        "                                     target='stm32n6',\n",
        "                                     stNeuralArt='default',\n",
        "                                     atonnOptions=AtonParameters(enable_epoch_controller=True),\n",
        "                                     fromModel=from_model)\n",
        "    else:\n",
        "        # target board in mcu, prepare stm32ai parameters\n",
        "        stmai_params = CliParameters(model=model_name,\n",
        "                                     optimization=optimization,\n",
        "                                     fromModel=from_model)\n",
        "    # running the benchmarking with prepared params\n",
        "    try:\n",
        "        result = stmai.benchmark(stmai_params, board_name)\n",
        "        fps = analyze_inference_time(report=result,\n",
        "                                     target_mpu=\"mp\" in board_name.lower())\n",
        "\n",
        "        # Save the result in outputs folder\n",
        "        with open(f'./outputs/{model_name}_{board_name}.txt', 'w') as file_benchmark:\n",
        "            file_benchmark.write(f'{result}')\n",
        "        return fps\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"Benchmarking failed on board {board_name}: {e}\")\n",
        "        fps = 0\n",
        "        return fps\n",
        "\n",
        "def analyze_inference_time(report: object = None,\n",
        "                           target_mpu: bool = False) -> float:\n",
        "    \"\"\"\n",
        "    Analyzes the inference time of a model, prints the report and returns the FPS.\n",
        "    Args:\n",
        "        report: A report object containing information about the model.\n",
        "        target_mpu: a boolean (True: if target is MPU, False: otherwise)\n",
        "\n",
        "    Returns:\n",
        "        The frames per second (FPS) of the model.\n",
        "    \"\"\"\n",
        "\n",
        "    inference_time: float = report.duration_ms\n",
        "    fps: float = 1000.0/inference_time\n",
        "    if not target_mpu:\n",
        "        # in mpu benchmark result report we do not have cycles\n",
        "        cycles: int = report.cycles\n",
        "        print(\"[INFO] : Number of cycles : {} \".format(cycles))\n",
        "    print(\"[INFO] : Inference Time : {0:.1f} (ms)\".format(inference_time))\n",
        "    print(\"[INFO] : Inference/s : {0:.1f}\".format(fps))\n",
        "    return fps\n",
        "\n",
        "\n",
        "# UI widgets\n",
        "\n",
        "# STM32MCU series for analyze command\n",
        "series_analyze_name: list[str] = [\n",
        "    \"stm32f4\", \"stm32f7\", \"stm32h7\", \"stm32l4\", \"stm32g4\",\n",
        "    \"stm32u5\", \"stm32l5\", \"stm32wl\", \"stm32h5\", \"stm32n6\"\n",
        "]\n",
        "series_analyze_dropdown: widgets.Dropdown = widgets.Dropdown(\n",
        "    options=series_analyze_name,\n",
        "    value=series_analyze_name[0],\n",
        "    description='Series:',\n",
        "    disabled=False\n",
        ")\n",
        "\n",
        "# STM32MCU series for code generation target\n",
        "series_name: list[str] = [\n",
        "    \"STM32H7\", \"STM32F7\", \"STM32F4\", \"STM32L4\", \"STM32G4\",\n",
        "    \"STM32F3\", \"STM32U5\", \"STM32L5\", \"STM32F0\", \"STM32L0\",\n",
        "    \"STM32G0\", \"STM32C0\", \"STM32WL\", \"STM32H5\", \"STM32N6\"\n",
        "]\n",
        "series_dropdown: widgets.Dropdown = widgets.Dropdown(\n",
        "    options=series_name,\n",
        "    value=series_name[0],\n",
        "    description='Series:',\n",
        "    disabled=False\n",
        ")\n",
        "\n",
        "# options for the IDE while code generation\n",
        "IDE_name: list[str] = [\"gcc\", \"iar\", \"keil\"]\n",
        "ide_dropdown: widgets.Dropdown = widgets.Dropdown(\n",
        "    options=IDE_name,\n",
        "    value=IDE_name[0],\n",
        "    description='IDE:',\n",
        "    disabled=False\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PFXdwWJHBin-"
      },
      "source": [
        "## Login to ST Edge AI Developer Cloud\n",
        "Set environment variables with your credentials to access ST Edge AI Developer Cloud.\n",
        "\n",
        "If you don't have an account yet, go to https://stedgeai-dc.st.com/home and click on \"Sign in\" to create an account.\n",
        "\n",
        "Then set the environment variables below with your credentials.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "u-TXBTDpB6WT"
      },
      "outputs": [],
      "source": [
        "import getpass\n",
        "import os\n",
        "\n",
        "# Set environment variables with your credentials to access\n",
        "# ST Edge AI Developer Cloud services\n",
        "# Fill in the username with your login email address\n",
        "username = 'xxx@xxx.com'\n",
        "os.environ['stmai_username'] = username\n",
        "\n",
        "print('Enter your password')\n",
        "password = getpass.getpass()\n",
        "os.environ['stmai_password'] = password"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xdHVOUnnNj6u"
      },
      "outputs": [],
      "source": [
        "# Log in to the ST Edge AI Developer Cloud using STM32Cube.AI version 10.0.0\n",
        "try:\n",
        "    stmai = Stm32Ai(CloudBackend(str(username), str(password), \"10.0.0\"))\n",
        "    print(\"Successfully connected!\")\n",
        "except Exception as e:\n",
        "    print(f\"Error: please verify your credentials ({e})\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fAYDFtYIej85"
      },
      "source": [
        "## Select a pre-trained model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1oD4ouqvbBv-"
      },
      "source": [
        "### Optional: download models from STM32 model zoo\n",
        "\n",
        "Select a model from an extract of those available in the STM32 model zoo and download it.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "irVcpkEjm0bV"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "!curl -o models_list.json https://stedgeai-dc.st.com/api/file/zoo/models\n",
        "\n",
        "with open('models_list.json') as model_file:\n",
        "  st_models = json.loads(model_file.read())\n",
        "\n",
        "model_server_paths = [d['server_path'] for d in st_models]\n",
        "model_server_readme = [d['readme'] for d in st_models]\n",
        "model_zoo_list = [s.split('/')[1] for s in model_server_paths]\n",
        "model_zoo_display = [(string, index + 1) for index, string in enumerate(model_zoo_list)]\n",
        "\n",
        "modelzoo_dropdown = widgets.Dropdown(\n",
        "    options=model_zoo_display,\n",
        "    value=1,\n",
        "    description='Models: ')\n",
        "\n",
        "print('\\nSTM32 model zoo')\n",
        "display(modelzoo_dropdown)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Bk2LUCuhvkDT"
      },
      "outputs": [],
      "source": [
        "model_name = model_zoo_list[modelzoo_dropdown.value-1]\n",
        "print(f'{model_name} details:\\n',model_server_readme[modelzoo_dropdown.value-1])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "N8H8vjQZERY6"
      },
      "outputs": [],
      "source": [
        "# Download the selected model\n",
        "model_server_path = model_server_paths[modelzoo_dropdown.value-1]\n",
        "model_link = f'https://stedgeai-dc.st.com/api/file/zoo/models/{model_server_path}'\n",
        "# stmai.download_model(model_server_path)\n",
        "model_path = f'./models/{model_name}'\n",
        "!curl -o $model_path $model_link\n",
        "print(os.listdir('./models'))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "p2jzyJHgMNjL"
      },
      "source": [
        "## Select a model from the models directory\n",
        "\n",
        "You can upload your own model into the \"models\" folder.\n",
        "Then select a model among those stored in the \"models\" folder."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "LfdZ71AIMMom"
      },
      "outputs": [],
      "source": [
        "# Get the models available locally\n",
        "model_list = []\n",
        "for entry in os.listdir('./models'):\n",
        "  if os.path.isfile(os.path.join('./models', entry)):\n",
        "    model_list.append(entry)\n",
        "model_sel_dropdown = widgets.Dropdown(\n",
        "    options=model_list,\n",
        "    value=model_list[0],\n",
        "    description='Model:',\n",
        "    disabled=False\n",
        ")\n",
        "display(model_sel_dropdown)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nEJrtMBLG235"
      },
      "source": [
        "## Upload the model on ST Edge AI Developer Cloud"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9s0mPKWEGpng"
      },
      "outputs": [],
      "source": [
        "model_name = model_sel_dropdown.value\n",
        "model_path = f'./models/{model_name}'\n",
        "from_model = model_name if model_name in model_zoo_list else 'user'\n",
        "try:\n",
        "    stmai.upload_model(model_path)\n",
        "    print(f'Model {model_name} is uploaded !')\n",
        "except Exception as e:\n",
        "    print(\"ERROR: \", e)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mELwD5LlM-Ap"
      },
      "source": [
        "## Analyze your model memory footprints for STM32 MCU targets\n",
        "When analyzing the footprints of the model for STM32 MCU targets, the following parameters can be configured for the stmai.analyze call\n",
        "(CliParameters, the main parameters of ST Edge AI Core):\n",
        "\n",
        "STM32 MCU targets without NPU:\n",
        "\n",
        "| Parameter | Description |\n",
        "| --- | --- |\n",
        "| model | Model name corresponding to the file name uploaded. This parameter is __required__. |\n",
        "| optimization | Optimization setting: \"balanced\", \"time\" or \"ram\". This parameter is __optional__ and ignored for devices with NPU. |\n",
        "| fromModel | To identify the origin model when coming from ST model zoo. This parameter is __optional__. Default value is \"user\". |\n",
        "\n",
        "\n",
        "STM32N6 with Neural Art NPU:\n",
        "\n",
        "| Parameter | Description |\n",
        "| --- | --- |\n",
        "| model | Model name corresponding to the file name uploaded. This parameter is __required__. |\n",
        "| target |  \"stm32n6\". This parameter is __required__. |\n",
        "| stNeuralArt | Global NPU option: \"allmems--auto\", \"allmems--O3\", or \"default\". This parameter is __required__. |\n",
        "| atonnOptions | Neural Art options like enable_epoch_controller. This parameter is __optional__. |\n",
        "| mpool | Memory pool (file name .mpool). This parameter is __optional__. |\n",
        "| fromModel | To identify the origin model when coming from ST model zoo. This parameter is __optional__. Default value is \"user\". |\n"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "display(series_analyze_dropdown)"
      ],
      "metadata": {
        "id": "vpLLI4AqvvXq"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Lxh8G0beNHOD"
      },
      "outputs": [],
      "source": [
        "# Analyze RAM/Flash model memory footprints after optimization by STM32Cube.AI\n",
        "series = series_analyze_dropdown.value\n",
        "print(f'[INFO] : Series {series}')\n",
        "\n",
        "try:\n",
        "    if series != \"stm32n6\":\n",
        "        result = stmai.analyze(CliParameters(model=model_path,\n",
        "                                             optimization=\"balanced\",\n",
        "                                             target=series,\n",
        "                                             fromModel=from_model))\n",
        "    else:\n",
        "        result = stmai.analyze(CliParameters(model=model_path,\n",
        "                                             target=\"stm32n6\",\n",
        "                                             stNeuralArt=\"default\",\n",
        "                                             atonnOptions=AtonParameters(enable_epoch_controller=True),\n",
        "                                             fromModel=from_model))\n",
        "\n",
        "    # analyze and print the summary of footprint report\n",
        "    analyze_footprints(series, report=result)\n",
        "\n",
        "    # Save the result in outputs folder\n",
        "    with open(f'./outputs/{model_name}_analyze.txt', 'w') as file_analyze:\n",
        "        file_analyze.write(f'{result}')\n",
        "\n",
        "except Exception as e:\n",
        "    print(\"Error: \", e)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CunlpaknNHnU"
      },
      "source": [
        "## Benchmark your model on a STM32 target\n",
        "\n",
        "From ST Edge AI Developer Cloud version 9.0.0 onwards, models can be benchmarked on STM32 MCU as well as STM32 MPU target boards.\n",
        "\n",
        "The tables below list the main parameters and their descriptions for benchmarking (CliParameters options of ST Edge AI Core).\n",
        "\n",
        "STM32 MCU targets without NPU:\n",
        "\n",
        "| Parameter | Description |\n",
        "| --- | --- |\n",
        "| model | Model name corresponding to the file name uploaded. This parameter is __required__. |\n",
        "| optimization | Optimization setting: \"balanced\", \"time\" or \"ram\". This parameter is __optional__ and ignored for devices with NPU. |\n",
        "| fromModel | To identify the origin model when coming from ST model zoo. This parameter is __optional__. Default value is \"user\". |\n",
        "\n",
        "\n",
        "STM32N6 with Neural Art NPU:\n",
        "\n",
        "| Parameter | Description |\n",
        "| --- | --- |\n",
        "| model | Model name corresponding to the file name uploaded. This parameter is __required__. |\n",
        "| target |  \"stm32n6\". This parameter is __required__. |\n",
        "| stNeuralArt | Global NPU option: \"allmems--auto\", \"allmems--O3\", or \"default\". This parameter is __required__. |\n",
        "| atonnOptions | Neural Art options like enable_epoch_controller. This parameter is __optional__. |\n",
        "| mpool | Memory pool (file name .mpool). This parameter is __optional__. |\n",
        "| fromModel | To identify the origin model when coming from ST model zoo. This parameter is __optional__. Default value is \"user\". |\n",
        "\n",
        "\n",
        "STM32 MPU targets (to take advantage of NPU acceleration, use per-tensor quantization):\n",
        "\n",
        "| Parameter | Description |\n",
        "| --- | --- |\n",
        "| model | Model name corresponding to the file name uploaded. This parameter is __required__. |\n",
        "| nbCores | Number of CPU cores used for benchmarking. This parameter is __set by the code__ depending on the type of MPU. The value should be an integer, 1 or 2. |\n",
        "| engine | Choice of the hardware engine used on the board for benchmarking. This parameter is __set by the code__ depending on the target MPU. For STM32MP1x boards it is \"MpuEngine.CPU\" and for STM32MP2x it is \"MpuEngine.HW_ACCELERATOR\". |\n",
        "\n",
        "* Note that in the code section below, the board_name to benchmark the model on should be a string."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "n_QwpLAXmpP-"
      },
      "source": [
        "## Option 1: Benchmark on a selected board"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ZbbVAfB3u4V1"
      },
      "outputs": [],
      "source": [
        "# Get the boards available on ST Edge AI Developer Cloud\n",
        "boards = stmai.get_benchmark_boards()\n",
        "board_names = [board.name for board in boards]\n",
        "print(\"Available boards:\", board_names)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "CBwFyiap_wvL"
      },
      "outputs": [],
      "source": [
        "# Select a board among the available boards\n",
        "board_dropdown = widgets.Dropdown(\n",
        "    options=board_names,\n",
        "    value=board_names[0],\n",
        "    description='Board:',\n",
        "    disabled=False)\n",
        "\n",
        "display(board_dropdown)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "676_v09ZD1t_"
      },
      "outputs": [],
      "source": [
        "board_name = board_dropdown.value\n",
        "print(model_name, board_name)\n",
        "fps = benchmark_model(stmai=stmai,\n",
        "                      model_path=model_path,\n",
        "                      model_name=model_name,\n",
        "                      optimization=\"balanced\",\n",
        "                      from_model=from_model,\n",
        "                      board_name=board_name)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QTqciRsqmarQ"
      },
      "source": [
        "## Option 2: Benchmark on a set of STM32 boards"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "h0_F6-PAu70v"
      },
      "outputs": [],
      "source": [
        "# Benchmark the models on ST Edge AI Developer Cloud boards\n",
        "board_names = ['STM32L4R9I-DISCO', 'B-U585I-IOT02A', 'STM32H573I-DK', 'STM32H7B3I-DK', 'STM32H747I-DISCO', 'STM32H735G-DK', 'STM32N6570-DK']\n",
        "print(model_name)\n",
        "fps_array = []\n",
        "# loop through all boards\n",
        "for board_name in board_names:\n",
        "    fps_array.append(benchmark_model(stmai=stmai,\n",
        "                                     model_path=model_path,\n",
        "                                     model_name=model_name,\n",
        "                                     optimization=\"balanced\",\n",
        "                                     from_model=from_model,\n",
        "                                     board_name=board_name))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "K7jWntn1PI6u"
      },
      "outputs": [],
      "source": [
        "# Display the Frame per Second benchmark\n",
        "sorted_fps = sorted(fps_array, reverse=True)\n",
        "sorted_boards = [board_names[fps_array.index(i)] for i in sorted_fps]\n",
        "\n",
        "fig = plt.figure(1, figsize=(15, 6), tight_layout=True)\n",
        "colors = sns.color_palette()\n",
        "\n",
        "plt.bar(sorted_boards, sorted_fps, color=colors[:len(boards)], width=0.7)\n",
        "plt.ylabel('inf/s', fontsize=15)\n",
        "plt.yticks(fontsize=12)\n",
        "plt.xticks(rotation=45, ha='right')\n",
        "plt.xlabel('Board Name', fontsize=15)\n",
        "plt.title('STM32 inference/s benchmark')\n",
        "plt.show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GXClF99_NLOt"
      },
      "source": [
        "## Generate the optimized C code of your model for STM32 MCU targets\n",
        "\n",
        "To deploy the model on an STM32 MCU target, you must generate the C code of the optimized model. The table below describes the parameters of the stmai.generate call (CliParameters of ST Edge AI Core):\n",
        "\n",
        "| Parameter | Description |\n",
        "| --- | --- |\n",
        "| model | Model name, corresponding to the name of the uploaded file. This parameter is required. |\n",
        "| optimization | Optimization setting: \"balanced\", \"time\" or \"ram\". This parameter is required. |\n",
        "| includeLibraryForSerie | Include the runtime library for the given STM32 series. This parameter is optional. |\n",
        "| includeLibraryForIde | Include the runtime library for the given IDE/toolchain. This parameter is optional. |\n",
        "| fromModel | Identifies the original model when it comes from the ST model zoo. This parameter is optional. |\n",
        "\n",
        "\n",
        "\n",
        "### NOTE\n",
        "\n",
        "This step is not needed when deploying on an MPU: the .tflite model can be deployed directly on STM32 MPUs. For the STM32MP2x series, an optimized version of the model should already be available in the same location as the original model, with the same name and the \".nb\" extension.\n",
        "\n",
        "This feature is not available for the STM32N6."
      ]
    },
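    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an optional sanity check for STM32MP2x targets (a sketch based on the `.nb` naming convention described in the note above; it assumes `model_path` points to the uploaded model file), you can look for the optimized model next to the original one:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: on STM32MP2x the optimized model is expected alongside the\n",
        "# original one, with the same base name and the '.nb' extension.\n",
        "nb_path = os.path.splitext(model_path)[0] + '.nb'\n",
        "if os.path.exists(nb_path):\n",
        "    print(f'Optimized STM32MP2x model found: {nb_path}')\n",
        "else:\n",
        "    print('No .nb model found (expected only for STM32MP2x targets)')"
      ]
    },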
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ax9lsPSh5Cnp"
      },
      "outputs": [],
      "source": [
        "display(series_dropdown)\n",
        "display(ide_dropdown)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "RlkypOfb0V71"
      },
      "outputs": [],
      "source": [
        "series = series_dropdown.value\n",
        "IDE = ide_dropdown.value\n",
        "\n",
        "print(f'{model_name}\\ngenerating code for {series}')\n",
        "\n",
        "os.makedirs('./code_outputs', exist_ok=True)\n",
        "\n",
        "# Generate model .c/.h code + Lib/Inc on ST Edge AI Developer Cloud\n",
        "result = stmai.generate(CliParameters(\n",
        "    model=model_name,\n",
        "    output=\"./code_outputs\",\n",
        "    optimization=\"balanced\",\n",
        "    includeLibraryForSerie=CliLibrarySerie(series),\n",
        "    includeLibraryForIde=CliLibraryIde(IDE),\n",
        "    fromModel=from_model\n",
        "))\n",
        "\n",
        "print(os.listdir(\"./code_outputs\"))\n",
        "\n",
        "# Print the first 20 lines of the generation report\n",
        "with open('./code_outputs/network_generate_report.txt', 'r') as f:\n",
        "    for line in f.readlines()[:20]:\n",
        "        print(line, end='')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_j762CbUyL5q"
      },
      "source": [
        "## You are ready to integrate your model in your STM32 application!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3AyUeuRgzaMD"
      },
      "source": [
        "### Delete your model from your ST Edge AI Developer Cloud space"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Oe66tncuzYg6"
      },
      "outputs": [],
      "source": [
        "stmai.delete_model(model_name)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DYmzOMFlzvO1"
      },
      "source": [
        "## If running on Colab, zip and download the generated package"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "IdEIb1YczuRg"
      },
      "outputs": [],
      "source": [
        "import shutil\n",
        "shutil.make_archive('code_outputs', 'zip', 'code_outputs')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "n1LUhw2PDbqB"
      },
      "outputs": [],
      "source": [
        "# If running on Colab, run this cell to automatically download the code_outputs.zip file; otherwise download it manually.\n",
        "from google.colab import files\n",
        "files.download('code_outputs.zip')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1hTbrsaCuQBe"
      },
      "outputs": [],
      "source": [
        "# Clean up the stm32ai_dc helper directory\n",
        "!rm -rf stm32ai_dc"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "gpuClass": "standard",
    "kernelspec": {
      "display_name": "Python 3 (ipykernel)",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.13"
    },
    "vscode": {
      "interpreter": {
        "hash": "3ca9c95fb3295dba58147778a3f6149a36aba268806f86b68ae4a365fcdcc5ff"
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}