{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "75cb6daf-ecdc-4129-8f28-ad871d3a795c",
   "metadata": {},
   "source": [
    "# Tutorial: Build & Deploy Custom (fine-tuned) LLM Models and Applications\n",
    "\n",
     "In this tutorial you will learn how to operationalize an LLM using MLRun. We will build **MLOpsPedia** - the MLOps Master Bot, a chatbot that answers all your MLOps questions. We will do so by covering the two main stages of every MLOps project:\n",
    "\n",
    "* **Automated training pipeline** - Build an automated ML pipeline for data collection, data preparation, training and evaluation.\n",
     "* **Serving graph deployment** - Build, deploy and test the newly trained LLM in a Gradio application.\n",
     "\n",
     "**MLRun** welcomes you to **LLMOps**!\n",
     "\n",
     "> Make sure you have gone over the MLRun [Quick Start Tutorial](https://docs.mlrun.org/en/stable/tutorial/01-mlrun-basics.html) to understand the basics.\n",
    "\n",
    "Run the notebook in the following order (you may skip the first step):\n",
    "1. [Test the Pretrained Model](#test-the-pretrained-model)\n",
    "2. [Automated Training Pipeline](#automated-training-pipeline)\n",
    "3. [Application Serving Pipeline](#application-serving-pipeline)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e9565c47-7720-47ca-ab0b-ac8a77286f90",
   "metadata": {},
   "source": [
    "But first, please install the following requirements:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cdf6b605-348d-4fd7-958d-d484446b5964",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install -r requirements.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "906e38b6-168a-47cb-9320-2acdd16b0b37",
   "metadata": {},
   "source": [
    "___\n",
    "<a id=\"test-the-pretrained-model\"></a>\n",
    "## 1. Test the Pretrained Model\n",
    "\n",
    "MLOpsPedia will be based on [falcon-7b](https://huggingface.co/tiiuae/falcon-7b). Before fine-tuning it, we want to see how it performs on some MLOps questions."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8d1a9b26-9916-47d8-9c89-1e0e7380bf57",
   "metadata": {},
   "source": [
    "### 1.1. Load `falcon-7b` from HuggingFace's Transformers Hub\n",
    "\n",
     "`falcon-7b` is fully supported by Hugging Face and has its own model and tokenizer classes. We will use them in a Hugging Face pipeline and test them out:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "2c763708-f0e5-4a53-b788-64e4c2634973",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "80910a4c2be34f7ab35b193f37c8e0bb",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import os\n",
    "from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig, pipeline\n",
    "from transformers import logging\n",
    "logging.set_verbosity(\"CRITICAL\")\n",
    "\n",
    "model_name = \"tiiuae/falcon-7b\"\n",
    "tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
    "generation_config = GenerationConfig.from_pretrained(model_name)\n",
    "generator = pipeline(\"text-generation\", model=model_name, tokenizer=tokenizer, trust_remote_code=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "65fa310e-cf4f-4f5b-9f96-aacb9ab3a394",
   "metadata": {},
   "source": [
    "### 1.2. Test it on some MLOps Questions\n",
    "\n",
     "For convenience, we prepared `prompt_to_response`, which runs a prompt through the pipeline we initialized and returns the response. We'll use it for a couple of questions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "76ef3714-a0d7-4818-be55-fb17f5c3cf21",
   "metadata": {},
   "outputs": [],
   "source": [
    "def prompt_to_response(prompt: str) -> str:\n",
    "    return generator(prompt, \n",
    "                     generation_config=generation_config,\n",
    "                     max_length=50, pad_token_id=tokenizer.eos_token_id)[0][\"generated_text\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "e1e98ec1-859e-4306-ac0e-be3b714ef5ef",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "What is a serving pipeline?\n",
      "A serving pipeline is a set of tools that help you to create, manage, and deliver your content.\n",
      "What is a serving pipeline?\n",
      "A serving pipeline is a set of tools that help you to create,\n"
     ]
    }
   ],
   "source": [
    "print(prompt_to_response(prompt=\"What is a serving pipeline?\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "186841f5-c681-40bf-8467-68801cfca461",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "What is MLops?\n",
      "MLops is a set of practices that help organizations to build, deploy, and manage machine learning models at scale.\n",
      "MLops is a set of practices that help organizations to build, deploy, and manage machine learning models\n"
     ]
    }
   ],
   "source": [
    "print(prompt_to_response(prompt=\"What is MLops?\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1203c3bc-b3a9-4b30-a7b3-c119a74e0e2d",
   "metadata": {},
   "source": [
    "As expected, `falcon-7b` is not that sharp on MLOps questions, but that's about to change..."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26437209-fa75-496f-8d12-19751ba88530",
   "metadata": {},
   "source": [
    "___\n",
    "<a id=\"automated-training-pipeline\"></a>\n",
    "## 2. Automated Training Pipeline\n",
    "\n",
     "To get a `falcon-7b` that knows MLOps, we will fine-tune it on [**Iguazio**'s MLOps blogs](https://www.iguazio.com/blog/). To do so, we will create a fully automated pipeline with the following steps:\n",
     "\n",
     "1. **Collect Data** - Collect the text from the given HTML URLs into `.txt` files, i.e., download all the MLOps blogs as text files.\n",
     "2. **Preprocess Data** - Join the `.txt` files, reformatting the text into our prompt template: \"Subject - Content\". Every header (`<h>` tags) becomes a prompt's *subject*, and the text (`<p>` tags) under it becomes its *content*.\n",
    "3. **Train** - Fine-tune the LLM on the data. We'll run the training on **OpenMPI**, and we will use **DeepSpeed** for distributing the model and data between multiple workers, splitting the work between nodes and GPUs. **MLRun will auto-log the entire training process**.\n",
    "4. **Evaluate** - Evaluate our model using the *Perplexity* metric.\n",
    "\n",
    "<img src=\"./images/training-pipeline.png\" style=\"width: 400px\"/>"
   ]
  },
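  {
   "cell_type": "markdown",
   "id": "0f3a7b21-5c1d-4e92-9a6f-2b8c4d5e6f70",
   "metadata": {},
   "source": [
    "As an illustration of step 2 (a minimal sketch, not the actual `prepare_dataset` code), pairing each header with the text under it into the \"Subject - Content\" template could look like this:\n",
    "\n",
    "```python\n",
    "# Minimal sketch only - not the actual prepare_dataset implementation.\n",
    "# Pair each header with the paragraph text under it, producing the\n",
    "# \"Subject - Content\" prompt template described above.\n",
    "def to_prompts(sections):\n",
    "    # sections: list of (header_text, paragraph_text) pairs\n",
    "    return [f\"{header} - {content}\" for header, content in sections]\n",
    "\n",
    "to_prompts([(\"What is MLOps?\", \"MLOps applies DevOps practices to ML systems.\")])\n",
    "```"
   ]
  },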
  {
   "cell_type": "markdown",
   "id": "6e22570a-3c0e-4278-8385-e0d321bb9067",
   "metadata": {},
   "source": [
    "### 2.1. Define MLRun project and set all the MLRun functions\n",
    "\n",
     "Create or load an MLRun project that holds all your functions and configuration (see [project_setup.py](./src/project_setup.py)).\n",
     "\n",
     "The project sets its functions from the following files, which make up the pipeline's workflow:\n",
     "* [data_collection.py](./src/data_collection.py) - to create an MLRun function with the `collect_html_to_text_files` handler.\n",
     "* [data_preprocess.py](./src/data_preprocess.py) - to create an MLRun function with the `prepare_dataset` handler.\n",
     "* training - to create an MLRun function with the `train` and `evaluate` handlers (imported from the Functions Hub; see below).\n",
     "* [serving.py](./src/serving.py) - to create an MLRun function with all the serving graph steps (covered in section 3).\n",
     "\n",
     "In addition, the training pipeline is registered in the project as well; see [training_workflow.py](./src/training_workflow.py).\n",
     "\n",
     "The training and evaluation function we will use is [hugging_face_classifier_trainer](https://www.mlrun.org/hub/) from [**MLRun's Functions Hub**](https://docs.mlrun.org/en/stable/runtimes/load-from-hub.html) - a collection of ready-to-import functions for a variety of use cases. We import the function during the project setup."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "d771e4ba-43a4-4bcf-8ae0-d35c0f80d259",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "git://github.com/mlrun/demo-llm-tuning.git#main\n"
     ]
    }
   ],
   "source": [
    "import mlrun\n",
    "\n",
    "project = mlrun.load_project(\n",
    "    name=\"mlopspedia-bot\",\n",
    "    context=\"./\",\n",
    "    user_project=True,\n",
    "    parameters={\n",
    "        \"source\": \"git://github.com/mlrun/demo-llm-tuning.git#main\",\n",
    "        \"default_image\": \"yonishelach/mlrun-llm\",\n",
    "    })"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a66deeb6-7472-4642-8a11-996422cd3091",
   "metadata": {},
   "source": [
    "### 2.2. Run full LLM life-cycle workflow\n",
    "\n",
     "Run the training pipeline using `project.run(workflow name, ...)`. The pipeline steps' inputs and outputs are as follows:\n",
     "\n",
     "1. URLs file -> `collect_html_to_text_files` -> zip of all the pages' text files.\n",
     "2. zip of all the pages' text files -> `prepare_dataset` -> training set, evaluation set.\n",
     "3. training set -> `train` -> model, metrics, plots.\n",
     "4. evaluation set, model -> `evaluate` -> metrics, plots."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "b1ea5ec6-cb78-44db-aac7-97e52ce591db",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>Pipeline running (id=2012a80c-500b-43fb-ad03-abffd6ee2a6b), <a href=\"https://dashboard.default-tenant.app.llm2.iguazio-cd0.com/mlprojects/mlopspedia-bot-yonis/jobs/monitor-workflows/workflow/2012a80c-500b-43fb-ad03-abffd6ee2a6b\" target=\"_blank\"><b>click here</b></a> to view the details in MLRun UI</div>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "image/svg+xml": [
       "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n",
       "<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n",
       " \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n",
       "<!-- Generated by graphviz version 2.43.0 (0)\n",
       " -->\n",
       "<!-- Title: kfp Pages: 1 -->\n",
       "<svg width=\"186pt\" height=\"260pt\"\n",
       " viewBox=\"0.00 0.00 186.08 260.00\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n",
       "<g id=\"graph0\" class=\"graph\" transform=\"scale(1 1) rotate(0) translate(4 256)\">\n",
       "<title>kfp</title>\n",
       "<polygon fill=\"white\" stroke=\"transparent\" points=\"-4,4 -4,-256 182.08,-256 182.08,4 -4,4\"/>\n",
       "<!-- mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;1439426288 -->\n",
       "<g id=\"node1\" class=\"node\">\n",
       "<title>mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;1439426288</title>\n",
       "<ellipse fill=\"green\" stroke=\"black\" cx=\"89.04\" cy=\"-18\" rx=\"50.09\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"89.04\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">evaluate</text>\n",
       "</g>\n",
       "<!-- mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;2897139595 -->\n",
       "<g id=\"node2\" class=\"node\">\n",
       "<title>mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;2897139595</title>\n",
       "<ellipse fill=\"green\" stroke=\"black\" cx=\"89.04\" cy=\"-162\" rx=\"89.08\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"89.04\" y=\"-158.3\" font-family=\"Times,serif\" font-size=\"14.00\">data&#45;preparation</text>\n",
       "</g>\n",
       "<!-- mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;2897139595&#45;&gt;mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;1439426288 -->\n",
       "<g id=\"edge2\" class=\"edge\">\n",
       "<title>mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;2897139595&#45;&gt;mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;1439426288</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M84.58,-143.95C82.11,-133.63 79.29,-120.15 78.04,-108 76.41,-92.08 76.41,-87.92 78.04,-72 78.92,-63.46 80.57,-54.26 82.34,-45.96\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"85.79,-46.57 84.58,-36.05 78.96,-45.03 85.79,-46.57\"/>\n",
       "</g>\n",
       "<!-- mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;930414823 -->\n",
       "<g id=\"node3\" class=\"node\">\n",
       "<title>mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;930414823</title>\n",
       "<ellipse fill=\"green\" stroke=\"black\" cx=\"120.04\" cy=\"-90\" rx=\"33.29\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"120.04\" y=\"-86.3\" font-family=\"Times,serif\" font-size=\"14.00\">train</text>\n",
       "</g>\n",
       "<!-- mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;2897139595&#45;&gt;mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;930414823 -->\n",
       "<g id=\"edge1\" class=\"edge\">\n",
       "<title>mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;2897139595&#45;&gt;mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;930414823</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M96.55,-144.05C100.13,-135.97 104.49,-126.12 108.48,-117.11\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"111.76,-118.35 112.61,-107.79 105.36,-115.52 111.76,-118.35\"/>\n",
       "</g>\n",
       "<!-- mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;930414823&#45;&gt;mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;1439426288 -->\n",
       "<g id=\"edge4\" class=\"edge\">\n",
       "<title>mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;930414823&#45;&gt;mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;1439426288</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M112.7,-72.41C109.12,-64.34 104.73,-54.43 100.71,-45.35\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"103.8,-43.68 96.55,-35.96 97.4,-46.52 103.8,-43.68\"/>\n",
       "</g>\n",
       "<!-- mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;915534038 -->\n",
       "<g id=\"node4\" class=\"node\">\n",
       "<title>mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;915534038</title>\n",
       "<ellipse fill=\"green\" stroke=\"black\" cx=\"89.04\" cy=\"-234\" rx=\"78.79\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"89.04\" y=\"-230.3\" font-family=\"Times,serif\" font-size=\"14.00\">data&#45;collection</text>\n",
       "</g>\n",
       "<!-- mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;915534038&#45;&gt;mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;2897139595 -->\n",
       "<g id=\"edge3\" class=\"edge\">\n",
       "<title>mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;915534038&#45;&gt;mlops&#45;bot&#45;master&#45;pipeline&#45;zsk5k&#45;2897139595</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M89.04,-215.7C89.04,-207.98 89.04,-198.71 89.04,-190.11\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"92.54,-190.1 89.04,-180.1 85.54,-190.1 92.54,-190.1\"/>\n",
       "</g>\n",
       "</g>\n",
       "</svg>\n"
      ],
      "text/plain": [
       "<graphviz.graphs.Digraph at 0x7f1ba746ca00>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<h2>Run Results</h2><h3>[info] Workflow 2012a80c-500b-43fb-ad03-abffd6ee2a6b finished, state=Succeeded</h3><br>click the hyper links below to see detailed results<br><table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th>uid</th>\n",
       "      <th>start</th>\n",
       "      <th>state</th>\n",
       "      <th>name</th>\n",
       "      <th>parameters</th>\n",
       "      <th>results</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td><div title=\"fd71b79950314761876121289eef349d\"><a href=\"https://dashboard.default-tenant.app.llm2.iguazio-cd0.com/mlprojects/mlopspedia-bot-yonis/jobs/monitor/fd71b79950314761876121289eef349d/overview\" target=\"_blank\" >...9eef349d</a></div></td>\n",
       "      <td>Jul 12 05:02:28</td>\n",
       "      <td>completed</td>\n",
       "      <td>evaluate</td>\n",
       "      <td><div class=\"dictlist\">model_path=store://artifacts/mlopspedia-bot-yonis/falcon-7b-mlrun:2012a80c-500b-43fb-ad03-abffd6ee2a6b</div><div class=\"dictlist\">model_name=tiiuae/falcon-7b</div><div class=\"dictlist\">tokenizer_name=tiiuae/falcon-7b</div></td>\n",
       "      <td><div class=\"dictlist\">perplexity=8.5703125</div></td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td><div title=\"c14ad50afff5456d9d67a0a280920e39\"><a href=\"https://dashboard.default-tenant.app.llm2.iguazio-cd0.com/mlprojects/mlopspedia-bot-yonis/jobs/monitor/c14ad50afff5456d9d67a0a280920e39/overview\" target=\"_blank\" >...80920e39</a></div></td>\n",
       "      <td>Jul 12 03:56:11</td>\n",
       "      <td>completed</td>\n",
       "      <td>train</td>\n",
       "      <td><div class=\"dictlist\">model_name=falcon-7b-mlrun</div><div class=\"dictlist\">pretrained_tokenizer=tiiuae/falcon-7b</div><div class=\"dictlist\">pretrained_model=tiiuae/falcon-7b</div><div class=\"dictlist\">model_class=transformers.AutoModelForCausalLM</div><div class=\"dictlist\">tokenizer_class=transformers.AutoTokenizer</div><div class=\"dictlist\">TRAIN_num_train_epochs=5</div><div class=\"dictlist\">use_deepspeed=</div></td>\n",
       "      <td><div class=\"dictlist\">loss=2.3346</div><div class=\"dictlist\">learning_rate=0.0</div><div class=\"dictlist\">train_runtime=3898.6792</div><div class=\"dictlist\">train_samples_per_second=0.737</div><div class=\"dictlist\">train_steps_per_second=0.046</div><div class=\"dictlist\">total_flos=2.9304526258176e+16</div></td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td><div title=\"3d61bcbef459400c871bb9010ffbf5ab\"><a href=\"https://dashboard.default-tenant.app.llm2.iguazio-cd0.com/mlprojects/mlopspedia-bot-yonis/jobs/monitor/3d61bcbef459400c871bb9010ffbf5ab/overview\" target=\"_blank\" >...0ffbf5ab</a></div></td>\n",
       "      <td>Jul 12 03:55:46</td>\n",
       "      <td>completed</td>\n",
       "      <td>data-preparation</td>\n",
       "      <td></td>\n",
       "      <td></td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td><div title=\"57ad1dde5bd64fe391fff1137dea94d6\"><a href=\"https://dashboard.default-tenant.app.llm2.iguazio-cd0.com/mlprojects/mlopspedia-bot-yonis/jobs/monitor/57ad1dde5bd64fe391fff1137dea94d6/overview\" target=\"_blank\" >...7dea94d6</a></div></td>\n",
       "      <td>Jul 12 03:53:50</td>\n",
       "      <td>completed</td>\n",
       "      <td>data-collection</td>\n",
       "      <td><div class=\"dictlist\">urls_file=/User/demo-llm-tuning/data/html_urls.txt</div></td>\n",
       "      <td></td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "workflow_run = project.run(\n",
    "    name=\"training_workflow\",\n",
    "    arguments={\n",
    "        \"html_links\": \"/User/demo-llm-tuning/data/html_urls.txt\",\n",
    "        \"model_name\": \"falcon-7b-mlrun\",\n",
    "        \"pretrained_tokenizer\": model_name,\n",
    "        \"pretrained_model\": model_name,\n",
    "        \"epochs\": 5,\n",
    "    },\n",
    "    watch=True,\n",
    "    dirty=True,\n",
    "    timeout=60 * 120,\n",
    ")"
   ]
  },
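  {
   "cell_type": "markdown",
   "id": "7c2d9e48-1b3f-4a07-8e5d-6f1a2b3c4d5e",
   "metadata": {},
   "source": [
    "The `evaluate` step reported `perplexity=8.5703125`. As a refresher, perplexity is the exponent of the mean cross-entropy loss over the evaluation set (lower is better). A minimal sketch of the computation (not the hub function's implementation):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def perplexity(token_losses):\n",
    "    # token_losses: per-token cross-entropy (negative log-likelihood) values\n",
    "    return math.exp(sum(token_losses) / len(token_losses))\n",
    "\n",
    "perplexity([2.1, 2.2, 2.0])  # exp of the mean loss\n",
    "```"
   ]
  },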
  {
   "cell_type": "markdown",
   "id": "a21444a9-b66b-4539-a1ea-13f745114fbb",
   "metadata": {},
   "source": [
    "#### 2.2.1. Distributed Training\n",
    "\n",
     "In the following image you can see the 16 workers that trained the model as part of an **MPIJob**, with **DeepSpeed** distributing the work.\n",
    "\n",
    "<img src=\"./images/16-workers-training.png\" style=\"width: 1000px\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b92e42a-844a-40c4-a09f-dbb4bfd1e23c",
   "metadata": {},
   "source": [
    "#### 2.2.2. UI Presentation\n",
    "\n",
     "Here you can see how the workflow looks in the UI: the entire pipeline, and the loss plot produced by the highlighted training step.\n",
    "\n",
    "<img src=\"./images/workflow-train.png\" style=\"width: 1000px\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11910dbc-8c31-4efd-b092-cdb1a649c4f2",
   "metadata": {},
   "source": [
    "___\n",
    "<a id=\"application-serving-pipeline\"></a>\n",
    "## 3. Application Serving Pipeline\n",
    "\n",
    "In this last part we'll serve our LLM using [MLRun Serving](https://docs.mlrun.org/en/stable/serving/serving-graph.html).\n",
    "\n",
    "MLRun serving can produce managed ML application pipelines using real-time auto-scaling [Nuclio](https://nuclio.io/) serverless functions. The application pipeline includes all the steps from accepting events or data, preparing the required model features, inferring results using one or more models, and driving actions.\n",
    "\n",
     "We'll build the following serving graph for the chat application:\n",
     "\n",
     "* **Preprocess** (`preprocess`) - To fit the user prompt into our prompt structure (\"Subject - Content\").\n",
     "* **LLM** (`LLMModelServer`) - To serve our trained model and perform inference to generate answers.\n",
     "* **Postprocess** (`postprocess`) - To check whether our model generated the text with confidence.\n",
     "* **Toxicity Filter** (`ToxicityClassifierModelServer`) - To serve a Hugging Face Evaluate package model and perform inference to catch toxic prompts and responses.\n",
    "\n",
    "<img src=\"./images/serving-graph.png\" style=\"width: 800px\"/>"
   ]
  },
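  {
   "cell_type": "markdown",
   "id": "3e8b1c6a-9d2f-4b70-a1c5-8e7f6a5b4c3d",
   "metadata": {},
   "source": [
    "To make the graph concrete, here is an illustrative sketch of what a step like `preprocess` might do. The real steps are defined in [serving.py](./src/serving.py); the template suffix used here is an assumption for illustration only:\n",
    "\n",
    "```python\n",
    "# Illustrative sketch - the real step is defined in src/serving.py.\n",
    "def preprocess(event: dict) -> dict:\n",
    "    # Fit the raw user prompt into the \"Subject - Content\" template the\n",
    "    # model was fine-tuned on; the model will generate the content part.\n",
    "    event[\"prompt\"] = event[\"prompt\"].strip() + \" - \"\n",
    "    return event\n",
    "\n",
    "preprocess({\"prompt\": \"What is MLRun?\"})\n",
    "```"
   ]
  },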
  {
   "cell_type": "markdown",
   "id": "9ecefcef-3a59-4d32-a046-f11e987d7df4",
   "metadata": {},
   "source": [
    "### 3.1. Build our Serving Graph\n",
    "\n",
     "We'll first get the serving function with the code from our project (it was set in section 2.1):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "442e2d73-45fd-4264-92d1-9cfea8620066",
   "metadata": {},
   "outputs": [],
   "source": [
    "serving_function = project.get_function(\"serving\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "2fc82891-9f37-4c38-b3d7-84fdeb0abb25",
   "metadata": {},
   "outputs": [],
   "source": [
    "model_args = {\"load_in_8bit\": True, \"device_map\": \"cuda:0\", \"trust_remote_code\": True}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "23cc8fcf-f9ff-4175-bdb2-25d0f1b437ef",
   "metadata": {},
   "source": [
    "Now we'll build the serving graph:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "08594367-5e87-4bf3-8598-d72a6759355b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/svg+xml": [
       "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n",
       "<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n",
       " \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n",
       "<!-- Generated by graphviz version 2.43.0 (0)\n",
       " -->\n",
       "<!-- Title: mlrun&#45;flow Pages: 1 -->\n",
       "<svg width=\"785pt\" height=\"44pt\"\n",
       " viewBox=\"0.00 0.00 785.45 44.00\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n",
       "<g id=\"graph0\" class=\"graph\" transform=\"scale(1 1) rotate(0) translate(4 40)\">\n",
       "<title>mlrun&#45;flow</title>\n",
       "<polygon fill=\"white\" stroke=\"transparent\" points=\"-4,4 -4,-40 781.45,-40 781.45,4 -4,4\"/>\n",
       "<!-- _start -->\n",
       "<g id=\"node1\" class=\"node\">\n",
       "<title>_start</title>\n",
       "<polygon fill=\"lightgrey\" stroke=\"black\" points=\"38.55,-0.05 40.7,-0.15 42.83,-0.3 44.92,-0.49 46.98,-0.74 48.99,-1.03 50.95,-1.36 52.84,-1.75 54.66,-2.18 56.4,-2.65 58.06,-3.16 59.63,-3.71 61.11,-4.31 62.49,-4.94 63.76,-5.61 64.93,-6.31 65.99,-7.04 66.93,-7.8 67.77,-8.59 68.48,-9.41 69.09,-10.25 69.58,-11.11 69.95,-11.99 70.21,-12.89 70.36,-13.8 70.4,-14.72 70.33,-15.65 70.16,-16.59 69.89,-17.53 69.53,-18.47 69.07,-19.41 68.52,-20.35 67.89,-21.28 67.18,-22.2 66.4,-23.11 65.55,-24.01 64.63,-24.89 63.65,-25.75 62.62,-26.59 61.53,-27.41 60.4,-28.2 59.23,-28.96 58.02,-29.69 56.78,-30.39 55.5,-31.06 54.2,-31.69 52.88,-32.29 51.53,-32.84 50.17,-33.35 48.79,-33.82 47.4,-34.25 46,-34.64 44.59,-34.97 43.17,-35.26 41.75,-35.51 40.32,-35.7 38.89,-35.85 37.45,-35.95 36.02,-36 34.58,-36 33.15,-35.95 31.71,-35.85 30.28,-35.7 28.85,-35.51 27.43,-35.26 26.01,-34.97 24.6,-34.64 23.2,-34.25 21.81,-33.82 20.43,-33.35 19.07,-32.84 17.72,-32.29 16.4,-31.69 15.1,-31.06 13.82,-30.39 12.58,-29.69 11.37,-28.96 10.2,-28.2 9.07,-27.41 7.98,-26.59 6.95,-25.75 5.97,-24.89 5.05,-24.01 4.2,-23.11 3.42,-22.2 2.71,-21.28 2.08,-20.35 1.53,-19.41 1.07,-18.47 0.71,-17.53 0.44,-16.59 0.27,-15.65 0.2,-14.72 0.24,-13.8 0.39,-12.89 0.65,-11.99 1.02,-11.11 1.51,-10.25 2.11,-9.41 2.83,-8.59 3.66,-7.8 4.61,-7.04 5.67,-6.31 6.84,-5.61 8.11,-4.94 9.49,-4.31 10.97,-3.71 12.54,-3.16 14.2,-2.65 15.94,-2.18 17.76,-1.75 19.65,-1.36 21.61,-1.03 23.62,-0.74 25.68,-0.49 27.77,-0.3 29.9,-0.15 32.05,-0.05 34.22,0 36.38,0 38.55,-0.05\"/>\n",
       "<text text-anchor=\"middle\" x=\"35.3\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">start</text>\n",
       "</g>\n",
       "<!-- preprocess -->\n",
       "<g id=\"node2\" class=\"node\">\n",
       "<title>preprocess</title>\n",
       "<ellipse fill=\"none\" stroke=\"black\" cx=\"168.34\" cy=\"-18\" rx=\"61.99\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"168.34\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">preprocess</text>\n",
       "</g>\n",
       "<!-- _start&#45;&gt;preprocess -->\n",
       "<g id=\"edge1\" class=\"edge\">\n",
       "<title>_start&#45;&gt;preprocess</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M70.05,-18C78.23,-18 87.29,-18 96.49,-18\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"96.5,-21.5 106.5,-18 96.5,-14.5 96.5,-21.5\"/>\n",
       "</g>\n",
       "<!-- mlopspedia -->\n",
       "<g id=\"node3\" class=\"node\">\n",
       "<title>mlopspedia</title>\n",
       "<ellipse fill=\"none\" stroke=\"black\" cx=\"329.78\" cy=\"-18\" rx=\"63.89\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"329.78\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">mlopspedia</text>\n",
       "</g>\n",
       "<!-- preprocess&#45;&gt;mlopspedia -->\n",
       "<g id=\"edge2\" class=\"edge\">\n",
       "<title>preprocess&#45;&gt;mlopspedia</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M230.53,-18C238.77,-18 247.29,-18 255.71,-18\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"255.91,-21.5 265.91,-18 255.91,-14.5 255.91,-21.5\"/>\n",
       "</g>\n",
       "<!-- postprocess -->\n",
       "<g id=\"node4\" class=\"node\">\n",
       "<title>postprocess</title>\n",
       "<ellipse fill=\"none\" stroke=\"black\" cx=\"495.77\" cy=\"-18\" rx=\"66.09\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"495.77\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">postprocess</text>\n",
       "</g>\n",
       "<!-- mlopspedia&#45;&gt;postprocess -->\n",
       "<g id=\"edge3\" class=\"edge\">\n",
       "<title>mlopspedia&#45;&gt;postprocess</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M393.72,-18C402,-18 410.56,-18 419.03,-18\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"419.29,-21.5 429.29,-18 419.29,-14.5 419.29,-21.5\"/>\n",
       "</g>\n",
       "<!-- toxicity&#45;classifier -->\n",
       "<g id=\"node5\" class=\"node\">\n",
       "<title>toxicity&#45;classifier</title>\n",
       "<ellipse fill=\"none\" stroke=\"black\" cx=\"687.76\" cy=\"-18\" rx=\"89.88\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"687.76\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">toxicity&#45;classifier</text>\n",
       "</g>\n",
       "<!-- postprocess&#45;&gt;toxicity&#45;classifier -->\n",
       "<g id=\"edge4\" class=\"edge\">\n",
       "<title>postprocess&#45;&gt;toxicity&#45;classifier</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M562.15,-18C570.41,-18 578.99,-18 587.63,-18\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"587.76,-21.5 597.76,-18 587.76,-14.5 587.76,-21.5\"/>\n",
       "</g>\n",
       "</g>\n",
       "</svg>\n"
      ],
      "text/plain": [
       "<graphviz.graphs.Digraph at 0x7f1ba7445970>"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Set the topology and get the graph object:\n",
    "graph = serving_function.set_topology(\"flow\", engine=\"async\")\n",
    "\n",
    "# Add the steps:\n",
    "graph.to(handler=\"preprocess\", name=\"preprocess\") \\\n",
    "     .to(\"LLMModelServer\",\n",
    "         name=\"mlopspedia\",\n",
    "         model_args=model_args,\n",
    "         tokenizer_name=model_name,\n",
    "         model_name=model_name,\n",
    "         peft_model=project.get_artifact_uri(\"falcon-7b-mlrun\")) \\\n",
    "     .to(handler=\"postprocess\", name=\"postprocess\") \\\n",
    "     .to(\"ToxicityClassifierModelServer\",\n",
    "         name=\"toxicity-classifier\",\n",
    "         threshold=0.7).respond()\n",
    "\n",
    "# Plot to graph:\n",
    "serving_function.plot(rankdir='LR')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "426b91d1-649e-4ab0-8908-e8f2e2e54ceb",
   "metadata": {},
   "source": [
    "Lastly, we wish to add a GPU and save the configured function in the project:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "2efaff89-86de-4fa1-9bcb-cb97b0a34b7d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<mlrun.projects.project.MlrunProject at 0x7f1aa8d30d00>"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Configure (add a GPU and increase readiness timeout):\n",
    "serving_function.with_limits(gpus=1)\n",
    "serving_function.spec.readiness_timeout = 3000\n",
    "\n",
    "# Save the function to the project:\n",
    "project.set_function(serving_function, with_repo=True)\n",
    "project.save()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3637268-7349-4fdb-9baa-41ca0d41a94a",
   "metadata": {},
   "source": [
    "### 3.2. Deploy and Test the Application\n",
    "\n",
     "We will call `deploy_function` and wait for the deployment to complete:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "dce5819e-aea0-48d2-9027-e85ce3b41aa2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2023-07-12 05:03:41,703 [info] Starting remote function deploy\n",
      "2023-07-12 05:03:42  (info) Deploying function\n",
      "2023-07-12 05:03:42  (info) Building\n",
      "2023-07-12 05:03:42  (info) Staging files and preparing base images\n",
      "2023-07-12 05:03:42  (info) Building processor image\n",
      "2023-07-12 05:26:38  (info) Build complete\n",
      "2023-07-12 05:42:21  (info) Function deploy complete\n",
      "> 2023-07-12 05:42:23,182 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-mlopspedia-bot-yonis-serving.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['mlopspedia-bot-yonis-serving-mlopspedia-bot-yonis.default-tenant.app.llm2.iguazio-cd0.com/']}\n"
     ]
    }
   ],
   "source": [
    "# Deploy the serving function:\n",
    "deployment = mlrun.deploy_function(\"serving\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "939ce236-4347-404e-9d61-03e6773fbb28",
   "metadata": {},
   "source": [
    "Let's test the function manually on some prompts:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "47a367e0-5b18-4032-b876-ff83bf5bf3a3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generation parameters sent along with every prompt:\n",
    "generate_kwargs = {\"max_length\": 150, \"temperature\": 0.9, \"top_p\": 0.5, \"top_k\": 25, \"repetition_penalty\": 1.0}"
   ]
  },
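  {
   "cell_type": "markdown",
   "id": "f0e1d2c3-1111-4111-8111-000000000001",
   "metadata": {},
   "source": [
    "To build some intuition for what `top_k` and `top_p` do, the next cell is a minimal, model-free sketch of how the two settings narrow the candidate token distribution before sampling. The `top_k_top_p_filter` helper is purely illustrative - it is not part of MLRun or the serving graph:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c3-1111-4111-8111-000000000002",
   "metadata": {},
   "outputs": [],
   "source": [
    "def top_k_top_p_filter(probs, top_k, top_p):\n",
    "    # Keep only the top_k most likely tokens...\n",
    "    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]\n",
    "\n",
    "    # ...then the smallest prefix whose cumulative probability reaches top_p:\n",
    "    kept, cumulative = [], 0.0\n",
    "    for token, p in ranked:\n",
    "        kept.append((token, p))\n",
    "        cumulative += p\n",
    "        if cumulative >= top_p:\n",
    "            break\n",
    "\n",
    "    # Renormalize the surviving probabilities:\n",
    "    total = sum(p for _, p in kept)\n",
    "    return {token: p / total for token, p in kept}\n",
    "\n",
    "\n",
    "# With top_p=0.5, only the single most likely token survives here:\n",
    "print(top_k_top_p_filter({\"a\": 0.5, \"b\": 0.3, \"c\": 0.2}, top_k=25, top_p=0.5))"
   ]
  },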
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "4cc39d0d-c32e-43cd-974f-dda458e44d63",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2023-07-12 05:42:23,239 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-mlopspedia-bot-yonis-serving.default-tenant.svc.cluster.local:8080/predict'}\n",
      "MLRun is a complete open source MLOps orchestration platform that provides a single platform for building, training, deploying and managing ML applications at scale. MLRun is built on top of Iguazio’s open source data science platform and provides a unified framework for running data science and ML applications.\n",
      "MLRun provides:\n",
      "\n",
      "A single place to run and manage all ML workloads (from data science to production)\n",
      "A unified framework for running data science and ML applications\n",
      "A single place to run and manage all ML workloads (from data science to production)\n",
      "A unified framework for running data science and ML applications\n",
      "A unified framework for running data science and\n"
     ]
    }
   ],
   "source": [
    "response = serving_function.invoke(path='/predict', body={\"prompt\": \"What is MLRun?\", **generate_kwargs})\n",
    "print(response[\"outputs\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "a2cc218f-e9f1-490a-ab03-b0656f2bc0c1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2023-07-12 05:42:45,916 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-mlopspedia-bot-yonis-serving.default-tenant.svc.cluster.local:8080/predict'}\n",
      "Machine learning is a subfield of artificial intelligence (AI) that focuses on algorithms that can learn from data and improve their performance over time. Machine learning algorithms can be used to build intelligent systems that can make decisions, learn from experience, and adapt to new situations.\n",
      "Machine learning algorithms are used in many areas of our daily lives, such as:\n",
      "\n",
      "Automated driving\n",
      "Speech recognition\n",
      "Image recognition\n",
      "Personalized recommendations\n",
      "\n",
      "Machine learning algorithms are used in the development of autonomous cars. The cars are able to navigate roads and react to situations in real time.\n",
      "Speech recognition algorithms are used in voice assistants like Siri and Alexa. They can recognize your voice\n"
     ]
    }
   ],
   "source": [
    "response = serving_function.invoke(path='/predict', body={\"prompt\": \"What is machine learning?\", **generate_kwargs})\n",
    "print(response[\"outputs\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "6aebc785-931c-4ea9-8cd1-ec11d8dc02b1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2023-07-12 05:43:06,514 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-mlopspedia-bot-yonis-serving.default-tenant.svc.cluster.local:8080/predict'}\n",
      "This bot do not respond to toxicity.\n"
     ]
    }
   ],
   "source": [
    "response = serving_function.invoke(path='/predict', body={\"prompt\": \"You are stupid!\", **generate_kwargs})\n",
    "print(response[\"outputs\"])"
   ]
  },
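  {
   "cell_type": "markdown",
   "id": "f0e1d2c3-1111-4111-8111-000000000003",
   "metadata": {},
   "source": [
    "The response above suggests the serving graph applies a guard that refuses toxic prompts before they reach the model. Such a guard can be sketched as a simple pre-processing step; everything in the next cell (`TOXIC_MARKERS`, `toxicity_guard`) is a hypothetical illustration, not MLRun's API or the graph's actual implementation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c3-1111-4111-8111-000000000004",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical word list - a real guard would use a trained classifier:\n",
    "TOXIC_MARKERS = {\"stupid\", \"idiot\", \"hate\"}\n",
    "\n",
    "\n",
    "def toxicity_guard(prompt):\n",
    "    # Return a canned refusal for toxic prompts, or None to let the\n",
    "    # prompt continue down the graph to the model:\n",
    "    if any(marker in prompt.lower() for marker in TOXIC_MARKERS):\n",
    "        return \"This bot does not respond to toxicity.\"\n",
    "    return None\n",
    "\n",
    "\n",
    "print(toxicity_guard(\"You are stupid!\"))"
   ]
  },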
  {
   "cell_type": "markdown",
   "id": "3caa7b5c-eed9-4fb2-b69d-4927a681f25c",
   "metadata": {},
   "source": [
    "Now, we'll set up a Gradio application and launch it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "4055d2ab-cebc-4456-acb6-80627040416a",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "import gradio as gr\n",
    "import requests\n",
    "\n",
    "# Get the serving url to send requests to:\n",
    "serving_url = deployment.outputs[\"endpoint\"]\n",
    "\n",
    "\n",
    "def generate(prompt, temperature, max_length, top_p, top_k, repetition_penalty):\n",
    "    # Build the request for our serving graph:\n",
    "    inputs = {\n",
    "        \"prompt\": prompt,\n",
    "        \"temperature\": temperature,\n",
    "        \"max_length\": max_length,\n",
    "        \"top_p\": top_p,\n",
    "        \"top_k\": top_k,\n",
    "        \"repetition_penalty\": repetition_penalty,\n",
    "    }\n",
    "\n",
    "    # Call the serving function with the request:\n",
    "    resp = requests.post(serving_url, data=json.dumps(inputs).encode(\"utf-8\"))\n",
    "\n",
    "    # Return the response:\n",
    "    return resp.json()[\"outputs\"]\n",
    "\n",
    "\n",
    "# Set up a Gradio frontend application:\n",
    "with gr.Blocks(analytics_enabled=False, theme=gr.themes.Soft()) as demo:\n",
    "    gr.Markdown(\n",
    "        \"\"\"# LLM Playground\n",
    "Play with the `generate` configurations and see how they make the LLM's responses better or worse.\n",
    "\"\"\"\n",
    "    )\n",
    "    with gr.Row():\n",
    "        with gr.Column(scale=5):\n",
    "            with gr.Row():\n",
    "                chatbot = gr.Chatbot()\n",
    "            with gr.Row():\n",
    "                prompt = gr.Textbox(label=\"Subject to ask about:\", placeholder=\"Type a question and press Enter\")\n",
    "\n",
    "        with gr.Column(scale=1):\n",
    "            temperature = gr.Slider(minimum=0, maximum=1, value=0.9, label=\"Temperature\", info=\"Choose between 0 and 1\")\n",
    "            max_length = gr.Slider(minimum=0, maximum=1500, value=150, label=\"Maximum length\", info=\"Choose between 0 and 1500\")\n",
    "            top_p = gr.Slider(minimum=0, maximum=1, value=0.5, label=\"Top P\", info=\"Choose between 0 and 1\")\n",
    "            top_k = gr.Slider(minimum=0, maximum=500, value=25, label=\"Top k\", info=\"Choose between 0 and 500\")\n",
    "            repetition_penalty = gr.Slider(minimum=0, maximum=1, value=1, label=\"Repetition penalty\", info=\"Choose between 0 and 1\")\n",
    "            clear = gr.Button(\"Clear\")\n",
    "\n",
    "    def respond(prompt, chat_history, temperature, max_length, top_p, top_k, repetition_penalty):\n",
    "        bot_message = generate(prompt, temperature, max_length, top_p, top_k, repetition_penalty)\n",
    "        chat_history.append((prompt, bot_message))\n",
    "\n",
    "        return \"\", chat_history\n",
    "\n",
    "    prompt.submit(respond, [prompt, chatbot, temperature, max_length, top_p, top_k, repetition_penalty], [prompt, chatbot])\n",
    "    clear.click(lambda: None, None, chatbot, queue=False)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "0ef771d3-ecb1-4cde-a9bc-6ebf8b76d37e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Running on local URL:  http://127.0.0.1:7860\n",
      "Running on public URL: https://b47d16a4d0489c6dde.gradio.live\n",
      "\n",
      "This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div><iframe src=\"https://b47d16a4d0489c6dde.gradio.live\" width=\"100%\" height=\"685\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": []
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "demo.launch(share=True, height=685)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ef6b3b68",
   "metadata": {},
   "source": [
    "<img src=\"./images/gradio.png\" style=\"width: 1000px\"/>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "803b4824",
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "mlrun-base",
   "language": "python",
   "name": "conda-env-mlrun-base-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
