{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "446ad033",
   "metadata": {},
   "source": [
    "(genai-01-basic-tutorial)=\n",
    "# Deploying an LLM using MLRun\n",
    "This notebook illustrates deploying an LLM using MLRun: it shows how to fetch a dataset (a list of articles), scrape those articles, chunk and index them into a vector store, and then deploy an open-source Hugging Face model to an endpoint that answers requests with RAG enrichment based on the data you just indexed.\n",
    "\n",
    "Since this tutorial is for illustrative purposes, it uses minimal resources &mdash; CPU and not GPU, and a small amount of data.\n",
    "\n",
    "**In this tutorial:**\n",
    "- [MLRun installation and configuration](#mlrun-installation-and-configuration)\n",
    "- [Set up the vector database in the cluster](#set-up-the-vector-database-in-the-cluster)\n",
    "- [Build the vector DB](#build-the-vector-db)\n",
    "- [Serving the function](#serving-the-function)\n",
    "\n",
    "**See also:**\n",
    "- [Model monitoring](https://docs.mlrun.org/en/stable/concepts/model-monitoring.html)\n",
     "- [Alerts and notifications](https://docs.mlrun.org/en/stable/concepts/alerts-notifications.html)\n",
    "\n",
    "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/aoP__SaAO1M\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe><br><br>\n",
    "\n",
    "## MLRun installation and configuration\n",
    "\n",
     "Before running this notebook, make sure the `mlrun` package is installed (`pip install mlrun`) and that you have configured access to the MLRun service."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "da12ecdc",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Install MLRun if it is not already installed. Run this only once, then restart the notebook kernel\n",
    "# %pip install mlrun"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e00e8af1-733e-4806-b994-60693f1b1d44",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import mlrun"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "09434730",
   "metadata": {},
   "source": [
    "**Get or create a new project**\n",
    "\n",
    "First create, load or use (get) an [MLRun Project](https://docs.mlrun.org/en/stable/projects/project.html). The [get_or_create_project](https://docs.mlrun.org/en/stable/api/mlrun.projects/index.html#mlrun.projects.get_or_create_project) method tries to load the project from the MLRun DB. If the project does not exist, it creates a new one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0464f4f9-2861-4cd4-bcf6-26ad81734bc9",
   "metadata": {},
   "outputs": [],
   "source": [
    "project = mlrun.get_or_create_project(\n",
    "    \"genai-tutorial\", \"./\", user_project=True, allow_cross_project=True\n",
    ")\n",
    "project.set_source(\".\", pull_at_runtime=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c74239f",
   "metadata": {},
   "source": [
    "## Set up the vector database in the cluster \n",
     "These two steps import a predefined dataset and load it into a vector database, which is then stored in the data layer of the cluster.\n",
    "\n",
    "If you're not using Iguazio's Jupyter, download {download}`fetch-vectordb-data.py <src/fetch-vectordb-data.py>`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ee2262a8-098b-4635-b2cb-03d314bd3050",
   "metadata": {},
   "outputs": [],
   "source": [
     "# The model used is the free, open-source Phi-2\n",
    "MODEL_ID = \"microsoft/phi-2\"\n",
    "\n",
    "# Define the dataset for the VectorDB\n",
    "DATA_SET = mlrun.get_sample_path(\"data/genai-tutorial/labelled_newscatcher_dataset.csv\")\n",
    "\n",
    "# The location of the VectorDB files\n",
    "CACHE_DIR = mlrun.mlconf.artifact_path\n",
    "CACHE_DIR = (\n",
    "    CACHE_DIR.replace(\"v3io://\", \"/v3io\").replace(\"{{run.project}}\", project.name)\n",
    "    + \"/cache\"\n",
    ")"
   ]
  },
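  {
   "cell_type": "markdown",
   "id": "3f9a2b10-0d2e-4c71-9a55-1e2f3a4b5c6d",
   "metadata": {},
   "source": [
    "The `CACHE_DIR` rewrite above maps the configured artifact path to a filesystem location that is reachable from inside the cluster. The translation can be sketched as a small helper (the helper name and the example path are illustrative, not part of MLRun's API):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4a0b3c21-1e3f-4d82-8b66-2f3a4b5c6d7e",
   "metadata": {},
   "outputs": [],
   "source": [
    "def to_local_cache_dir(artifact_path: str, project_name: str) -> str:\n",
    "    # v3io:// URLs are exposed as a /v3io filesystem mount inside the cluster,\n",
    "    # and \"{{run.project}}\" is a template placeholder for the project name\n",
    "    local = artifact_path.replace(\"v3io://\", \"/v3io\")\n",
    "    local = local.replace(\"{{run.project}}\", project_name)\n",
    "    return local + \"/cache\"\n",
    "\n",
    "\n",
    "# Illustrative path only:\n",
    "to_local_cache_dir(\"v3io:///projects/{{run.project}}/artifacts\", \"genai-tutorial\")"
   ]
  },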
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6c4da62f-3a43-44c8-a3b8-84e286866ef0",
   "metadata": {},
   "outputs": [],
   "source": [
    "is_ce = CACHE_DIR.startswith(\"s3://\")\n",
    "is_ce"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a8f28495-7552-4344-b005-f8feb12f761c",
   "metadata": {},
   "source": [
     "### Build an image that includes the langchain and torch packages\n",
     "\n",
     "The image is created with mlrun/mlrun as the base image:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7837f97-22ec-44d1-b48e-6e90c5bae978",
   "metadata": {},
   "outputs": [],
   "source": [
    "commands = [\n",
    "    \"pip install chromadb==0.5.0 langchain==0.2.3 langchain-community==0.2.4 langchain-core==0.2.5 langchain-text-splitters==0.2.1 clean-text==0.6.0 transformers==4.41.2\",\n",
    "    \"pip install torch --index-url https://download.pytorch.org/whl/cpu\",\n",
    "    \"pip install --upgrade requests requests-toolbelt\",\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "108539ef-b32c-4d71-a516-f146ab04caad",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Different Python versions need different protobuf versions:\n",
     "# Python 3.9 -> protobuf 3.20.x, Python 3.11+ -> latest protobuf\n",
     "import sys\n",
     "\n",
     "minor_version = sys.version_info[1]\n",
    "if minor_version >= 11:\n",
    "    commands.append(\"pip install --upgrade --force-reinstall protobuf\")\n",
    "elif minor_version == 9:\n",
    "    commands.append(\"pip install protobuf==3.20.2\")\n",
    "else:\n",
    "    print(f\"minor_version {minor_version} not supported\")\n",
    "commands"
   ]
  },
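  {
   "cell_type": "markdown",
   "id": "5b1c4d32-2f40-4e93-9c77-3a4b5c6d7e8f",
   "metadata": {},
   "source": [
    "The version branching above can also be read as a mapping from Python minor version to a pip command. A minimal sketch of that mapping (the function name is illustrative; the pins mirror the install commands used in this tutorial):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6c2d5e43-3051-4fa4-8d88-4b5c6d7e8f90",
   "metadata": {},
   "outputs": [],
   "source": [
    "def protobuf_pin_command(minor: int):\n",
    "    # Python >= 3.11 works with the latest protobuf; 3.9 needs the 3.20.x line\n",
    "    if minor >= 11:\n",
    "        return \"pip install --upgrade --force-reinstall protobuf\"\n",
    "    if minor == 9:\n",
    "        return \"pip install protobuf==3.20.2\"\n",
    "    return None  # other interpreter versions are not covered by this tutorial\n",
    "\n",
    "\n",
    "protobuf_pin_command(9)"
   ]
  },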
  {
   "cell_type": "markdown",
   "id": "4c45c872-31fe-4f1e-b49c-60c59676c129",
   "metadata": {},
   "source": [
     "Run the following command to build the image. Once the image is successfully built, there is no need to run this cell again, since building the image takes quite some time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0a5bbe2-fe30-4d1d-8c88-f22488efaeaa",
   "metadata": {},
   "outputs": [],
   "source": [
    "project.build_image(\n",
    "    image=\".llm-demo-data\",\n",
    "    base_image=\"mlrun/mlrun\",\n",
    "    set_as_default=False,\n",
    "    commands=commands,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc0539b2",
   "metadata": {},
   "source": [
     "Fetch the dataset for the vector DB and save it in the cluster:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ae9d428-9ed3-4e66-9817-97d84d89b5a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "fetch = project.set_function(\n",
    "    name=\"fetch-vectordb-data\",\n",
    "    func=\"src/fetch-vectordb-data.py\",\n",
    "    kind=\"job\",\n",
    "    image=\".llm-demo-data\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8fa74f4e-e26d-4826-b9e2-6f165b0203ea",
   "metadata": {},
   "outputs": [],
   "source": [
    "ret = project.run_function(\n",
    "    name=\"fetch-vectordb-data-run\",\n",
    "    function=\"fetch-vectordb-data\",\n",
    "    handler=\"handler\",\n",
    "    params={\"data_set\": DATA_SET},\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dcb6f127-b3eb-4b04-aa92-51e74b9518ec",
   "metadata": {},
   "outputs": [],
   "source": [
    "ret.outputs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1110e1b6-b306-4467-94cd-d8f849f251f7",
   "metadata": {
    "tags": []
   },
   "source": [
    "## Build the vector DB\n",
    "\n",
    "Build the vector DB in the data layer and load the data into it.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "abcf51a8",
   "metadata": {},
   "source": [
     "If you're not using Iguazio's Jupyter, download {download}`build-vector-db.py <./src/build-vector-db.py>`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "075bef9b-dd57-4d5f-8c44-8e45cb5866d6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build the vector DB using the image\n",
    "build_vectordb = project.set_function(\n",
    "    name=\"build-vectordb\",\n",
    "    func=\"src/build-vector-db.py\",\n",
    "    kind=\"job\",\n",
    "    image=\".llm-demo-data\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dfccc57d-839e-452e-8689-78e83d46cf82",
   "metadata": {},
   "outputs": [],
   "source": [
    "if not is_ce:\n",
    "    build_vectordb.apply(mlrun.auto_mount())\n",
    "    print(\"Applying mlrun.auto_mount!\")\n",
    "else:\n",
    "    print(\"Not applying mlrun.auto_mount!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "607c9d90-bac0-440c-b67a-b020fe6e2f5a",
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "build_vectordb_run = project.run_function(\n",
    "    function=\"build-vectordb\",\n",
    "    inputs={\"df\": ret.outputs[\"vector-db-dataset\"]},\n",
    "    params={\"cache_dir\": CACHE_DIR},\n",
    "    handler=\"handler_chroma\",\n",
    "    outputs=[\"vect_db\"],\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "141c5e47-4152-42d4-97ce-7d90edad0e82",
   "metadata": {},
   "outputs": [],
   "source": [
    "VECTORDB_PATH = build_vectordb_run.outputs[\"vect_db\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6e013855",
   "metadata": {},
   "source": [
    "## Serving the function"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0df69152",
   "metadata": {},
   "source": [
     "If you're not using Iguazio's Jupyter, download {download}`serving.py <./src/serving.py>`.\n",
     "Now you can deploy the Nuclio function that serves the LLM:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad65431f",
   "metadata": {},
   "outputs": [],
   "source": [
    "serve_func = project.set_function(\n",
    "    name=\"serve-llm\",\n",
    "    func=\"src/serving.py\",\n",
    "    image=\".llm-demo-data\",\n",
    "    kind=\"nuclio\",\n",
    ")\n",
    "\n",
     "# Pass the model ID and VectorDB path to the serving function\n",
    "serve_func.set_envs(\n",
    "    env_vars={\n",
    "        \"MODEL_ID\": MODEL_ID,\n",
    "        \"CACHE_DIR\": CACHE_DIR,\n",
    "        \"VECTORDB_PATH\": VECTORDB_PATH,\n",
    "    }\n",
    ")\n",
    "\n",
     "# Since the model is held in memory, use only one replica and one worker\n",
     "# Since this runs on CPU only, inference can take ~1 minute (hence the increased timeouts)\n",
    "serve_func.spec.min_replicas = 1\n",
    "serve_func.spec.max_replicas = 1\n",
    "serve_func.with_http(worker_timeout=120, gateway_timeout=150, workers=1)\n",
    "serve_func.set_config(\"spec.readinessTimeoutSeconds\", 1200)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "506b55fe-df13-431b-99e4-251954abe10e",
   "metadata": {},
   "outputs": [],
   "source": [
    "if not is_ce:\n",
    "    serve_func.apply(mlrun.auto_mount())\n",
    "    print(\"Applying mlrun.auto_mount!\")\n",
    "else:\n",
    "    print(\"Not applying mlrun.auto_mount!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bc7ceef6-f763-417c-9b0c-d0f2920fe1b1",
   "metadata": {},
   "outputs": [],
   "source": [
    "serve_func = project.deploy_function(function=\"serve-llm\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ea783442-f32c-406d-aa8e-6ea7b740cd5b",
   "metadata": {},
   "source": [
     "### Test the serving function\n",
     "Send a request to the inference endpoint of the LLM hosted in the cluster:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2a14ebc-e25c-4b9a-8d94-a7252962dde2",
   "metadata": {},
   "outputs": [],
   "source": [
    "body = {\n",
    "    \"question\": \"What are some new developments in space travel?\",\n",
    "    \"topic\": \"science\",\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fa27def9-3112-4eb4-a445-754d80240145",
   "metadata": {},
   "outputs": [],
   "source": [
    "resp = serve_func.function.invoke(\"/\", body=json.dumps(body))"
   ]
  },
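  {
   "cell_type": "markdown",
   "id": "7d3e6f54-4162-40b5-9e99-5c6d7e8f9001",
   "metadata": {},
   "source": [
    "Under the hood, `invoke` issues an HTTP POST to the function's endpoint, so you can also call it with standard tooling. A minimal sketch using only the standard library; the URL below is a placeholder, use the address printed when the function is deployed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8e4f7065-5273-41c6-8faa-6d7e8f900112",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "from urllib import request\n",
    "\n",
    "\n",
    "def build_llm_request(base_url, question, topic):\n",
    "    # The serving handler expects a JSON body with \"question\" and \"topic\"\n",
    "    payload = json.dumps({\"question\": question, \"topic\": topic}).encode()\n",
    "    return request.Request(\n",
    "        base_url, data=payload, headers={\"Content-Type\": \"application/json\"}\n",
    "    )\n",
    "\n",
    "\n",
    "req = build_llm_request(\n",
    "    \"http://serve-llm.example.local/\",  # placeholder URL\n",
    "    \"What are some new developments in space travel?\",\n",
    "    \"science\",\n",
    ")\n",
    "# request.urlopen(req) would return a JSON document with \"response\", \"sources\", and \"prompt\""
   ]
  },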
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6d7d62b5-ea4e-4402-bf55-6e558dcfbb47",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(resp[\"response\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "51fe96d4-0126-4511-9c05-188121a6f474",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(resp[\"sources\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bfb12a95-e0b4-4cf2-9023-4bfe67a4ca33",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(resp[\"prompt\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0f32f25a-df60-4667-9145-a272da8d1cb2",
   "metadata": {},
   "outputs": [],
   "source": [
    "project.set_function(f\"db://{project.name}/fetch-vectordb-data\")\n",
    "project.set_function(f\"db://{project.name}/build-vectordb\")\n",
    "project.set_function(f\"db://{project.name}/serve-llm\")\n",
    "project.set_source(f\"db://{project.name}\")\n",
    "project.save()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c9775188-439f-4fcc-8616-7f399a0cc869",
   "metadata": {},
   "source": [
     "### Run the end-to-end workflow"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a127f20d-dbb6-4e8a-80d4-fac9fc3e2019",
   "metadata": {},
   "outputs": [],
   "source": [
     "%%writefile workflow.py\n",
     "import mlrun\n",
     "from kfp import dsl\n",
     "\n",
     "\n",
     "@dsl.pipeline(name=\"GenAI demo\")\n",
     "def kfpipeline(data_set, cache_dir, model_id):\n",
     "    project = mlrun.get_current_project()\n",
     "\n",
     "    fetch = project.run_function(\n",
     "        function=\"fetch-vectordb-data\",\n",
     "        name=\"fetch-vectordb-data-run\",\n",
     "        handler=\"handler\",\n",
     "        params={\"data_set\": data_set},\n",
     "        outputs=[\"vector-db-dataset\"],\n",
     "    )\n",
     "\n",
     "    vectordb_build = project.run_function(\n",
     "        function=\"build-vectordb\",\n",
     "        inputs={\"df\": fetch.outputs[\"vector-db-dataset\"]},\n",
     "        params={\"cache_dir\": cache_dir},\n",
     "        handler=\"handler_chroma\",\n",
     "        outputs=[\"vect_db\"],\n",
     "    )\n",
     "\n",
     "    serve_func = project.get_function(\"serve-llm\")\n",
     "    serve_func.set_envs(\n",
     "        env_vars={\n",
     "            \"MODEL_ID\": model_id,\n",
     "            \"CACHE_DIR\": cache_dir,\n",
     "            \"VECTORDB_PATH\": vectordb_build.outputs[\"vect_db\"],\n",
     "        }\n",
     "    )\n",
     "    serve_func.spec.min_replicas = 1\n",
     "    serve_func.spec.max_replicas = 1\n",
     "    serve_func.with_http(worker_timeout=120, gateway_timeout=150, workers=1)\n",
     "    serve_func.set_config(\"spec.readinessTimeoutSeconds\", 1200)\n",
     "\n",
     "    deploy = project.deploy_function(\"serve-llm\", verbose=True).after(vectordb_build)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4e6bf445-53f7-4b38-9caa-997022da65bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "project.set_workflow(\"main\", \"workflow.py\", embed=True)\n",
    "project.save()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "21762b8d-d4ce-40e8-a7c7-85f9c0153b66",
   "metadata": {},
   "source": [
     "**Note:** The workflow can take up to 20 minutes to complete."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8834c1d0-076f-4a35-be53-00be8591fc4f",
   "metadata": {},
   "outputs": [],
   "source": [
    "run_id = project.run(\n",
    "    \"main\",\n",
    "    arguments={\"cache_dir\": CACHE_DIR, \"data_set\": DATA_SET, \"model_id\": MODEL_ID},\n",
    "    engine=\"remote\",\n",
    "    watch=True,\n",
    ")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "py311",
   "language": "python",
   "name": "conda-env-.conda-py311-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
