{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "bef1b1d0-bd2b-4fbc-b1b2-98be57dd6bc4",
   "metadata": {},
   "source": [
    "# Working with RAG in MLRun\n",
    "\n",
     "Learn how to build an end-to-end RAG pipeline with MLRun, using popular models and frameworks.\n",
    "\n",
    "**In this section**\n",
    "- [Overview of RAG application and strategy](#overview-of-rag-application-and-strategy)\n",
    "- [Prerequisite](#prerequisite)\n",
    "- [Install a vector DB and dependencies](#install-a-vector-db-and-dependencies)\n",
    "- [Setup](#setup)\n",
    "- [Define OpenAI arguments](#define-openai-arguments)\n",
    "- [Create a data indexing function](#create-a-data-indexing-function)\n",
    "- [Index the dataset in the vector store](#index-the-dataset-in-the-vector-store)\n",
    "- [Create a retrieval function](#create-a-retrieval-function)\n",
    "- [Test locally](#test-locally)\n",
    "- [Deploy to an endpoint](#deploy-to-an-endpoint)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fc225810-f2f2-4969-846d-823d39b8ba03",
   "metadata": {},
   "source": [
    "## Overview of RAG application and strategy\n",
     "Retrieval Augmented Generation (RAG) is a common strategy for extending the knowledge of pre-trained LLMs.<br>\n",
     "It gives the LLM a corpus of knowledge to draw from when answering questions about topics or information not covered in the original training set.\n",
    "\n",
    "![](_static/rag_architecture.jpg)\n",
    "\n",
    "### Parts of RAG architecture\n",
    "\n",
     "A RAG architecture has three main components; for each, there are a number of open-source and proprietary options to choose from:<br>\n",
     "\n",
    "- **Vector Store**: Used to store the embedded documents in a format that allows for semantic search and retrieval.<br>\n",
    "    - This example uses [Milvus](https://milvus.io) &mdash; an open-source vector database designed specifically for similarity search on massive datasets of high-dimensional vectors.<br>\n",
     "- **Embeddings Model**: Used to embed the documents, as well as to embed new user queries for semantic search.<br>\n",
    "    - This example uses OpenAI's [text-embedding-3-small](https://platform.openai.com/docs/guides/embeddings).<br>\n",
     "- **LLM**: Used to generate responses.<br>\n",
    "    - This example uses OpenAI's [gpt-3.5-turbo-0125](https://platform.openai.com/docs/models/gpt-3-5-turbo).<br>\n",
    "    \n",
    "## Prerequisite"
   ]
  },
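   {
    "cell_type": "markdown",
    "id": "3f2a9c1e-5b7d-4e2a-9c41-8a0d6b2f1c01",
    "metadata": {},
    "source": [
     "At a high level, the flow can be sketched in a few lines of pseudo-Python (a toy illustration only; the pipeline built below uses Milvus, OpenAI embeddings, and an OpenAI LLM for these steps):\n",
     "\n",
     "```python\n",
     "# 1. Index: split each document into chunks, embed them, and store in the vector DB\n",
     "for chunk in split(documents):\n",
     "    vector_db.add(embed(chunk))\n",
     "\n",
     "# 2. Retrieve: embed the user question and fetch the most similar chunks\n",
     "context = vector_db.similarity_search(embed(question), k=3)\n",
     "\n",
     "# 3. Generate: prompt the LLM with the retrieved context plus the question\n",
     "answer = llm(prompt.format(context=context, question=question))\n",
     "```"
    ]
   },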
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "edb9aba0-5dd0-40ac-bacc-42ec4aceb6d7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# %pip install langchain langchain_community langchain_openai pymilvus langchain_huggingface \"protobuf<3.20\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0573e6fc-2996-43f8-815e-e2c0cc39d63b",
   "metadata": {},
   "source": [
    "## Install a vector DB and dependencies\n",
    "\n",
     "As mentioned above, this example uses [Milvus](https://milvus.io) &mdash; an open-source vector database designed specifically for similarity search on massive datasets of high-dimensional vectors.<br>\n",
    "It can be installed via [docker-compose](https://milvus.io/docs/v2.0.x/install_standalone-docker.md) or [Kubernetes](https://milvus.io/docs/v2.0.x/install_standalone-helm.md) depending on your MLRun installation method.<br>\n",
     "In this example, Milvus is already installed in the Kubernetes cluster (as the Helm release `my-milvus`) and the service is available at `my-milvus.default.svc.cluster.local:19530`."
   ]
  },
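   {
    "cell_type": "markdown",
    "id": "7d4e8b2a-1c3f-4a5e-b6d7-2e9f0a1b3c02",
    "metadata": {},
    "source": [
     "Once the service is up, you can verify it is reachable with a quick `pymilvus` check (adjust the host to match your deployment):\n",
     "\n",
     "```python\n",
     "from pymilvus import connections, utility\n",
     "\n",
     "# Connect to the in-cluster Milvus service\n",
     "connections.connect(host=\"my-milvus.default.svc.cluster.local\", port=\"19530\")\n",
     "\n",
     "# List existing collections to confirm the connection works\n",
     "print(utility.list_collections())\n",
     "```"
    ]
   },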
  {
   "cell_type": "markdown",
   "id": "bed72e02-44ea-4c9c-a82e-291b079c8267",
   "metadata": {},
   "source": [
    "## Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "bd5dbd1f-eebe-433f-a304-bedab4318ddc",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "\n",
     "# OpenAI credentials - fill in your API key (and base URL, if applicable) before running\n",
     "OPENAI_BASE_URL = \"\"\n",
     "OPENAI_API_KEY = \"\"\n",
    "os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\n",
    "os.environ[\"OPENAI_BASE_URL\"] = OPENAI_BASE_URL\n",
    "OPENAI_MODEL = \"gpt-3.5-turbo-0125\"\n",
    "\n",
    "# Embeddings\n",
    "EMBEDDINGS_MODEL = \"all-MiniLM-L6-v2\"\n",
    "\n",
    "# Milvus dev deployment using helm:\n",
    "# helm install my-milvus milvus/milvus --set cluster.enabled=false --set standalone.persistence.enabled=false --set etcd.replicaCount=1 --set minio.mode=standalone --set pulsar.enabled=false --set minio.persistence.enabled=false --set etcd.persistence.enabled=false\n",
    "\n",
    "# Milvus\n",
    "MILVUS_CONNECTION_ARGS = {\n",
    "    \"host\": \"my-milvus.default.svc.cluster.local\",\n",
    "    \"port\": \"19530\",\n",
    "}\n",
    "\n",
    "PROMPT_TEMPLATE = \"\"\"Use the following pieces of context to answer the question at the end.\n",
    "If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
    "Use three sentences maximum and keep the answer as concise as possible.\n",
    "Always say \"thanks for asking!\" at the end of the answer.\n",
    "\n",
    "{context}\n",
    "\n",
    "Question: {question}\n",
    "\n",
    "Helpful Answer:\"\"\""
   ]
  },
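   {
    "cell_type": "markdown",
    "id": "9a1b2c3d-4e5f-4a6b-8c7d-0e1f2a3b4c03",
    "metadata": {},
    "source": [
     "The `{context}` and `{question}` placeholders in `PROMPT_TEMPLATE` are filled in by the retrieval pipeline before the prompt is sent to the LLM. For example, with illustrative values:\n",
     "\n",
     "```python\n",
     "filled = PROMPT_TEMPLATE.format(\n",
     "    context=\"MLRun is an open-source MLOps orchestration framework.\",\n",
     "    question=\"What is MLRun?\",\n",
     ")\n",
     "print(filled)\n",
     "```"
    ]
   },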
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c4f76bfa-6f89-43ec-92f4-cc6a92151050",
   "metadata": {},
   "outputs": [],
   "source": [
    "import mlrun\n",
    "\n",
    "project = mlrun.get_or_create_project(\"rag\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe149e56-7c04-43bf-b292-fac569f1556c",
   "metadata": {},
   "source": [
    "## Define OpenAI arguments"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "671b7fcc-e8a3-489e-bec9-8810cd1e8da7",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-08T16:05:12.642080Z",
     "start_time": "2025-04-08T16:05:12.639588Z"
    }
   },
   "outputs": [],
   "source": [
    "# Set embedding and model arguments\n",
    "embeddings_class = \"langchain_openai.embeddings.OpenAIEmbeddings\"\n",
    "embeddings_kwargs = {\"model\": \"text-embedding-3-small\"}\n",
    "llm_class = \"langchain_openai.chat_models.ChatOpenAI\"\n",
    "llm_kwargs = {\"model\": OPENAI_MODEL}"
   ]
  },
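   {
    "cell_type": "markdown",
    "id": "5e6f7a8b-9c0d-4e1f-a2b3-c4d5e6f7a804",
    "metadata": {},
    "source": [
     "Passing the embeddings and LLM classes as dotted-path strings (rather than objects) keeps the job parameters serializable; `mlrun.utils.create_class` resolves the string back into a class inside the job. The resolution is roughly equivalent to this simplified sketch:\n",
     "\n",
     "```python\n",
     "import importlib\n",
     "\n",
     "def resolve_class(dotted_path: str):\n",
     "    # Split \"package.module.ClassName\" into module path and class name\n",
     "    module_path, class_name = dotted_path.rsplit(\".\", 1)\n",
     "    return getattr(importlib.import_module(module_path), class_name)\n",
     "\n",
     "embeddings_cls = resolve_class(embeddings_class)\n",
     "embeddings = embeddings_cls(**embeddings_kwargs)\n",
     "```"
    ]
   },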
  {
   "cell_type": "markdown",
   "id": "3a3c7674-cb68-43e5-b1ce-d9c6546045c9",
   "metadata": {},
   "source": [
    "## Create a data indexing function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "4f7f8b98-fd29-451b-a9c4-71f8a0d47a4e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting index_data_new.py\n"
     ]
    }
   ],
   "source": [
     "%%writefile index_data_new.py\n",
     "import mlrun\n",
     "import pandas as pd\n",
     "from langchain_community.vectorstores import Milvus\n",
     "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
     "from mlrun.utils import create_class\n",
     "\n",
     "\n",
     "@mlrun.handler()\n",
     "def index_urls(\n",
     "    data: pd.DataFrame,\n",
     "    url_column: str,\n",
     "    embeddings_class: str,\n",
     "    embeddings_kwargs: dict,\n",
     "    milvus_host: str,\n",
     "    milvus_port: int,\n",
     "    chunk_size: int,\n",
     "    chunk_overlap: int,\n",
     "):\n",
     "    # Loader spec describing how to load each document (a web page, by URL)\n",
     "    spec = mlrun.artifacts.DocumentLoaderSpec(\n",
     "        loader_class_name=\"langchain_community.document_loaders.WebBaseLoader\",\n",
     "        src_name=\"web_path\",\n",
     "        download_object=False,\n",
     "    )\n",
     "\n",
     "    # Splitter used to chunk documents before embedding\n",
     "    text_splitter = RecursiveCharacterTextSplitter(\n",
     "        chunk_size=chunk_size,\n",
     "        chunk_overlap=chunk_overlap,\n",
     "    )\n",
     "\n",
     "    project = mlrun.get_current_project()\n",
     "\n",
     "    # Load the embeddings model\n",
     "    embeddings = create_class(embeddings_class)(**embeddings_kwargs)\n",
     "\n",
     "    # Connect to the vector store\n",
     "    vector_db = Milvus(\n",
     "        embedding_function=embeddings,\n",
     "        connection_args={\n",
     "            \"host\": milvus_host,\n",
     "            \"port\": str(milvus_port),\n",
     "        },\n",
     "        auto_id=True,\n",
     "    )\n",
     "\n",
     "    # Create an MLRun collection wrapper around the vector store\n",
     "    collection = project.get_vector_store_collection(vector_store=vector_db)\n",
     "\n",
     "    # Log each unique URL as a document artifact and index it in Milvus\n",
     "    urls = set(data[url_column].values)\n",
     "    for url in urls:\n",
     "        artifact_key = mlrun.artifacts.DocumentArtifact.key_from_source(url)\n",
     "        artifact = project.log_document(\n",
     "            key=artifact_key,\n",
     "            target_path=url,\n",
     "            document_loader_spec=spec,\n",
     "        )\n",
     "        milvus_ids = collection.add_artifacts([artifact], splitter=text_splitter)\n",
     "        print(\"Documents added with IDs:\", milvus_ids)"
   ]
  },
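   {
    "cell_type": "markdown",
    "id": "1c2d3e4f-5a6b-4c7d-8e9f-0a1b2c3d4e05",
    "metadata": {},
    "source": [
     "The `chunk_size` and `chunk_overlap` arguments control how each document is split before embedding: consecutive chunks share `chunk_overlap` characters so that sentences straddling a boundary are not lost. A simplified character-based sketch of the idea (the actual `RecursiveCharacterTextSplitter` also tries to break on separators such as paragraphs and sentences):\n",
     "\n",
     "```python\n",
     "def naive_split(text: str, chunk_size: int, chunk_overlap: int) -> list:\n",
     "    step = chunk_size - chunk_overlap\n",
     "    return [text[i : i + chunk_size] for i in range(0, len(text), step)]\n",
     "\n",
     "naive_split(\"abcdefghij\", chunk_size=4, chunk_overlap=2)\n",
     "# -> ['abcd', 'cdef', 'efgh', 'ghij', 'ij']\n",
     "```"
    ]
   },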
  {
   "cell_type": "markdown",
   "id": "3e4c06a5-58e0-45b4-8e8a-32098d74e7ec",
   "metadata": {},
   "source": [
    "## Index the dataset in the vector store"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "fedb144c-f1de-457c-b57c-579739e64bf2",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<mlrun.runtimes.kubejob.KubejobRuntime at 0x7f92f8670a30>"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "index_fn = project.set_function(\n",
    "    name=\"index\",\n",
    "    func=\"index_data_new.py\",\n",
    "    kind=\"job\",\n",
    "    image=\"mlrun/mlrun\",\n",
    "    handler=\"index_urls\",\n",
    ")\n",
    "\n",
    "index_fn.set_envs(\n",
    "    {\"OPENAI_API_KEY\": OPENAI_API_KEY, \"OPENAI_BASE_URL\": OPENAI_BASE_URL}\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "831696b3-bc9d-497a-99fd-91998546ad33",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>url</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>https://docs.mlrun.org/en/latest/index.html</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>https://docs.mlrun.org/en/latest/cheat-sheet.html</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>https://docs.mlrun.org/en/latest/tutorials/01-...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                                 url\n",
       "0        https://docs.mlrun.org/en/latest/index.html\n",
       "1  https://docs.mlrun.org/en/latest/cheat-sheet.html\n",
       "2  https://docs.mlrun.org/en/latest/tutorials/01-..."
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "import pandas as pd\n",
     "\n",
     "# Create a dataset of files to use in RAG\n",
     "data = pd.DataFrame(\n",
    "    [\n",
    "        {\"url\": \"https://docs.mlrun.org/en/latest/index.html\"},\n",
    "        {\"url\": \"https://docs.mlrun.org/en/latest/cheat-sheet.html\"},\n",
    "        {\"url\": \"https://docs.mlrun.org/en/latest/tutorials/01-mlrun-basics.html\"},\n",
    "    ]\n",
    ")\n",
    "data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "1cde734b-77d3-4e72-946a-1b0059f4934a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'store://datasets/rag/to-index#0@608179dd-88fa-4f22-9bf0-182c05487094^e91e1fcb0d50e5dcbfd27636132648929c5fb20a'"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Log the dataset as an artifact\n",
    "dataset_artifact = project.log_dataset(key=\"to-index\", df=data)\n",
    "dataset_artifact.uri"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "33f8660c-ff1a-4a2d-8415-0990e89dfd7f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2025-01-21 06:58:25,118 [info] Storing function: {\"db\":\"http://mlrun-api:8080\",\"name\":\"index-index-urls\",\"uid\":\"9c105c7feafb4fb6b60ac7c49c11fbe2\"}\n",
      "Documents added with IDs: [454873324101293747, 454873324101293748, 454873324101293749, 454873324101293750, 454873324101293751, 454873324101293752, 454873324101293753, 454873324101293754, 454873324101293755, 454873324101293756]\n",
      "Documents added with IDs: [454873324101293758, 454873324101293759, 454873324101293760, 454873324101293761, 454873324101293762, 454873324101293763, 454873324101293764, 454873324101293765, 454873324101293766, 454873324101293767, 454873324101293768, 454873324101293769, 454873324101293770]\n",
      "Documents added with IDs: [454873324101293772, 454873324101293773, 454873324101293774, 454873324101293775, 454873324101293776, 454873324101293777, 454873324101293778, 454873324101293779, 454873324101293780, 454873324101293781, 454873324101293782, 454873324101293783, 454873324101293784, 454873324101293785, 454873324101293786, 454873324101293787, 454873324101293788, 454873324101293789, 454873324101293790, 454873324101293791, 454873324101293792, 454873324101293793, 454873324101293794, 454873324101293795]\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<style>\n",
       ".dictlist {\n",
       "  background-color: #4EC64B;\n",
       "  text-align: center;\n",
       "  margin: 4px;\n",
       "  border-radius: 3px; padding: 0px 3px 1px 3px; display: inline-block;}\n",
       ".artifact {\n",
       "  cursor: pointer;\n",
       "  background-color: #4EC64B;\n",
       "  text-align: left;\n",
       "  margin: 4px; border-radius: 3px; padding: 0px 3px 1px 3px; display: inline-block;\n",
       "}\n",
       "div.block.hidden {\n",
       "  display: none;\n",
       "}\n",
       ".clickable {\n",
       "  cursor: pointer;\n",
       "}\n",
       ".ellipsis {\n",
       "  display: inline-block;\n",
       "  max-width: 60px;\n",
       "  white-space: nowrap;\n",
       "  overflow: hidden;\n",
       "  text-overflow: ellipsis;\n",
       "}\n",
       ".master-wrapper {\n",
       "  display: flex;\n",
       "  flex-flow: row nowrap;\n",
       "  justify-content: flex-start;\n",
       "  align-items: stretch;\n",
       "}\n",
       ".master-tbl {\n",
       "  flex: 3\n",
       "}\n",
       ".master-wrapper > div {\n",
       "  margin: 4px;\n",
       "  padding: 10px;\n",
       "}\n",
       "iframe.fileview {\n",
       "  border: 0 none;\n",
       "  height: 100%;\n",
       "  width: 100%;\n",
       "  white-space: pre-wrap;\n",
       "}\n",
       ".pane-header-title {\n",
       "  width: 80%;\n",
       "  font-weight: 500;\n",
       "}\n",
       ".pane-header {\n",
       "  line-height: 1;\n",
       "  background-color: #4EC64B;\n",
       "  padding: 3px;\n",
       "}\n",
       ".pane-header .close {\n",
       "  font-size: 20px;\n",
       "  font-weight: 700;\n",
       "  float: right;\n",
       "  margin-top: -5px;\n",
       "}\n",
       ".master-wrapper .right-pane {\n",
       "  border: 1px inset silver;\n",
       "  width: 40%;\n",
       "  min-height: 300px;\n",
       "  flex: 3\n",
       "  min-width: 500px;\n",
       "}\n",
       ".master-wrapper * {\n",
       "  box-sizing: border-box;\n",
       "}\n",
       "</style><script>\n",
       "function copyToClipboard(fld) {\n",
       "    if (document.queryCommandSupported && document.queryCommandSupported('copy')) {\n",
       "        var textarea = document.createElement('textarea');\n",
       "        textarea.textContent = fld.innerHTML;\n",
       "        textarea.style.position = 'fixed';\n",
       "        document.body.appendChild(textarea);\n",
       "        textarea.select();\n",
       "\n",
       "        try {\n",
       "            return document.execCommand('copy'); // Security exception may be thrown by some browsers.\n",
       "        } catch (ex) {\n",
       "\n",
       "        } finally {\n",
       "            document.body.removeChild(textarea);\n",
       "        }\n",
       "    }\n",
       "}\n",
       "function expandPanel(el) {\n",
       "  const panelName = \"#\" + el.getAttribute('paneName');\n",
       "\n",
       "  // Get the base URL of the current notebook\n",
       "  var baseUrl = window.location.origin;\n",
       "\n",
       "  // Construct the full URL\n",
       "  var fullUrl = new URL(el.title, baseUrl).href;\n",
       "\n",
       "  document.querySelector(panelName + \"-title\").innerHTML = fullUrl\n",
       "  iframe = document.querySelector(panelName + \"-body\");\n",
       "\n",
       "  const tblcss = `<style> body { font-family: Arial, Helvetica, sans-serif;}\n",
       "    #csv { margin-bottom: 15px; }\n",
       "    #csv table { border-collapse: collapse;}\n",
       "    #csv table td { padding: 4px 8px; border: 1px solid silver;} </style>`;\n",
       "\n",
       "  function csvToHtmlTable(str) {\n",
       "    return '<div id=\"csv\"><table><tr><td>' +  str.replace(/[\\n\\r]+$/g, '').replace(/[\\n\\r]+/g, '</td></tr><tr><td>')\n",
       "      .replace(/,/g, '</td><td>') + '</td></tr></table></div>';\n",
       "  }\n",
       "\n",
       "  function reqListener () {\n",
       "    if (fullUrl.endsWith(\".csv\")) {\n",
       "      iframe.setAttribute(\"srcdoc\", tblcss + csvToHtmlTable(this.responseText));\n",
       "    } else {\n",
       "      iframe.setAttribute(\"srcdoc\", this.responseText);\n",
       "    }\n",
       "    console.log(this.responseText);\n",
       "  }\n",
       "\n",
       "  const oReq = new XMLHttpRequest();\n",
       "  oReq.addEventListener(\"load\", reqListener);\n",
       "  oReq.open(\"GET\", fullUrl);\n",
       "  oReq.send();\n",
       "\n",
       "\n",
       "  //iframe.src = fullUrl;\n",
       "  const resultPane = document.querySelector(panelName + \"-pane\");\n",
       "  if (resultPane.classList.contains(\"hidden\")) {\n",
       "    resultPane.classList.remove(\"hidden\");\n",
       "  }\n",
       "}\n",
       "function closePanel(el) {\n",
       "  const panelName = \"#\" + el.getAttribute('paneName')\n",
       "  const resultPane = document.querySelector(panelName + \"-pane\");\n",
       "  if (!resultPane.classList.contains(\"hidden\")) {\n",
       "    resultPane.classList.add(\"hidden\");\n",
       "  }\n",
       "}\n",
       "\n",
       "</script>\n",
       "<div class=\"master-wrapper\">\n",
       "  <div class=\"block master-tbl\"><div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th>project</th>\n",
       "      <th>uid</th>\n",
       "      <th>iter</th>\n",
       "      <th>start</th>\n",
       "      <th>state</th>\n",
       "      <th>kind</th>\n",
       "      <th>name</th>\n",
       "      <th>labels</th>\n",
       "      <th>inputs</th>\n",
       "      <th>parameters</th>\n",
       "      <th>results</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>rag</td>\n",
       "      <td><div title=\"9c105c7feafb4fb6b60ac7c49c11fbe2\"><a href=\"https://dashboard.default-tenant.app.llm-3-6-0.iguazio-cd1.com/mlprojects/rag/jobs/monitor/9c105c7feafb4fb6b60ac7c49c11fbe2/overview\" target=\"_blank\" >...9c11fbe2</a></div></td>\n",
       "      <td>0</td>\n",
       "      <td>Jan 21 06:58:25</td>\n",
       "      <td>completed</td>\n",
       "      <td>run</td>\n",
       "      <td>index-index-urls</td>\n",
       "      <td><div class=\"dictlist\">v3io_user=edmond</div><div class=\"dictlist\">kind=local</div><div class=\"dictlist\">owner=edmond</div><div class=\"dictlist\">host=jupyter-edmond-769c6b57f6-6xpgz</div></td>\n",
       "      <td><div title=\"store://datasets/rag/to-index#0@608179dd-88fa-4f22-9bf0-182c05487094^e91e1fcb0d50e5dcbfd27636132648929c5fb20a\">data</div></td>\n",
       "      <td><div class=\"dictlist\">url_column=url</div><div class=\"dictlist\">embeddings_class=langchain_openai.embeddings.OpenAIEmbeddings</div><div class=\"dictlist\">embeddings_kwargs={'model': 'text-embedding-3-small'}</div><div class=\"dictlist\">milvus_host=my-milvus.default.svc.cluster.local</div><div class=\"dictlist\">milvus_port=19530</div><div class=\"dictlist\">chunk_size=2000</div><div class=\"dictlist\">chunk_overlap=200</div></td>\n",
       "      <td></td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div></div>\n",
       "  <div id=\"result27e89047-pane\" class=\"right-pane block hidden\">\n",
       "    <div class=\"pane-header\">\n",
       "      <span id=\"result27e89047-title\" class=\"pane-header-title\">Title</span>\n",
       "      <span onclick=\"closePanel(this)\" paneName=\"result27e89047\" class=\"close clickable\">&times;</span>\n",
       "    </div>\n",
       "    <iframe class=\"fileview\" id=\"result27e89047-body\"></iframe>\n",
       "  </div>\n",
       "</div>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<b> > to track results use the .show() or .logs() methods  or <a href=\"https://dashboard.default-tenant.app.llm-3-6-0.iguazio-cd1.com/mlprojects/rag/jobs/monitor-jobs/index-index-urls/9c105c7feafb4fb6b60ac7c49c11fbe2/overview\" target=\"_blank\">click here</a> to open in UI</b>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2025-01-21 06:58:29,529 [info] Run execution finished: {\"name\":\"index-index-urls\",\"status\":\"completed\"}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<mlrun.model.RunObject at 0x7f92175d7280>"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Run the indexing function you wrote above\n",
    "project.run_function(\n",
    "    index_fn,\n",
    "    inputs={\"data\": dataset_artifact.uri},\n",
    "    params={\n",
    "        \"url_column\": \"url\",\n",
    "        \"embeddings_class\": embeddings_class,\n",
    "        \"embeddings_kwargs\": embeddings_kwargs,\n",
    "        \"milvus_host\": MILVUS_CONNECTION_ARGS[\"host\"],\n",
    "        \"milvus_port\": MILVUS_CONNECTION_ARGS[\"port\"],\n",
    "        \"chunk_size\": 2000,\n",
    "        \"chunk_overlap\": 200,\n",
    "    },\n",
    "    local=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c4b53a2e-bcef-4919-8387-760cdfc2f96c",
   "metadata": {},
   "source": [
    "## Create a retrieval function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "299d079c-1976-4d35-a9b8-b447bbb06b2e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting retrieval_new.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile retrieval_new.py\n",
    "import mlrun\n",
    "import os\n",
    "from typing import Dict, Any, Union\n",
    "from langchain_core.language_models.llms import LLM\n",
    "from mlrun.utils import create_class\n",
    "from langchain_community.vectorstores import Milvus\n",
    "\n",
    "class QueryMilvus:\n",
     "    \"\"\"A class to query the Milvus vector store and retrieve relevant documents.\"\"\"\n",
    "    def __init__(\n",
    "        self,\n",
    "        project: str,\n",
    "        embeddings_class: str,\n",
    "        embeddings_kwargs: dict,\n",
    "        milvus_connection_args: dict,\n",
    "        num_documents: int = 3,\n",
    "    ):\n",
    "        self.project = mlrun.get_or_create_project(project, \"./\")\n",
    "        self.embeddings_class = embeddings_class\n",
    "        self.embeddings_kwargs = embeddings_kwargs\n",
    "        self.embeddings = create_class(self.embeddings_class)(**self.embeddings_kwargs)\n",
    "        self.milvus_connection_args = milvus_connection_args\n",
    "        self.vector_db = Milvus(\n",
    "            embedding_function=self.embeddings,\n",
    "            connection_args=self.milvus_connection_args,\n",
    "            auto_id=True\n",
    "        )\n",
    "        self.collection = self.project.get_vector_store_collection(vector_store=self.vector_db)\n",
    "        self.num_documents = num_documents\n",
    "        \n",
    "    def do(self, event: dict):\n",
    "        \"\"\"\n",
    "        The main function to query the Milvus vector store and retrieve relevant documents.\n",
    "        \"\"\"\n",
    "        question = event[\"question\"]\n",
    "        num_documents = event.get(\"num_documents\", self.num_documents)\n",
    "        docs = self.collection.similarity_search(question, k=num_documents)\n",
    "        event[\"context\"] = \"\\n\\n\".join(doc.page_content for doc in docs)\n",
    "        event[\"sources\"] = list({doc.metadata[\"source\"] for doc in docs})\n",
    "        return event\n",
    "\n",
     "\n",
     "class FormatPrompt:\n",
     "    \"\"\"A class to format the prompt for the LLM model.\"\"\"\n",
     "    def __init__(self, prompt: str):\n",
     "        self.prompt = prompt\n",
     "\n",
     "    def do(self, event: dict):\n",
     "        # Fill the prompt template with the retrieved context and the question\n",
     "        formatted_prompt = self.prompt.format(**event)\n",
     "        event[\"inputs\"] = [formatted_prompt]\n",
     "        return event\n",
     "\n",
     "\n",
    "class LangChainModelServer(mlrun.serving.V2ModelServer):\n",
     "    \"\"\"A class to serve a LangChain model using MLRun serving.\"\"\"\n",
    "    def __init__(\n",
    "        self,\n",
    "        context: mlrun.MLClientCtx = None,\n",
    "        model_class: str = None,\n",
    "        llm: Union[str, LLM] = None,\n",
    "        init_kwargs: Dict[str, Any] = None,\n",
    "        generation_kwargs: Dict[str, Any] = None,\n",
    "        name: str = None,\n",
    "        model_path: str = None,\n",
    "        **kwargs,\n",
    "    ):\n",
     "        \"\"\"\n",
     "        Initialize a serving class for general LLM usage.\n",
     "        :param model_class:       The class of the model to use.\n",
     "        :param llm:               The name of the specific LLM to use, or the LLM object itself for local usage.\n",
     "        :param init_kwargs:       The initialization arguments to use when initializing the LLM.\n",
     "        :param generation_kwargs: The generation arguments to use when generating text.\n",
     "        \"\"\"\n",
    "        super().__init__(\n",
    "            name=name,\n",
    "            context=context,\n",
    "            model_path=model_path\n",
    "        )\n",
    "        self.model_class = model_class\n",
    "        self.llm = llm\n",
    "        self.init_kwargs = init_kwargs or {}\n",
    "        self.generation_kwargs = generation_kwargs\n",
    "\n",
     "    def load(self):\n",
     "        # If the llm is already an LLM object, use it directly\n",
     "        if isinstance(self.llm, LLM):\n",
     "            self.model = self.llm\n",
     "            return\n",
     "        # Otherwise, instantiate the model class from LangChain with the given init kwargs\n",
     "        self.model = create_class(self.model_class)(**self.init_kwargs)\n",
    "\n",
     "    def predict(self, request: Dict[str, Any], generation_kwargs: Dict[str, Any] = None):\n",
     "        # Attach the V3IO session cookie so that requests routed through the\n",
     "        # platform's API gateway are authenticated\n",
     "        access_key = os.environ[\"V3IO_ACCESS_KEY\"]\n",
     "        headers = {\"Cookie\": 'session=j:{\"sid\": \"' + access_key + '\"}'}\n",
     "        inputs = request.get(\"inputs\", [])\n",
     "        generation_kwargs = generation_kwargs or self.generation_kwargs\n",
     "        return self.model.invoke(\n",
     "            input=inputs[0], config=generation_kwargs, headers=headers\n",
     "        ).dict()"
   ]
  },
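   {
    "cell_type": "markdown",
    "id": "8b9c0d1e-2f3a-4b5c-9d6e-7f8a9b0c1d06",
    "metadata": {},
    "source": [
     "Each step in the serving graph receives the event dictionary, enriches it, and passes it on: `QueryMilvus` adds `context` and `sources`, `FormatPrompt` renders the prompt template into `inputs`, and the model server generates the answer. With the retrieval stubbed out, the flow through the first two steps looks like this (illustrative only):\n",
     "\n",
     "```python\n",
     "event = {\"question\": \"What is MLRun?\"}\n",
     "\n",
     "# QueryMilvus.do(event) adds the retrieved context and its sources (stubbed here)\n",
     "event[\"context\"] = \"MLRun is an open-source MLOps orchestration framework.\"\n",
     "event[\"sources\"] = [\"https://docs.mlrun.org/en/latest/index.html\"]\n",
     "\n",
     "# FormatPrompt.do(event) renders the prompt template into event[\"inputs\"]\n",
     "event[\"inputs\"] = [PROMPT_TEMPLATE.format(**event)]\n",
     "```"
    ]
   },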
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "6bd40818-a5a5-471e-b400-2c446b5bed3c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/svg+xml": [
       "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n",
       "<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n",
       " \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n",
       "<!-- Generated by graphviz version 2.43.0 (0)\n",
       " -->\n",
       "<!-- Title: mlrun&#45;flow Pages: 1 -->\n",
       "<svg width=\"941pt\" height=\"84pt\"\n",
       " viewBox=\"0.00 0.00 940.82 84.00\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n",
       "<g id=\"graph0\" class=\"graph\" transform=\"scale(1 1) rotate(0) translate(4 80)\">\n",
       "<title>mlrun&#45;flow</title>\n",
       "<polygon fill=\"white\" stroke=\"transparent\" points=\"-4,4 -4,-80 936.82,-80 936.82,4 -4,4\"/>\n",
       "<g id=\"clust1\" class=\"cluster\">\n",
       "<title>cluster_ModelRouter</title>\n",
       "<polygon fill=\"none\" stroke=\"black\" points=\"469.57,-8 469.57,-68 924.82,-68 924.82,-8 469.57,-8\"/>\n",
       "</g>\n",
       "<!-- _start -->\n",
       "<g id=\"node1\" class=\"node\">\n",
       "<title>_start</title>\n",
       "<polygon fill=\"lightgrey\" stroke=\"black\" points=\"38.55,-20.05 40.7,-20.15 42.83,-20.3 44.92,-20.49 46.98,-20.74 48.99,-21.03 50.95,-21.36 52.84,-21.75 54.66,-22.18 56.4,-22.65 58.06,-23.16 59.63,-23.71 61.11,-24.31 62.49,-24.94 63.76,-25.61 64.93,-26.31 65.99,-27.04 66.93,-27.8 67.77,-28.59 68.48,-29.41 69.09,-30.25 69.58,-31.11 69.95,-31.99 70.21,-32.89 70.36,-33.8 70.4,-34.72 70.33,-35.65 70.16,-36.59 69.89,-37.53 69.53,-38.47 69.07,-39.41 68.52,-40.35 67.89,-41.28 67.18,-42.2 66.4,-43.11 65.55,-44.01 64.63,-44.89 63.65,-45.75 62.62,-46.59 61.53,-47.41 60.4,-48.2 59.23,-48.96 58.02,-49.69 56.78,-50.39 55.5,-51.06 54.2,-51.69 52.88,-52.29 51.53,-52.84 50.17,-53.35 48.79,-53.82 47.4,-54.25 46,-54.64 44.59,-54.97 43.17,-55.26 41.75,-55.51 40.32,-55.7 38.89,-55.85 37.45,-55.95 36.02,-56 34.58,-56 33.15,-55.95 31.71,-55.85 30.28,-55.7 28.85,-55.51 27.43,-55.26 26.01,-54.97 24.6,-54.64 23.2,-54.25 21.81,-53.82 20.43,-53.35 19.07,-52.84 17.72,-52.29 16.4,-51.69 15.1,-51.06 13.82,-50.39 12.58,-49.69 11.37,-48.96 10.2,-48.2 9.07,-47.41 7.98,-46.59 6.95,-45.75 5.97,-44.89 5.05,-44.01 4.2,-43.11 3.42,-42.2 2.71,-41.28 2.08,-40.35 1.53,-39.41 1.07,-38.47 0.71,-37.53 0.44,-36.59 0.27,-35.65 0.2,-34.72 0.24,-33.8 0.39,-32.89 0.65,-31.99 1.02,-31.11 1.51,-30.25 2.11,-29.41 2.83,-28.59 3.66,-27.8 4.61,-27.04 5.67,-26.31 6.84,-25.61 8.11,-24.94 9.49,-24.31 10.97,-23.71 12.54,-23.16 14.2,-22.65 15.94,-22.18 17.76,-21.75 19.65,-21.36 21.61,-21.03 23.62,-20.74 25.68,-20.49 27.77,-20.3 29.9,-20.15 32.05,-20.05 34.22,-20 36.38,-20 38.55,-20.05\"/>\n",
       "<text text-anchor=\"middle\" x=\"35.3\" y=\"-34.3\" font-family=\"Times,serif\" font-size=\"14.00\">start</text>\n",
       "</g>\n",
       "<!-- QueryMilvus -->\n",
       "<g id=\"node2\" class=\"node\">\n",
       "<title>QueryMilvus</title>\n",
       "<ellipse fill=\"none\" stroke=\"black\" cx=\"176.79\" cy=\"-38\" rx=\"70.39\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"176.79\" y=\"-34.3\" font-family=\"Times,serif\" font-size=\"14.00\">QueryMilvus</text>\n",
       "</g>\n",
       "<!-- _start&#45;&gt;QueryMilvus -->\n",
       "<g id=\"edge1\" class=\"edge\">\n",
       "<title>_start&#45;&gt;QueryMilvus</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M69.71,-38C77.88,-38 86.97,-38 96.3,-38\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"96.48,-41.5 106.48,-38 96.48,-34.5 96.48,-41.5\"/>\n",
       "</g>\n",
       "<!-- FormatPrompt -->\n",
       "<g id=\"node3\" class=\"node\">\n",
       "<title>FormatPrompt</title>\n",
       "<ellipse fill=\"none\" stroke=\"black\" cx=\"362.28\" cy=\"-38\" rx=\"79.09\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"362.28\" y=\"-34.3\" font-family=\"Times,serif\" font-size=\"14.00\">FormatPrompt</text>\n",
       "</g>\n",
       "<!-- QueryMilvus&#45;&gt;FormatPrompt -->\n",
       "<g id=\"edge2\" class=\"edge\">\n",
       "<title>QueryMilvus&#45;&gt;FormatPrompt</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M247.16,-38C255.55,-38 264.2,-38 272.82,-38\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"272.9,-41.5 282.9,-38 272.9,-34.5 272.9,-41.5\"/>\n",
       "</g>\n",
       "<!-- ModelRouter -->\n",
       "<g id=\"node4\" class=\"node\">\n",
       "<title>ModelRouter</title>\n",
       "<polygon fill=\"none\" stroke=\"black\" points=\"636.23,-30.54 636.23,-45.46 590.96,-56 526.95,-56 481.68,-45.46 481.68,-30.54 526.95,-20 590.96,-20 636.23,-30.54\"/>\n",
       "<polygon fill=\"none\" stroke=\"black\" points=\"640.23,-27.37 640.23,-48.63 591.42,-60 526.49,-60 477.69,-48.63 477.69,-27.37 526.49,-16 591.42,-16 640.23,-27.37\"/>\n",
       "<text text-anchor=\"middle\" x=\"558.96\" y=\"-34.3\" font-family=\"Times,serif\" font-size=\"14.00\">ModelRouter</text>\n",
       "</g>\n",
       "<!-- FormatPrompt&#45;&gt;ModelRouter -->\n",
       "<g id=\"edge4\" class=\"edge\">\n",
       "<title>FormatPrompt&#45;&gt;ModelRouter</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M441.62,-38C450.13,-38 458.83,-38 467.47,-38\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"467.55,-41.5 477.55,-38 467.55,-34.5 467.55,-41.5\"/>\n",
       "</g>\n",
       "<!-- ModelRouter/LangChainModelServer -->\n",
       "<g id=\"node5\" class=\"node\">\n",
       "<title>ModelRouter/LangChainModelServer</title>\n",
       "<ellipse fill=\"none\" stroke=\"black\" cx=\"796.58\" cy=\"-38\" rx=\"120.48\" ry=\"18\"/>\n",
       "<text text-anchor=\"middle\" x=\"796.58\" y=\"-34.3\" font-family=\"Times,serif\" font-size=\"14.00\">LangChainModelServer</text>\n",
       "</g>\n",
       "<!-- ModelRouter&#45;&gt;ModelRouter/LangChainModelServer -->\n",
       "<g id=\"edge3\" class=\"edge\">\n",
       "<title>ModelRouter&#45;&gt;ModelRouter/LangChainModelServer</title>\n",
       "<path fill=\"none\" stroke=\"black\" d=\"M640.39,-38C648.72,-38 657.34,-38 666.04,-38\"/>\n",
       "<polygon fill=\"black\" stroke=\"black\" points=\"666.28,-41.5 676.28,-38 666.28,-34.5 666.28,-41.5\"/>\n",
       "</g>\n",
       "</g>\n",
       "</svg>\n"
      ],
      "text/plain": [
       "<graphviz.graphs.Digraph at 0x7fc0c072f3a0>"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Set the retrieval function\n",
    "rag_fn = project.set_function(\n",
    "    name=\"rag\",\n",
    "    func=\"retrieval_new.py\",\n",
    "    kind=\"serving\",\n",
    "    image=\"gcr.io/iguazio/rag-deploy:1.0\",\n",
    ")\n",
    "\n",
    "rag_fn.set_envs({\"OPENAI_API_KEY\": OPENAI_API_KEY, \"OPENAI_BASE_URL\": OPENAI_BASE_URL})\n",
    "\n",
    "# Create the workflow and start connecting steps\n",
    "graph = rag_fn.set_topology(\"flow\", engine=\"async\")\n",
    "\n",
    "graph.add_step(\n",
    "    class_name=\"QueryMilvus\",\n",
    "    project=project.name,\n",
    "    embeddings_class=embeddings_class,\n",
    "    embeddings_kwargs=embeddings_kwargs,\n",
    "    milvus_connection_args=MILVUS_CONNECTION_ARGS,\n",
    ")\n",
    "graph.add_step(class_name=\"FormatPrompt\", prompt=PROMPT_TEMPLATE, after=\"$prev\")\n",
    "\n",
    "router = graph.add_step(\n",
    "    \"*mlrun.serving.ModelRouter\",\n",
    "    name=\"ModelRouter\",\n",
    "    after=\"$prev\",\n",
    "    result_path=\"prediction\",\n",
    ").respond()\n",
    "router.add_route(\n",
    "    key=\"LangChainModelServer\",\n",
    "    class_name=\"LangChainModelServer\",\n",
    "    model_class=llm_class,\n",
    "    init_kwargs=llm_kwargs,\n",
    "    result_path=\"output\",\n",
    "    after=\"$prev\",\n",
    ")\n",
    "graph.plot(rankdir=\"LR\")"
   ]
  },
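  {
   "cell_type": "markdown",
   "id": "7d3e9a14-52b6-4c8f-9e01-2a4b6c8d0e1f",
   "metadata": {},
   "source": [
    "The step classes referenced above (`QueryMilvus`, `FormatPrompt`) live in `retrieval_new.py`. As a rough sketch of the pattern (the names and fields below are illustrative, not the actual file contents), a custom graph step is a class whose `do` method receives the event body, transforms it, and returns it:\n",
    "\n",
    "```python\n",
    "# Illustrative sketch of a custom MLRun graph step - not the actual\n",
    "# contents of retrieval_new.py\n",
    "class FormatPrompt:\n",
    "    def __init__(self, prompt: str):\n",
    "        # Template with {question} and {context} placeholders\n",
    "        self.prompt = prompt\n",
    "\n",
    "    def do(self, event: dict) -> dict:\n",
    "        # Fill the prompt template with the retrieved context and question,\n",
    "        # then pass the event along to the next step in the graph\n",
    "        event[\"inputs\"] = [\n",
    "            self.prompt.format(\n",
    "                question=event[\"question\"],\n",
    "                context=\"\\n\".join(event.get(\"context\", [])),\n",
    "            )\n",
    "        ]\n",
    "        return event\n",
    "```"
   ]
  },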
  {
   "cell_type": "markdown",
   "id": "42c07df9-566e-4ce4-a7fc-3e339bf2dc40",
   "metadata": {},
   "source": [
    "## Test locally\n",
    "\n",
    "Before deploying, use a mock server to run the full serving graph in-process and validate the pipeline end to end."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "c99d2d72-c771-49c6-a634-d876deb96930",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2024-12-29 10:07:45,136 [info] Project loaded successfully: {\"project_name\":\"rag\"}\n",
      "> 2024-12-29 10:07:45,188 [info] model LangChainModelServer was loaded\n",
      "> 2024-12-29 10:07:45,189 [info] Loaded ['LangChainModelServer']\n",
      "CPU times: user 118 ms, sys: 49.8 ms, total: 168 ms\n",
      "Wall time: 4.39 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "mock = rag_fn.to_mock_server()\n",
    "resp = mock.test(\n",
    "    path=\"/\",\n",
    "    body={\"question\": \"Give me a python example of how to deploy a serving function\"},\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "4b964c78-c07a-4c59-82ed-3f1cd4f59613",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Here is an example of how to deploy a serving function in Python:\n",
      "\n",
      "```python\n",
      "import mlrun\n",
      "\n",
      "# Define the serving function\n",
      "def main():\n",
      "    # Create a project and set up the serving topology\n",
      "    project = mlrun.serving.start_project()\n",
      "    serving_fn = project.set_function(\n",
      "        func=\"\",\n",
      "        name=\"serving\",\n",
      "        image=\"mlrun/mlrun\",\n",
      "        kind=\"serving\",\n",
      "        requirements=[\"scikit-learn~=1.5.1\"],\n",
      "    )\n",
      "\n",
      "    # Add a model to the serving function\n",
      "    serving_fn.add_model(\n",
      "        \"cancer-classifier\",\n",
      "        model_path=\"path/to/model\",\n",
      "        class_name=\"mlrun.frameworks.sklearn.SKLearnModelServer\",\n",
      "    )\n",
      "\n",
      "    # Create a mock server and test the endpoint\n",
      "    server = serving_fn.to_mock_server()\n",
      "    server.test(\"/v2/models/\", method=\"GET\")\n",
      "```\n",
      "\n",
      "Thanks for asking!\n"
     ]
    }
   ],
   "source": [
    "print(resp[\"prediction\"][\"outputs\"][\"content\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "94c2604d-6bb5-4199-b517-a42a38207a1c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['rag/2e588a01b0dc4806bf2b8592a047b0f2', 'rag/dad80a1867c74cf4aef40e3866362d71', 'rag/539a329744374d908bee3eb3d83ff8b8']\n"
     ]
    }
   ],
   "source": [
    "print(resp[\"sources\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2a70d6fb-e467-4e00-8b19-c2604683bc1a",
   "metadata": {},
   "source": [
    "## Deploy to an endpoint\n",
    "\n",
    "Once the pipeline behaves as expected locally, deploy it as a real-time Nuclio function and invoke it over HTTP."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "635ea772-f047-4458-bf67-ac7cbf7503c8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2024-12-29 10:08:05,916 [info] Starting remote function deploy\n",
      "2024-12-29 10:08:06  (info) Deploying function\n",
      "2024-12-29 10:08:06  (info) Building\n",
      "2024-12-29 10:08:06  (info) Staging files and preparing base images\n",
      "2024-12-29 10:08:06  (warn) Using user provided base image, runtime interpreter version is provided by the base image\n",
      "2024-12-29 10:08:06  (info) Building processor image\n",
      "2024-12-29 10:11:11  (info) Build complete\n",
      "2024-12-29 10:11:57  (info) Function deploy complete\n",
      "> 2024-12-29 10:11:57,982 [info] Successfully deployed function: {\"external_invocation_urls\":[\"rag-rag.default-tenant.app.llm-3-6-0.iguazio-cd1.com/\"],\"internal_invocation_urls\":[\"nuclio-rag-rag.default-tenant.svc.cluster.local:8080\"]}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "DeployStatus(state=ready, outputs={'endpoint': 'http://rag-rag.default-tenant.app.llm-3-6-0.iguazio-cd1.com/', 'name': 'rag-rag'})"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "project.deploy_function(rag_fn)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "c07a0811-94c3-4da6-b40f-f427ca4f984e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2024-12-29 10:12:05,853 [info] Invoking function: {\"method\":\"POST\",\"path\":\"http://nuclio-rag-rag.default-tenant.svc.cluster.local:8080/\"}\n",
      "CPU times: user 9 ms, sys: 90 µs, total: 9.09 ms\n",
      "Wall time: 1.84 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "resp2 = rag_fn.invoke(path=\"/\", body={\"question\": \"What is MLRun?\"})"
   ]
  },
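  {
   "cell_type": "markdown",
   "id": "b1f4c2d8-7e35-4a60-8d12-3c5e7f9a0b2c",
   "metadata": {},
   "source": [
    "The endpoint can also be called from outside the cluster with any HTTP client, using the external invocation URL printed by the deploy step (substitute the URL from your own deployment):\n",
    "\n",
    "```python\n",
    "import requests\n",
    "\n",
    "# Use the external invocation URL reported by your own deployment\n",
    "resp = requests.post(\n",
    "    \"http://rag-rag.default-tenant.app.llm-3-6-0.iguazio-cd1.com/\",\n",
    "    json={\"question\": \"What is MLRun?\"},\n",
    ")\n",
    "print(resp.json()[\"prediction\"][\"outputs\"][\"content\"])\n",
    "```"
   ]
  },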
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "8d6354ef-0c37-4cf8-8b63-05ccac43eaf4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MLRun is an open-source Python framework for managing the lifecycle of machine learning models and applications. It provides a unified way to deploy, manage, and monitor machine learning models across different environments.\n",
      "\n",
      "Thanks for asking!\n"
     ]
    }
   ],
   "source": [
    "print(resp2[\"prediction\"][\"outputs\"][\"content\"])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "mlrun-base",
   "language": "python",
   "name": "conda-env-mlrun-base-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
