{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "662be8cc",
   "metadata": {},
   "source": [
    "(genai-serving-graph)=\n",
    "# Gen AI realtime serving graph\n",
    "Learn how to create a serving graph using multiple LLM calls, including specific prompt templates, inside your complete workflow.\n",
    "\n",
    "Take, for example, a customer service chatbot that receives customers' requests and gives the best answer according to the customer’s specific data and the company’s procedures.\n",
    "This requires a few LLM calls, each with its own purpose, potentially using different prompts and different LLMs.\n",
    "The first step is classification: receiving the user’s request and classifying it into one of the pre-defined flows.\n",
    "This step uses a specific prompt instructing the LLM to classify the request. The LLM's answer can be a short text or a number specifying the classified path. The LLM used at this stage does not need to generate sophisticated answers, and the invocation configuration can restrict it to very short answers. \n",
    "After the relevant flow is identified, the system can ask the customer for a description of the issue, or for the ticket ID, and either offer troubleshooting responses or return the ticket status. In this case, the LLMs and prompts are different.\n",
    "\n",
    "This page guides you through the basic steps of generating a serving graph that uses LLMs, based on the following flowchart. The customer support flow and the QA flow each have their own prompt template and attached model.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e7db1bb",
   "metadata": {},
   "source": [
    "<img src=\"../../_static/images/genai-serving-graph.png\" width=\"900\">\n",
    "\n",
    "**In this section**\n",
    "- [SDK](#sdk)\n",
    "- [Guidelines](#guidelines)\n",
    "- [Define the LLM prompt template](#define-the-llm-prompt-template)\n",
    "- [Log the LLM prompt artifacts](#log-the-llm-prompt-artifacts)\n",
    "- [Serve a graph](#serve-a-graph)\n",
    "- [Distributed pipelines](#distributed-pipelines)\n",
    "\n",
    "**See also**\n",
    "- {ref}`genai-04-llm-prompt-artifact`\n",
    "- {ref}`llm-prompt-artifacts`\n",
    "\n",
    "## SDK\n",
    "- {py:meth}`~mlrun.projects.MlrunProject.log_llm_prompt`: Log an LLM prompt artifact to the project\n",
    "- {py:meth}`~mlrun.projects.MlrunProject.log_model`: Log a model artifact and optionally upload it to the datastore\n",
    "- {py:meth}`~mlrun.runtimes.ServingRuntime.add_model`: Add a model and/or route to the function\n",
    "- {py:class}`~mlrun.serving.ModelRunnerStep`: Run multiple models on each event\n",
    "\n",
    "\n",
    "\n",
    "## Guidelines\n",
    "\n",
    "- One LLM can be used by multiple LLM prompts.\n",
    "- The `invocation_config` is specific per LLM prompt. For example, you can limit the tokens in a classification step, while other steps do not have a token limitation.\n",
    "- When the graph is deployed, each model step, which represents a model/prompt combination, is translated to a model endpoint and can be monitored individually."
   ]
  },
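  {
   "cell_type": "markdown",
   "id": "a3f1c2d9",
   "metadata": {},
   "source": [
    "For example, a minimal sketch of per-prompt invocation configurations (the values are hypothetical; the parameter names follow the OpenAI-style generation options used elsewhere on this page):\n",
    "```python\n",
    "# The classification step only needs a short label, so cap its output tokens;\n",
    "# the answering steps are free to generate full responses.\n",
    "classification_config = {\"temperature\": 0.0, \"max_tokens\": 5}\n",
    "support_config = {\"temperature\": 0.7, \"max_tokens\": 512}\n",
    "```"
   ]
  },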
  {
   "cell_type": "markdown",
   "id": "feb6ef05",
   "metadata": {},
   "source": [
    "## Define the LLM prompt template \n",
    "\n",
    "Prompt templates guide the LLM to generate responses based on user queries and on the role of this specific LLM call in the workflow.\n",
    "The name of the template is important, since you will use it subsequently in filters and searches.\n",
    "\n",
    "The prompt template format is a `list[dict]`, using variables to define the format of the prompt:\n",
    "```python\n",
    "prompt_template = [\n",
    "{ \"role\": \"system\", \"content\": \"You are a helpful assistant ...\" },\n",
    "{ \"role\": \"user\", \"content\": \"please help with this issue {user_message}\" }\n",
    "]\n",
    "```\n",
    "\n",
    "- There is no limitation on the list’s size, although common cases have two dictionaries (system and user).\n",
    "- Each content can hold plain text, a placeholder, or a combination of both.\n",
    "- Placeholder names are shared across the entire template: if a placeholder `user_message` appears in several contents, it always receives the same value.\n",
    "- The `prompt_path` / `target_path` point to a JSON file that follows the same structure as above.\n",
    "- (Optional) `arguments`: a dictionary mapping argument names to descriptions of their expected values.\n"
   ]
  },
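  {
   "cell_type": "markdown",
   "id": "b7e4d0a1",
   "metadata": {},
   "source": [
    "As a framework-free illustration of how placeholders resolve (plain Python string formatting; MLRun performs the actual injection internally when the prompt is invoked):\n",
    "```python\n",
    "prompt_template = [\n",
    "    {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
    "    {\"role\": \"user\", \"content\": \"Please help with this issue: {user_message}\"},\n",
    "]\n",
    "\n",
    "\n",
    "def render(template, **arguments):\n",
    "    # Substitute the same placeholder values into every message content.\n",
    "    return [\n",
    "        {\"role\": m[\"role\"], \"content\": m[\"content\"].format(**arguments)}\n",
    "        for m in template\n",
    "    ]\n",
    "\n",
    "\n",
    "messages = render(prompt_template, user_message=\"my login fails\")\n",
    "# messages[1][\"content\"] == \"Please help with this issue: my login fails\"\n",
    "```"
   ]
  },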
  {
   "cell_type": "markdown",
   "id": "79815f2d",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "## Log the LLM prompt artifacts\n",
    "\n",
    "LLM prompt artifacts capture a prompt definition for LLM interactions. You can log prompt artifacts to your project with an inline prompt template or from a file, together with optional metadata such as generation parameters, a legend for variable injection, and a reference to a parent model artifact.\n",
    "Prompt artifacts are uniquely defined by their LLM, prompt template, and the model generation configuration.\n",
    "\n",
    "See the parameters and examples in {py:meth}`~mlrun.projects.MlrunProject.log_llm_prompt`. \n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "94c8c4ef",
   "metadata": {},
   "source": [
    "**Example of logging directly with an inline prompt template**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e323821",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "QA_prompt = project.log_llm_prompt(\n",
    "    \"QA_prompt\",\n",
    "    prompt_template=[\n",
    "        {\n",
    "            \"role\": \"system\",\n",
    "            \"content\": \"You are a member of the QA team responsible for tracking the status of customer issues.\",\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": \"Provide the status of {issue_number}\",\n",
    "        },\n",
    "    ],\n",
    "    model_artifact=model_artifact,\n",
    "    prompt_legend={\n",
    "        \"issue_number\": {\n",
    "            \"field\": \"issue_number\",\n",
    "            \"description\": \"The issue tracking reference in the QA system\",\n",
    "        },\n",
    "    },\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1de90369",
   "metadata": {},
   "source": [
    "**Example of logging a prompt from a file**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bc92e993",
   "metadata": {},
   "outputs": [],
   "source": [
    "project.log_llm_prompt(\n",
    "    key=\"qa_prompt\",\n",
    "    prompt_path=\"prompts/template.json\",\n",
    "    prompt_legend={\n",
    "        \"question\": {\n",
    "            \"field\": \"user_question\",\n",
    "            \"description\": \"The actual question asked by the user\",\n",
    "        }\n",
    "    },\n",
    "    model_artifact=model,\n",
    "    invocation_config={\"temperature\": 0.7, \"max_tokens\": 256},\n",
    "    description=\"Q&A prompt template with user-provided question\",\n",
    "    tag=\"v2\",\n",
    "    labels={\"task\": \"qa\", \"stage\": \"experiment\"},\n",
    ")"
   ]
  },
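  {
   "cell_type": "markdown",
   "id": "c9d2e8f3",
   "metadata": {},
   "source": [
    "The file referenced by `prompt_path` holds the same `list[dict]` structure as an inline template. A hypothetical `prompts/template.json` matching the legend above could look like:\n",
    "```json\n",
    "[\n",
    "    { \"role\": \"system\", \"content\": \"You are a helpful Q&A assistant.\" },\n",
    "    { \"role\": \"user\", \"content\": \"{question}\" }\n",
    "]\n",
    "```"
   ]
  },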
  {
   "cell_type": "markdown",
   "id": "1b50fe37",
   "metadata": {},
   "source": [
    "## Serve a graph\n",
    "\n",
    "This end-to-end code example implements the call flow, illustrated above, that directs calls between customer support and QA responses.\n",
    "Models can be either local or remote (see {ref}`genai-serving`). This example uses a remote model.\n",
    "The graph uses the {py:class}`~mlrun.serving.ModelRunnerStep`, enabling the running of multiple models on each event.\n",
    "When the graph is deployed, each model step, which represents a model/prompt combination, is translated to a model endpoint.\n",
    "\n",
    "Set up the environment, import the mlrun library, and initialize the project:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c574718b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Union\n",
    "\n",
    "from mlrun.serving import Model, ModelSelector\n",
    "from my_module import MsgClassifier  # user-supplied classification helper\n",
    "\n",
    "import mlrun\n",
    "from mlrun import get_or_create_project\n",
    "\n",
    "image = \"mlrun/mlrun\"\n",
    "project_name = \"my-project\"\n",
    "project = get_or_create_project(\n",
    "    project_name, context=\"./\", user_project=True, allow_cross_project=True\n",
    ")\n",
    "\n",
    "\n",
    "class MyClassifier(ModelSelector):\n",
    "    def select(\n",
    "        self, event, available_models: list[Model]\n",
    "    ) -> Union[list[str], list[Model]]:\n",
    "        return MsgClassifier.classify(event)"
   ]
  },
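  {
   "cell_type": "markdown",
   "id": "d4a8b6c2",
   "metadata": {},
   "source": [
    "`MsgClassifier` is a user-supplied helper. A trivial keyword-based stand-in (purely illustrative; in practice this is typically another LLM call using the classification prompt) could look like:\n",
    "```python\n",
    "class MsgClassifier:\n",
    "    @staticmethod\n",
    "    def classify(event) -> list[str]:\n",
    "        # Route to the QA endpoint when the caller references a bug/ticket,\n",
    "        # otherwise to customer support. The returned names must match the\n",
    "        # endpoint names registered on the ModelRunnerStep.\n",
    "        text = str(event.get(\"user_issue\", \"\")).lower()\n",
    "        if \"bug\" in text or \"ticket\" in text or \"status\" in text:\n",
    "            return [\"qa_prompt_ep\"]\n",
    "        return [\"customer_support_endpoint\"]\n",
    "```"
   ]
  },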
  {
   "cell_type": "markdown",
   "id": "6d93e451",
   "metadata": {},
   "source": [
    "Define the OpenAI credentials and environment parameters, specify and log the model, and log the LLM prompts:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "35540679",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "from mlrun.datastore.datastore_profile import OpenAIProfile\n",
    "\n",
    "open_ai_profile = OpenAIProfile(\n",
    "    name=\"openai_profile\",\n",
    "    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n",
    "    base_url=os.environ.get(\"OPENAI_BASE_URL\"),\n",
    ")\n",
    "project.register_datastore_profile(open_ai_profile)\n",
    "model_url = \"ds://openai_profile/gpt-4o-mini\"\n",
    "\n",
    "from src.llm_prompts import customer_support_prompt_template, qa_prompt_template\n",
    "\n",
    "model_artifact = project.log_model(\n",
    "    \"open-ai\",\n",
    "    model_url=model_url,\n",
    ")\n",
    "classification_prompt = project.log_llm_prompt(\n",
    "    \"classification_prompt\",\n",
    "    prompt_template=[\n",
    "        {\n",
    "            \"role\": \"system\",\n",
    "            \"content\": \"You are the first response to a customer call and need to understand whether the caller wants help with an issue or wants to get the status of an open bug. In case of a bug, extract the 'issue number' from the call.\",\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": \"The customer inquires about {user_issue}\",\n",
    "        },\n",
    "    ],\n",
    "    model_artifact=model_artifact,\n",
    "    prompt_legend={\n",
    "        \"user_issue\": {\n",
    "            \"field\": \"user_issue\",\n",
    "            \"description\": \"The original input of the user\",\n",
    "        },\n",
    "    },\n",
    ")\n",
    "QA_prompt = project.log_llm_prompt(\n",
    "    \"QA_prompt\",\n",
    "    prompt_template=[\n",
    "        {\n",
    "            \"role\": \"system\",\n",
    "            \"content\": \"You are a member of the QA team responsible for tracking the status of customer issues.\",\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": \"Provide the status of {issue_number}\",\n",
    "        },\n",
    "    ],\n",
    "    model_artifact=model_artifact,\n",
    "    prompt_legend={\n",
    "        \"issue_number\": {\n",
    "            \"field\": \"issue_number\",\n",
    "            \"description\": \"The issue tracking reference in the QA system\",\n",
    "        },\n",
    "    },\n",
    ")\n",
    "customer_support_prompt = project.log_llm_prompt(\n",
    "    \"customer_support_prompt\",\n",
    "    prompt_template=[\n",
    "        {\n",
    "            \"role\": \"system\",\n",
    "            \"content\": \"You are a helpful customer support assistant.\",\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": \"Provide helpful troubleshooting information for {user_issue}\",\n",
    "        },\n",
    "    ],\n",
    "    model_artifact=model_artifact,\n",
    "    prompt_legend={\n",
    "        \"user_issue\": {\n",
    "            \"field\": \"user_issue\",\n",
    "            \"description\": \"The original input of the user\",\n",
    "        },\n",
    "    },\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "246391ce",
   "metadata": {},
   "source": [
    "Set the serving function, configure the flow topology with the async engine, and add the ModelRunnerStep:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9c46b8db",
   "metadata": {},
   "outputs": [],
   "source": [
    "from mlrun.serving import ModelRunnerStep\n",
    "from mlrun.common.schemas.model_monitoring.constants import (\n",
    "    ModelEndpointCreationStrategy,\n",
    ")\n",
    "\n",
    "function = project.set_function(\n",
    "    name=\"chat-bot\",\n",
    "    kind=\"serving\",\n",
    "    tag=\"latest\",\n",
    "    func=\"./src/LLM_file.py\",\n",
    "    image=image,\n",
    "    requirements=[\"openai==1.77.0\"],\n",
    ")\n",
    "graph = function.set_topology(\"flow\", engine=\"async\")\n",
    "\n",
    "model_runner_step = ModelRunnerStep(\n",
    "    name=\"model_runner_step\",\n",
    "    model_selector=\"MyClassifier\",  # Classify which model should be used\n",
    ")\n",
    "\n",
    "model_runner_step.add_model(\n",
    "    endpoint_name=\"qa_prompt_ep\",\n",
    "    model_artifact=QA_prompt,\n",
    "    model_endpoint_creation_strategy=ModelEndpointCreationStrategy.OVERWRITE,\n",
    "    execution_mechanism=\"thread_pool\",\n",
    "    model_class=\"LLModel\",\n",
    ")\n",
    "model_runner_step.add_model(\n",
    "    endpoint_name=\"customer_support_endpoint\",\n",
    "    model_artifact=customer_support_prompt,\n",
    "    model_endpoint_creation_strategy=ModelEndpointCreationStrategy.OVERWRITE,\n",
    "    execution_mechanism=\"thread_pool\",\n",
    "    model_class=\"LLModel\",\n",
    ")\n",
    "\n",
    "graph.to(model_runner_step).respond()"
   ]
  },
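  {
   "cell_type": "markdown",
   "id": "e6b3f9a4",
   "metadata": {},
   "source": [
    "Before deploying, you can exercise the graph locally. A sketch, assuming the cells above have run and the OpenAI credentials are set (`to_mock_server` emulates the serving graph in-process):\n",
    "```python\n",
    "# Build an in-process emulation of the serving graph and send a test event.\n",
    "# The event body keys must match the prompt-legend fields defined above.\n",
    "server = function.to_mock_server()\n",
    "resp = server.test(body={\"user_issue\": \"What is the status of bug 1234?\"})\n",
    "print(resp)\n",
    "```\n",
    "Deploy with `project.deploy_function(function)` once the local test looks right."
   ]
  },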
  {
   "cell_type": "markdown",
   "id": "1ccf10c1",
   "metadata": {},
   "source": [
    "## Distributed pipelines\n",
    "\n",
    "By default, all steps of the serving graph run on the same pod. You can run different steps on different pods using \n",
    "{ref}`distributed pipelines<distributed-graph>`. Typically, you run steps that require a CPU on one pod, and steps that require a GPU on a \n",
    "different pod, potentially running on a different node that has GPU support."
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "main_language": "python",
   "notebook_metadata_filter": "-all"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
