{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "(genai-04-llm-prompt-artifact)=\n",
    "# Using LLM prompt templates and artifacts\n",
    "\n",
     "This tutorial illustrates how to use LLMs and prompt templates inside a complete workflow, using the LLM prompt artifact.\n",
    "\n",
     "Whenever an LLM prompt artifact is used, you must define:\n",
     "- The prompt template\n",
     "- Which LLM to use\n",
     "- The model’s generation configuration (if not using the default)\n",
    "\n",
     "This tutorial uses the `gpt-4o-mini` model from OpenAI with the default configuration (see section 3 for the available model parameters). The model is accessed remotely through the configured datasource, without having to download it first.\n",
     "We use Streamlit to create a chat front end and deploy it as an [application runtime](https://docs.mlrun.org/en/stable/runtimes/application.html).\n",
    "\n",
    "**In this tutorial**\n",
    "* [Set up the environment](#set-up-the-environment)\n",
    "* [Import mlrun library and initialize the project](#import-mlrun-library-and-initialize-the-project)\n",
    "* [Configure OpenAI profile](#configure-openai-profile)\n",
    "* [Define the prompt templates and the prompt artifact template](#define-the-prompt-templates-and-the-prompt-artifact-template)\n",
    "* [Define the function graph and add ModelRunnerStep with proxy models for the shared model](#define-the-function-graph-and-add-modelrunnerstep-with-proxy-models-for-the-shared-model)\n",
    "* [Enable tracking and deploy the function](#enable-tracking-and-deploy-the-function)\n",
    "* [Deploy the model monitoring application](#deploy-the-model-monitoring-application)\n",
    "* [Configure the Streamlit chatbot application](#configure-the-streamlit-chatbot-application)\n",
     "* [Launch the Streamlit chatbot to interact with the LLM model](#launch-the-streamlit-chatbot-to-interact-with-the-llm-model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "!pip install streamlit"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Set up the environment\n",
    "This section sets up the environment variables required for OpenAI API access, including the base URL and API key."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from dotenv import load_dotenv\n",
    "import os\n",
    "\n",
    "# Load environment variables\n",
    "load_dotenv(\"ai_gateway.env\")\n",
    "\n",
    "# Validate OpenAI credentials\n",
    "missing_vars = [\n",
    "    var for var in (\"OPENAI_API_KEY\", \"OPENAI_BASE_URL\") if not os.getenv(var)\n",
    "]\n",
    "if missing_vars:\n",
    "    raise EnvironmentError(\n",
    "        f\"Missing required environment variables: {', '.join(missing_vars)}. \"\n",
    "        \"Please ensure they are set in 'ai_gateway.env' or your system environment.\"\n",
    "    )\n",
    "\n",
    "# Set additional configuration\n",
    "os.environ[\"OPENAI_MAX_RETRIES\"] = \"100\""
   ]
  },
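   {
    "cell_type": "markdown",
    "metadata": {
     "collapsed": false
    },
    "source": [
     "For reference, `ai_gateway.env` is a standard dotenv file. A minimal sketch of its expected content (the values below are placeholders, not working credentials):\n",
     "\n",
     "```\n",
     "OPENAI_API_KEY=<your-api-key>\n",
     "OPENAI_BASE_URL=https://api.openai.com/v1\n",
     "# Optional: OPENAI_ORG_ID, OPENAI_PROJECT_ID, OPENAI_TIMEOUT\n",
     "```"
    ]
   },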
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Import mlrun library and initialize the project\n",
     "This section imports the MLRun library and initializes the project."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "%config Completer.use_jedi = False\n",
    "\n",
    "import mlrun\n",
    "from mlrun import get_or_create_project\n",
    "\n",
    "image = \"mlrun/mlrun\"\n",
    "project_name = \"llm-openai-bot\"\n",
    "project = get_or_create_project(\n",
    "    project_name, context=\"./\", user_project=True, allow_cross_project=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "This section sets up the datastore profiles for the time-series database (TSDB) and stream data,\n",
     "which are essential for monitoring model performance and detecting drift.\n",
     "You can use a datastore profile to manage datastore credentials:\n",
     "it holds all the information required to address an external data source, including credentials.\n",
     "`DatastoreProfileV3io` is used for V3IO storage, while `DatastoreProfileTDEngine` and `DatastoreProfileKafkaSource` are used in the community edition.\n",
     "Note that the recommended base period is 10 minutes; for demo purposes, this tutorial sets the base period to 1 minute."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from src.model_monitoring_utils import enable_model_monitoring\n",
    "\n",
    "enable_model_monitoring(\n",
    "    project=project, deploy_histogram_data_drift_app=False, base_period=1\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Configure OpenAI profile\n",
     "This section sets up an OpenAI profile (credentials and environment variables) and specifies the model. This tutorial uses the model `gpt-4o-mini`; you can change it to any model you want to use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from mlrun.datastore.datastore_profile import OpenAIProfile\n",
    "\n",
    "open_ai_profile = OpenAIProfile(\n",
    "    name=\"openai_profile\",\n",
    "    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n",
    "    organization=os.environ.get(\"OPENAI_ORG_ID\"),\n",
    "    project=os.environ.get(\"OPENAI_PROJECT_ID\"),\n",
    "    base_url=os.environ.get(\"OPENAI_BASE_URL\"),\n",
    "    timeout=os.environ.get(\"OPENAI_TIMEOUT\"),\n",
    "    max_retries=os.environ.get(\"OPENAI_MAX_RETRIES\"),\n",
    ")\n",
    "project.register_datastore_profile(open_ai_profile)\n",
     "model_url = \"ds://openai_profile/gpt-4o-mini\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Define the prompt templates and the prompt artifact template\n",
    "The prompt templates are defined in the [`src/llm_prompts.py`](./src/llm_prompts.py) file and include templates for the finance and sport domains.\n",
    " These templates - `finance_prompt_template` and `sport_prompt_template` - are structured to guide the LLM in generating responses based on user queries.\n",
    " Each template includes a system message that sets the context for the LLM and a user message that includes the user's ID, tone, depth level, and question.\n",
    "\n",
     "Use the `prompt_legend` parameter to specify how input fields map to the corresponding prompt placeholders and to provide descriptive metadata for each placeholder.\n",
    "\n",
    "For reference, see {py:meth}`~mlrun.projects.MlrunProject.log_llm_prompt` for how the LLM prompt artifacts are logged as part of the project."
   ]
  },
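   {
    "cell_type": "markdown",
    "metadata": {
     "collapsed": false
    },
    "source": [
     "For illustration only, a template of this shape could look like the following sketch. The wording here is an assumption; the actual templates are defined in [`src/llm_prompts.py`](./src/llm_prompts.py):\n",
     "\n",
     "```python\n",
     "# Hypothetical sketch of a prompt template (the real one lives in src/llm_prompts.py).\n",
     "# The placeholders match the keys described by the prompt_legend parameter.\n",
     "finance_prompt_template = (\n",
     "    \"System: You are a helpful financial assistant. \"\n",
     "    \"Answer at a {depth_level} level, in a {tone} tone.\\n\"\n",
     "    \"User {user_id} asks: {question}\"\n",
     ")\n",
     "```"
    ]
   },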
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from src.llm_prompts import finance_prompt_template, sport_prompt_template\n",
    "\n",
    "model_artifact = project.log_model(\n",
    "    \"open-ai\",\n",
    "    model_url=model_url,\n",
    ")\n",
    "# Create and log the finance prompt template as an LLM prompt artifact, capturing its definition and metadata\n",
    "finance_llm_prompt_artifact = project.log_llm_prompt(\n",
    "    \"finance_llm_prompt\",\n",
    "    prompt_template=finance_prompt_template,\n",
    "    model_artifact=model_artifact,\n",
    "    invocation_config={\n",
    "        \"temperature\": 0.7,\n",
    "        \"max_tokens\": 256,\n",
     "    },  # Invocation config is added to each invocation\n",
    "    prompt_legend={\n",
    "        \"question\": {\n",
    "            \"field\": \"user_query\",\n",
    "            \"description\": \"The main financial question or request the user is asking.\",\n",
    "        },\n",
    "        \"depth_level\": {\n",
    "            \"field\": \"response_detail_level\",\n",
    "            \"description\": \"Indicates the level of detail in the answer (e.g., basic, intermediate, advanced).\",\n",
    "        },\n",
    "        \"user_id\": {\n",
    "            \"field\": \"customer_id\",\n",
    "            \"description\": \"Unique identifier of the user, useful for personalization and tracking.\",\n",
    "        },\n",
    "        \"tone\": {\n",
    "            \"field\": \"reply_style\",\n",
    "            \"description\": \"The desired style of the response (e.g., formal, friendly, concise, detailed).\",\n",
    "        },\n",
    "    },\n",
    ")\n",
    "sport_llm_prompt_artifact = project.log_llm_prompt(\n",
    "    \"sport_llm_prompt\",\n",
    "    prompt_template=sport_prompt_template,\n",
    "    model_artifact=model_artifact,\n",
    "    prompt_legend={\n",
    "        \"question\": {\n",
    "            \"field\": \"user_query\",\n",
    "            \"description\": \"The main sports or fitness-related question from the user.\",\n",
    "        },\n",
    "        \"depth_level\": {\n",
    "            \"field\": \"response_detail_level\",\n",
    "            \"description\": \"Indicates how in-depth the explanation should be (e.g., beginner, intermediate, expert).\",\n",
    "        },\n",
    "        \"user_id\": {\n",
    "            \"field\": \"customer_id\",\n",
    "            \"description\": \"Unique identifier of the user, used for personalization or tracking.\",\n",
    "        },\n",
    "        \"tone\": {\n",
    "            \"field\": \"reply_style\",\n",
    "            \"description\": \"The preferred style or tone of the response (e.g., motivational, professional, casual).\",\n",
    "        },\n",
    "    },\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Define the function graph and add ModelRunnerStep with proxy models for the shared model\n",
    "\n",
    "`ModelRunnerStep` is used to run multiple models on each event.\n",
     "When a `ModelRunnerStep` is included in a function graph, MLRun automatically imports the default language model class (`LLModel`, i.e., `mlrun.serving.states.LLModel`) during function deployment to wrap the model for LLM prompt-based inference.\n",
     "This class extends the base `Model` class to provide specialized handling for `LLMPromptArtifact` objects, enabling both synchronous and asynchronous invocation of language models.\n",
     "Follow the class description and implement your own enrichment when a custom class is needed.\n",
    "\n",
    "Use the `add_shared_model` method to add a shared model to the graph — this model becomes accessible to all `ModelRunners` in the graph.\n",
    "Use `add_shared_model_proxy` to add a *proxy model* to a `ModelRunnerStep`. A proxy model acts as a lightweight reference to an existing shared model within the graph. It allows each step to reuse the same underlying shared model without duplicating it, while still being able to assign a unique endpoint name, labels, and endpoint creation strategy for tracking or monitoring purposes. This helps maintain efficiency and consistency across multiple model runners that operate on shared models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from mlrun.serving import ModelRunnerStep\n",
    "from mlrun.common.schemas.model_monitoring.constants import (\n",
    "    ModelEndpointCreationStrategy,\n",
    ")\n",
    "\n",
    "function = project.set_function(\n",
    "    name=\"open-ai-tut\",\n",
    "    kind=\"serving\",\n",
    "    tag=\"latest\",\n",
    "    func=\"./src/LLM_file.py\",\n",
    "    image=image,\n",
    "    requirements=[\"openai==1.77.0\"],\n",
    ")\n",
    "graph = function.set_topology(\"flow\", engine=\"async\")\n",
    "\n",
    "model_runner_step = ModelRunnerStep(\n",
    "    name=\"model_runner_step\", model_selector=\"MyModelSelector\"\n",
    ")\n",
    "\n",
    "graph.add_shared_model(\n",
    "    name=\"shared_llm\",\n",
    "    execution_mechanism=\"dedicated_process\",\n",
    "    model_class=\"LLModel\",\n",
    "    model_artifact=model_artifact,\n",
    "    result_path=\"outputs\",\n",
    ")\n",
    "\n",
    "model_runner_step.add_shared_model_proxy(\n",
    "    endpoint_name=\"finance_endpoint\",\n",
    "    model_artifact=finance_llm_prompt_artifact,\n",
    "    shared_model_name=\"shared_llm\",\n",
    "    model_endpoint_creation_strategy=ModelEndpointCreationStrategy.OVERWRITE,\n",
    ")\n",
    "model_runner_step.add_shared_model_proxy(\n",
    "    endpoint_name=\"sport_endpoint\",\n",
    "    model_artifact=sport_llm_prompt_artifact,\n",
    "    shared_model_name=\"shared_llm\",\n",
    "    model_endpoint_creation_strategy=ModelEndpointCreationStrategy.OVERWRITE,\n",
    ")\n",
    "\n",
    "graph.to(model_runner_step).respond()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Enable tracking and deploy the function\n",
     "This section enables model tracking, plots the function graph, and deploys the function.\n",
     "**Note:** `deploy_endpoint` holds the URL used to invoke the deployed function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "function.set_tracking(enable_tracking=True)\n",
    "graph.plot()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "deploy_endpoint = function.deploy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Deploy the model monitoring application\n",
    "This section deploys the model monitoring application, which is responsible for monitoring the performance of the LLMs that were deployed in the previous step. It uses the [monitoring_application](./src/monitoring_application.py) script to define the monitoring logic. The application is deployed using the `deploy_function` method, which makes it available for monitoring the LLMs in real time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "llm_monitoring_app = project.set_model_monitoring_function(\n",
    "    func=\"./src/monitoring_application.py\",\n",
    "    application_class=\"ModelMonitoringApplication\",\n",
    "    name=\"llm-monitoring\",\n",
    "    image=image,\n",
    ")\n",
    "\n",
    "project.deploy_function(llm_monitoring_app)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "payload = {\n",
    "    \"model_name\": \"sport_endpoint\",\n",
    "    \"user_query\": \"What can you tell me about finance ?\",\n",
    "    \"response_detail_level\": \"basic overview\",\n",
    "    \"customer_id\": 12345,\n",
    "    \"reply_style\": \"casual\",\n",
    "}\n",
    "\n",
    "function.invoke(\"\", body=json.dumps(payload).encode(\"utf-8\"))"
   ]
  },
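   {
    "cell_type": "markdown",
    "metadata": {
     "collapsed": false
    },
    "source": [
     "The payload fields above are matched to the template placeholders through the `prompt_legend` defined earlier. A minimal sketch of that mapping, for illustration only (MLRun performs this resolution internally):\n",
     "\n",
     "```python\n",
     "# Illustrative only: MLRun resolves the legend internally when invoking the model.\n",
     "# model_name selects the endpoint and is not a template placeholder.\n",
     "prompt_legend = {\n",
     "    \"question\": {\"field\": \"user_query\"},\n",
     "    \"depth_level\": {\"field\": \"response_detail_level\"},\n",
     "    \"user_id\": {\"field\": \"customer_id\"},\n",
     "    \"tone\": {\"field\": \"reply_style\"},\n",
     "}\n",
     "payload = {\n",
     "    \"model_name\": \"sport_endpoint\",\n",
     "    \"user_query\": \"What can you tell me about finance ?\",\n",
     "    \"response_detail_level\": \"basic overview\",\n",
     "    \"customer_id\": 12345,\n",
     "    \"reply_style\": \"casual\",\n",
     "}\n",
     "# Build the keyword arguments used to fill the prompt template placeholders\n",
     "template_kwargs = {\n",
     "    placeholder: payload[spec[\"field\"]]\n",
     "    for placeholder, spec in prompt_legend.items()\n",
     "}\n",
     "```"
    ]
   },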
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Configure the Streamlit chatbot application\n",
    "This section sets up a [Streamlit app](./src/streamlit_ui.py) that enables you to interact with the LLMs deployed in the previous steps. The app provides a user interface for selecting different models, tones, and depth levels, and allows users to submit questions to the LLMs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "!tar -czvf frontend_ui.tar.gz ./src/streamlit_ui.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Log the streamlit tar file as project artifact and use it as source archive\n",
    "frontend_source = project.log_artifact(\n",
    "    \"frontend_source\", local_path=\"./frontend_ui.tar.gz\", upload=True\n",
    ")\n",
    "\n",
    "ui_fn = project.set_function(\n",
    "    name=\"frontend\",\n",
    "    kind=\"application\",\n",
    "    image=\"mlrun/mlrun\",\n",
    "    requirements=[\"streamlit==1.49.1\"],\n",
    ")\n",
    "\n",
    "\n",
    "API_URL = function.get_url()\n",
    "\n",
    "# Set application spec and envs\n",
    "ui_fn.set_env(\"API_URL\", API_URL)\n",
    "ui_fn.with_source_archive(frontend_source.target_path, pull_at_runtime=False)\n",
    "ui_fn.set_internal_application_port(8000)\n",
    "ui_fn.spec.command = \"streamlit\"\n",
    "ui_fn.spec.args = [\n",
    "    \"run\",\n",
    "    \"--server.port\",\n",
    "    \"8000\",\n",
    "    \"/home/mlrun_code/src/streamlit_ui.py\",\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "## Launch the Streamlit chatbot to interact with the LLM model\n",
    "This section launches the Streamlit chatbot, providing a user-friendly interface for interacting with the deployed LLM models. Users can select the model, tone, and depth level, submit questions, and view responses in a chat-style format.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "ui_fn.deploy(with_mlrun=False, create_default_api_gateway=False)\n",
    "ui_fn.create_api_gateway(\n",
    "    name=\"llm-prompt-artifact-ui\",\n",
    "    path=\"/\",\n",
    "    direct_port_access=True,\n",
    "    ssl_redirect=True,\n",
    "    set_as_default=False,\n",
    "    authentication_mode=\"none\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "print(\n",
     "    f\"Use this address to interact with your new chatbot! https://{ui_fn.status.address}\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "![Model Architecture](./_static/images/llm-prompt-streamlit-ui.png)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
