{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(deploy-openai-model)=\n",
    "# Integrating an OpenAI LLM with MLRun\n",
    "\n",
    "This notebook demonstrates how to set up and test an OpenAI model integration with MLRun, including profile setup, model deployment, and inference testing. After running this notebook, the model is ready for production use with the configured execution mechanism and token limits.\n",
    "\n",
    "This notebook uses an OpenAI Large Language Model. You create a connection to the model, query the model, and receive responses.\n",
    "\n",
    "You could run a similar flow with a Hugging Face LLM, but it would require more resources, since Hugging Face models must be downloaded.\n",
    "\n",
    "**In this section**\n",
    "- [Import the dependencies](#import-the-dependencies)\n",
    "- [Configure the project and test the data](#configure-the-project-and-test-the-data)\n",
    "- [Set up the project and the OpenAI profile](#set-up-the-project-and-the-openai-profile)\n",
    "- [Create/log the model artifact](#log-model-artifact)\n",
    "- [Create/log the LLM prompt artifact](#log-llm-artifact)\n",
    "- [Create the serving function](#create-the-serving-function)\n",
    "- [Set up the serving graph](#set-up-the-serving-graph)\n",
    "- [Deploy the function](#deploy-the-function)\n",
    "- [Test the model inference](#test-the-model-inference)\n",
    "- [Analyze the token usage](#analyze-the-token-usage)\n",
    "\n",
    "**See also**\n",
    "- [Remote models](../../store/models.md#remote-models)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import the dependencies\n",
    "\n",
    "The MLRun imports include:\n",
    "- {py:class}`~mlrun.datastore.ModelProvider`: an abstract base class for integrating with external model providers, primarily generative AI (GenAI) services.\n",
    "- {py:class}`~mlrun.serving.ModelRunnerStep`: runs one or more models on each event.\n",
    "- {py:class}`~mlrun.serving.states.LLModel`: wraps a model for prompt-based LLM (Large Language Model) inference.\n",
    "- {py:class}`~mlrun.datastore.datastore_profile.OpenAIProfile`: a datastore profile that holds the OpenAI credentials and connection settings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2025-10-29 16:59:58,732 [warning] Failed resolving version info. Ignoring and using defaults\n",
      "> 2025-10-29 17:00:01,282 [warning] Server or client version is unstable. Assuming compatible: {\"client_version\":\"0.0.0+unstable\",\"server_version\":\"0.0.0+unstable\"}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import os\n",
    "import json\n",
    "from dotenv import load_dotenv\n",
    "import mlrun.serving\n",
    "from mlrun.datastore.model_provider.model_provider import UsageResponseKeys\n",
    "from mlrun.serving import ModelRunnerStep\n",
    "from mlrun.datastore.datastore_profile import OpenAIProfile\n",
    "\n",
    "load_dotenv(\"secrets.env\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Configure the project and test the data\n",
    "\n",
    "The project uses an OpenAI LLM (gpt-4o-mini). The `dedicated_process` execution mechanism is intended for large, CPU/GPU-intensive models that also require significant runnable-specific initialization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Project configuration\n",
    "project_name = \"openai-project\"\n",
    "image = \"mlrun/mlrun\"\n",
    "profile_name = \"my_openai_profile\"\n",
    "basic_llm_model = \"gpt-4o-mini\"\n",
    "execution_mechanism = \"dedicated_process\"\n",
    "mlrun_model_name = \"sync_invoke_model\"\n",
    "\n",
    "# Test input data\n",
    "INPUT_DATA = {\n",
    "    \"question\": \"What is the capital of France? Answer with one word first, then provide a historical overview.\",\n",
    "    \"depth_level\": \"detailed\",\n",
    "    \"persona\": \"teacher\",\n",
    "    \"tone\": \"casual\",\n",
    "}\n",
    "\n",
    "EXPECTED_RESULT = \"paris\"\n",
    "\n",
    "PROMPT_TEMPLATE = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": \"{question}. Explain {depth_level} as a {persona} in {tone} style.\",\n",
    "    }\n",
    "]"
   ]
  },
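  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate how the prompt template works, the sketch below renders the `{placeholder}` fields of a template from the matching keys of an input event using plain `str.format`. MLRun performs this substitution internally at inference time; the snippet only illustrates the mapping (with local names mirroring `PROMPT_TEMPLATE` and `INPUT_DATA`) and is not part of the deployment flow.\n",
    "\n",
    "```python\n",
    "# Hypothetical sketch: fill the prompt placeholders from the input event.\n",
    "template = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": \"{question}. Explain {depth_level} as a {persona} in {tone} style.\",\n",
    "    }\n",
    "]\n",
    "event = {\n",
    "    \"question\": \"What is the capital of France?\",\n",
    "    \"depth_level\": \"detailed\",\n",
    "    \"persona\": \"teacher\",\n",
    "    \"tone\": \"casual\",\n",
    "}\n",
    "\n",
    "# Substitute each placeholder with the matching event field\n",
    "rendered = [\n",
    "    {\"role\": m[\"role\"], \"content\": m[\"content\"].format(**event)}\n",
    "    for m in template\n",
    "]\n",
    "print(rendered[0][\"content\"])\n",
    "# → What is the capital of France?. Explain detailed as a teacher in casual style.\n",
    "```\n",
    "\n",
    "This is why each key in the `prompt_legend` (logged with the prompt artifact below) corresponds to a placeholder name in the template."
   ]
  },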
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set up the project and the OpenAI profile\n",
    "\n",
    "The MLRun project is a container for all of your work on this gen AI application. Read more about [Projects and automation](https://docs.mlrun.org/en/stable/projects/project.html).\n",
    "\n",
    "The `OpenAIProfile` is a datastore profile for credentials management. Read more about [Data store profiles](https://docs.mlrun.org/en/stable/store/datastore.html#data-store-profiles)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2025-10-29 17:00:16,954 [info] Created and saved project: {\"context\":\"./\",\"from_template\":null,\"name\":\"openai-project\",\"overwrite\":false,\"save\":true}\n",
      "> 2025-10-29 17:00:16,957 [info] Project created successfully: {\"project_name\":\"openai-project\",\"stored_in_db\":true}\n",
      "Project: openai-project\n",
      "Profile: my_openai_profile\n",
      "Model URL: ds://my_openai_profile/gpt-4o-mini\n",
      "Execution Mechanism: dedicated_process\n"
     ]
    }
   ],
   "source": [
    "# Initialize MLRun project\n",
    "project = mlrun.get_or_create_project(project_name)\n",
    "\n",
    "# Create an OpenAI profile with environment variables\n",
    "profile = OpenAIProfile(\n",
    "    name=profile_name,\n",
    "    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n",
    "    organization=os.environ.get(\"OPENAI_ORG_ID\"),\n",
    "    project=os.environ.get(\"OPENAI_PROJECT_ID\"),\n",
    "    base_url=os.environ.get(\"OPENAI_BASE_URL\"),\n",
    "    timeout=os.environ.get(\"OPENAI_TIMEOUT\"),\n",
    "    max_retries=os.environ.get(\"OPENAI_MAX_RETRIES\"),\n",
    ")\n",
    "\n",
    "# Register the profile with the project\n",
    "project.register_datastore_profile(profile)\n",
    "\n",
    "# Set up the LLM URL\n",
    "url_prefix = f\"ds://{profile_name}/\"\n",
    "model_url = url_prefix + basic_llm_model\n",
    "\n",
    "print(f\"Project: {project_name}\")\n",
    "print(f\"Profile: {profile_name}\")\n",
    "print(f\"Model URL: {model_url}\")\n",
    "print(f\"Execution Mechanism: {execution_mechanism}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(log-model-artifact)=\n",
    "## Create/log the model artifact\n",
    "\n",
    "This step logs the model artifact. See full details in {py:meth}`~mlrun.projects.MlrunProject.log_model`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model artifact created: {'spec': {'parameters': {'default_config': {'max_tokens': 100}}, 'model_url': 'ds://my_openai_profile/gpt-4o-mini', 'has_children': False, 'framework': '', 'db_key': 'sync_invoke_model', 'license': '', 'model_file': '', 'producer': {'kind': 'project', 'name': 'openai-project', 'tag': '3afd90e8-46c8-47f2-90b5-cf372c3bca1b', 'owner': 'admin'}}, 'status': {'state': 'created'}, 'kind': 'model', 'metadata': {'key': 'sync_invoke_model', 'tree': '3afd90e8-46c8-47f2-90b5-cf372c3bca1b', 'project': 'openai-project', 'iter': 0, 'uid': 'aa608f068257e4967dc62a78c58aef661f349031'}}\n"
     ]
    }
   ],
   "source": [
    "# Log the model artifact\n",
    "model_artifact = project.log_model(\n",
    "    mlrun_model_name,\n",
    "    model_url=model_url,\n",
    "    default_config={\"max_tokens\": 100},\n",
    ")\n",
    "\n",
    "print(f\"Model artifact created: {model_artifact}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(log-llm-artifact)=\n",
    "## Create/log the LLM prompt artifact\n",
    "\n",
    "{py:meth}`~mlrun.projects.MlrunProject.log_llm_prompt` creates and logs an LLMPromptArtifact that captures a prompt definition for large language model (LLM) interactions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "LLM prompt artifact created: {'spec': {'target_path': 'v3io:///projects/openai-project/artifacts/my_llm_prompt.json', 'prompt_template': [{'role': 'user', 'content': '{question}. Explain {depth_level} as a {persona} in {tone} style.'}], 'size': 98, 'has_children': False, 'parent_uri': 'store://models/openai-project/sync_invoke_model#0@3afd90e8-46c8-47f2-90b5-cf372c3bca1b^aa608f068257e4967dc62a78c58aef661f349031', 'format': 'json', 'db_key': 'my_llm_prompt', 'license': '', 'producer': {'kind': 'project', 'name': 'openai-project', 'tag': 'c3cd9d9c-9dd9-4ca9-bc29-b9836db078cb', 'owner': 'admin'}, 'prompt_legend': {'question': {'field': 'question', 'description': None}, 'depth_level': {'field': 'depth_level', 'description': None}, 'persona': {'field': 'persona', 'description': None}, 'tone': {'field': 'tone', 'description': None}}}, 'status': {'state': 'created'}, 'kind': llm-prompt, 'metadata': {'key': 'my_llm_prompt', 'hash': '24312969d4fde40522a147a1728bfe0fb5fb7755', 'tree': 'c3cd9d9c-9dd9-4ca9-bc29-b9836db078cb', 'project': 'openai-project', 'iter': 0, 'uid': 'b560cd9f8ef1e209ac2b8407eeb0dc25c2628b9f'}}\n"
     ]
    }
   ],
   "source": [
    "# Log the LLM prompt artifact\n",
    "llm_prompt_artifact = project.log_llm_prompt(\n",
    "    \"my_llm_prompt\",\n",
    "    prompt_template=PROMPT_TEMPLATE,\n",
    "    model_artifact=model_artifact,\n",
    "    prompt_legend={\n",
    "        \"question\": {\"field\": None, \"description\": None},\n",
    "        \"depth_level\": {\"field\": None, \"description\": None},\n",
    "        \"persona\": {\"field\": None, \"description\": None},\n",
    "        \"tone\": {\"field\": None, \"description\": None},\n",
    "    },\n",
    ")\n",
    "\n",
    "print(f\"LLM prompt artifact created: {llm_prompt_artifact}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create the serving function\n",
    "\n",
    "The `serving` function type is used to deploy models and higher-level real-time graphs (DAGs) over one or more Nuclio functions. See more details in [serving graphs](https://docs.mlrun.org/en/stable/serving/serving-graph.html) and {py:meth}`~mlrun.projects.MlrunProject.set_function`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting models.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile models.py\n",
    "from mlrun.serving.states import LLModel  # noqa"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Serving function created\n"
     ]
    }
   ],
   "source": [
    "function = project.set_function(\n",
    "    func=\"models.py\",\n",
    "    name=\"openai-model-test\",\n",
    "    kind=\"serving\",\n",
    "    image=image,\n",
    "    requirements=[\"openai==1.77.0\"],\n",
    ")\n",
    "print(\"Serving function created\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set up the serving graph\n",
    "The {ref}`flow-topology` topology is a full graph/DAG. In this example it uses the async engine, which is based on {py:mod}`storey.transformations` and an asynchronous event loop.\n",
    "This notebook uses the {py:class}`~mlrun.serving.ModelRunnerStep` to run the model as a graph."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Serving graph configured with dedicated_process execution mechanism\n"
     ]
    }
   ],
   "source": [
    "graph = function.set_topology(\"flow\", engine=\"async\")\n",
    "model_runner_step = ModelRunnerStep(name=\"my-model-runner\")\n",
    "model_runner_step.add_model(\n",
    "    endpoint_name=\"my_endpoint\",\n",
    "    model_class=\"LLModel\",\n",
    "    execution_mechanism=execution_mechanism,\n",
    "    model_artifact=llm_prompt_artifact,\n",
    "    result_path=\"output\",\n",
    ")\n",
    "graph.to(model_runner_step).respond()\n",
    "\n",
    "print(\"Serving graph configured with dedicated_process execution mechanism\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deploy the function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Deploying function...\n",
      "> 2025-10-29 17:00:17,230 [info] Starting remote function deploy\n",
      "2025-10-29 17:00:17  (info) Deploying function\n",
      "2025-10-29 17:00:17  (info) Building\n",
      "2025-10-29 17:00:17  (info) Staging files and preparing base images\n",
      "2025-10-29 17:00:17  (warn) Using user provided base image, runtime interpreter version is provided by the base image\n",
      "2025-10-29 17:00:17  (info) Building processor image\n",
      "2025-10-29 17:02:22  (info) Build complete\n",
      "2025-10-29 17:02:52  (info) Function deploy complete\n",
      "> 2025-10-29 17:03:02,508 [info] Model endpoint creation task completed with state succeeded\n",
      "> 2025-10-29 17:03:02,509 [info] Successfully deployed function: {\"external_invocation_urls\":[\"openai-project-openai-model-test.default-tenant.app.vmdev68.lab.iguazeng.com/\"],\"internal_invocation_urls\":[\"nuclio-openai-project-openai-model-test.default-tenant.svc.cluster.local:8080\"]}\n",
      "Function deployed successfully!\n"
     ]
    }
   ],
   "source": [
    "# Deploy the function\n",
    "print(\"Deploying function...\")\n",
    "function.deploy()\n",
    "print(\"Function deployed successfully!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Test the model inference\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Response received:\n",
      "Response length: 2\n",
      "\n",
      "Response structure:\n",
      "  - answer\n",
      "  - usage\n"
     ]
    }
   ],
   "source": [
    "# Test the model with the input data\n",
    "response = function.invoke(\n",
    "    f\"v2/models/{mlrun_model_name}/infer\",\n",
    "    json.dumps(INPUT_DATA),\n",
    ")[\"output\"]\n",
    "\n",
    "print(\"Response received:\")\n",
    "print(f\"Response length: {len(response)}\")\n",
    "print(\"\\nResponse structure:\")\n",
    "for key in response.keys():\n",
    "    print(f\"  - {key}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Answer:\n",
      "Paris.\n",
      "\n",
      "Alright, let's dive into the historical journey of Paris! \n",
      "\n",
      "Paris, known as the \"City of Light,\" has a history that stretches back over 2,000 years. It all began with a group of people called the Parisii, a Celtic tribe that settled on the banks of the Seine River around the 3rd century BC. They established a small fishing village that gradually developed into a bustling trade center.\n",
      "\n",
      "By the 1st century BC, the Romans took notice of this growing\n",
      "\n",
      "Expected keyword: paris\n",
      "Contains expected result: True\n"
     ]
    }
   ],
   "source": [
    "# Extract and display the answer\n",
    "answer = response[UsageResponseKeys.ANSWER]\n",
    "print(\"Answer:\")\n",
    "print(answer)\n",
    "print(f\"\\nExpected keyword: {EXPECTED_RESULT}\")\n",
    "print(f\"Contains expected result: {EXPECTED_RESULT in answer.lower()}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Analyze the token usage"
   ]
  },
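  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `usage` payload in the response follows the OpenAI token-accounting convention, where the total is the sum of the prompt and completion tokens. The sketch below illustrates this invariant with a hypothetical payload mirroring the values from the sample run; the real values are read from the response in the next cell.\n",
    "\n",
    "```python\n",
    "# Hypothetical usage payload, mirroring the OpenAI-style fields in the response\n",
    "usage = {\"completion_tokens\": 100, \"prompt_tokens\": 35, \"total_tokens\": 135}\n",
    "\n",
    "# Sanity check: total tokens = prompt tokens + completion tokens\n",
    "assert usage[\"total_tokens\"] == usage[\"prompt_tokens\"] + usage[\"completion_tokens\"]\n",
    "```\n",
    "\n",
    "Note that in the sample run, `completion_tokens` equals the `max_tokens` limit of 100 configured on the model artifact, which is why the answer above is cut off mid-sentence."
   ]
  },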
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Token Analysis:\n",
      "Completion tokens (API): 100\n",
      "Prompt tokens: 35\n",
      "Total tokens: 135\n"
     ]
    }
   ],
   "source": [
    "stats = response[UsageResponseKeys.USAGE]\n",
    "\n",
    "print(\"Token Analysis:\")\n",
    "print(f\"Completion tokens (API): {stats['completion_tokens']}\")\n",
    "print(f\"Prompt tokens: {stats['prompt_tokens']}\")\n",
    "print(f\"Total tokens: {stats['total_tokens']}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "mlrun-base-py311",
   "language": "python",
   "name": "conda-env-mlrun-base-py311-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
