{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(hf-model-image-classification)=\n",
    "# Integrating a Hugging Face image classification model with MLRun\n",
    "This notebook demonstrates how to set up and test HuggingFace model integration with MLRun, including the Hugging Face profile configuration, creating model artifacts, deploying a serving function with the dedicated_process execution mechanism, and testing the model inference on a sample image.\n",
    "It uses a custom invoke method for non-LLM operations, demonstrated here with image classification.\n",
    "\n",
    "After completing the steps described here, the model is ready for use with the configured execution mechanism.\n",
    "\n",
    "**In this section**\n",
    "- [Import the dependencies](#import-the-dependencies)\n",
    "- [Configure the project](#configure-the-project)\n",
    "- [Create the project and the Hugging Face profile](#create-the-project-and-the-hugging-face-profile)\n",
    "- [Log the image artifact to V3IO](#log-the-image-artifact-to-v3io)\n",
    "- [Create the model artifact](#create-the-model-artifact)\n",
    "- [Create and configure the serving function](#create-and-configure-the-serving-function)\n",
    "- [Deploy the function](#deploy-the-function)\n",
    "- [Test the model inference](#test-the-model-inference)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import the dependencies\n",
    "The MLRun imports include:\n",
    "- {py:meth}`~mlrun.datastore.ModelProvider`: abstract base for integrating with external model providers, primarily generative AI (GenAI) services.\n",
    "- {py:meth}`~mlrun.serving.ModelRunnerStep`: to run multiple models on each event.\n",
    "- {py:meth}`~mlrun.serving.LLModel`: to wrap a model for handling a LLM (Large Language Model) prompt-based inference.\n",
    "- {py:meth}`~mlrun.datastore.datastore_profile.OpenAIProfile`: to create a new model by parsing and validating input data from keyword arguments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2025-09-17 13:58:56,766 [warning] Failed resolving version info. Ignoring and using defaults\n",
      "> 2025-09-17 13:59:00,360 [warning] Server or client version is unstable. Assuming compatible: {\"client_version\":\"0.0.0+unstable\",\"server_version\":\"0.0.0+unstable\"}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import os\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "import mlrun\n",
    "import mlrun.artifacts\n",
    "import mlrun.serving\n",
    "from mlrun.serving import ModelRunnerStep\n",
    "from mlrun.datastore.datastore_profile import HuggingFaceProfile\n",
    "\n",
    "load_dotenv(\"secrets.env\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Configure the project\n",
    "\n",
    "The MLRun project is a container for all your work on a this gen AI application. Read more about {ref}`projects`. \n",
    "\n",
    "First you configure the project, then initialize it a few steps further on."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Project configuration\n",
    "project_name = \"hf-image-classification\"\n",
    "image = \"mlrun/mlrun\"\n",
    "profile_name = \"huggingface_image_classification\"\n",
    "image_classification_model = \"microsoft/resnet-50\"\n",
    "execution_mechanism = \"dedicated_process\"\n",
    "mlrun_model_name = \"sync_invoke_model\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Download an image of a cat to be used for testing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download an image\n",
    "# This notebook uses an Unsplash image of a cat\n",
    "# Source: https://unsplash.com"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create the project and the Hugging Face profile\n",
    "\n",
    "The `HuggingFaceProfile` is a datastore profile for credentials management. Read more about [Data store profiles](../../store/datastore.md#data-store-profiles)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> 2025-09-17 13:59:15,958 [info] Created and saved project: {\"context\":\"./\",\"from_template\":null,\"name\":\"hf-image-detection\",\"overwrite\":false,\"save\":true}\n",
      "> 2025-09-17 13:59:15,961 [info] Project created successfully: {\"project_name\":\"hf-image-detection\",\"stored_in_db\":true}\n",
      "Project: hf-image-detection\n",
      "Profile: huggingface_image_detection\n",
      "Model URL: ds://huggingface_image_detection/microsoft/resnet-50\n",
      "Execution Mechanism: dedicated_process\n"
     ]
    }
   ],
   "source": [
    "# Initialize the MLRun project\n",
    "project = mlrun.get_or_create_project(project_name)\n",
    "\n",
    "# Create the HuggingFace data store profile with environment variables\n",
    "profile = HuggingFaceProfile(\n",
    "    name=profile_name,\n",
    "    task=\"image-classification\",\n",
    "    token=os.environ.get(\"HF_TOKEN\"),\n",
    "    device=os.environ.get(\"HF_DEVICE\"),\n",
    "    device_map=os.environ.get(\"HF_DEVICE_MAP\"),\n",
    "    trust_remote_code=os.environ.get(\"HF_TRUST_REMOTE_CODE\"),\n",
    ")\n",
    "\n",
    "# Register the profile with the project\n",
    "project.register_datastore_profile(profile)\n",
    "\n",
    "# Set up model URL\n",
    "url_prefix = f\"ds://{profile_name}/\"\n",
    "model_url = url_prefix + image_classification_model\n",
    "\n",
    "print(f\"Project: {project_name}\")\n",
    "print(f\"Profile: {profile_name}\")\n",
    "print(f\"Model URL: {model_url}\")\n",
    "print(f\"Execution Mechanism: {execution_mechanism}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Log the image artifact to V3IO\n",
    "\n",
    "The cat image that you downloaded must be logged to V3IO so that the model can access it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "artifact = project.log_artifact(\"image_artifact\", local_path=\"cat.jpg\", upload=True)\n",
    "v3io_path = artifact.get_target_path()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create the model artifact"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model artifact created: {'metadata': {'tree': '3670dd4f-aa82-4230-ac14-ae4d5bb0f892', 'key': 'sync_invoke_model', 'uid': '9c848f2fb9a768c0d7693de8eb2daab4c78ec6f7', 'project': 'hf-image-detection', 'iter': 0}, 'status': {'state': 'created'}, 'spec': {'has_children': False, 'license': '', 'parameters': {'default_config': {'top_k': 2}}, 'framework': '', 'producer': {'kind': 'project', 'name': 'hf-image-detection', 'tag': '3670dd4f-aa82-4230-ac14-ae4d5bb0f892', 'owner': 'admin'}, 'model_url': 'ds://huggingface_image_detection/microsoft/resnet-50', 'model_file': '', 'db_key': 'sync_invoke_model'}, 'kind': 'model'}\n"
     ]
    }
   ],
   "source": [
    "# Log the model artifact\n",
    "model_artifact = project.log_model(\n",
    "    mlrun_model_name,\n",
    "    model_url=model_url,\n",
    "    default_config={\"top_k\": 2},\n",
    ")\n",
    "\n",
    "print(f\"Model artifact created: {model_artifact}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create and configure the serving function\n",
    "\n",
    "First configure the serving function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting image_detection_model.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile image_classification_model.py\n",
    "from typing import Any\n",
    "import mlrun.serving\n",
    "from mlrun.datastore.model_provider.model_provider import ModelProvider\n",
    "from mlrun.serving.states import LLModel  # noqa\n",
    "\n",
    "\n",
    "class ImageDetectionModel(mlrun.serving.states.Model):\n",
    "    \"\"\"Custom MLRun model wrapper for Hugging Face image classification that loads an image\n",
    "    from a given path and returns predictions via HuggingFaceProvider.\"\"\"\n",
    "\n",
    "    def predict(self, body: Any, **kwargs) -> Any:\n",
    "        if isinstance(self.model_provider, ModelProvider):\n",
    "            # Imported here to avoid requiring Pillow in environments where it's not needed\n",
    "            from PIL import Image\n",
    "            \n",
    "            dataitem = mlrun.get_dataitem(body[\"input\"])\n",
    "            with dataitem.open(\"rb\") as f:\n",
    "                image = Image.open(f)\n",
    "                image.load()  # ensure image is fully read into memory\n",
    "\n",
    "            result = self.model_provider.custom_invoke(\n",
    "                inputs=image,\n",
    "            )\n",
    "            body[\"result\"] = result\n",
    "            return body\n"
   ]
  },
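  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `predict` method above follows a simple event-body contract: it receives a dict with an `input` key (the path of the image) and returns the same dict with a `result` key attached. The following stdlib-only sketch illustrates that contract with a stubbed-out provider; `FakeProvider` and `classify` are hypothetical names used for illustration, not part of the MLRun API:\n",
    "\n",
    "```python\n",
    "from typing import Any\n",
    "\n",
    "\n",
    "class FakeProvider:\n",
    "    # Stand-in for the real model provider: returns canned predictions.\n",
    "    def custom_invoke(self, inputs: Any):\n",
    "        return [{\"label\": \"tabby, tabby cat\", \"score\": 0.83}]\n",
    "\n",
    "\n",
    "def classify(body: dict, provider: FakeProvider) -> dict:\n",
    "    # Same in/out contract as the predict method above:\n",
    "    # read body[\"input\"], attach body[\"result\"], return the body.\n",
    "    body[\"result\"] = provider.custom_invoke(inputs=body[\"input\"])\n",
    "    return body\n",
    "\n",
    "\n",
    "out = classify({\"input\": \"cat.jpg\"}, FakeProvider())\n",
    "print(out[\"result\"][0][\"label\"])  # tabby, tabby cat\n",
    "```"
   ]
  },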
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now create the serving function, using the configuration you just defined in `image_classification_model.py`. Read mode about {py:class}`~mlrun.projects.MlrunProject.set_function`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create serving function\n",
    "function = project.set_function(\n",
    "    name=\"huggingface-model-test\",\n",
    "    kind=\"serving\",\n",
    "    tag=\"latest\",\n",
    "    image=image,\n",
    "    func=\"image_classification_model.py\",\n",
    "    requirements=[\n",
    "        \"--extra-index-url\",\n",
    "        \"https://download.pytorch.org/whl/cpu\",\n",
    "        \"torch==2.7.1+cpu\",\n",
    "        \"transformers==4.53.2\",\n",
    "        \"pillow~=11.3\",\n",
    "    ],\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set up the serving graph\n",
    "The {ref}`flow-topology` topology is a full graph/DAG. In this example it uses the async engine, which is based on {py:mod}`storey.transformations` and an asynchronous event loop. \n",
    "This notebook uses the {py:class}`~mlrun.serving.ModelRunnerStep` to run the model as a graph."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Serving graph configured with dedicated_process execution mechanism\n"
     ]
    }
   ],
   "source": [
    "graph = function.set_topology(\"flow\", engine=\"async\")\n",
    "model_runner_step = ModelRunnerStep(name=\"my_model_runner\")\n",
    "model_runner_step.add_model(\n",
    "    model_class=\"ImageDetectionModel\",\n",
    "    endpoint_name=\"my_endpoint\",\n",
    "    execution_mechanism=execution_mechanism,\n",
    "    model_artifact=model_artifact,\n",
    "    result_path=\"output\",\n",
    ")\n",
    "graph.to(model_runner_step).respond()\n",
    "\n",
    "print(\"Serving graph configured with dedicated_process execution mechanism\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deploy the function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "# For larger models, Hugging Face models may require extended resources:\n",
    "#\n",
    "# function.spec.resources = {\n",
    "#     \"limits\": {\"cpu\": \"5\", \"memory\": \"30Gi\"},\n",
    "#     \"requests\": {\"cpu\": \"3\", \"memory\": \"1Mi\"},\n",
    "# }\n",
    "# function.spec.max_replicas = (\n",
    "#     1  # prevents allocating extended resources to multiple replicas\n",
    "# )"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Deploying function...\n",
      "> 2025-09-17 13:59:16,353 [info] Starting remote function deploy\n",
      "2025-09-17 13:59:16  (info) Deploying function\n",
      "2025-09-17 13:59:16  (info) Building\n",
      "2025-09-17 13:59:16  (info) Staging files and preparing base images\n",
      "2025-09-17 13:59:16  (warn) Using user provided base image, runtime interpreter version is provided by the base image\n",
      "2025-09-17 13:59:16  (info) Building processor image\n",
      "2025-09-17 14:06:52  (info) Build complete\n",
      "2025-09-17 14:08:10  (info) Function deploy complete\n",
      "> 2025-09-17 14:08:18,161 [info] Model endpoint creation task completed with state succeeded\n",
      "> 2025-09-17 14:08:18,162 [info] Successfully deployed function: {\"external_invocation_urls\":[\"hf-image-detection-huggingface-model-test.default-tenant.app.vmdev25.lab.iguazeng.com/\"],\"internal_invocation_urls\":[\"nuclio-hf-image-detection-huggingface-model-test.default-tenant.svc.cluster.local:8080\"]}\n",
      "Function deployed successfully!\n"
     ]
    }
   ],
   "source": [
    "# Deploy the function\n",
    "print(\"Deploying function...\")\n",
    "function.deploy()\n",
    "print(\"Function deployed successfully!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Test the model inference"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Response received:\n",
      "Number of results: 2\n",
      "[{'label': 'tabby, tabby cat', 'score': 0.8296310901641846}, {'label': 'tub, vat', 'score': 0.0689203143119812}]\n"
     ]
    }
   ],
   "source": [
    "# Test the model with the input data\n",
    "results = function.invoke(\n",
    "    f\"v2/models/{mlrun_model_name}/infer\",\n",
    "    {\"input\": v3io_path},\n",
    ")[\"result\"]\n",
    "\n",
    "print(\"Response received:\")\n",
    "print(f\"Number of results: {len(results)}\")\n",
    "print(results)"
   ]
  }
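,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The response is a list of label/score dicts, highest score first (two entries here because of the `top_k: 2` default config). A small stdlib-only helper for picking the winning label might look like this; `top_label` is a hypothetical helper, not an MLRun API:\n",
    "\n",
    "```python\n",
    "def top_label(results):\n",
    "    # results: list of {\"label\": ..., \"score\": ...} dicts from the classifier\n",
    "    best = max(results, key=lambda r: r[\"score\"])\n",
    "    return best[\"label\"], best[\"score\"]\n",
    "\n",
    "\n",
    "sample = [\n",
    "    {\"label\": \"tabby, tabby cat\", \"score\": 0.8296},\n",
    "    {\"label\": \"tub, vat\", \"score\": 0.0689},\n",
    "]\n",
    "label, score = top_label(sample)\n",
    "print(f\"{label}: {score:.2%}\")  # tabby, tabby cat: 82.96%\n",
    "```"
   ]
  }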
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "mlrun-base",
   "language": "python",
   "name": "conda-env-mlrun-base-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
