{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(deploying-graphs)=\n",
    "# Deploying graphs\n",
    "\n",
    "**In this section**\n",
    "- [Graph serving function](#graph-serving-function)\n",
    "- [Graph engines](#graph-engines)\n",
    "- [Deploy the function to a mock server](#deploy-the-function-to-a-mock-server)\n",
    "- [Deploy the function as a Nuclio function](#deploy-the-function-as-a-nuclio-function)\n",
    "- [Deploy the function as a Kubernetes job](#deploy-the-function-as-a-kubernetes-job)\n",
    "\n",
    "(graph-serving-function)=\n",
    "## Graph serving function\n",
    "\n",
     "To start using a serving graph, you first need a serving function. A serving function contains the serving\n",
     "class code to run the model and all the code necessary to run the tasks. MLRun comes with a wide library of tasks. If you\n",
     "use only those, you don't need to add any special code to the serving function; you only provide\n",
     "the code that runs the model. For more information about serving classes, see {ref}`custom-model-serving-class`.\n",
    "\n",
    "For example, the following code is a basic model serving class:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# mlrun: start-code"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from cloudpickle import load\n",
    "from typing import List\n",
    "import numpy as np\n",
    "\n",
    "import mlrun\n",
    "\n",
    "\n",
    "class ClassifierModel(mlrun.serving.V2ModelServer):\n",
    "    def load(self):\n",
    "        \"\"\"load and initialize the model and/or other elements\"\"\"\n",
     "        model_file, extra_data = self.get_model(\".pkl\")\n",
     "        with open(model_file, \"rb\") as f:\n",
     "            self.model = load(f)\n",
    "\n",
    "    def predict(self, body: dict) -> List:\n",
    "        \"\"\"Generate model predictions from sample.\"\"\"\n",
    "        feats = np.asarray(body[\"inputs\"])\n",
    "        result: np.ndarray = self.model.predict(feats)\n",
    "        return result.tolist()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# mlrun: end-code"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "To define the serving function, create the project, and then create the function with `project.set_function`, specifying `kind=\"serving\"`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "project = mlrun.get_or_create_project(\"serving\")\n",
    "fn = project.set_function(name=\"serving_example\", kind=\"serving\", image=\"mlrun/mlrun\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(graph-engine)=\n",
    "## Graph engines\n",
    "\n",
    "Once you have a serving function, you need to choose the graph topology:\n",
    "* [Router topology](#router)\n",
    "* [Flow topology](#flow)\n",
     "```{note} Once the topology is set, it cannot be changed on an existing function.\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Router\n",
    "\n",
     "The default topology is the `router` topology: a minimal configuration with a single router and one or more child routes/models, used for simple model serving or\n",
     "single-hop configurations. The basic routing logic routes to the child routes based on the `event.path`.\n",
    "\n",
    "With the `router` topology you can specify different machine learning models. Each model has a logical name. This name is used to route to the correct model when calling the serving function.\n",
    "\n",
     "More advanced or custom routing can also be used. For example, the ensemble router sends the event to all child routes in parallel, aggregates the results, and responds."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_iris\n",
    "\n",
    "# set the topology/router\n",
    "graph = fn.set_topology(\"router\")\n",
    "\n",
    "# Add the model\n",
    "fn.add_model(\n",
    "    \"model1\",\n",
    "    class_name=\"ClassifierModel\",\n",
    "    model_path=\"https://s3.wasabisys.com/iguazio/models/iris/model.pkl\",\n",
    ")\n",
    "\n",
    "# Add additional models\n",
    "# fn.add_model(\"model2\", class_name=\"ClassifierModel\", model_path=\"<path2>\")\n",
    "\n",
    "# create and use the graph simulator\n",
    "server = fn.to_mock_server()\n",
    "x = load_iris()[\"data\"].tolist()\n",
    "result = server.test(\"/v2/models/model1/infer\", {\"inputs\": x})\n",
    "server.wait_for_completion()\n",
    "\n",
    "print(result)"
   ]
  },
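  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of the more advanced routing mentioned above, the following example uses the `VotingEnsemble` router, which sends each event to all child routes in parallel and aggregates their predictions. The function name is illustrative, and a second model should be added for the vote to be meaningful:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fn_ensemble = project.set_function(\n",
    "    name=\"serving_example_ensemble\", kind=\"serving\", image=\"mlrun/mlrun\"\n",
    ")\n",
    "\n",
    "# use a VotingEnsemble router instead of the default router;\n",
    "# it forwards each event to all child routes and aggregates the results\n",
    "fn_ensemble.set_topology(\n",
    "    \"router\", mlrun.serving.VotingEnsemble(vote_type=\"classification\")\n",
    ")\n",
    "\n",
    "fn_ensemble.add_model(\n",
    "    \"model1\",\n",
    "    class_name=\"ClassifierModel\",\n",
    "    model_path=\"https://s3.wasabisys.com/iguazio/models/iris/model.pkl\",\n",
    ")\n",
    "\n",
    "# add more models (routes) so the ensemble has something to vote on\n",
    "# fn_ensemble.add_model(\"model2\", class_name=\"ClassifierModel\", model_path=\"<path2>\")"
   ]
  },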
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(flow-topology)=\n",
    "### Flow\n",
    "\n",
     "The flow topology is a full graph/DAG. It is implemented using two engines: `async` (the default),\n",
     "which is based on [Storey](https://github.com/mlrun/storey) and an asynchronous event loop, and `sync`, which supports a simple\n",
     "sequence of steps. Use the `flow` topology to specify tasks, which typically manipulate the data. The most common scenario is pre-processing the data prior to model execution.\n",
    "\n",
    "In this topology, you build and connect the graph (DAG) by adding steps using the `step.to()` method, or by using the \n",
    "`graph.add_step()` method.\n",
    "\n",
     "`step.to()` is typically used to chain steps together, while `graph.add_step()` can add steps anywhere on the\n",
     "graph, using its `before` and `after` parameters to specify the step's location."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fn2 = project.set_function(\n",
    "    name=\"serving_example_flow\", kind=\"serving\", image=\"mlrun/mlrun\"\n",
    ")\n",
    "\n",
    "graph2 = fn2.set_topology(\"flow\")\n",
    "\n",
    "graph2_enrich = graph2.to(\"storey.Extend\", name=\"enrich\", _fn='({\"tag\": \"something\"})')\n",
    "\n",
     "# add a model router with a child model (route)\n",
    "router = graph2.add_step(mlrun.serving.ModelRouter(), name=\"router\", after=\"enrich\")\n",
    "router.add_route(\n",
    "    \"m1\",\n",
    "    class_name=\"ClassifierModel\",\n",
    "    model_path=\"https://s3.wasabisys.com/iguazio/models/iris/model.pkl\",\n",
    ")\n",
    "router.respond()\n",
    "\n",
     "# an error handling step runs only when/if the step it is attached to fails.\n",
     "# Illustrative only: it assumes handler functions \"raising_step\" and\n",
     "# \"handle_error\" are defined in the function code.\n",
     "# graph2.to(name=\"pre-process\", handler=\"raising_step\").error_handler(\n",
     "#     name=\"catcher\", handler=\"handle_error\", full_event=True\n",
     "# )\n",
    "\n",
    "# Add additional models\n",
    "# router.add_route(\"m2\", class_name=\"ClassifierModel\", model_path=path2)\n",
    "\n",
    "# plot the graph (using Graphviz)\n",
    "graph2.plot(rankdir=\"LR\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deploy the function to a mock server\n",
    "\n",
     "Use MLRun's mock server to test and debug your model before deploying it. Create a mock server either with {py:meth}`~mlrun.runtimes.ServingRuntime.to_mock_server` or by passing `mock=True` to {py:meth}`~mlrun.projects.MlrunProject.deploy_function`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fn2_server = fn2.to_mock_server()\n",
    "\n",
    "result = fn2_server.test(\"/v2/models/m1/infer\", {\"inputs\": x})\n",
    "fn2_server.wait_for_completion()\n",
    "\n",
    "print(result)"
   ]
  },
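  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Alternatively, pass `mock=True` to `deploy_function`. A sketch, assuming that after a mock deployment the function's `invoke()` routes requests to the local mock server:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# mock=True simulates the deployment locally instead of deploying to the cluster\n",
    "project.deploy_function(fn2, mock=True)\n",
    "\n",
    "# invoke() now routes the request to the local mock server\n",
    "result = fn2.invoke(\"/v2/models/m1/infer\", body={\"inputs\": x})\n",
    "print(result)"
   ]
  },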
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deploy the function as a Nuclio function\n",
    "\n",
     "Deploy graphs to your cluster as a real-time Nuclio serverless function with `function.deploy()`. See {py:meth}`~mlrun.runtimes.ServingRuntime.deploy`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fn2.deploy()"
   ]
  },
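  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After deployment, you can call the function over HTTP, for example with `invoke` (this assumes `x` still holds the Iris samples loaded above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# send a request to the deployed Nuclio endpoint\n",
    "result = fn2.invoke(\"/v2/models/m1/infer\", body={\"inputs\": x})\n",
    "print(result)"
   ]
  },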
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deploy the function as a Kubernetes job\n",
    "\n",
     "You can deploy serving graphs as a one-time or scheduled Kubernetes job (`KubejobRuntime`). This enables use cases such as batch inference and various evaluation options, and you can run the graph on demand with a list of inputs. Use {py:meth}`~mlrun.runtimes.ServingRuntime.to_job`. See an example in {ref}`batch-infer-drift-tutor`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "job = fn2.to_job()\n",
    "run_obj = project.run_function(job)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
