{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mBcdKd8Z7WYH"
      },
      "source": [
        "<center>\n",
        "    <p style=\"text-align:center\">\n",
        "        <h1>OpenLLM</h1>\n",
        "        <img alt=\"BentoML logo\" src=\"https://raw.githubusercontent.com/bentoml/BentoML/main/docs/source/_static/img/bentoml-logo-black.png\" width=\"200\"/>\n",
        "        <br/>\n",
        "        <a href=\"https://github.com/bentoml/OpenLLM\">GitHub</a>\n",
        "        |\n",
        "        <a href=\"https://l.bentoml.com/join-openllm-discord\">Community</a>\n",
        "    </p>\n",
        "</center>\n",
        "<h1 align=\"center\">Serving Llama2 with OpenLLM</h1>\n",
        "\n",
        "[OpenLLM](https://github.com/bentoml/OpenLLM) is an open-source framework for serving and operating any Large Language Models (LLMs) in production."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fqZysSucYlJv"
      },
      "source": [
        "This project demonstrates the basic usage of OpenLLM, using\n",
        "Llama2 as an example. You'll learn how to run Llama2 with OpenLLM, serve the model over a REST API endpoint locally, and deploy it in production with BentoCloud for better scalability and cost efficiency."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "w22BwSTCGa44"
      },
      "source": [
        "## Deployment options\n",
        "\n",
        "You can try this demo in one of the following ways:\n",
        "\n",
        "1. Via Google Colab.\n",
        "\n",
        "   We recommend you run this demo on a GPU. To verify if you're using a GPU on Google Colab, check the runtime type in the top left corner.\n",
        "\n",
        "   To change the runtime type: In the toolbar menu, click **Runtime** > **Change runtime type** and select the GPU (T4).\n",
        "   \n",
        "   Paid users may have access to more advanced GPUs. For free users, the T4 GPU might occasionally be unavailable.\n",
        "\n",
        "2. (Optional) Run this project locally.\n",
        "\n",
        "    If you have a GPU, you can also run this notebook locally with:\n",
        "    \n",
        "    ```\n",
        "    git clone git@github.com:bentoml/OpenLLM.git && cd OpenLLM/examples/openllm-llama2-demo && jupyter notebook\n",
        "    ```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZQ777iu3J69B"
      },
      "source": [
        "## Set up the environment"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DZO7EF_s5kbu"
      },
      "source": [
        "### [Optional] Check GPU and memory resources"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "jnDCYgl0tm0X",
        "outputId": "ff4ab504-6abf-426d-c3db-de6c4b7d7d3b"
      },
      "outputs": [],
      "source": [
        "#@title Check the memory and GPU info you have\n",
        "import psutil\n",
        "import torch\n",
        "\n",
        "ram = psutil.virtual_memory()\n",
        "ram_total = ram.total / (1024 ** 3)\n",
        "print(\"MemTotal: %.2f GB\" % ram_total)\n",
        "\n",
        "print(\"=============GPU INFO=============\")\n",
        "if torch.cuda.is_available():\n",
        "    !/opt/bin/nvidia-smi || true\n",
        "else:\n",
        "    print(\"GPU NOT available\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Q0ZTpnVOKYbj"
      },
      "source": [
        "### Install required dependencies"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ea6PG28SsdvA"
      },
      "outputs": [],
      "source": [
        "!pip install -U -q \"openllm[llama,vllm]\" openai langchain"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RAVscZBnNozc"
      },
      "source": [
        "## Python API demo"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8aGgALP0L4Cs"
      },
      "source": [
        "### Initialize an OpenLLM Runner locally\n",
        "\n",
        "Learn more about Runners in the [BentoML documentation](https://docs.bentoml.com/en/latest/concepts/runner.html).\n",
        "\n",
        "👉 The Runner downloads the model from Hugging Face automatically.\n",
        "\n",
        "👉 We recommend you use vLLM as the backend for better performance.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "z2kiJ3Pe8teq",
        "outputId": "14b3f4c4-bb14-43a2-af82-e6e689a29f7f"
      },
      "outputs": [],
      "source": [
        "import openllm\n",
        "llm_runner = openllm.Runner(\"llama\", model_id=\"NousResearch/llama-2-7b-chat-hf\",\n",
        "                            backend=\"vllm\", init_local=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6slh7cFpwohd"
      },
      "source": [
        "You can manually trigger the model download if it is not downloaded automatically."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ukY2Cugr8tBD",
        "outputId": "bba81848-865b-4286-b052-542e556d3c4b"
      },
      "outputs": [],
      "source": [
        "llm_runner.download_model()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3LOHjoBOMj2L"
      },
      "source": [
        "### Test it with a prompt"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "7lUEDZkC8s-N",
        "outputId": "7443e5aa-4740-4086-bc61-da376e5ab8db"
      },
      "outputs": [],
      "source": [
        "async for output in llm_runner.vllm_generate.async_stream(\"what is the weather like in San Francisco?\", request_id=\"testId\"):\n",
        "    print(output[0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vu7Y8c4XOPiP"
      },
      "source": [
        "### Clean up\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ihFtzXWCOW8G",
        "outputId": "dd1544e3-db87-466e-bc7a-2c69ffa4aff8"
      },
      "outputs": [],
      "source": [
        "import gc\n",
        "import torch\n",
        "\n",
        "llm_runner.destroy()\n",
        "del llm_runner\n",
        "\n",
        "# Release GPU memory\n",
        "gc.collect()\n",
        "torch.cuda.empty_cache()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HTJIVFrU75zF"
      },
      "source": [
        "## LLM server demo"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gI1E8Lu4Z2zi"
      },
      "source": [
        "### Launch an OpenLLM server using the Llama2 model\n",
        "\n",
        "Command: `openllm start llama --model-id {MODEL_ID} --backend [pt|vllm|...]`\n",
        "\n",
        "> We recommend you use the `vllm` backend for better performance.\n",
        "\n",
        "OpenLLM supports a variety of LLMs and architectures. Learn more at https://github.com/bentoml/OpenLLM#-supported-models.\n",
        "\n",
        "To run it in the background via `nohup`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1KxYxYCZ8s5D"
      },
      "outputs": [],
      "source": [
        "!nohup openllm start llama --model-id NousResearch/llama-2-7b-chat-hf --port 8001 --backend vllm > my.log 2>&1 &"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lITqp6LqiRZM"
      },
      "source": [
        "Check the process you just started."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "j4HjNf9jjAva",
        "outputId": "30352888-f719-4096-f0c7-5496f88b5b7d"
      },
      "outputs": [],
      "source": [
        "!ps aux | grep 'openllm' | grep -v grep"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6Cgu02pwbuOf"
      },
      "source": [
        "### [IMPORTANT] Server status check\n",
        "\n",
        "You can check whether the server is ready (status code 200). It may take some time for the server to start up, so rerun this check until it succeeds."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "rzVfeo9Tbytk",
        "outputId": "a9704b27-8d33-4b2b-9485-a48fd35cc8fe"
      },
      "outputs": [],
      "source": [
        "! curl -i http://127.0.0.1:8001/readyz"
      ]
    },
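    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Instead of re-running the `curl` command by hand, you can poll the endpoint from Python until it responds. This is a minimal, stdlib-only sketch; the URL, timeout, and interval values are illustrative:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import time\n",
        "import urllib.error\n",
        "import urllib.request\n",
        "\n",
        "def wait_until_ready(url, timeout=300, interval=5):\n",
        "    # Poll the readiness endpoint until it returns HTTP 200 or we time out.\n",
        "    deadline = time.time() + timeout\n",
        "    while time.time() < deadline:\n",
        "        try:\n",
        "            with urllib.request.urlopen(url, timeout=10) as resp:\n",
        "                if resp.status == 200:\n",
        "                    return True\n",
        "        except (urllib.error.URLError, ConnectionError):\n",
        "            pass  # Server not up yet; retry after a short pause.\n",
        "        time.sleep(interval)\n",
        "    return False\n",
        "\n",
        "# wait_until_ready('http://127.0.0.1:8001/readyz')"
      ]
    },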
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jo2d1_g4-lAW"
      },
      "source": [
        "### Interact with the LLM server\n",
        "\n",
        "Use one of the following ways to access the server.\n",
        "\n",
        "1. Run the `openllm query` command to query the model:\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "PvWPuh6Q-6Vq",
        "outputId": "a9f7faec-bdf7-4633-d31a-531f190a5e91"
      },
      "outputs": [],
      "source": [
        "!openllm query --endpoint http://127.0.0.1:8001 --timeout 120 \"What is the weight of the earth?\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aBvXcOL7a__J"
      },
      "source": [
        "2. If you are in Google Colab, visit the web UI."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 35
        },
        "id": "VTvLhCDabFpe",
        "outputId": "65f49af5-2ffc-47ad-affa-2f19e96a4afb"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "if 'google.colab' in sys.modules:\n",
        "    # Use the Colab proxy URL\n",
        "    from google.colab.output import eval_js\n",
        "    print(\"You are in a Colab runtime. Try the web UI at %s\" % eval_js(\"google.colab.kernel.proxyPort(8001)\"))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2j25weRLUFlb"
      },
      "source": [
        "3. Use OpenLLM's built-in Python client."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "AQX6pOz8BEu3"
      },
      "outputs": [],
      "source": [
        "import openllm\n",
        "import nest_asyncio\n",
        "nest_asyncio.apply()\n",
        "\n",
        "client = openllm.client.HTTPClient(\"http://127.0.0.1:8001\")\n",
        "res = client.query('what is the weight of the earth?', return_response='raw')\n",
        "print(res['responses'][0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qgq0TOZqUm_Z"
      },
      "source": [
        "4. Send a request using `curl`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wQlwT1wc_M-r"
      },
      "outputs": [],
      "source": [
        "!curl -k -X 'POST' \\\n",
        "  'http://127.0.0.1:8001/v1/generate_stream' \\\n",
        "  -H 'accept: text/event-stream' \\\n",
        "  -H 'Content-Type: application/json' \\\n",
        "  -d '{\"prompt\":\"write a tagline for an ice cream shop\", \"llm_config\": {\"max_new_tokens\": 256}}'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vFAYN1_o_bRS"
      },
      "source": [
        "5. Use the OpenAI compatible endpoint."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0G5clTYV_M8J"
      },
      "outputs": [],
      "source": [
        "import openai\n",
        "openai.api_base = \"http://localhost:8001/v1\"\n",
        "openai.api_key = \"na\"\n",
        "\n",
        "response = openai.Completion.create(model=\"llama2\", prompt=\"Say this is a test\")\n",
        "print(response)\n",
        "\n",
        "chatCompletion = openai.ChatCompletion.create(\n",
        "    model=\"llama2\",\n",
        "    messages=[\n",
        "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
        "        {\"role\": \"user\", \"content\": \"Hello!\"}\n",
        "    ]\n",
        ")\n",
        "print(chatCompletion)"
      ]
    },
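    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `/v1/generate_stream` endpoint shown with `curl` above returns server-sent events (SSE). The sketch below collects the JSON payload of each `data:` line of such a stream; the payload fields and the `[DONE]` sentinel are assumptions, so inspect a real response to confirm them. You could feed it the lines yielded by `requests.post(url, json=..., stream=True).iter_lines(decode_unicode=True)`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "def parse_sse_lines(lines):\n",
        "    # Collect the JSON payload carried by each 'data:' line of an SSE stream,\n",
        "    # skipping comments, blank lines, and a trailing '[DONE]' sentinel.\n",
        "    events = []\n",
        "    for line in lines:\n",
        "        line = line.strip()\n",
        "        if line.startswith('data:'):\n",
        "            payload = line[len('data:'):].strip()\n",
        "            if payload and payload != '[DONE]':\n",
        "                events.append(json.loads(payload))\n",
        "    return events"
      ]
    },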
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "T3JOFs9sBNyH"
      },
      "source": [
        "### LangChain integration\n",
        "\n",
        "OpenLLM supports integration with LangChain. You can use `langchain.llms.OpenLLM` to interact with the remote OpenLLM server. You can connect to it by specifying its URL:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GTg055FH_M5w"
      },
      "outputs": [],
      "source": [
        "from langchain.llms import OpenLLM\n",
        "llm = OpenLLM(server_url=\"http://localhost:8001\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Wyiz3fLoBeJ2"
      },
      "outputs": [],
      "source": [
        "from langchain.prompts import PromptTemplate\n",
        "from langchain.chains import LLMChain\n",
        "\n",
        "template = \"What is a good name for a company that makes {product}?\"\n",
        "\n",
        "prompt = PromptTemplate(template=template, input_variables=[\"product\"])\n",
        "\n",
        "llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
        "\n",
        "generated = llm_chain.run(product=\"mechanical keyboard\")\n",
        "print(generated)"
      ]
    },
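    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Under the hood, `LLMChain` simply fills the prompt template and forwards the resulting string to the LLM. A dependency-free sketch of that flow, where the hypothetical `echo_llm` function stands in for the call to the remote server:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def fill_template(template, **variables):\n",
        "    # Substitute the input variables into the template, mirroring PromptTemplate.format.\n",
        "    return template.format(**variables)\n",
        "\n",
        "def echo_llm(prompt_text):\n",
        "    # Stand-in for the remote model call; it just returns the prompt it received.\n",
        "    return 'PROMPT RECEIVED: ' + prompt_text\n",
        "\n",
        "template = 'What is a good name for a company that makes {product}?'\n",
        "print(echo_llm(fill_template(template, product='mechanical keyboard')))"
      ]
    },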
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DNZFqf-gAdgF"
      },
      "source": [
        "### Stop the server in the background"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "zZX8ICQmAdTi"
      },
      "outputs": [],
      "source": [
        "!pkill -f 'bentoml|openllm'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4tUAIkMuBzPs"
      },
      "source": [
        "## Deploy Llama2 in production with BentoCloud\n",
        "\n",
        "After you test the server, you can deploy it in production using BentoCloud."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "otXwv1No1RXP"
      },
      "source": [
        "### What is BentoCloud?\n",
        "\n",
        "[BentoCloud](https://www.bentoml.com/cloud) is a fully-managed platform designed for building and operating AI applications.\n",
        "\n",
        "  * The easiest way to deploy and operate AI applications.\n",
        "  * Native support for the OpenLLM workflow and its optimizations.\n",
        "\n",
        "If you don't have a BentoCloud account, visit the [BentoCloud website](https://www.bentoml.com/cloud) to start a free trial.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YRtSpeyQ2A0S"
      },
      "source": [
        "You can follow the steps in these blog posts to deploy your LLM to BentoCloud:\n",
        "\n",
        "* [llama2-7b](https://www.bentoml.com/blog/deploying-llama-2-7b-on-bentocloud)\n",
        "* [llama2-13b](https://www.bentoml.com/blog/openllm-in-action-part-2-deploying-llama-2-13b-on-bentocloud)\n",
        "* [llama2-70b]()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "n-3NqbdB3PPM"
      },
      "source": [
        "### Build a Bento\n",
        "\n",
        "Use OpenLLM to build the model into a standardized distribution unit in BentoML, also known as a Bento. Command:\n",
        "\n",
        "```\n",
        "openllm build llama --model-id {model-id} --backend [pt|vllm]\n",
        "```"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "BKpvHxGNwhIK",
        "outputId": "9eff9af7-20ed-41e8-debe-86ef0916a908"
      },
      "outputs": [],
      "source": [
        "!openllm build llama --model-id NousResearch/llama-2-7b-chat-hf --backend vllm"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EDFu2jTG3n-Q"
      },
      "source": [
        "### View the Bento"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "3k9816Jd3rnY",
        "outputId": "69a5d573-5e3a-4fab-9167-238e7126704e"
      },
      "outputs": [],
      "source": [
        "!openllm list-bentos"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "i2qwy9l53uso"
      },
      "source": [
        "### Log in to BentoCloud and push the Bento\n",
        "\n",
        "To log in to BentoCloud and push the Bento to it, you need your BentoCloud endpoint URL and an API token. For more information, see [Manage access tokens](https://docs.bentoml.com/en/latest/bentocloud/how-tos/manage-access-token.html)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "MTMSjC_71Xk6"
      },
      "outputs": [],
      "source": [
        "return_code = !bentoml cloud list-context\n",
        "\n",
        "if \"colab-user\" not in ''.join(return_code):\n",
        "  # Log in to BentoCloud\n",
        "  endpoint = input(\"Input your endpoint (e.g., https://xxx.cloud.bentoml.com): \")\n",
        "  token = input(\"Input your API token (see https://docs.bentoml.com/en/latest/bentocloud/how-tos/manage-access-token.html#creating-an-api-token): \")\n",
        "  !bentoml cloud login --api-token {token} --endpoint {endpoint} --context colab-user\n",
        "\n",
        "# Replace the Bento tag with your own\n",
        "!bentoml push nousresearch--llama-2-7b-chat-hf-service:37892f30c23786c0d5367d80481fa0d9fba93cf8 --context colab-user"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7YwNbOF84Oer"
      },
      "source": [
        "### Create a Deployment via the BentoCloud Console\n",
        "\n",
        "Follow this [guide](https://www.bentoml.com/blog/deploying-llama-2-7b-on-bentocloud) to deploy this Bento on BentoCloud."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QtzO813w9pR4"
      },
      "source": [
        "### Create a Deployment via the BentoML client\n",
        "\n",
        "You can find detailed configuration in [Deployment creation and update information](https://docs.bentoml.com/en/latest/bentocloud/reference/deployment-creation-and-update-info.html).\n",
        "\n",
        "📢 Make sure you logged in to BentoCloud in the previous step."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WLUH1c6rcE47"
      },
      "outputs": [],
      "source": [
        "#@title Alternatively, use the BentoML client to create a Deployment.\n",
        "import bentoml\n",
        "import json\n",
        "\n",
        "return_code = !bentoml cloud list-context\n",
        "if \"colab-user\" not in ''.join(return_code):\n",
        "  print(\"Please log in first!\")\n",
        "else:\n",
        "  client = bentoml.cloud.BentoCloudClient()\n",
        "  # Runner config\n",
        "  runner = bentoml.cloud.Resource.for_runner(\n",
        "      resource_instance=\"starter-aws-g4dn-xlarge-gpu-t4-xlarge\",\n",
        "      #hpa_conf={\"min_replicas\": 1, \"max_replicas\": 1},\n",
        "  )\n",
        "  # API server config\n",
        "  api_server = bentoml.cloud.Resource.for_api_server(\n",
        "      resource_instance=\"starter-aws-t3-2xlarge-cpu-small\",\n",
        "  )\n",
        "  hpa_conf = bentoml.cloud.Resource.for_hpa_conf(min_replicas=1, max_replicas=1)\n",
        "\n",
        "  res = client.deployment.create(\n",
        "      deployment_name=\"test-llama2\",\n",
        "      bento=\"nousresearch--llama-2-7b-chat-hf-service:37892f30c23786c0d5367d80481fa0d9fba93cf8\",\n",
        "      context = \"colab-user\",\n",
        "      cluster_name = \"default\",\n",
        "      #mode=\"deployment\",\n",
        "      kube_namespace='yatai',\n",
        "      runners_config={\"llm-llama-runner\": runner},\n",
        "      api_server_config=api_server,\n",
        "      hpa_conf=hpa_conf,\n",
        "  )\n",
        "  print(json.dumps(res, indent=4))"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
