{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ur8xi4C7S06n"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Deploying Multiple LoRA Adapters on Vertex AI with vLLM\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/open-models/serving/get_started_with_vllm_lora_serving_on_vertex_ai.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fopen-models%2Fserving%2Fget_started_with_vllm_lora_serving_on_vertex_ai.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/open-models/serving/get_started_with_vllm_lora_serving_on_vertex_ai.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/serving/get_started_with_vllm_lora_serving_on_vertex_ai.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<p>\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/serving/get_started_with_vllm_lora_serving_on_vertex_ai.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/serving/get_started_with_vllm_lora_serving_on_vertex_ai.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/serving/get_started_with_vllm_lora_serving_on_vertex_ai.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/serving/get_started_with_vllm_lora_serving_on_vertex_ai.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/serving/get_started_with_vllm_lora_serving_on_vertex_ai.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n",
        "</p>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "84f0f73a0f76"
      },
      "source": [
        "| Author(s) |\n",
        "| --- |\n",
        "| [Ivan Nardini](https://github.com/inardini) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "This tutorial provides a comprehensive guide to deploying multiple LoRA (Low-Rank Adaptation) adapters on Google Cloud's Vertex AI using vLLM. By the end of this tutorial, you'll be able to serve a single base model with multiple specialized adapters, allowing you to handle different types of tasks (like SQL generation and code generation) using the same infrastructure.\n",
        "\n",
        "### What you'll cover\n",
        "\n",
        "You'll deploy a Vertex AI endpoint that serves:\n",
        "- **Base Model**: `google/gemma-2-2b-it` (Gemma 2 2B Instruct)\n",
        "- **SQL Adapter**: `google-cloud-partnership/gemma-2-2b-it-lora-sql` (specialized for SQL query generation)\n",
        "- **Magicoder Adapter**: `google-cloud-partnership/gemma-2-2b-it-lora-magicoder` (specialized for code generation)\n",
        "\n",
        "All three models will be available simultaneously, and clients can switch between them on a per-request basis with minimal overhead.\n"
      ]
    },
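    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick preview of what that looks like (a sketch: the payloads follow the OpenAI completions API, the adapter names are the ones registered at deployment time with `--lora-modules`, and the exact served name of the base model depends on the server configuration), switching models is just a matter of changing the `model` field of the request:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch of per-request model selection (not executed against a server here).\n",
        "base_request = {\n",
        "    \"model\": \"gemma-2-2b-it\",  # base model (served name depends on config)\n",
        "    \"prompt\": \"Explain LoRA in one sentence.\",\n",
        "    \"max_tokens\": 128,\n",
        "}\n",
        "\n",
        "sql_request = {\n",
        "    \"model\": \"sql-lora\",  # routes the request through the SQL LoRA adapter\n",
        "    \"prompt\": \"Write a SQL query that counts users by country.\",\n",
        "    \"max_tokens\": 128,\n",
        "}"
      ]
    },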
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61RBz8LLbxCR"
      },
      "source": [
        "## Get started\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "No17Cw5hgx12"
      },
      "source": [
        "### Install required packages"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tFy3H3aPgx12"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet google-cloud-aiplatform huggingface_hub[hf_transfer]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DF4l8DTdWgPY"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Also, make sure that the following IAM roles are assigned:\n",
        "\n",
        "   - `roles/aiplatform.user` (Vertex AI User)\n",
        "   - `roles/artifactregistry.admin` (Artifact Registry Admin)\n",
        "   - `roles/cloudbuild.builds.editor` (Cloud Build Editor)\n",
        "   - `roles/storage.admin` (Storage Admin)\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nqwi-5ufWp_B"
      },
      "outputs": [],
      "source": [
        "# Use the environment variable if the user doesn't provide Project ID.\n",
        "import os\n",
        "import vertexai\n",
        "\n",
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "# fmt: on\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
        "\n",
        "# Create GCS bucket\n",
        "BUCKET_NAME = f\"{PROJECT_ID}-vllm-peft-serving\"\n",
        "BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
        "\n",
        "! gcloud storage buckets create {BUCKET_URI} --location={LOCATION} --project={PROJECT_ID}\n",
        "\n",
        "# Set fast download from HF\n",
        "os.environ[\"HF_HUB_ENABLE_HF_TRANSFER\"] = \"1\"\n",
        "\n",
        "# Initialize Vertex AI SDK\n",
        "vertexai.init(project=PROJECT_ID, location=LOCATION)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wr4_ajtJKC8c"
      },
      "source": [
        "### Import libraries\n",
        "\n",
        "Import required libraries."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-5WG5dRqKJcw"
      },
      "outputs": [],
      "source": [
        "from pathlib import Path as p\n",
        "from huggingface_hub import interpreter_login\n",
        "from huggingface_hub import get_token, snapshot_download\n",
        "from google.cloud import aiplatform\n",
        "import json"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-Ef1BGnlJR27"
      },
      "source": [
        "## Create Artifact Registry Repository\n",
        "\n",
        "Artifact Registry is Google Cloud's service for storing and managing container images, packages, and other artifacts. Think of it as a private Docker Hub for your organization.\n",
        "\n",
        "This is where you will store your custom Docker container image.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "uiNgcL05JRQI"
      },
      "outputs": [],
      "source": [
        "DOCKER_REPOSITORY = \"vllm-lora-repo\"\n",
        "\n",
        "# Create the repository\n",
        "!gcloud artifacts repositories create {DOCKER_REPOSITORY} \\\n",
        "    --repository-format=docker \\\n",
        "    --project={PROJECT_ID} \\\n",
        "    --location={LOCATION} \\\n",
        "    --description=\"Repository for vLLM containers with LoRA support\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5n6KDpFxJqrg"
      },
      "source": [
        "Verify the repository."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "EmAsoJxgJt-4"
      },
      "outputs": [],
      "source": [
        "!gcloud artifacts repositories list --location={LOCATION} --project={PROJECT_ID}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EmkgjGjbU3s9"
      },
      "source": [
        "## Download Models and Adapters\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "f9af3e57f89a"
      },
      "source": [
        "### Authenticate your Hugging Face account\n",
        "\n",
        "Many models on Hugging Face, including Gemma, are \"gated\": you must request access and accept the terms of use before downloading them. Your Hugging Face token authenticates the downloads. The token is only needed during the download phase, not when serving the model.\n",
        "\n",
        "To generate a new user access token with read-only access:\n",
        "\n",
        "1. Create a [Hugging Face account](https://huggingface.co/) if you don't have one\n",
        "2. Go to **Settings → Access Tokens**\n",
        "3. Click **New Token**\n",
        "4. Set name (e.g., \"vertex-ai-deployment\") and role (Read)\n",
        "5. Click **Generate**\n",
        "6. Copy the token"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8d836e0210fe"
      },
      "outputs": [],
      "source": [
        "interpreter_login()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2pezdYCcWafD"
      },
      "source": [
        "### Create Build directory and download models\n",
        "\n",
        "Prepare the directory you will use to build the serving image."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GRtgKHakWdyd"
      },
      "outputs": [],
      "source": [
        "base_model_id = \"google/gemma-2-2b-it\"\n",
        "sql_adapter_id = \"google-cloud-partnership/gemma-2-2b-it-lora-sql\"\n",
        "magicoder_adapter_id = \"google-cloud-partnership/gemma-2-2b-it-lora-magicoder\"\n",
        "\n",
        "models_dir = \"./models\"\n",
        "adapters_dir = \"./adapters\"\n",
        "\n",
        "p(models_dir).mkdir(exist_ok=True, parents=True)\n",
        "p(adapters_dir).mkdir(exist_ok=True, parents=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cjuvuOKg5yOQ"
      },
      "source": [
        "Download base model and adapters."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qXkm4JfFXEY4"
      },
      "outputs": [],
      "source": [
        "# Download the base model\n",
        "base_model_path = snapshot_download(\n",
        "    repo_id=base_model_id,\n",
        "    token=get_token(),\n",
        "    local_dir=f\"{models_dir}/gemma-2-2b-it\",\n",
        ")\n",
        "\n",
        "# Download the SQL LoRA adapter\n",
        "sql_adapter_path = snapshot_download(\n",
        "    repo_id=sql_adapter_id,\n",
        "    token=get_token(),\n",
        "    local_dir=f\"{adapters_dir}/sql-lora\",\n",
        ")\n",
        "\n",
        "# Download the Magicoder LoRA adapter\n",
        "magicoder_adapter_path = snapshot_download(\n",
        "    repo_id=magicoder_adapter_id,\n",
        "    token=get_token(),\n",
        "    local_dir=f\"{adapters_dir}/magicoder-lora\",\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hiQDJsu3cqzz"
      },
      "source": [
        "#### Upload Models to Google Cloud Storage\n",
        "\n",
        "Upload models to GCS for fast downloads during container startup.\n",
        "\n",
        "> **Note**: Instead of baking models into the Docker image (which would create a ~13GB image), we store models in GCS and download them when the container starts. This keeps the Docker image lightweight, makes model updates easier (no rebuild needed), and leverages GCS's fast regional download speeds.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7Jn_qYQndbG0"
      },
      "outputs": [],
      "source": [
        "! gcloud config set storage/parallel_composite_upload_enabled True\n",
        "\n",
        "# Upload base model\n",
        "!gcloud storage cp -r {models_dir}/gemma-2-2b-it {BUCKET_URI}/models/\n",
        "\n",
        "# Upload adapters\n",
        "!gcloud storage cp -r {adapters_dir}/sql-lora {BUCKET_URI}/adapters/\n",
        "!gcloud storage cp -r {adapters_dir}/magicoder-lora {BUCKET_URI}/adapters/"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8L8-S4OHJzP_"
      },
      "source": [
        "## Build Custom vLLM Container\n",
        "\n",
        "Our custom Docker container will:\n",
        "\n",
        "1. Start from the official vLLM GPU image\n",
        "2. **Add a script to download the base model and LoRA adapters** from GCS at container startup\n",
        "3. Configure vLLM to use the local models (no downloads at runtime)\n",
        "4. Set up Vertex AI compatible health checks\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "y2muZr-xKcNL"
      },
      "source": [
        "### Create the build files"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9NQi9aFUKYlA"
      },
      "source": [
        "The build uses three key files:\n",
        "\n",
        "- **`Dockerfile`**: Defines the container image\n",
        "- **`entrypoint.sh`**: Downloads models from GCS at startup\n",
        "- **`cloudbuild.yaml`**: Orchestrates the Docker build process"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CLUhK5tRKkV0"
      },
      "source": [
        "#### Create Dockerfile\n",
        "\n",
        "This Dockerfile:\n",
        "\n",
        "- Starts from the official vLLM GPU image\n",
        "- Installs Google Cloud SDK (for downloading from GCS)\n",
        "- Adds entrypoint script\n",
        "\n",
        "The Dockerfile builds in layers. The base vLLM image already contains Python, the CUDA libraries, and vLLM itself; we add only the `gcloud` CLI (and its prerequisites) to enable GCS downloads. The `ENTRYPOINT` directive ensures our custom script runs before vLLM starts, so the models are downloaded first."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "efA_W6c3hmx-"
      },
      "outputs": [],
      "source": [
        "build_dir = \"build\"\n",
        "p(build_dir).mkdir(exist_ok=True, parents=True)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0ND1QfwKKnpO"
      },
      "outputs": [],
      "source": [
        "dockerfile = \"\"\"\n",
        "ARG BASE_IMAGE\n",
        "FROM ${BASE_IMAGE}\n",
        "\n",
        "ENV DEBIAN_FRONTEND=noninteractive\n",
        "\n",
        "# Install gcloud SDK for downloading models from GCS\n",
        "RUN apt-get update && \\\\\n",
        "    apt-get install -y apt-utils git apt-transport-https gnupg ca-certificates curl && \\\\\n",
        "    echo \"deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main\" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \\\\\n",
        "    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg && \\\\\n",
        "    apt-get update -y && apt-get install google-cloud-cli -y && \\\\\n",
        "    rm -rf /var/lib/apt/lists/*\n",
        "\n",
        "WORKDIR /workspace/vllm\n",
        "\n",
        "# Copy entrypoint script\n",
        "COPY ./entrypoint.sh /workspace/vllm/vertexai/entrypoint.sh\n",
        "RUN chmod +x /workspace/vllm/vertexai/entrypoint.sh\n",
        "\n",
        "ENTRYPOINT [\"/workspace/vllm/vertexai/entrypoint.sh\"]\n",
        "\"\"\"\n",
        "\n",
        "# Write Dockerfile\n",
        "with open(f\"{build_dir}/Dockerfile\", \"w\") as f:\n",
        "    f.write(dockerfile)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IKCLLHOAK_Kf"
      },
      "source": [
        "#### Create entrypoint.sh\n",
        "\n",
        "This script downloads models/adapters from GCS and starts vLLM.\n",
        "\n",
        "In particular, it intercepts vLLM arguments, detects GCS paths (gs://...), downloads those resources to local directories, rewrites the paths to point locally, and then launches vLLM with the updated arguments. This happens transparently: vLLM never knows it's loading from GCS. The `set -euo pipefail` makes the script fail fast on any error, preventing vLLM from starting with missing models."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9sA-RI07LFE3"
      },
      "outputs": [],
      "source": [
        "entrypoint = \"\"\"#!/bin/bash\n",
        "\n",
        "set -euo pipefail\n",
        "\n",
        "readonly LOCAL_MODEL_DIR=\"/workspace/models\"\n",
        "readonly LOCAL_ADAPTER_DIR=\"/workspace/adapters\"\n",
        "\n",
        "gcloud config set storage/parallel_composite_upload_enabled True\n",
        "\n",
        "download_from_gcs() {\n",
        "    gcs_uri=$1\n",
        "    local_dir=$2\n",
        "\n",
        "    echo \"Downloading from $gcs_uri to $local_dir...\"\n",
        "    parent_dir=$(dirname \"$local_dir\")\n",
        "    mkdir -p \"$parent_dir\"\n",
        "\n",
        "    # Download contents to parent, which creates the final directory\n",
        "    if gcloud storage cp -r \"$gcs_uri\" \"$parent_dir/\"; then\n",
        "        echo \"Downloaded successfully to ${local_dir}\"\n",
        "    else\n",
        "        echo \"Failed to download from: $gcs_uri\" >&2\n",
        "        exit 1\n",
        "    fi\n",
        "}\n",
        "\n",
        "updated_args=()\n",
        "for arg in \"$@\"; do\n",
        "    # Check if argument starts with --model= and points to gs://\n",
        "    if [[ $arg == --model=gs://* ]]; then\n",
        "        model_path=\"${arg#--model=}\"\n",
        "        base_model_name=$(basename \"$model_path\")\n",
        "        local_path=\"${LOCAL_MODEL_DIR}/${base_model_name}\"\n",
        "\n",
        "        download_from_gcs \"$model_path\" \"$local_path\"\n",
        "        updated_args+=(\"--model=${local_path}\")\n",
        "\n",
        "    # Check if argument is a LoRA module path with gs://\n",
        "    elif [[ $arg == *=gs://* && $arg != --model=* ]]; then\n",
        "        # Format: name=gs://path\n",
        "        adapter_name=\"${arg%%=*}\"\n",
        "        gcs_path=\"${arg#*=}\"\n",
        "        adapter_basename=$(basename \"$gcs_path\")\n",
        "        local_adapter_path=\"${LOCAL_ADAPTER_DIR}/${adapter_basename}\"\n",
        "\n",
        "        download_from_gcs \"$gcs_path\" \"$local_adapter_path\"\n",
        "        updated_args+=(\"${adapter_name}=${local_adapter_path}\")\n",
        "    else\n",
        "        updated_args+=(\"$arg\")\n",
        "    fi\n",
        "done\n",
        "\n",
        "echo \"Starting vLLM with arguments: ${updated_args[@]}\"\n",
        "exec \"${updated_args[@]}\"\n",
        "\"\"\"\n",
        "\n",
        "with open(f\"{build_dir}/entrypoint.sh\", \"w\") as f:\n",
        "    f.write(entrypoint)"
      ]
    },
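    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the rewriting concrete, here is a small Python mirror of the shell logic above (illustrative only; the container runs the bash version, and the bucket name below is hypothetical). A `--model=gs://...` argument and a `name=gs://...` LoRA argument are each mapped to a local container path:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative Python mirror of the argument rewriting in entrypoint.sh.\n",
        "import os\n",
        "\n",
        "LOCAL_MODEL_DIR = \"/workspace/models\"\n",
        "LOCAL_ADAPTER_DIR = \"/workspace/adapters\"\n",
        "\n",
        "\n",
        "def rewrite_arg(arg: str) -> str:\n",
        "    # --model=gs://... -> --model=/workspace/models/<basename>\n",
        "    if arg.startswith(\"--model=gs://\"):\n",
        "        gcs_path = arg[len(\"--model=\"):]\n",
        "        return f\"--model={LOCAL_MODEL_DIR}/{os.path.basename(gcs_path)}\"\n",
        "    # name=gs://... -> name=/workspace/adapters/<basename>\n",
        "    if \"=gs://\" in arg and not arg.startswith(\"--model=\"):\n",
        "        name, gcs_path = arg.split(\"=\", 1)\n",
        "        return f\"{name}={LOCAL_ADAPTER_DIR}/{os.path.basename(gcs_path)}\"\n",
        "    return arg\n",
        "\n",
        "\n",
        "print(rewrite_arg(\"--model=gs://my-bucket/models/gemma-2-2b-it\"))\n",
        "# --model=/workspace/models/gemma-2-2b-it\n",
        "print(rewrite_arg(\"sql-lora=gs://my-bucket/adapters/sql-lora\"))\n",
        "# sql-lora=/workspace/adapters/sql-lora"
      ]
    },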
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xNT1hGlILua9"
      },
      "source": [
        "#### Create cloudbuild.yaml\n",
        "\n",
        "This file tells Cloud Build how to build your container:\n",
        "- Uses the official vLLM GPU image as a base\n",
        "- Adds custom entrypoint for Vertex AI compatibility\n",
        "- Pushes the image to Artifact Registry"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "LSoRnzRALiyv"
      },
      "outputs": [],
      "source": [
        "cloudbuild = \"\"\"\n",
        "steps:\n",
        "  - name: 'gcr.io/cloud-builders/docker'\n",
        "    automapSubstitutions: true\n",
        "    script: |\n",
        "      #!/usr/bin/env bash\n",
        "      set -euo pipefail\n",
        "\n",
        "      base_image=${_BASE_IMAGE}\n",
        "      image_name=\"vllm-lora-gcs\"\n",
        "\n",
        "      echo \"Building container image...\"\n",
        "      docker build -t $LOCATION-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/$image_name --build-arg BASE_IMAGE=$base_image .\n",
        "\n",
        "      echo \"Pushing image to Artifact Registry...\"\n",
        "      docker push $LOCATION-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/$image_name\n",
        "\n",
        "substitutions:\n",
        "  _BASE_IMAGE: vllm/vllm-openai:v0.11.0\n",
        "  _REPOSITORY: {DOCKER_REPOSITORY}\n",
        "\n",
        "timeout: 1800s\n",
        "\"\"\".replace(\n",
        "    \"{DOCKER_REPOSITORY}\", DOCKER_REPOSITORY\n",
        ")\n",
        "\n",
        "with open(f\"{build_dir}/cloudbuild.yaml\", \"w\") as f:\n",
        "    f.write(cloudbuild)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LIg0ZfuYMMr_"
      },
      "source": [
        "### Build the Container\n",
        "\n",
        "Now let's build the container using Cloud Build. This process typically takes less than 10 minutes.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bol28Mc2MRFn"
      },
      "outputs": [],
      "source": [
        "# Build the container using Cloud Build\n",
        "!cd {build_dir} && \\\n",
        "gcloud builds submit \\\n",
        "    --config=cloudbuild.yaml \\\n",
        "    --project={PROJECT_ID} \\\n",
        "    --region={LOCATION} \\\n",
        "    --timeout=\"1h\" \\\n",
        "    --machine-type=e2-highcpu-8 \\\n",
        "    --substitutions=_REPOSITORY={DOCKER_REPOSITORY},_BASE_IMAGE=vllm/vllm-openai:v0.11.0"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nTxSX7ITMyjS"
      },
      "source": [
        "Verify that the image exists in Artifact Registry.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "vvi9J4sfMz2P"
      },
      "outputs": [],
      "source": [
        "! gcloud artifacts docker images list {LOCATION}-docker.pkg.dev/{PROJECT_ID}/{DOCKER_REPOSITORY} --include-tags"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4AKpi6eZjiKz"
      },
      "source": [
        "## Configure and Upload Model to Vertex AI\n",
        "\n",
        "When you upload a model to Vertex AI, you're creating a **Model Resource** that contains:\n",
        "- Container image location\n",
        "- Server startup arguments\n",
        "- Environment variables\n",
        "- Health check configuration\n",
        "- Resource requirements"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RDpy0w_UkTE1"
      },
      "source": [
        "#### Define model configuration and environment variables\n",
        "\n",
        "The table below describes the main vLLM server arguments used to deploy the model.\n",
        "\n",
        "| Argument | Purpose | Impact |\n",
        "|----------|---------|--------|\n",
        "| `--model=gs://...` | GCS path to base model | Downloaded at startup by entrypoint.sh |\n",
        "| `--enable-lora` | Enables LoRA adapter support | **Required** for serving adapters |\n",
        "| `--lora-modules name=gs://...` | Pre-load LoRA adapters from GCS | Downloaded and loaded at startup |\n",
        "| `--max-loras=4` | Max adapters in memory simultaneously | Higher = more adapters, more memory usage |\n",
        "| `--max-lora-rank=64` | Max rank of LoRA matrices | Must match or exceed your adapter ranks |\n",
        "| `--max-model-len=2048` | Max sequence length (tokens) | Lower = more memory for batch processing |\n",
        "| `--gpu-memory-utilization=0.9` | GPU memory to use | 0.9 leaves 10% buffer for safety |\n",
        "| `--enable-prefix-caching` | Cache common prefixes | Improves latency for similar requests |\n",
        "\n",
        "\n",
        "> **Note**: The L4 GPU has 24GB VRAM. With our settings: ~5GB for the base model, ~8GB for KV cache, ~50MB per LoRA adapter, ~2GB CUDA overhead = ~15GB used, leaving ~9GB buffer. If you increase `--max-model-len` or `--max-loras`, you may need to reduce `--gpu-memory-utilization` to avoid OOM errors."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sj8IaNpTjuGL"
      },
      "outputs": [],
      "source": [
        "# Model configuration\n",
        "MODEL_NAME = \"gemma-2-2b-multi-lora\"\n",
        "MACHINE_TYPE = \"g2-standard-8\"\n",
        "ACCELERATOR_TYPE = \"NVIDIA_L4\"\n",
        "ACCELERATOR_COUNT = 1\n",
        "DOCKER_URI = f\"{LOCATION}-docker.pkg.dev/{PROJECT_ID}/{DOCKER_REPOSITORY}/vllm-lora-gcs\"\n",
        "\n",
        "# vLLM server arguments - using GCS paths (downloaded by entrypoint.sh at startup)\n",
        "vllm_args = [\n",
        "    \"python3\",\n",
        "    \"-m\",\n",
        "    \"vllm.entrypoints.openai.api_server\",\n",
        "    \"--host=0.0.0.0\",\n",
        "    \"--port=8080\",\n",
        "    f\"--model={BUCKET_URI}/models/gemma-2-2b-it\",  # GCS path\n",
        "    \"--max-model-len=2048\",\n",
        "    \"--gpu-memory-utilization=0.9\",\n",
        "    \"--enable-lora\",  # CRITICAL: Enable LoRA support\n",
        "    \"--max-loras=4\",  # Allow up to 4 LoRA adapters in memory\n",
        "    \"--max-lora-rank=64\",\n",
        "    \"--enable-prefix-caching\",\n",
        "    f\"--tensor-parallel-size={ACCELERATOR_COUNT}\",\n",
        "    # Load LoRA adapters at startup from GCS\n",
        "    \"--lora-modules\",\n",
        "    f\"sql-lora={BUCKET_URI}/adapters/sql-lora\",\n",
        "    f\"magicoder-lora={BUCKET_URI}/adapters/magicoder-lora\",\n",
        "]\n",
        "\n",
        "# Environment variables for the container\n",
        "env_vars = {\n",
        "    \"LD_LIBRARY_PATH\": \"$LD_LIBRARY_PATH:/usr/local/nvidia/lib64\",  # NVIDIA libraries\n",
        "}"
      ]
    },
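    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a rough sanity check of the memory note above (approximate figures, assuming bf16 weights):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Rough L4 (24 GB) memory budget using the approximate figures above.\n",
        "gpu_vram_gb = 24.0\n",
        "base_model_gb = 5.0  # Gemma 2 2B weights in bf16\n",
        "kv_cache_gb = 8.0  # KV cache at --max-model-len=2048\n",
        "adapters_gb = 2 * 0.05  # ~50 MB per LoRA adapter\n",
        "cuda_overhead_gb = 2.0  # CUDA context and workspace\n",
        "\n",
        "used_gb = base_model_gb + kv_cache_gb + adapters_gb + cuda_overhead_gb\n",
        "print(f\"~{used_gb:.1f} GB used, ~{gpu_vram_gb - used_gb:.1f} GB headroom\")\n",
        "# ~15.1 GB used, ~8.9 GB headroom"
      ]
    },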
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "21vlLwHPk5Fl"
      },
      "source": [
        "#### Upload Model to Model Registry\n",
        "\n",
        "We are now ready to register the model in the Vertex AI Model Registry, a managed repository for versioning your models.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "207GTbAuk5YA"
      },
      "outputs": [],
      "source": [
        "vertexai_model = aiplatform.Model.upload(\n",
        "    display_name=MODEL_NAME,\n",
        "    serving_container_image_uri=DOCKER_URI,\n",
        "    serving_container_args=vllm_args,\n",
        "    serving_container_ports=[8080],\n",
        "    serving_container_predict_route=\"/v1/completions\",\n",
        "    serving_container_health_route=\"/health\",\n",
        "    serving_container_environment_variables=env_vars,\n",
        "    serving_container_shared_memory_size_mb=(16 * 1024),  # 16 GB shared memory\n",
        "    serving_container_deployment_timeout=1800,  # 30 minutes timeout\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "U7fGOjeslSd4"
      },
      "source": [
        "## Create Vertex AI Endpoint\n",
        "\n",
        "An **Endpoint** in Vertex AI is a stable URL for serving predictions. It can host one or more deployed models, handles load balancing and traffic splitting, and provides monitoring and logging.\n",
        "\n",
        "Let's create a new endpoint to deploy our models.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "zydF5Kr7lZjS"
      },
      "outputs": [],
      "source": [
        "vertexai_endpoint = aiplatform.Endpoint.create(\n",
        "    display_name=f\"{MODEL_NAME}-endpoint\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8CN7dXmBlld6"
      },
      "source": [
        "## Deploy Model to Endpoint\n",
        "\n",
        "Finally, let's deploy the model. Vertex AI provisions a VM with the specified GPU, pulls your Docker image from Artifact Registry, starts the container, and monitors health checks. The `entrypoint.sh` script downloads ~5.1 GB of model files from GCS (fast because the bucket is in the same region), then vLLM loads them into GPU memory and starts the OpenAI-compatible API server on port 8080. The deployment completes when the health check endpoint (`/health`) returns 200 OK.\n",
        "\n",
        "This typically takes 15-25 minutes.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1oR9w-uplzNR"
      },
      "outputs": [],
      "source": [
        "vertexai_model.deploy(\n",
        "    endpoint=vertexai_endpoint,\n",
        "    deployed_model_display_name=MODEL_NAME,\n",
        "    machine_type=MACHINE_TYPE,\n",
        "    accelerator_type=ACCELERATOR_TYPE,\n",
        "    accelerator_count=ACCELERATOR_COUNT,\n",
        "    min_replica_count=1,  # Minimum number of instances\n",
        "    max_replica_count=4,  # Maximum for autoscaling\n",
        "    autoscaling_target_accelerator_duty_cycle=60,  # Scale up at 60% GPU utilization\n",
        "    traffic_percentage=100,  # Route 100% of traffic to this model\n",
        "    deploy_request_timeout=1800,  # 30 minute timeout\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lVIhUgoQmP5Q"
      },
      "source": [
        "## Testing Your Deployment\n",
        "\n",
        "Now for the fun part: testing your multi-LoRA deployment."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a7bUjVZ4mFtO"
      },
      "source": [
        "### Test 1: Base Model (No Adapter)\n",
        "\n",
        "Let's first test the base Gemma model without any adapter.\n",
        "\n",
        "Without a LoRA adapter, you're using the pure Gemma-2-2b-it model. It will provide general conversational responses. This serves as a baseline: when you specify a LoRA adapter in subsequent tests, you'll see how the model's behavior changes for specialized tasks."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "QpK20fOpmWkm"
      },
      "outputs": [],
      "source": [
        "# Test base model without adapter\n",
        "prompt = \"What is machine learning?\"\n",
        "\n",
        "request_body = json.dumps({\n",
        "    \"prompt\": prompt,\n",
        "    \"max_tokens\": 100,\n",
        "    \"temperature\": 0.7,\n",
        "})\n",
        "\n",
        "response = vertexai_endpoint.raw_predict(\n",
        "    body=request_body,\n",
        "    headers={\"Content-Type\": \"application/json\"}\n",
        ")\n",
        "\n",
        "if response.status_code == 200:\n",
        "    result = json.loads(response.text)\n",
        "    generated_text = result[\"choices\"][0][\"text\"]\n",
        "    print(f\"Response:\\n{generated_text}\\n\")\n",
        "else:\n",
        "    print(f\"Error: {response.status_code}\")\n",
        "    print(response.text)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ymb9YgWRngj7"
      },
      "source": [
        "### Test 2: SQL Adapter\n",
        "\n",
        "Now let's test the SQL adapter for database query generation.\n",
        "\n",
        "Notice we're adding `\"model\": \"sql-lora\"` to the request body. This tells vLLM to apply the SQL LoRA adapter on top of the base model. The adapter was trained specifically for text-to-SQL tasks, so it should generate syntactically correct SQL that accurately answers the question. We also set `temperature=0.0` for deterministic output: SQL queries shouldn't be creative."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "kn6UZ9tQniNt"
      },
      "outputs": [],
      "source": [
        "# Test SQL LoRA adapter\n",
        "prompt = \"\"\"Write a SQL query to answer the question based on the table schema.\n",
        "\n",
        "context: CREATE TABLE employees (\n",
        "    id INT PRIMARY KEY,\n",
        "    name VARCHAR(100),\n",
        "    department VARCHAR(50),\n",
        "    salary DECIMAL(10,2),\n",
        "    hire_date DATE\n",
        ")\n",
        "\n",
        "question: What is the average salary of employees in the Engineering department?\n",
        "\n",
        "SQL query:\"\"\"\n",
        "\n",
        "request_body = json.dumps({\n",
        "    \"model\": \"sql-lora\",  # Specify the adapter\n",
        "    \"prompt\": prompt,\n",
        "    \"max_tokens\": 150,\n",
        "    \"temperature\": 0.0,  # Use 0 for deterministic SQL generation\n",
        "    # \"stop\": [\";\", \"\\n\\n\"]  # Stop at query end\n",
        "})\n",
        "\n",
        "response = vertexai_endpoint.raw_predict(\n",
        "    body=request_body,\n",
        "    headers={\"Content-Type\": \"application/json\"}\n",
        ")\n",
        "\n",
        "if response.status_code == 200:\n",
        "    result = json.loads(response.text)\n",
        "    generated_sql = result[\"choices\"][0][\"text\"]\n",
        "    print(f\"Generated SQL:\\n{generated_sql}\\n\")\n",
        "else:\n",
        "    print(f\"Error: {response.status_code}\")\n",
        "    print(response.text)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OwiFd1KcntJ5"
      },
      "source": [
        "### Test 3: Magicoder Adapter\n",
        "\n",
        "Test the code generation adapter."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "lTFaz4O4nuTD"
      },
      "outputs": [],
      "source": [
        "# Test Magicoder LoRA adapter for code generation\n",
        "prompt = \"\"\"Write a Python function to count to 10\"\"\"\n",
        "\n",
        "request_body = json.dumps({\n",
        "    \"model\": \"magicoder-lora\",  # Specify the adapter\n",
        "    \"prompt\": prompt,\n",
        "    \"max_tokens\": 200,\n",
        "    \"temperature\": 0.2,\n",
        "    # \"stop\": [\"\\n\\n\", \"# Example\", \"# Test\"]\n",
        "})\n",
        "\n",
        "response = vertexai_endpoint.raw_predict(\n",
        "    body=request_body,\n",
        "    headers={\"Content-Type\": \"application/json\"}\n",
        ")\n",
        "\n",
        "if response.status_code == 200:\n",
        "    result = json.loads(response.text)\n",
        "    generated_code = result[\"choices\"][0][\"text\"]\n",
        "    print(f\"Generated Code:\\n{generated_code}\\n\")\n",
        "else:\n",
        "    print(f\"Error: {response.status_code}\")\n",
        "    print(response.text)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0RhJi_SLglqq"
      },
      "source": [
        "## Advanced Usage Patterns"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "n9Oy75AQg3gV"
      },
      "source": [
        "### Pattern 1: Batch Processing with Different Adapters\n",
        "\n",
        "Process multiple requests with different adapters in parallel using the Vertex AI SDK."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "u3fettbig22j"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "from concurrent.futures import ThreadPoolExecutor, as_completed\n",
        "\n",
        "def batch_predict_multi_adapter(endpoint: aiplatform.Endpoint, requests: list):\n",
        "    \"\"\"\n",
        "    Send multiple requests to different adapters in parallel.\n",
        "\n",
        "    Args:\n",
        "        endpoint: The Vertex AI endpoint object\n",
        "        requests: List of (model_name, prompt) tuples\n",
        "\n",
        "    Returns:\n",
        "        List of (model_name, response_text) tuples\n",
        "    \"\"\"\n",
        "    def single_predict(model_name, prompt):\n",
        "        request_body = {\n",
        "            \"prompt\": prompt,\n",
        "            \"max_tokens\": 300,\n",
        "            \"temperature\": 0.7,\n",
        "        }\n",
        "\n",
        "        if model_name:\n",
        "            request_body[\"model\"] = model_name\n",
        "\n",
        "        response = endpoint.raw_predict(\n",
        "            body=json.dumps(request_body),\n",
        "            headers={\"Content-Type\": \"application/json\"}\n",
        "        )\n",
        "\n",
        "        if response.status_code == 200:\n",
        "            result = json.loads(response.text)\n",
        "            return (model_name or \"base\", result[\"choices\"][0][\"text\"])\n",
        "        else:\n",
        "            return (model_name or \"base\", f\"Error: {response.status_code}\")\n",
        "\n",
        "    # Execute requests in parallel\n",
        "    results = []\n",
        "    with ThreadPoolExecutor(max_workers=4) as executor:\n",
        "        futures = [executor.submit(single_predict, model_name, prompt)\n",
        "                   for model_name, prompt in requests]\n",
        "\n",
        "        for future in as_completed(futures):\n",
        "            results.append(future.result())\n",
        "\n",
        "    return results\n",
        "\n",
        "# Example usage\n",
        "requests = [\n",
        "    (None, \"What is Python?\"),\n",
        "    (\"sql-lora\", \"Generate SQL: SELECT all users\"),\n",
        "    (\"magicoder-lora\", \"Write a function to reverse a string\"),\n",
        "    (\"sql-lora\", \"Generate SQL: JOIN users and orders\"),\n",
        "]\n",
        "\n",
        "results = batch_predict_multi_adapter(vertexai_endpoint, requests)\n",
        "\n",
        "for model_name, response in results:\n",
        "    print(f\"\\n{model_name}:\")\n",
        "    print(f\"  {response[:300]}...\")  # Truncate long outputs for display\n",
        "    print(\"=\"*50)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-p5AVRTZj8s4"
      },
      "source": [
        "### Pattern 2: Longer Generation with Streaming\n",
        "\n",
        "Vertex AI provides a `stream_raw_predict` method, but streaming responses require additional server-side configuration in the vLLM container. For production streaming, consider calling the vLLM server's native OpenAI-compatible streaming endpoint directly, with appropriate authentication."
      ]
    },
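    {
      "cell_type": "markdown",
      "metadata": {
        "id": "streaming-sketch-cell"
      },
      "source": [
        "If you can reach the vLLM server directly (for example, port-forwarded from the serving container, or self-hosted outside Vertex AI), its OpenAI-compatible `/v1/completions` route accepts `\"stream\": true` and returns Server-Sent Events. The sketch below is illustrative only: `VLLM_BASE_URL` is a placeholder, and this call will not work through `raw_predict`.\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "import requests  # commonly available in notebook environments\n",
        "\n",
        "VLLM_BASE_URL = \"http://localhost:8080\"  # placeholder: direct access to the vLLM server\n",
        "\n",
        "def parse_sse_line(line: str):\n",
        "    \"\"\"Extract completion text from one SSE 'data:' line, or return None.\"\"\"\n",
        "    if not line.startswith(\"data: \"):\n",
        "        return None\n",
        "    payload = line[len(\"data: \"):]\n",
        "    if payload.strip() == \"[DONE]\":\n",
        "        return None\n",
        "    return json.loads(payload)[\"choices\"][0][\"text\"]\n",
        "\n",
        "try:\n",
        "    with requests.post(\n",
        "        f\"{VLLM_BASE_URL}/v1/completions\",\n",
        "        json={\"model\": \"sql-lora\", \"prompt\": \"SELECT\", \"max_tokens\": 50, \"stream\": True},\n",
        "        stream=True,\n",
        "        timeout=60,\n",
        "    ) as resp:\n",
        "        # Each streamed chunk arrives as a 'data: {json}' line; the stream ends with 'data: [DONE]'\n",
        "        for raw in resp.iter_lines(decode_unicode=True):\n",
        "            chunk = parse_sse_line(raw) if raw else None\n",
        "            if chunk:\n",
        "                print(chunk, end=\"\", flush=True)\n",
        "except requests.exceptions.RequestException as exc:\n",
        "    print(f\"Server not reachable from this notebook: {exc}\")\n",
        "```"
      ]
    },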
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HAALLPfyq9m5"
      },
      "source": [
        "### Pattern 3: Dynamic Adapter Loading\n",
        "\n",
        "Dynamic adapter loading at runtime is an advanced feature that requires:\n",
        "1. Setting `VLLM_ALLOW_RUNTIME_LORA_UPDATING=True` in environment variables\n",
        "2. Redeploying the model with this configuration\n",
        "3. Direct access to vLLM's API endpoints (not available through Vertex AI raw_predict)\n",
        "\n",
        "For this tutorial's setup, **adapters are pre-loaded at startup** via the `--lora-modules` flag. To add new adapters, you would:\n",
        "\n",
        "```python\n",
        "# Steps to add a new adapter (requires redeployment):\n",
        "\n",
        "# 1. Upload new adapter to GCS\n",
        "# !gcloud storage cp -r ./adapters/new-adapter gs://{BUCKET_NAME}/adapters/new-adapter\n",
        "\n",
        "# 2. Update vLLM args to include the new adapter\n",
        "new_vllm_args = [\n",
        "    \"python3\", \"-m\", \"vllm.entrypoints.openai.api_server\",\n",
        "    \"--host=0.0.0.0\", \"--port=8080\",\n",
        "    f\"--model={BUCKET_URI}/models/gemma-2-2b-it\",\n",
        "    \"--max-model-len=2048\",\n",
        "    \"--gpu-memory-utilization=0.9\",\n",
        "    \"--enable-lora\",\n",
        "    \"--max-loras=4\",\n",
        "    \"--max-lora-rank=64\",\n",
        "    \"--enable-prefix-caching\",\n",
        "    \"--tensor-parallel-size=1\",\n",
        "    \"--lora-modules\",\n",
        "    f\"sql-lora={BUCKET_URI}/adapters/sql-lora\",\n",
        "    f\"magicoder-lora={BUCKET_URI}/adapters/magicoder-lora\",\n",
        "    f\"new-adapter={BUCKET_URI}/adapters/new-adapter\",  # New adapter\n",
        "]\n",
        "\n",
        "# 3. Upload new model version with updated args\n",
        "# 4. Redeploy to endpoint\n",
        "```\n"
      ]
    },
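    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dynamic-lora-sketch-cell"
      },
      "source": [
        "For reference, when the server is started with `VLLM_ALLOW_RUNTIME_LORA_UPDATING=True` and you can reach it directly (which, as noted above, `raw_predict` does not allow), vLLM exposes a `/v1/load_lora_adapter` route for loading adapters without a restart. The URL, adapter name, and path below are placeholders:\n",
        "\n",
        "```python\n",
        "import requests  # assumes direct network access to the vLLM server\n",
        "\n",
        "VLLM_BASE_URL = \"http://localhost:8080\"  # placeholder\n",
        "\n",
        "def load_adapter_payload(name: str, path: str) -> dict:\n",
        "    \"\"\"Request body for vLLM's /v1/load_lora_adapter route.\"\"\"\n",
        "    return {\"lora_name\": name, \"lora_path\": path}\n",
        "\n",
        "try:\n",
        "    # Requires the server to run with VLLM_ALLOW_RUNTIME_LORA_UPDATING=True\n",
        "    resp = requests.post(\n",
        "        f\"{VLLM_BASE_URL}/v1/load_lora_adapter\",\n",
        "        json=load_adapter_payload(\"new-adapter\", \"/adapters/new-adapter\"),\n",
        "        timeout=30,\n",
        "    )\n",
        "    print(resp.status_code, resp.text)\n",
        "except requests.exceptions.RequestException as exc:\n",
        "    print(f\"Server not reachable from this notebook: {exc}\")\n",
        "```"
      ]
    },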
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2a4e033321ad"
      },
      "source": [
        "## Cleaning up\n",
        "\n",
        "To avoid incurring further charges, delete the resources created in this tutorial.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "V9B5RP04sMyX"
      },
      "outputs": [],
      "source": [
        "delete_endpoint = True\n",
        "delete_model = True\n",
        "delete_docker_repo = True\n",
        "delete_bucket = True\n",
        "\n",
        "\n",
        "if delete_endpoint and \"vertexai_endpoint\" in globals():\n",
        "    vertexai_endpoint.undeploy_all()\n",
        "    vertexai_endpoint.delete()\n",
        "    print(\"Endpoint deleted.\")\n",
        "\n",
        "\n",
        "if delete_model and \"vertexai_model\" in globals():\n",
        "    vertexai_model.delete()\n",
        "    print(\"Model deleted.\")\n",
        "\n",
        "if delete_docker_repo:\n",
        "    !gcloud artifacts repositories delete {DOCKER_REPOSITORY} \\\n",
        "        --location={LOCATION} \\\n",
        "        --quiet\n",
        "\n",
        "if delete_bucket:\n",
        "    !gcloud storage rm --recursive {BUCKET_URI} && gcloud storage buckets delete {BUCKET_URI} --quiet"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "get_started_with_vllm_lora_serving_on_vertex_ai.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
