{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ur8xi4C7S06n"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Use Gemini and OSS Text-Embedding Models Against Your BigQuery Data\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/embeddings/bigquery_ml_gemini_and_oss_text_embedding.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fembeddings%2Fbigquery_ml_gemini_and_oss_text_embedding.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/embeddings/bigquery_ml_gemini_and_oss_text_embedding.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/bigquery/import?url=https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/bigquery_ml_gemini_and_oss_text_embedding.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/bigquery/v1/32px.svg\" alt=\"BigQuery Studio logo\"><br> Open in BigQuery Studio\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/bigquery_ml_gemini_and_oss_text_embedding.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/bigquery_ml_gemini_and_oss_text_embedding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/bigquery_ml_gemini_and_oss_text_embedding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/bigquery_ml_gemini_and_oss_text_embedding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/bigquery_ml_gemini_and_oss_text_embedding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/bigquery_ml_gemini_and_oss_text_embedding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "84f0f73a0f76"
      },
      "source": [
        "| Author(s) |\n",
        "| --- |\n",
        "| [Jasper Xu](https://github.com/ZehaoXU), [Haiyang Qi](https://github.com/pursuitdan) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "This notebook showcases a simple end-to-end process for generating text embeddings using BigQuery in conjunction with both Gemini and OSS text embedding models. We use Google `gemini-embedding-001` and the open-source `multilingual-e5-small` model as examples, and the process involves:\n",
        "\n",
        "- Deploying the `multilingual-e5-small` model from HuggingFace to Vertex AI.\n",
        "- Creating remote models in BigQuery against the Gemini embedding model and the deployed OSS model endpoint.\n",
        "- Employing the ML.GENERATE_EMBEDDING function to generate embeddings from text data using both models.\n",
        "- Cleaning up deployed resources to manage costs.\n",
        "\n",
        "## Costs\n",
        "This tutorial uses billable components of Google Cloud:\n",
        "\n",
        "- Vertex AI\n",
        "- BigQuery\n",
        "\n",
        "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [BigQuery pricing](https://cloud.google.com/bigquery/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61RBz8LLbxCR"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "No17Cw5hgx12"
      },
      "source": [
        "### Install Google Vertex AI SDK and other required packages\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tFy3H3aPgx12"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade google-cloud-aiplatform"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DF4l8DTdWgPY"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "To get started using Vertex AI and BigQuery, you must have an existing Google Cloud project and enable the [Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) & [BigQuery API](https://console.cloud.google.com/flows/enableapi?apiid=bigquery.googleapis.com)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nqwi-5ufWp_B"
      },
      "outputs": [],
      "source": [
        "# Use the environment variable if the user doesn't provide Project ID.\n",
        "import os\n",
        "\n",
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = \"us-central1\"  # @param {type: \"string\", placeholder: \"[your-preferred-location]\", isTemplate: true}\n",
        "# fmt: on\n",
        "\n",
        "import vertexai\n",
        "from google.cloud import aiplatform\n",
        "\n",
        "aiplatform.init(project=PROJECT_ID, location=LOCATION)\n",
        "\n",
        "vertexai.init(\n",
        "    project=PROJECT_ID,\n",
        "    location=LOCATION,\n",
        ")\n",
        "\n",
        "from google.cloud import bigquery\n",
        "\n",
        "bq_client = bigquery.Client()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sPffJxgjNDST"
      },
      "source": [
        "### Create a New BigQuery Dataset\n",
        "\n",
        "This dataset will house the tables and models created throughout this notebook."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "i1r_oSxWNRzn"
      },
      "outputs": [],
      "source": [
        "!bq mk --location={LOCATION} --dataset --project_id={PROJECT_ID} demo_dataset"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EdvJRUWRNGHE"
      },
      "source": [
        "## Use Gemini Embedding Model in BigQuery\n",
        "\n",
        "First, let's explore how to generate embeddings using the state-of-the-art [Gemini embedding model](https://developers.googleblog.com/en/gemini-embedding-available-gemini-api/) directly in BigQuery. This process involves two simple steps: creating a remote model and then using it for inference."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e43229f3ad4f"
      },
      "source": [
        "### Create a Remote Model in BigQuery\n",
        "\n",
        "Before you can generate embeddings, you need to create a remote model in BigQuery that points to the `gemini-embedding-001` endpoint, using the statement below:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cf93d5f0ce00"
      },
      "outputs": [],
      "source": [
        "%%bigquery --project $PROJECT_ID\n",
        "\n",
        "CREATE OR REPLACE MODEL demo_dataset.gemini_embedding_model\n",
        "REMOTE WITH CONNECTION DEFAULT\n",
        "OPTIONS(endpoint=\"gemini-embedding-001\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DxbUnNK-OONb"
      },
      "source": [
        "### Generate Embeddings\n",
        "\n",
        "Once the model is created, you can call the [ML.GENERATE_EMBEDDING](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-generate-embedding#syntax) function to generate embeddings. The following code will generate embeddings for 10,000 records from the public `bigquery-public-data.hacker_news.full` dataset."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-fFY0BC3OobB"
      },
      "outputs": [],
      "source": [
        "%%bigquery --project $PROJECT_ID\n",
        "\n",
        "SELECT\n",
        "  *\n",
        "FROM\n",
        "  ML.GENERATE_EMBEDDING(\n",
        "    MODEL demo_dataset.gemini_embedding_model,\n",
        "    (\n",
        "      SELECT\n",
        "        text AS content\n",
        "      FROM\n",
        "        bigquery-public-data.hacker_news.full\n",
        "      WHERE\n",
        "        text IS NOT NULL\n",
        "      LIMIT 10000\n",
        "    )\n",
        "  );\n"
      ]
    },
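    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cosine-similarity-note"
      },
      "source": [
        "The output includes an `ml_generate_embedding_result` column holding the embedding vector for each row. Once you have fetched vectors into Python, you can compare them with cosine similarity. Below is a minimal local sketch; the vectors are made-up placeholders, not real model output:\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "\n",
        "def cosine_similarity(a: list[float], b: list[float]) -> float:\n",
        "    \"\"\"Cosine similarity between two equal-length embedding vectors.\"\"\"\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    norm_a = math.sqrt(sum(x * x for x in a))\n",
        "    norm_b = math.sqrt(sum(x * x for x in b))\n",
        "    return dot / (norm_a * norm_b)\n",
        "\n",
        "\n",
        "# Placeholder vectors standing in for two rows of ml_generate_embedding_result.\n",
        "v1 = [0.1, 0.3, -0.2]\n",
        "v2 = [0.2, 0.1, -0.4]\n",
        "print(cosine_similarity(v1, v2))\n",
        "```"
      ]
    },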
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jENGtOIrQkxf"
      },
      "source": [
        "## Use an OSS Text Embedding Model in BigQuery\n",
        "\n",
        "Now, let's walk through using an open-source model. This process gives you maximum flexibility and control over quality and scalability. Unlike using the managed Gemini model, this workflow involves hosting the model yourself on a Vertex AI endpoint."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BZVe7iNsStX2"
      },
      "source": [
        "### Deploy an OSS Model to a Vertex AI Endpoint\n",
        "\n",
        "First, you need to choose an open-source model from a repository like [Hugging Face](https://huggingface.co/models?other=text-embeddings-inference&sort=trending) and deploy it to a Vertex AI endpoint. For this example, we use [`intfloat/multilingual-e5-small`](https://huggingface.co/intfloat/multilingual-e5-small), which delivers respectable performance (ranking 38th on the [Massive Text Embedding Benchmark](https://huggingface.co/spaces/mteb/leaderboard)) while remaining highly scalable and cost-effective. The following code deploys the model, creating a prediction server with dedicated resources for your use.\n",
        "\n",
        "The model is served by default on a single `g2-standard-12` machine replica with one `NVIDIA_L4` GPU. You can adjust the `min_replica_count`, `max_replica_count`, and `machine_type` to balance scalability and cost."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "L_t-4z0oTIuC"
      },
      "outputs": [],
      "source": [
        "from vertexai import model_garden\n",
        "\n",
        "model = model_garden.OpenModel(\"publishers/intfloat/models/e5@multilingual-e5-small\")\n",
        "\n",
        "# BigQuery currently supports only public shared endpoints; dedicated endpoints are not supported.\n",
        "endpoint = model.deploy(dedicated_endpoint_disabled=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "81nDd8F1UFYd"
      },
      "source": [
        "### Create a Remote Model in BigQuery\n",
        "\n",
        "Similar to the Gemini workflow, you need to create a remote model in BigQuery. However, this time the model will point to the URL of the Vertex AI endpoint you just created. This tells BigQuery where to send the data for embedding generation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cm9_Pi5HUZBl"
      },
      "outputs": [],
      "source": [
        "ENDPOINT_URL = f\"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{endpoint.name}\"\n",
        "print(\"Endpoint URL: \", ENDPOINT_URL)\n",
        "\n",
        "query = f\"\"\"\n",
        "CREATE OR REPLACE MODEL demo_dataset.multilingual_e5_small\n",
        "REMOTE WITH CONNECTION DEFAULT\n",
        "OPTIONS(\n",
        "  endpoint='{ENDPOINT_URL}'\n",
        ");\n",
        "\"\"\"\n",
        "\n",
        "bq_client.query_and_wait(query).to_dataframe()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uCYhdnH5YuvU"
      },
      "source": [
        "### Generate Embeddings\n",
        "\n",
        "With the model created, you can use the same ML.GENERATE_EMBEDDING function as before. With this particular E5 model and the default deployment settings, embedding all of the 38M+ non-null rows in the Hacker News dataset takes around 2 hours and 10 minutes; the query below limits the input to 10,000 rows for a quicker demonstration."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_1Elsdt5ZX3i"
      },
      "outputs": [],
      "source": [
        "%%bigquery --project $PROJECT_ID\n",
        "\n",
        "SELECT\n",
        "  *\n",
        "FROM\n",
        "  ML.GENERATE_EMBEDDING(\n",
        "    MODEL demo_dataset.multilingual_e5_small,\n",
        "    (\n",
        "      SELECT\n",
        "        text AS content\n",
        "      FROM\n",
        "        bigquery-public-data.hacker_news.full\n",
        "      WHERE\n",
        "        text IS NOT NULL\n",
        "      LIMIT 10000\n",
        "    )\n",
        "  );"
      ]
    },
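    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e5-prefix-note"
      },
      "source": [
        "One model-specific note: according to the [`intfloat/multilingual-e5-small`](https://huggingface.co/intfloat/multilingual-e5-small) model card, E5-family models expect each input to start with a `query: ` or `passage: ` prefix, and omitting it can degrade embedding quality. You can add the prefix directly in SQL (for example, `CONCAT('passage: ', text) AS content` in the inner query), or preprocess client-side. A small illustrative helper (the function name is ours, not part of any API):\n",
        "\n",
        "```python\n",
        "def add_e5_prefix(text: str, kind: str = \"passage\") -> str:\n",
        "    \"\"\"Prepend the instruction prefix expected by E5-family embedding models.\"\"\"\n",
        "    if kind not in (\"query\", \"passage\"):\n",
        "        raise ValueError(\"kind must be 'query' or 'passage'\")\n",
        "    return f\"{kind}: {text}\"\n",
        "\n",
        "\n",
        "print(add_e5_prefix(\"BigQuery ML makes embeddings easy.\"))\n",
        "```"
      ]
    },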
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WAKZMJcfbdyG"
      },
      "source": [
        "### Delete the Vertex AI Model and Endpoint\n",
        "\n",
        "This is a critical step for cost management. Since you deployed an OSS model to an active endpoint, it continues to incur costs even when idle. The following code undeploys the model from the endpoint and then deletes the endpoint, which stops the billing.\n",
        "\n",
        "For batch workloads, the most cost-effective pattern is to deploy the model, run your inference job, and immediately undeploy it, which you can achieve by running these steps sequentially."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "CwNyBYsmbvxI"
      },
      "outputs": [],
      "source": [
        "endpoint.undeploy_all()\n",
        "endpoint.delete()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2a4e033321ad"
      },
      "source": [
        "## Cleaning up\n",
        "\n",
        "To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
        "\n",
        "Otherwise, you can delete the BigQuery dataset created in this demo, assuming you have already run the code block above to delete the resources on the Vertex AI side:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YUFLtfZqctBq"
      },
      "outputs": [],
      "source": [
        "!bq rm -r -f --dataset {PROJECT_ID}:demo_dataset"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "bigquery_ml_gemini_and_oss_text_embedding.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
