{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "YJy8qKC5Zwmx"
   },
   "outputs": [],
   "source": [
    "# Copyright 2025 Google LLC\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Introduction to Vertex AI Vector Search 2.0\n",
    "\n",
    "This notebook provides a comprehensive introduction to **[Vertex AI Vector Search 2.0](https://cloud.google.com/vertex-ai/docs/vector-search-2/overview)** for developers who are familiar with vector search and embeddings concepts, but new to this Google Cloud service.\n",
    "\n",
    "**New to vector search and embeddings?** If you're looking to learn the basics, please refer to: [Introduction to Text Embeddings and Vector Search](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb)\n",
    "\n",
    "<table align=\"left\">\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/embeddings/vector-search-2-intro.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Run in Colab\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fembeddings%2Fvector-search-2-intro.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Run in Colab Enterprise\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/vector-search-2-intro.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
    "    </a>\n",
    "  </td>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What is Vector Search 2.0?\n",
    "\n",
    "[Vector Search 2.0](https://cloud.google.com/vertex-ai/docs/vector-search-2/overview) is Google Cloud's fully managed, self-tuning vector database built on Google's [ScaNN (Scalable Nearest Neighbors)](https://github.com/google-research/google-research/tree/master/scann) algorithm - the same technology powering Google Search, YouTube, and Google Play.\n",
    "\n",
    "### Key Differentiators\n",
    "\n",
    "- **Zero Indexing to Billion-Scale Index**: Start developing immediately with zero indexing time using [kNN (k-Nearest Neighbors)](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm), then scale to billions of vectors with millisecond latency using Google-scale [ANN (Approximate Nearest Neighbor)](https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor) indexes for production - all with the same API and the same dataset\n",
    "- **Unified Data Storage**: Store both [vector embeddings](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings) and user-provided data together (no separate database or feature store needed)\n",
    "- **Auto-Embeddings**: Automatically generate semantic embeddings using [Vertex AI embedding models](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings#google-models)\n",
    "- **Built-in Full Text Search**: Provides built-in [full-text search](https://cloud.google.com/discover/what-is-full-text-search) without requiring you to generate sparse embeddings yourself. You can also bring your own sparse embeddings (e.g., BM25, SPLADE) for a customized full-text search.\n",
    "- **Hybrid Search**: Combine semantic and keyword/token-based search in a single query with intelligent [RRF](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) ranking\n",
    "- **Self-Tuning**: Auto-optimized performance without manual configuration\n",
    "- **Enterprise-Ready**: Built-in scalability, security, and compliance\n",
    "\n",
    "### Core Architecture\n",
    "\n",
    "Vector Search 2.0 has three main components:\n",
    "\n",
    "1. **[Collections](https://cloud.google.com/vertex-ai/docs/vector-search-2/collections/collections)**: Schema-enforced containers for your data\n",
    "2. **[Data Objects](https://cloud.google.com/vertex-ai/docs/vector-search-2/data-objects/data-objects)**: Individual items with data and vector embeddings\n",
    "3. **[Indexes](https://cloud.google.com/vertex-ai/docs/vector-search-2/indexes/indexes)**: Instant nearest neighbor search over your data with kNN; for low-latency nearest neighbor search at scale, use an ANN index.\n",
    "   - **Start fast**: Use kNN immediately with zero setup time - perfect for development and small datasets\n",
    "   - **Scale to production**: Use ANN indexes for billion-scale search with sub-second latency powered by ScaNN algorithm\n",
    "\n",
    "Let's explore each concept with hands-on examples!\n"
   ]
  },
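  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, the RRF ranking used by hybrid search can be sketched in a few lines of pure Python. This is an illustration of the Reciprocal Rank Fusion formula only - the service performs the fusion server-side, and the constant `k = 60` is the value from the RRF paper, not necessarily what the service uses:\n",
    "\n",
    "```python\n",
    "def rrf_fuse(rankings, k=60):\n",
    "    # RRF: score(d) = sum over result lists of 1 / (k + rank of d in that list)\n",
    "    scores = {}\n",
    "    for ranking in rankings:\n",
    "        for rank, doc_id in enumerate(ranking, start=1):\n",
    "            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)\n",
    "    return sorted(scores, key=scores.get, reverse=True)\n",
    "\n",
    "semantic_results = [\"p1\", \"p2\", \"p3\"]  # ranked by dense similarity\n",
    "keyword_results = [\"p3\", \"p1\", \"p4\"]  # ranked by token matching\n",
    "print(rrf_fuse([semantic_results, keyword_results]))  # ['p1', 'p3', 'p2', 'p4']\n",
    "```"
   ]
  },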
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example Scenario: E-Commerce Product Search\n",
    "\n",
    "To demonstrate Vector Search 2.0's capabilities, we'll build a **product search and recommendation system** using the TheLook e-commerce dataset.\n",
    "\n",
    "**For this demo**: We'll use a **10,000 product sample** for faster processing and lower costs. The techniques shown here scale seamlessly to the full dataset (~30K products) or even larger catalogs.\n",
    "\n",
    "### Business Use Cases:\n",
    "\n",
    "1. **Product Discovery**: Find similar products based on product name semantics\n",
    "2. **Semantic Search**: \"Find products for 'Men's outfit for beach'\"\n",
    "3. **Filtered Shopping**: \"Show me Dresses under $100\"\n",
    "4. **Hybrid Search**: Combine semantic similarity with keyword matching for better product recommendations\n",
    "\n",
    "### Dataset Overview:\n",
    "\n",
    "The full TheLook dataset contains **29,120 fashion products** from an e-commerce platform with the following attributes:\n",
    "\n",
    "- `id`: Product ID (e.g., \"8037\")\n",
    "- `name`: Product name (e.g., \"Jostar Short Sleeve Solid Stretchy Capri Pants Set\")\n",
    "- `category`: Product category (26 categories: Dresses, Jeans, Tops & Tees, etc.)\n",
    "- `retail_price`: Product price in USD (e.g., 38.99)\n",
    "\n",
    "### Sample Data:\n",
    "\n",
    "| ID | Name | Category | Price |\n",
    "|-----|------|----------|-------|\n",
    "| 8037 | Jostar Short Sleeve Solid Stretchy Capri Pants Set | Clothing Sets | 38.99 |\n",
    "| 8036 | Womens Top Stitch Jacket and Pant Set by City Lights | Clothing Sets | 199.95 |\n",
    "| 8035 | Ulla Popken Plus Size 3-Piece Duster and Pants Set | Clothing Sets | 159.00 |\n",
    "\n",
    "Throughout this notebook, we'll walk through each step from setting up Collections and adding products, to performing various types of searches, and finally optimizing with indexes. Let's get started!\n",
    "\n",
    "-----"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "8rNNxNcaaQRb"
   },
   "source": [
    "## Prerequisites\n",
    "\n",
    "This tutorial requires a Google Cloud project linked to a billing account. To create a new project and set up billing for it, see [this document](https://cloud.google.com/vertex-ai/docs/start/cloud-environment).\n",
    "To get the permissions that you need to give a service account access to enable APIs and interact with Vertex AI resources, ask your administrator to grant you the [Security Admin](https://cloud.google.com/iam/docs/roles-permissions/iam#iam.securityAdmin) (`roles/iam.securityAdmin`) IAM role on your project. For more information about granting roles, see [Manage access to projects, folders, and organizations](https://cloud.google.com/iam/docs/granting-changing-revoking-access).\n",
    "\n",
    "### Important: Resource Cleanup\n",
    "\n",
    "Vector Search 2.0 resources incur costs when active. Make sure to run the cleanup section at the end of this tutorial to delete all Collections and Indexes."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZBbY9yJvbuL1"
   },
   "source": [
    "## Install the Vector Search SDK\n",
    "\n",
    "First, we'll install the `google-cloud-vectorsearch` Python package, along with `tqdm` for progress bars.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "69744e32"
   },
   "outputs": [],
   "source": [
    "%pip install google-cloud-vectorsearch tqdm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then, set `PROJECT_ID` for your Google Cloud project. You can leave the `LOCATION` as the default value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set PROJECT_ID and LOCATION\n",
    "PROJECT_ID = \"your-project-id\"  # @param {type:\"string\"}\n",
    "LOCATION = \"us-central1\"  # @param {type:\"string\"}\n",
    "\n",
    "# Validate PROJECT_ID is set\n",
    "if not PROJECT_ID or PROJECT_ID == \"your-project-id\":\n",
    "    raise ValueError(\n",
    "        \"⚠️ Please set PROJECT_ID to your actual Google Cloud project ID in the cell above\"\n",
    "    )\n",
    "\n",
    "print(f\"✅ Using project: {PROJECT_ID}\")\n",
    "print(f\"✅ Using location: {LOCATION}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "nitj2X54bgQC"
   },
   "source": [
    "## Authentication\n",
    "\n",
    "On Colab, run the following to authenticate calls to the Vector Search APIs. For Colab Enterprise and Cloud Workbench, you can skip this part."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "zQKHCit5vo-o"
   },
   "outputs": [],
   "source": [
    "import sys\n",
    "\n",
    "# Additional authentication is required for Google Colab\n",
    "if \"google.colab\" in sys.modules:\n",
    "    # Authenticate user to Google Cloud\n",
    "    from google.colab import auth\n",
    "\n",
    "    auth.authenticate_user()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "WycjwYB2Djp1"
   },
   "source": [
    "## Enable APIs\n",
    "\n",
    "Run the following command to enable the Vector Search API, along with the Vertex AI API (required for Auto-Embeddings and semantic search), on this Google Cloud project.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "e3VZ0UNJDkg-"
   },
   "outputs": [],
   "source": [
    "! gcloud services enable vectorsearch.googleapis.com aiplatform.googleapis.com --project \"{PROJECT_ID}\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "\n",
    "# Part 1: Collections - Your Data Container\n",
    "\n",
    "In this part, you'll learn how to create and configure **Collections** - the foundation of Vector Search 2.0. Collections are schema-enforced containers that define the structure of your data and embeddings.\n",
    "\n",
    "**What you'll accomplish:**\n",
    "- Initialize the Vector Search SDK clients\n",
    "- Understand Collection schemas (data schema + vector schema)\n",
    "- Create a product Collection with auto-embeddings\n",
    "- Inspect and verify your Collection configuration\n",
    "\n",
    "Let's start by setting up the SDK clients!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "bQ5q_KMkbXSi"
   },
   "source": [
    "## SDK Clients Overview\n",
    "\n",
    "The Vector Search 2.0 SDK uses a **modular client architecture** to organize operations by function. Instead of one monolithic client, you'll work with three specialized service clients throughout this tutorial:\n",
    "\n",
    "1. **VectorSearchServiceClient**: Manages Collections and Indexes (CRUD operations)\n",
    "2. **DataObjectServiceClient**: Manages Data Objects (create, update, delete)\n",
    "3. **DataObjectSearchServiceClient**: Performs search and query operations\n",
    "\n",
    "This separation provides clear boundaries between data management and search operations. For more details, see the [Python SDK Documentation](https://cloud.google.com/python/docs/reference/vectorsearch/latest).\n",
    "\n",
    "To begin, let's create these three client objects."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "zfiSJTIJvzI1"
   },
   "outputs": [],
   "source": [
    "from google.cloud import vectorsearch_v1beta\n",
    "\n",
    "vector_search_service_client = vectorsearch_v1beta.VectorSearchServiceClient()\n",
    "data_object_service_client = vectorsearch_v1beta.DataObjectServiceClient()\n",
    "data_object_search_service_client = vectorsearch_v1beta.DataObjectSearchServiceClient()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "AOQkdB5mYtbU"
   },
   "source": [
    "\n",
    "## What is a Collection?\n",
    "\n",
    "A **[Collection](https://cloud.google.com/vertex-ai/docs/vector-search-2/collections/collections)** is a schema-enforced container for your data in Vector Search 2.0. Think of it as a table in a traditional database, but optimized for vector operations.\n",
    "\n",
    "### Key Concepts:\n",
    "\n",
    "- **[Data Schema](https://cloud.google.com/vertex-ai/docs/vector-search-2/collections/collections#data-schema)**: Defines the structure of your data ([JSON Schema](https://json-schema.org/) format). Note: We do not currently support `additionalProperties=True`.\n",
    "- **[Vector Schema](https://cloud.google.com/vertex-ai/docs/vector-search-2/collections/collections#vector-schema)**: Defines your embedding fields with their dimensions and types\n",
    "- **Schema Enforcement**: All Data Objects must conform to the defined schemas\n",
    "- **Multiple Embeddings**: You can have multiple vector fields per object (e.g., text_embedding, image_embedding)\n",
    "\n",
    "### Collection Features:\n",
    "\n",
    "1. **Dense Vectors**: Standard continuous embeddings (e.g., [0.1, 0.2, 0.3, ...])\n",
    "2. **[Auto-Embeddings](https://cloud.google.com/vertex-ai/docs/vector-search-2/data-objects/data-objects#auto-populate-embeddings)**: Automatic embedding generation using Vertex AI models\n",
    "3. **Flexible Data**: Store any JSON-compatible data alongside vectors\n",
    "\n",
    "Now let's create our product Collection with all the schemas we need!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating the Product Collection\n",
    "\n",
    "Let's create our first Collection for the e-commerce product catalog. We'll define schemas that match our TheLook dataset structure and configure auto-embeddings for product names.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "AOTKBWwjYvJj"
   },
   "outputs": [],
   "source": [
    "import getpass\n",
    "from datetime import datetime\n",
    "\n",
    "collection_id = f\"products-demo-{getpass.getuser()}-{datetime.now().strftime('%m%d%y-%H%M%S')}\"\n",
    "print(f\"Collection ID: {collection_id}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Nv1wUdXvY4kZ"
   },
   "outputs": [],
   "source": [
    "# Create the product Collection with schemas that match our dataset\n",
    "\n",
    "request = vectorsearch_v1beta.CreateCollectionRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}\",\n",
    "    collection_id=collection_id,\n",
    "    collection={\n",
    "        # Data Schema: Product data (id, name, category, retail_price)\n",
    "        \"data_schema\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"id\": {\"type\": \"string\"},           # Product ID\n",
    "                \"name\": {\"type\": \"string\"},         # Product name\n",
    "                \"category\": {\"type\": \"string\"},     # Product category (Dresses, Jeans, etc.)\n",
    "                \"retail_price\": {\"type\": \"number\"}, # Product price in USD\n",
    "            },\n",
    "        },\n",
    "        # Vector Schema: Product name-based embeddings for semantic and keyword search\n",
    "        \"vector_schema\": {\n",
    "            # Dense embedding: Captures semantic meaning of product names\n",
    "            # Auto-generated by Vertex AI using gemini-embedding-001 model\n",
    "            \"name_dense_embedding\": {\n",
    "                \"dense_vector\": {\n",
    "                    \"dimensions\": 768,  # Using 768 dimensions for gemini-embedding-001\n",
    "                    \"vertex_embedding_config\": {\n",
    "                        # Auto-generate dense embeddings from product name\n",
    "                        \"model_id\": \"gemini-embedding-001\",\n",
    "                        \"text_template\": \"{name}\",\n",
    "                        \"task_type\": \"RETRIEVAL_DOCUMENT\",\n",
    "                    },\n",
    "                },\n",
    "            },\n",
    "        },\n",
    "    }\n",
    ")\n",
    "\n",
    "operation = vector_search_service_client.create_collection(request=request)\n",
    "operation.result()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Key Points:\n",
    "\n",
    "What we just accomplished:\n",
    "\n",
    "✅ **Created a Collection** named `products-demo-{user}-{date}` with strict schemas  \n",
    "✅ **Defined data schema** for 4 data fields: id, name, category, retail_price  \n",
    "✅ **Configured dense vector embedding for product name-based semantic search**:\n",
    "   - `name_dense_embedding` - **Auto-generated** by Vertex AI from product name using gemini-embedding-001 model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Auto-Embeddings Feature\n",
    "\n",
    "One powerful feature of Vector Search 2.0 is **automatic embedding generation**. When you create a Data Object without providing vectors for fields configured with `vertex_embedding_config` as we've done with `name_dense_embedding`, the service automatically generates them using Vertex AI models. This requires the Vertex AI API to be enabled (done in the setup section)."
   ]
  },
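  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conceptually, the `text_template` from the vector schema is applied to each object's data fields to produce the text that gets embedded. A local illustration of that substitution (this is not the service's code):\n",
    "\n",
    "```python\n",
    "# The template \"{name}\" from our vector schema selects the field to embed\n",
    "text_template = \"{name}\"\n",
    "data = {\n",
    "    \"name\": \"Jostar Short Sleeve Solid Stretchy Capri Pants Set\",\n",
    "    \"category\": \"Clothing Sets\",\n",
    "}\n",
    "embedding_input = text_template.format(**data)\n",
    "print(embedding_input)  # Jostar Short Sleeve Solid Stretchy Capri Pants Set\n",
    "```"
   ]
  },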
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "mBji6kevajeO"
   },
   "source": [
    "## Inspecting Collections\n",
    "\n",
    "You can retrieve and list Collections to verify their configuration.\n",
    "\n",
    "Let's check the Collection we just created to confirm all our schemas are in place."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "mHT5ef44akz3"
   },
   "outputs": [],
   "source": [
    "request = vectorsearch_v1beta.GetCollectionRequest(\n",
    "    name=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\"\n",
    ")\n",
    "vector_search_service_client.get_collection(request)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "z8JMlDDgbkOB"
   },
   "source": [
    "-----\n",
    "# Part 2: Data Objects - Your Actual Data\n",
    "\n",
    "Now that we have a Collection set up, it's time to populate it with actual data! In this part, you'll learn how to add **Data Objects** - the individual items stored in your Collection.\n",
    "\n",
    "**What you'll accomplish:**\n",
    "- Download and prepare the TheLook e-commerce dataset (10,000 products)\n",
    "- Understand the Data Object structure (id, data, vectors)\n",
    "- Create individual Data Objects with auto-generated embeddings\n",
    "- Perform efficient batch imports with rate limiting\n",
    "- Learn best practices for managing embedding API quotas\n",
    "\n",
    "By the end of this section, you'll have a fully populated Collection with 10,000 products, each with automatically generated semantic embeddings!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "c3n0ofiYbSOR"
   },
   "source": [
    "## Downloading the TheLook Dataset\n",
    "\n",
    "Now that our Collection is ready, we need data! Let's download the TheLook e-commerce dataset.\n",
    "\n",
    "For this demo, we'll use a **randomly sampled 10,000 products** (from the full ~30K dataset) to keep processing fast. Random sampling gives better category distribution than sequential selection. You can easily switch to the full dataset by setting `MAX_PRODUCTS = None` in the code below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "BE4UEvwbbU_t"
   },
   "outputs": [],
   "source": [
    "import json\n",
    "import urllib.request\n",
    "import random\n",
    "\n",
    "# Download and load TheLook e-commerce dataset with error handling\n",
    "print(\"📥 Downloading TheLook e-commerce dataset...\")\n",
    "dataset_url = \"https://storage.googleapis.com/gcp-samples-ic0-vs20demo/thelook_dataset.jsonl\"\n",
    "\n",
    "# For faster demo: randomly sample 10,000 products (full dataset has ~30K)\n",
    "# To use the full dataset, set MAX_PRODUCTS = None\n",
    "MAX_PRODUCTS = 10000\n",
    "\n",
    "all_products = []\n",
    "required_fields = ['id', 'name', 'category', 'retail_price']\n",
    "\n",
    "# Load all products first\n",
    "with urllib.request.urlopen(dataset_url) as response:\n",
    "    for i, line in enumerate(response, 1):\n",
    "        product_data = json.loads(line.decode('utf-8'))\n",
    "        \n",
    "        # Check if all required fields are present\n",
    "        if all(field in product_data for field in required_fields):\n",
    "            all_products.append({\n",
    "                \"id\": product_data[\"id\"],\n",
    "                \"data\": {\n",
    "                    \"id\": product_data[\"id\"],\n",
    "                    \"name\": product_data[\"name\"],\n",
    "                    \"category\": product_data[\"category\"],\n",
    "                    \"retail_price\": product_data[\"retail_price\"],\n",
    "                }\n",
    "            })\n",
    "\n",
    "# Random sampling for better category distribution\n",
    "if MAX_PRODUCTS and len(all_products) > MAX_PRODUCTS:\n",
    "    random.seed(42)  # Set seed for reproducibility\n",
    "    products = random.sample(all_products, MAX_PRODUCTS)\n",
    "    print(f\"✅ Loaded and randomly sampled {len(products):,} products from {len(all_products):,} total\")\n",
    "    print(f\"   (Random sampling ensures better category distribution)\")\n",
    "else:\n",
    "    products = all_products\n",
    "    print(f\"✅ Loaded {len(products):,} products from TheLook dataset\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "VKBjHXWzb3bA"
   },
   "outputs": [],
   "source": [
    "# Inspect the structure of the first five products\n",
    "products[:5]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will load this `products` data into the Collection as Data Objects."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What is a Data Object?\n",
    "\n",
    "A **Data Object** represents a single item in your Collection. Each Data Object consists of:\n",
    "\n",
    "1. **data_object_id**: Unique identifier\n",
    "2. **data**: Data fields (as defined in data_schema)\n",
    "3. **vectors**: Embedding vectors (as defined in vector_schema)\n",
    "\n",
    "In our e-commerce scenario, each Data Object represents one product with its data (id, name, category, retail_price) and product name-based vector embedding (dense semantic embedding, auto-generated).\n",
    "\n",
    "\n",
    "### Creating Data Objects\n",
    "\n",
    "You can add Data Objects in three ways:\n",
    "1. **Single Create**: Add one product at a time (useful for new inventory) - **covered in this section**\n",
    "2. **Batch Create**: Add multiple products efficiently (bulk catalog import) - **covered later in this section**\n",
    "3. **GCS Import**: Bulk import from Google Cloud Storage (large-scale datasets) - **see [Vector Search 2.0 Quickstart](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/vector-search-2-quickstart.ipynb)**\n",
    "\n",
    "In this tutorial, we'll focus on single object creation to understand the fundamentals. For production use cases with larger datasets, refer to the batch import method covered later in this section, or the GCS import method in the [Vector Search 2.0 Quickstart notebook](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/vector-search-2-quickstart.ipynb).\n"
   ]
  },
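  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conceptually, a Data Object is just a JSON record with these three parts. A minimal sketch using plain Python dicts (not the SDK's request types):\n",
    "\n",
    "```python\n",
    "# One product from our sample data, shaped as a Data Object\n",
    "data_object = {\n",
    "    \"data_object_id\": \"8037\",  # 1. unique identifier\n",
    "    \"data\": {                  # 2. fields matching the data_schema\n",
    "        \"id\": \"8037\",\n",
    "        \"name\": \"Jostar Short Sleeve Solid Stretchy Capri Pants Set\",\n",
    "        \"category\": \"Clothing Sets\",\n",
    "        \"retail_price\": 38.99,\n",
    "    },\n",
    "    \"vectors\": {},             # 3. left empty so auto-embeddings kick in\n",
    "}\n",
    "print(sorted(data_object))  # ['data', 'data_object_id', 'vectors']\n",
    "```"
   ]
  },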
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "O_u_MqM1bnJO"
   },
   "source": [
    "## Create Single Data Object\n",
    "\n",
    "Let's start by creating a single Data Object to understand the basic structure. This is useful for real-time updates or adding individual items one at a time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Lyl9n6tSblgk"
   },
   "outputs": [],
   "source": [
    "# Add the first product as a demonstration\n",
    "\n",
    "request = vectorsearch_v1beta.CreateDataObjectRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "    data_object_id=products[0][\"id\"],\n",
    "    data_object={\n",
    "        \"data\": products[0][\"data\"],  # Data: id, name, category, retail_price\n",
    "        \"vectors\": {},  # Empty vectors - dense embedding will be auto-generated!\n",
    "    },\n",
    ")\n",
    "result = data_object_service_client.create_data_object(request=request)\n",
    "print(f\"✅ Created Data Object: {products[0]['data']['name']}\")\n",
    "print(f\"   Category: {products[0]['data']['category']} | Price: ${products[0]['data']['retail_price']:.2f}\")\n",
    "print(f\"   💡 Dense embedding auto-generated by Vector Search 2.0 from product name\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "gq8lt1vwJZZg"
   },
   "source": [
    "### Get Data Object\n",
    "\n",
    "Retrieve a specific Data Object by its ID to verify it was created:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "request = vectorsearch_v1beta.GetDataObjectRequest(\n",
    "    name=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}/dataObjects/{products[0]['id']}\",\n",
    ")\n",
    "data_object_service_client.get_data_object(request=request)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can see that the `name_dense_embedding` field has been filled in with an automatically generated embedding."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Delete Data Object\n",
    "\n",
    "Delete this Data Object now to avoid duplicating it when we batch-load the full dataset in the next section."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "delete_request = vectorsearch_v1beta.DeleteDataObjectRequest(\n",
    "    name=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}/dataObjects/{products[0]['id']}\"\n",
    ")\n",
    "data_object_service_client.delete_data_object(delete_request)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Batch Create Data Objects\n",
    "\n",
    "Now let's import our **entire 10,000 product catalog** using batch operations. By using `BatchCreateDataObjectsRequest`, you can reduce API calls from 10,000 to just 40 requests (~250x more efficient) compared to adding them one by one."
   ]
  },
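  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The \"40 requests\" figure is simple arithmetic: 10,000 products divided into batches of 250 (the per-request text limit for gemini-embedding-001) gives ceil(10000 / 250) calls:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "total_products = 10_000\n",
    "batch_size = 250  # max texts per embedding request for gemini-embedding-001\n",
    "num_requests = math.ceil(total_products / batch_size)\n",
    "print(num_requests)  # 40\n",
    "```"
   ]
  },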
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Import products in batches\n",
    "# Batch size must not exceed the embedding model's \"max texts per request\" limit (250 for gemini-embedding-001)\n",
    "from tqdm.auto import tqdm\n",
    "\n",
    "batch_size = 250  # Max texts per request for gemini-embedding-001\n",
    "\n",
    "for batch_start in tqdm(range(0, len(products), batch_size), desc=\"Importing products\", unit=\"batch\"):\n",
    "    batch_end = min(batch_start + batch_size, len(products))\n",
    "    \n",
    "    batch_request = [\n",
    "        {\n",
    "            \"data_object_id\": product[\"id\"],\n",
    "            \"data_object\": {\"data\": product[\"data\"], \"vectors\": {}},  # Empty vectors = auto-generate\n",
    "        }\n",
    "        for product in products[batch_start:batch_end]\n",
    "    ]\n",
    "    \n",
    "    try:\n",
    "        request = vectorsearch_v1beta.BatchCreateDataObjectsRequest(\n",
    "            parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "            requests=batch_request,\n",
    "        )\n",
    "        data_object_service_client.batch_create_data_objects(request)\n",
    "    except Exception as e:\n",
    "        if \"already exists\" not in str(e).lower():\n",
    "            tqdm.write(f\"⚠️ Batch {batch_start//batch_size + 1} error: {str(e)[:80]}\")\n",
    "\n",
    "print(f\"✅ Import complete!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What This Code Does\n",
    "\n",
    "1. **Batching**: 250 products per request (max for `gemini-embedding-001`)\n",
    "2. **Auto-Embeddings**: Empty `vectors: {}` triggers automatic embedding generation\n",
    "3. **Error Handling**: Skips duplicates, reports other errors\n",
    "4. **Progress Tracking**: Real-time progress with tqdm\n",
    "\n",
    "**Note:** When using auto-embeddings, be aware of the Vertex AI Embeddings API quotas (5M tokens/min, 250 texts/request). For details on these quotas and on rate-limiting techniques for large-scale imports, see [Appendix: Embedding API Quotas and Rate Limiting](#appendix-embedding-api-quotas-and-rate-limiting).\n",
    "\n",
    "-----"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part 3: Querying and Filtering Data\n",
    "\n",
    "Now that we have our product catalog populated, let's learn how to retrieve data! This section covers filtering and querying based on data.\n",
    "\n",
    "## Query vs Search\n",
    "\n",
    "Vector Search 2.0 distinguishes between two operations:\n",
    "\n",
    "- **Query**: Filter and retrieve Data Objects based on data (like SQL WHERE clause)\n",
    "- **Search**: Find similar items based on vector similarity (ANN search)\n",
    "\n",
    "In our e-commerce scenario:\n",
    "- **Query** is used for: \"Show me all Jeans under $100\" or \"Find products in the Dresses category\"\n",
    "- **Search** is used for: \"Find products with similar names\" or \"Recommend products like this one\"\n",
    "\n",
    "Let's start with querying to understand our product catalog."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "4NYnmElNJkEk"
   },
   "source": [
    "## Filtering with Query Operators\n",
    "\n",
    "Vector Search 2.0 supports a [rich query language](https://cloud.google.com/vertex-ai/docs/vector-search-2/query-search/query#filter-syntax) for filtering:\n",
    "\n",
    "**Comparison operators**: `$eq`, `$ne`, `$gt`, `$gte`, `$lt`, `$lte`  \n",
    "**Logical operators**: `$and`, `$or`  \n",
    "**Array operators**: `$in`, `$nin`, `$all`\n",
    "\n",
    "Let's see some examples:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "oHizcC4HJnMZ"
   },
   "outputs": [],
   "source": [
    "# Example 1: Browse by category - Find all products in Jeans category\n",
    "jeans_request = vectorsearch_v1beta.QueryDataObjectsRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "    filter={\"category\": {\"$eq\": \"Jeans\"}},\n",
    "    output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"*\"]),\n",
    ")\n",
    "jeans = data_object_search_service_client.query_data_objects(jeans_request)\n",
    "print(\"All Jeans products:\")\n",
    "print([p.data[\"name\"][:50] + \"...\" for p in jeans][:5])  # Show first 5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example 2: Price-based filtering - Affordable Jeans (under $75)\n",
    "# Useful for: \"What are affordable jeans in our catalog?\"\n",
    "affordable_jeans_request = vectorsearch_v1beta.QueryDataObjectsRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "    filter={\"$and\": [{\"category\": {\"$eq\": \"Jeans\"}}, {\"retail_price\": {\"$lt\": 75}}]},\n",
    "    output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"*\"]),\n",
    ")\n",
    "affordable_jeans = data_object_search_service_client.query_data_objects(\n",
    "    affordable_jeans_request\n",
    ")\n",
    "print(\"Jeans under $75:\")\n",
    "print([f\"{p.data['name'][:40]}... (${p.data['retail_price']:.2f})\" for p in affordable_jeans][:5])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example 3: Category browsing with price exclusion\n",
    "# Useful for: \"Show me Dresses or premium Clothing Sets (over $150)\"\n",
    "nested_conditionals_request = vectorsearch_v1beta.QueryDataObjectsRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "    filter={\n",
    "        \"$or\": [\n",
    "            {\"category\": {\"$eq\": \"Dresses\"}},\n",
    "            {\n",
    "                \"$and\": [\n",
    "                    {\"category\": {\"$eq\": \"Clothing Sets\"}},\n",
    "                    {\"retail_price\": {\"$gte\": 150}},\n",
    "                ]\n",
    "            },\n",
    "        ]\n",
    "    },\n",
    "    output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"*\"]),\n",
    ")\n",
    "nested_conditionals = data_object_search_service_client.query_data_objects(\n",
    "    nested_conditionals_request\n",
    ")\n",
    "print(\"Dresses OR (Clothing Sets >= $150):\")\n",
    "print([f\"{p.data['name'][:40]}... | {p.data['category']} | ${p.data['retail_price']:.2f}\" for p in nested_conditionals][:5])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "VdQbEYvVKl5K"
   },
   "source": [
    "### Query with Aggregates\n",
    "\n",
    "Beyond filtering individual products, you can also get aggregate statistics about your Collection - like counting total products, or analyzing distributions by category."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "daWCB6x7KpLE"
   },
   "outputs": [],
   "source": [
    "aggregate_request = vectorsearch_v1beta.AggregateDataObjectsRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "    aggregate=\"COUNT\",\n",
    ")\n",
    "data_object_search_service_client.aggregate_data_objects(aggregate_request)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "uf4IcjeDKryZ"
   },
   "source": [
    "# Part 4: Vector Search\n",
    "\n",
    "This is where Vector Search 2.0 truly shines! We'll now move beyond data filtering to semantic similarity search using vector embeddings.\n",
    "\n",
    "## Using kNN (k-Nearest Neighbors) Search\n",
    "\n",
    "In this section, we'll use the **[kNN (k-Nearest Neighbors)](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm)** algorithm for vector search. The best part of kNN is that **you don't need to build any index** - you can search on the Collection as soon as you import the data, with **zero indexing time**!\n",
    "\n",
    "**kNN Advantages:**\n",
    "- ✅ **Instant search**: No waiting for index creation\n",
    "- ✅ **Perfect for development**: Test and iterate quickly\n",
    "- ✅ **Ideal for small datasets**: Works great with up to tens of thousands of rows\n",
    "\n",
    "**kNN Limitations:**\n",
    "- ⚠️ **Latency increases with data size**: For datasets with over tens of thousands of rows, you'll see longer latency\n",
    "\n",
    "**Production Recommendation:**\n",
    "For production deployments with large-scale data, we **strongly recommend using ANN (Approximate Nearest Neighbor)** indexes, which provide blazingly fast vector search even with billions of rows. We'll cover ANN indexes in **Part 5**.\n",
    "\n",
    "## Types of Search in Vector Search 2.0\n",
    "\n",
    "Vector Search 2.0 supports multiple search modalities:\n",
    "\n",
    "1. **[Semantic Search](https://docs.cloud.google.com/vertex-ai/docs/vector-search-2/query-search/search#semantic-search)**: Natural language queries (with auto-generated embeddings)\n",
    "2. **[Text Search](https://docs.cloud.google.com/vertex-ai/docs/vector-search-2/query-search/search#text-search)**: Traditional keyword search\n",
    "3. **[Hybrid Search](https://docs.cloud.google.com/vertex-ai/docs/vector-search-2/query-search/search#hybrid-search)**: Combine multiple search types with ranking\n",
    "4. **[Vector Search](https://docs.cloud.google.com/vertex-ai/docs/vector-search-2/query-search/search#vector-search)**: Provide your own query vector for similarity search\n",
    "\n",
    "Let's explore each type with our product data!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Q-AbPHrEMQaf"
   },
   "source": [
    "## 1. Semantic Search\n",
    "\n",
    "With Semantic Search, you can use **natural language queries**. Vector Search 2.0 automatically converts your text to an embedding and runs a vector search.\n",
    "\n",
    "**E-Commerce Scenario**: A user types \"Men's outfit for beach\" - the system finds products with semantically relevant item names."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "query_text = \"Men's outfit for beach\"\n",
    "\n",
    "# Semantic search automatically generates embeddings from the query text\n",
    "semantic_search_request = vectorsearch_v1beta.SearchDataObjectsRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "    semantic_search=vectorsearch_v1beta.SemanticSearch(\n",
    "        search_text=query_text,\n",
    "        search_field=\"name_dense_embedding\",  # The vector field to search\n",
    "        task_type=\"QUESTION_ANSWERING\",\n",
    "        top_k=10,\n",
    "        output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"name\", \"category\", \"retail_price\"]),\n",
    "    ),\n",
    ")\n",
    "\n",
    "results = data_object_search_service_client.search_data_objects(semantic_search_request)\n",
    "\n",
    "print(f\"Semantic search results for '{query_text}':\")\n",
    "for i, result in enumerate(results, 1):\n",
    "    name = result.data_object.data['name']\n",
    "    price = result.data_object.data['retail_price']\n",
    "    print(f\"{i:2}. {name} - ${price:.2f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Why Task Type Embeddings Matter\n",
    "\n",
    "Notice that the results above are not just *similar* items - they are **relevant** items that answer the user's query. This is thanks to **task type embeddings**.\n",
    "\n",
    "When we indexed the product names, we used `task_type=\"RETRIEVAL_DOCUMENT\"`. When searching, we use `task_type=\"QUESTION_ANSWERING\"`. This pairing allows the embedding model to understand the asymmetric relationship between queries and documents - a query like \"Men's outfit for beach\" has different intent than a document describing a product.\n",
    "\n",
    "This solves the classic \"question is not the answer\" problem in RAG systems, where a question and its answer have different meanings as standalone statements.\n",
    "\n",
    "For more details on task type embeddings and how they improve search quality, see: [Improve Gen AI Search with Vertex AI Embeddings and Task Types](https://cloud.google.com/blog/products/ai-machine-learning/improve-gen-ai-search-with-vertex-ai-embeddings-and-task-types)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Semantic Search: Pros and Cons\n",
    "\n",
    "**Pros:**\n",
    "- ✅ **Understanding the query intent**: Finds \"Swim Trunks\" and \"Board Shorts\" when searching for \"Men's outfit for beach\"\n",
    "- ✅ **Handles natural language**: Users can search conversationally without knowing exact product names\n",
    "- ✅ **Relevance-ranked results**: Returns items ordered by semantic relevance\n",
    "\n",
    "**Cons:**\n",
    "- ❌ **May miss exact keywords**: Searching for a specific SKU like \"ABC-123\" may not work well\n",
    "- ❌ **\"Out of domain\" limitations**: New product names, proprietary codes, or brand-new terms not in the model's training data may not be understood"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Text Search (Keyword Matching)\n",
    "\n",
    "Text Search provides traditional full-text search.\n",
    "\n",
    "**Note**: For customized full-text search, you can also use your own sparse embeddings (e.g., BM25, SPLADE) with Vector Search.\n",
    "\n",
    "**E-Commerce Scenario**: \"Find products with 'Short' in the name\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "jx3lejMPMwVJ"
   },
   "outputs": [],
   "source": [
    "query_text = \"Short\"\n",
    "\n",
    "text_search_request = vectorsearch_v1beta.SearchDataObjectsRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "    text_search=vectorsearch_v1beta.TextSearch(\n",
    "        search_text=query_text,\n",
    "        data_field_names=[\"name\"],  # Search in product name field\n",
    "        top_k=10,\n",
    "        output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"name\", \"category\", \"retail_price\"]),\n",
    "    ),\n",
    ")\n",
    "results = data_object_search_service_client.search_data_objects(text_search_request)\n",
    "\n",
    "print(f\"Text search results for '{query_text}':\")\n",
    "for i, result in enumerate(results, 1):\n",
    "    name = result.data_object.data['name']\n",
    "    price = result.data_object.data['retail_price']\n",
    "    print(f\"{i:2}. {name} - ${price:.2f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Text Search: Pros and Cons\n",
    "\n",
    "**Pros:**\n",
    "- ✅ **Keyword matching**: Guarantees results contain the search keywords\n",
    "- ✅ **Works with \"out of domain\" terms**: Handles product codes, SKUs, brand names, and new terms the embedding model hasn't seen\n",
    "- ✅ **Predictable results**: Users know exactly what they're searching for\n",
    "\n",
    "**Cons:**\n",
    "- ❌ **No semantic understanding**: Won't find \"Bermuda\" or \"Swim Trunks\" when searching for \"Short\"\n",
    "- ❌ **Can't find synonyms**: Searching for \"pants\" won't find \"trousers\" or \"slacks\"\n",
    "- ❌ **No concept of relevance**: All keyword matches are treated equally\n",
    "\n",
    "This is why combining Text Search with Semantic Search in **Hybrid Search** often yields the best results - you get both keyword matching and semantic understanding."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "3X2TuDEnNvm2"
   },
   "source": [
    "## 3. Hybrid Search - Combining Multiple Search Strategies\n",
    "\n",
    "One of the most powerful features of Vector Search 2.0 is **Hybrid Search** - combining multiple search strategies using [Reciprocal Rank Fusion (RRF)](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) to produce better, more balanced results."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Semantic Search + Text Search Example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "bvhd0j-PN0u0"
   },
   "outputs": [],
   "source": [
    "# Hybrid search: combine semantic and text searches with built-in RRF\n",
    "query_text = \"Men's short for beach\"\n",
    "\n",
    "batch_search_request = vectorsearch_v1beta.BatchSearchDataObjectsRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "    searches=[\n",
    "        vectorsearch_v1beta.Search(\n",
    "            semantic_search=vectorsearch_v1beta.SemanticSearch(\n",
    "                search_text=query_text,\n",
    "                search_field=\"name_dense_embedding\",\n",
    "                task_type=\"QUESTION_ANSWERING\",\n",
    "                top_k=20,\n",
    "                output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"id\", \"name\", \"category\", \"retail_price\"]),\n",
    "            )\n",
    "        ),\n",
    "        vectorsearch_v1beta.Search(\n",
    "            text_search=vectorsearch_v1beta.TextSearch(\n",
    "                search_text=query_text,\n",
    "                data_field_names=[\"name\"],\n",
    "                top_k=20,\n",
    "                output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"id\", \"name\", \"category\", \"retail_price\"]),\n",
    "            )\n",
    "        ),\n",
    "    ],\n",
    "    combine=vectorsearch_v1beta.BatchSearchDataObjectsRequest.CombineResultsOptions(\n",
    "        ranker=vectorsearch_v1beta.Ranker(\n",
    "            rrf=vectorsearch_v1beta.ReciprocalRankFusion(weights=[1.0, 1.0])\n",
    "        )\n",
    "    ),\n",
    ")\n",
    "\n",
    "batch_results = data_object_search_service_client.batch_search_data_objects(batch_search_request)\n",
    "\n",
    "print(f\"Hybrid search results for '{query_text}' (Semantic + Text with built-in RRF):\")\n",
    "print(\"=\"*80)\n",
    "\n",
    "# When a ranker is used, batch_results.results contains a single ranked list\n",
    "# results[0] is the SearchDataObjectsResponse with the combined RRF-ranked results\n",
    "if batch_results.results:\n",
    "    # Get the first (and only) result which contains the RRF-ranked combined results\n",
    "    combined_results = batch_results.results[0]\n",
    "    \n",
    "    for i, result in enumerate(combined_results.results[:10], 1):\n",
    "        name = result.data_object.data['name']\n",
    "        price = result.data_object.data['retail_price']\n",
    "        print(f\"{i:2}. {name} - ${price:.2f}\")\n",
    "else:\n",
    "    print(\"No results found\")\n",
    "\n",
    "print(\"\\n💡 Hybrid search combines semantic understanding with keyword precision using built-in RRF!\")\n",
    "print(\"   Results are ranked by RRF score - products appearing high in both searches rank highest.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PHFGTj_5Q7uK"
   },
   "source": [
    "The example above demonstrates practical hybrid search by combining semantic and text search with built-in RRF:\n",
    "\n",
    "**How it works:**\n",
    "1. **Same Query, Different Approaches**: Both searches use `\"Men's short for beach\"` but process it differently\n",
    "   - **Semantic Search**: Understands query intent (e.g., \"beach\" relates to swimwear, casual wear)\n",
    "   - **Text Search**: Finds keyword matches like \"short\" in the product name field\n",
    "2. **Built-in RRF Combining**: \n",
    "   - `BatchSearchDataObjectsRequest` executes both semantic and text searches in parallel\n",
    "   - The `combine` parameter with `ReciprocalRankFusion` automatically fuses the results\n",
    "   - The `weights` parameter (here `[1.0, 1.0]`) controls the relative importance of each search\n",
    "\n",
    "**Why Hybrid Search is the Best of Both Worlds:**\n",
    "\n",
    "| Challenge | Semantic Search | Text Search | Hybrid Search |\n",
    "|-----------|-----------------|-------------|---------------|\n",
    "| \"Men's outfit for beach\" | ✅ Understands intent | ❌ No keyword match | ✅ Works |\n",
    "| SKU \"ABC-123\" | ❌ Out of domain | ✅ Keyword match | ✅ Works |\n",
    "| New brand name | ❌ Not in training data | ✅ Keyword match | ✅ Works |\n",
    "| Typo \"shrt\" | ✅ May understand | ❌ No match | ✅ Partial |\n",
    "\n",
    "**RRF Algorithm Benefits:**\n",
    "- Products ranking high in **both** searches get the highest combined scores\n",
    "- Balances semantic understanding (query intent) with keyword precision (token matches)\n",
    "- Handles the weaknesses of each approach by leveraging the other's strengths"
   ]
  },
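  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the fusion step concrete, here is a minimal RRF sketch in plain Python. This is an illustration only: Vector Search 2.0 performs the fusion server-side when you pass a `ReciprocalRankFusion` ranker, and the product IDs below are hypothetical."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal Reciprocal Rank Fusion sketch (illustration only).\n",
    "def rrf_fuse(rankings, weights, k=60):\n",
    "    \"\"\"Merge ranked ID lists; items ranked high in several lists win.\"\"\"\n",
    "    scores = {}\n",
    "    for ranking, weight in zip(rankings, weights):\n",
    "        for rank, doc_id in enumerate(ranking, start=1):\n",
    "            # Each list contributes weight / (k + rank) per item it ranks\n",
    "            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (k + rank)\n",
    "    return sorted(scores, key=scores.get, reverse=True)\n",
    "\n",
    "\n",
    "semantic = [\"swim-trunks\", \"board-shorts\", \"tank-top\"]  # hypothetical IDs\n",
    "keyword = [\"board-shorts\", \"short-sleeve-tee\", \"swim-trunks\"]\n",
    "print(rrf_fuse([semantic, keyword], weights=[1.0, 1.0]))\n",
    "# 'board-shorts' ranks first: it appears near the top of both lists."
   ]
  },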
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "\n",
    "# Part 5: Production-Ready Search with ANN Indexes\n",
    "\n",
    "In Part 4, we used **kNN (k-Nearest Neighbors)** search, which works instantly without any index creation. While kNN is perfect for development and small datasets, it has a critical limitation: **search latency increases significantly as your dataset grows**.\n",
    "\n",
    "For production deployments with large-scale data (hundreds of thousands to billions of vectors), you need **ANN (Approximate Nearest Neighbor)** indexes.\n",
    "\n",
    "## Why ANN Indexes?\n",
    "\n",
    "**The Challenge with kNN:**\n",
    "- kNN performs **brute-force comparison** against every vector in your Collection\n",
    "- With 10,000 products: Fast (milliseconds)\n",
    "- With 100,000 products: Slower (hundreds of milliseconds)\n",
    "- With 1,000,000+ products: Too slow (seconds or more)\n",
    "\n",
    "**The ANN Solution:**\n",
    "ANN indexes use Google's [ScaNN (Scalable Nearest Neighbors)](https://github.com/google-research/google-research/tree/master/scann) algorithm - the same technology powering Google Search, YouTube, and Google Play - to enable **blazingly fast similarity search at billion-scale**.\n",
    "\n",
    "## ANN vs kNN: The Trade-off\n",
    "\n",
    "| Feature | kNN (Part 4) | ANN (Part 5) |\n",
    "|---------|-------------|--------------|\n",
    "| **Index Creation** | None (instant) | Required (5-60+ minutes) |\n",
    "| **Search Latency** | Increases with data size | Sub-second even at billion scale |\n",
    "| **Accuracy** | 100% exact | ~99% (approximate, configurable) |\n",
    "| **Best For** | Development, small datasets | Production, large-scale deployments |\n",
    "| **Dataset Size** | < tens of thousands | Hundreds of thousands to billions |\n",
    "\n",
    "## Key Benefits of ANN Indexes\n",
    "\n",
    "1. **Blazing Fast Search**: Sub-second latency even with billions of vectors\n",
    "2. **Advanced Filtering**: Pre-filter by data fields (category, price, etc.) during vector search\n",
    "3. **Optimized Storage**: Store frequently accessed fields directly in the index for faster retrieval\n",
    "4. **Production-Ready**: Built on battle-tested Google technology powering major products\n",
    "\n",
    "## What We'll Build\n",
    "\n",
    "In this section, we'll create an **ANN index** on our product name embeddings (`name_dense_embedding`) to enable:\n",
    "- Lightning-fast semantic product search at scale\n",
    "- Filtered searches like \"Find Jeans under $100 for men\"\n",
    "- Production-ready performance for e-commerce applications\n",
    "\n",
    "Let's get started!"
   ]
  },
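  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before building the index, the linear cost of brute-force kNN is easy to see in a toy sketch (illustration only, with random vectors; the managed kNN implementation is far more optimized, but the per-query work still grows with collection size):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy brute-force kNN (illustration only): every query must score every\n",
    "# stored vector, so the work per query is O(N * dim).\n",
    "import random\n",
    "\n",
    "\n",
    "def brute_force_knn(query, vectors, k):\n",
    "    # One dot product per stored vector - this scan is why kNN latency\n",
    "    # grows linearly with the number of Data Objects.\n",
    "    scores = [sum(q * v for q, v in zip(query, vec)) for vec in vectors]\n",
    "    ranked = sorted(range(len(vectors)), key=lambda i: scores[i], reverse=True)\n",
    "    return ranked[:k]\n",
    "\n",
    "\n",
    "random.seed(0)\n",
    "dim = 64\n",
    "toy_vectors = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(10_000)]\n",
    "toy_query = [random.gauss(0, 1) for _ in range(dim)]\n",
    "print(brute_force_knn(toy_query, toy_vectors, k=5))"
   ]
  },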
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating an ANN Index for Dense Embeddings\n",
    "\n",
    "Now let's create our first ANN index! This index will dramatically speed up semantic search on the `name_dense_embedding` field, making it ready for production-scale deployments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "C4zKyCJSQ-mM"
   },
   "outputs": [],
   "source": [
    "# Create an ANN index on the 'name_dense_embedding' field\n",
    "request = vectorsearch_v1beta.CreateIndexRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",\n",
    "    index_id=\"name-dense-index\",  # Use hyphens instead of underscores\n",
    "    index={\n",
    "        \"index_field\": \"name_dense_embedding\",  # Index the product name dense embeddings\n",
    "        \"filter_fields\": [\"category\", \"retail_price\"],  # Enable filtering by category and price\n",
    "        \"store_fields\": [\"name\"],  # Store product name for quick retrieval\n",
    "    },\n",
    ")\n",
    "dense_index_lro = vector_search_service_client.create_index(request)\n",
    "dense_index_operation_name = dense_index_lro.operation.name\n",
    "print(\"✅ Creating dense ANN index on 'name_dense_embedding'\")\n",
    "print(f\"   LRO: {dense_index_operation_name}\")\n",
    "print(\"   This operation takes several minutes. We'll poll it later.\")\n",
    "print(\"\\n💡 Once ready, searches like 'Find products with similar name semantics' will be lightning fast!\")\n",
    "dense_index_operation_name"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "8kmxG_AHTLhI"
   },
   "source": [
    "### Waiting for Index Creation\n",
    "\n",
    "Index creation is an asynchronous operation (LRO = Long-Running Operation). For this 10,000-product dataset, indexing typically takes around **30 minutes**. For larger datasets, it can take hours.\n",
    "\n",
    "The code below includes automatic retry logic to handle Colab's 900-second cell execution timeout. If you encounter a timeout, simply re-run the cell to continue waiting."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "8Fvw-J5fTEZP"
   },
   "outputs": [],
   "source": [
    "# Wait for index creation with automatic retry for Colab timeout\n",
    "import time\n",
    "\n",
    "print(\"⏳ Waiting for dense index creation...\")\n",
    "print(f\"   LRO: {dense_index_operation_name}\")\n",
    "print(\"   This typically takes around 30 minutes for this dataset.\")\n",
    "print(\"   If you see a timeout error, simply re-run this cell.\\n\")\n",
    "\n",
    "max_retries = 100  # Allow many retries to handle multiple timeouts\n",
    "poll_interval = 60  # Check every 60 seconds\n",
    "\n",
    "for attempt in range(max_retries):\n",
    "    try:\n",
    "        # Check if operation is done using the LRO object directly\n",
    "        if dense_index_lro.done():\n",
    "            print(\"✅ Dense index ready!\")\n",
    "            break\n",
    "        else:\n",
    "            elapsed = attempt * poll_interval\n",
    "            print(f\"   Still creating... ({elapsed // 60} min {elapsed % 60} sec elapsed)\")\n",
    "            time.sleep(poll_interval)\n",
    "            \n",
    "    except Exception as e:\n",
    "        if \"timeout\" in str(e).lower() or \"deadline\" in str(e).lower():\n",
    "            print(f\"⚠️ Timeout occurred, retrying... (attempt {attempt + 1})\")\n",
    "            continue\n",
    "        else:\n",
    "            raise e\n",
    "else:\n",
    "    print(\"⚠️ Max retries reached. Please re-run this cell to continue waiting.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Inspecting the Created Index\n",
    "\n",
    "Before we start searching, let's verify that our index was created successfully and check its configuration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get and inspect the created index\n",
    "get_index_request = vectorsearch_v1beta.GetIndexRequest(\n",
    "    name=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}/indexes/name-dense-index\"\n",
    ")\n",
    "index_info = vector_search_service_client.get_index(get_index_request)\n",
    "\n",
    "print(\"Index Information:\")\n",
    "print(\"=\"*70)\n",
    "print(f\"Name: {index_info.name}\")\n",
    "print(f\"Index Field: {index_info.index_field}\")\n",
    "print(f\"Filter Fields: {list(index_info.filter_fields) if index_info.filter_fields else 'None'}\")\n",
    "print(f\"Store Fields: {list(index_info.store_fields) if index_info.store_fields else 'None'}\")\n",
    "print(\"\\nThe index is ready to accelerate searches on the 'name_dense_embedding' field!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "query_text = \"Men's outfit for beach\"\n",
    "\n",
    "# Run semantic search on the Collection - the ANN index is automatically used\n",
    "semantic_search_request = vectorsearch_v1beta.SearchDataObjectsRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",  # Search on Collection\n",
    "    semantic_search=vectorsearch_v1beta.SemanticSearch(\n",
    "        search_text=query_text,\n",
    "        search_field=\"name_dense_embedding\",  # The indexed field\n",
    "        task_type=\"QUESTION_ANSWERING\",\n",
    "        top_k=10,\n",
    "        output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"name\", \"category\", \"retail_price\"]),\n",
    "    ),\n",
    ")\n",
    "\n",
    "results = data_object_search_service_client.search_data_objects(semantic_search_request)\n",
    "\n",
    "print(f\"ANN-Accelerated Search Results for '{query_text}':\")\n",
    "print(\"=\"*80)\n",
    "for i, result in enumerate(results, 1):\n",
    "    name = result.data_object.data['name']\n",
    "    price = result.data_object.data['retail_price']\n",
    "    print(f\"{i:2}. {name} - ${price:.2f}\")\n",
    "\n",
    "print(f\"\\n💡 The ANN index on 'name_dense_embedding' automatically accelerates this search!\")\n",
    "print(f\"   Same API, same results - just faster at scale.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "query_text = \"Women's winter jacket\"\n",
    "\n",
    "# Run semantic search on the Collection - the ANN index is automatically used\n",
    "semantic_search_request = vectorsearch_v1beta.SearchDataObjectsRequest(\n",
    "    parent=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\",  # Search on Collection\n",
    "    semantic_search=vectorsearch_v1beta.SemanticSearch(\n",
    "        search_text=query_text,\n",
    "        search_field=\"name_dense_embedding\",  # The indexed field\n",
    "        task_type=\"QUESTION_ANSWERING\",\n",
    "        top_k=10,\n",
    "        output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"name\", \"category\", \"retail_price\"]),\n",
    "    ),\n",
    ")\n",
    "\n",
    "results = data_object_search_service_client.search_data_objects(semantic_search_request)\n",
    "\n",
    "print(f\"ANN-Accelerated Search Results for '{query_text}':\")\n",
    "print(\"=\"*80)\n",
    "for i, result in enumerate(results, 1):\n",
    "    name = result.data_object.data['name']\n",
    "    price = result.data_object.data['retail_price']\n",
    "    print(f\"{i:2}. {name} - ${price:.2f}\")\n",
    "\n",
    "print(\"\\nEvidence that ANN index is used:\")\n",
    "print(\"- Index 'name-dense-index' exists in the Collection\")\n",
    "print(f\"- Index is configured for field: {index_info.index_field}\")\n",
    "print(\"- Search uses the same field: name_dense_embedding\")\n",
    "print(\"- When field names match, the index is automatically used for faster search\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Key Takeaways: ANN Indexes\n",
    "\n",
    "**What We Accomplished:**\n",
    "\n",
    "1. **Created ANN Index**: Built a ScaNN-powered index on `name_dense_embedding`\n",
    "2. **Automatic Acceleration**: Searches on the Collection now automatically use the index for better performance\n",
    "3. **Same API**: The search API remains identical - you still use `semantic_search` on the Collection\n",
    "4. **Transparent Upgrade**: No code changes needed from Part 4 to Part 5 - just faster performance\n",
    "\n",
    "**How ANN Indexes Work:**\n",
    "\n",
    "- **Not a separate search endpoint**: You search on the Collection, not on the index\n",
    "- **Automatic optimization**: When searching an indexed field, the index is used automatically\n",
    "- **Production-ready**: Built on Google's ScaNN algorithm, the same technology powering Google Search\n",
    "\n",
    "**Performance Benefits:**\n",
    "\n",
    "- **Scalability**: Same code works for 3K products or 3 billion products\n",
    "- **Speed**: Sub-second latency even at massive scale vs. seconds/minutes with kNN\n",
    "- **Efficiency**: ScaNN algorithm provides approximate results with ~99% accuracy at much faster speeds\n",
    "\n",
    "**When to Use ANN vs kNN:**\n",
    "\n",
    "| Scenario | Use This | Why |\n",
    "|----------|----------|-----|\n",
    "| Development & prototyping | kNN (Part 4) | Instant - no index build wait time |\n",
    "| Small datasets (< 10K rows) | kNN (Part 4) | Fast enough without indexing overhead |\n",
    "| Production with large data | ANN (Part 5) | Worth the index build time for better performance |\n",
    "| Billions of vectors | ANN (Part 5) | Only viable option for acceptable query latency |\n",
    "\n",
    "**Note on Filtering:**\n",
    "While we configured `filter_fields` during index creation, filtering with `semantic_search` is not currently supported in the API. Filtering is available when using `vector_search` with manually generated embeddings."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "\n",
    "# Part 6: Clean Up\n",
    "\n",
    "## Important: Delete Resources to Avoid Costs\n",
    "\n",
    "Vector Search 2.0 resources incur costs when active. To avoid unexpected charges, you must delete:\n",
    "\n",
    "1. **ANN Indexes** (if created)\n",
    "2. **Collections** (and all associated Data Objects)\n",
    "\n",
    "**Cost Warning**: Leaving these resources running can result in significant charges. Always clean up when you're done with the tutorial!\n",
    "\n",
    "## Cleanup Process\n",
    "\n",
    "The cleanup must follow this order:\n",
    "1. Delete all ANN indexes first\n",
    "2. Then delete the Collection (this also deletes all Data Objects)\n",
    "\n",
    "Let's clean up all resources created in this tutorial."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 1: Delete ANN Index\n",
    "\n",
    "First, we need to delete the ANN index. This is a long-running operation (LRO) that may take a few minutes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Delete the ANN index\n",
    "request = vectorsearch_v1beta.DeleteIndexRequest(\n",
    "    name=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}/indexes/name-dense-index\"\n",
    ")\n",
    "delete_index_lro = vector_search_service_client.delete_index(request)\n",
    "print(\"🗑️ Deleting ANN index 'name-dense-index'...\")\n",
    "print(f\"   LRO: {delete_index_lro.operation.name}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Wait for index deletion to complete\n",
    "print(f\"Waiting for index deletion LRO: {delete_index_lro.operation.name}\")\n",
    "delete_index_lro.result()\n",
    "print(\"✅ ANN index deleted successfully!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 2: Delete Collection\n",
    "\n",
    "Now that the index is deleted, we can delete the Collection. This will also delete all Data Objects stored in the Collection."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Delete the Collection (and all Data Objects inside it)\n",
    "request = vectorsearch_v1beta.DeleteCollectionRequest(\n",
    "    name=f\"projects/{PROJECT_ID}/locations/{LOCATION}/collections/{collection_id}\"\n",
    ")\n",
    "vector_search_service_client.delete_collection(request)\n",
    "print(f\"🗑️ Deleted Collection: {collection_id}\")\n",
    "print(\"   All Data Objects in the Collection have been deleted as well.\")\n",
    "print(\"\\n✅ Cleanup complete! All Vector Search 2.0 resources have been deleted.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Cleanup Summary\n",
    "\n",
    "You've successfully deleted all resources created in this tutorial:\n",
    "\n",
    "✅ **ANN Index** (`name-dense-index`) - Deleted  \n",
    "✅ **Collection** (`{collection_id}`) - Deleted  \n",
    "✅ **All Data Objects** (10,000 products) - Deleted"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "\n",
    "# Summary\n",
    "\n",
    "Congratulations! You've completed a comprehensive introduction to **Vertex AI Vector Search 2.0**. Let's recap what you've learned and where to go next.\n",
    "\n",
    "## What You've Learned\n",
    "\n",
    "### 1. **Collections** - Schema-Enforced Data Containers\n",
    "- Created Collections with data schemas (structured fields) and vector schemas (embedding configurations)\n",
    "- Configured auto-embeddings using Vertex AI's `gemini-embedding-001` model\n",
    "- Learned how Collections enforce data structure for consistency\n",
    "\n",
    "### 2. **Data Objects** - Your Actual Data\n",
    "- Created individual Data Objects with data and vectors\n",
    "- Used batch operations for efficient bulk imports (100x faster than individual creates)\n",
    "- Implemented rate limiting to stay within API quotas (100,000 RPM for gemini-embedding-001)\n",
    "- Understood how auto-embeddings work with empty `vectors: {}`\n",
    "\n",
    "### 3. **Querying and Filtering** - SQL-Like Metadata Search\n",
    "- Used query operators (`$eq`, `$lt`, `$gte`, `$and`, `$or`) to filter data\n",
    "- Retrieved products by category, price range, and complex conditions\n",
    "- Performed aggregate queries to get collection statistics\n",
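    "\n",
    "For reference, a compound filter using these operators looks like the following. This is an illustrative expression only; the field names `category` and `retail_price` come from the tutorial dataset:\n",
    "\n",
    "```python\n",
    "# Hypothetical filter: jeans priced between $50 and $100, using the\n",
    "# operator style shown in Part 3\n",
    "filter_expr = {\n",
    "    \"$and\": [\n",
    "        {\"category\": {\"$eq\": \"Jeans\"}},\n",
    "        {\"retail_price\": {\"$gte\": 50, \"$lt\": 100}},\n",
    "    ]\n",
    "}\n",
    "```\n",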
    "\n",
    "### 4. **Vector Search** - Semantic Similarity with kNN\n",
    "- **Semantic Search**: Natural language queries with auto-generated embeddings\n",
    "- **Text Search**: Traditional keyword matching\n",
    "- **Hybrid Search**: Combined semantic + text search with manual RRF ranking\n",
    "- Learned when to use kNN (development, small datasets) vs when to scale to ANN\n",
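    "\n",
    "The manual RRF step can be summarized in a few lines. This is a minimal sketch of Reciprocal Rank Fusion, not necessarily the exact implementation used earlier in the notebook:\n",
    "\n",
    "```python\n",
    "def rrf_merge(rankings, k=60):\n",
    "    # Each ranked list of IDs contributes 1 / (k + rank + 1) to a doc's score;\n",
    "    # documents appearing high in multiple lists rise to the top\n",
    "    scores = {}\n",
    "    for ranking in rankings:\n",
    "        for rank, doc_id in enumerate(ranking):\n",
    "            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)\n",
    "    return sorted(scores, key=scores.get, reverse=True)\n",
    "```\n",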
    "\n",
    "### 5. **Production-Ready Search** - ANN Indexes for Scale\n",
    "- Created ANN indexes powered by Google's ScaNN algorithm\n",
    "- Understood the kNN vs ANN trade-off (instant vs fast, exact vs approximate)\n",
    "- Learned how indexes automatically accelerate searches on indexed fields\n",
    "- Configured filter fields and store fields for optimized queries\n",
    "\n",
    "### 6. **Resource Management** - Cost Control\n",
    "- Properly deleted indexes before collections (order matters!)\n",
    "- Understood that collection deletion cascades to all data objects\n",
    "- Learned the importance of cleanup to avoid unexpected costs\n",
    "\n",
    "## Key Takeaways\n",
    "\n",
    "✅ **Unified Storage**: Vectors + user-provided data together (no separate database needed)  \n",
    "✅ **Auto-Embeddings**: Automatic generation via Vertex AI models  \n",
    "✅ **Flexible Search**: Semantic, text, and hybrid search in one platform  \n",
    "✅ **Development to Production**: Start with kNN, scale with ANN indexes  \n",
    "✅ **Battle-Tested Technology**: Built on ScaNN (powers Google Search, YouTube, Google Play)  \n",
    "✅ **Enterprise-Ready**: Scalability from 10K to 3B vectors with the same API\n",
    "\n",
    "## Architecture Recap\n",
    "\n",
    "```\n",
    "Collection (Schema-Enforced Container)\n",
    "├── Data Schema (Data structure)\n",
    "├── Vector Schema (Embedding configurations)\n",
    "├── Data Objects (Individual items with user-provided data + vectors)\n",
    "├── Indexes (Optional performance optimization, used by ANN search)\n",
    "└── Search options\n",
    "    ├── kNN (exact, no index needed; ideal for dev and small datasets)\n",
    "    └── ANN (approximate; billion-scale with sub-second latency)\n",
    "```\n",
    "\n",
    "## Get Started with Your Own Projects\n",
    "\n",
    "### 1. **Documentation and Guides**\n",
    "- [Vector Search 2.0 Overview](https://cloud.google.com/vertex-ai/docs/vector-search-2/overview)\n",
    "- [Collections Guide](https://cloud.google.com/vertex-ai/docs/vector-search-2/collections/collections)\n",
    "- [Data Objects Guide](https://cloud.google.com/vertex-ai/docs/vector-search-2/data-objects/data-objects)\n",
    "- [Search Guide](https://cloud.google.com/vertex-ai/docs/vector-search-2/query-search/search)\n",
    "- [Indexes Guide](https://cloud.google.com/vertex-ai/docs/vector-search-2/indexes/indexes)\n",
    "\n",
    "### 2. **Additional Tutorials**\n",
    "- [Vector Search 2.0 Quickstart](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/vector-search-2-quickstart.ipynb) - Includes sparse embeddings and true hybrid search with built-in RRF\n",
    "- [Introduction to Text Embeddings and Vector Search](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb) - Learn the fundamentals of embeddings\n",
    "\n",
    "### 3. **SDK and API References**\n",
    "- [Python SDK Documentation](https://cloud.google.com/python/docs/reference/vectorsearch/latest)\n",
    "- Install: `pip install google-cloud-vectorsearch`\n",
    "\n",
    "### 4. **Sample Datasets**\n",
    "- [TheLook E-Commerce Dataset](https://console.cloud.google.com/marketplace/product/bigquery-public-data/thelook-ecommerce) (used in this tutorial)\n",
    "- Bring your own data: Any JSON-compatible data + text for embeddings\n",
    "\n",
    "### 5. **Use Cases to Explore**\n",
    "- **E-Commerce**: Product recommendations, semantic search, visual similarity\n",
    "- **Content Discovery**: Article/video recommendations, duplicate detection\n",
    "- **Customer Support**: FAQ matching, ticket routing, knowledge base search\n",
    "- **Enterprise Search**: Document retrieval, code search, internal knowledge graphs\n",
    "\n",
    "### 6. **Community and Support**\n",
    "- [Google Cloud Community](https://www.googlecloudcommunity.com/)\n",
    "- [Stack Overflow - google-cloud-platform](https://stackoverflow.com/questions/tagged/google-cloud-platform)\n",
    "- [GitHub Issues](https://github.com/googleapis/python-vector-search/issues)\n",
    "\n",
    "## Next Steps\n",
    "\n",
    "1. **Experiment with the full dataset**: Change `MAX_PRODUCTS = None` in the data loading section to use all 30K products\n",
    "2. **Try your own data**: Replace the TheLook dataset with your own JSON data\n",
    "3. **Explore sparse embeddings**: Add sparse vector fields for true hybrid search with built-in RRF\n",
    "4. **Build a production app**: Integrate Vector Search 2.0 into your application using the Python SDK\n",
    "5. **Optimize for scale**: Create ANN indexes for production workloads with billions of vectors\n",
    "\n",
    "**Ready to build something amazing?** Start by modifying this notebook with your own data, or explore the [Vector Search 2.0 documentation](https://cloud.google.com/vertex-ai/docs/vector-search-2/overview) for advanced features!\n",
    "\n",
    "**Questions or feedback?** Open an issue on [GitHub](https://github.com/GoogleCloudPlatform/generative-ai/issues) or ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/google-cloud-platform)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "\n",
    "# Appendix: Embedding API Quotas and Rate Limiting\n",
    "\n",
    "When using auto-embeddings, Vector Search calls the [Vertex AI Embeddings API](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings). The `gemini-embedding-001` model uses [Dynamic Shared Quota (DSQ)](https://cloud.google.com/vertex-ai/generative-ai/docs/dynamic-shared-quota) with the following limits:\n",
    "\n",
    "| Type | Limit | Description |\n",
    "|------|-------|-------------|\n",
    "| **Per-minute quotas** | | |\n",
    "| Tokens per minute | 5,000,000 | Primary quota for gemini-embedding |\n",
    "| Requests per minute | 100,000 | Secondary quota |\n",
    "| **Per-request limits** | | |\n",
    "| Max texts per request | 250 | Batch size limit |\n",
    "| Max tokens per request | 20,000 | Exceeding this returns a 400 error |\n",
    "| Max tokens per input | 2,048 | Excess tokens are silently truncated |\n",
    "\n",
    "**Batch size considerations:** The batch size should not exceed 250 (max texts per request). For long texts, you may need smaller batches to stay under 20,000 tokens per request:\n",
    "- Short texts (~10 tokens): 250 items × 10 = 2,500 tokens ✓\n",
    "- Long texts (~500 tokens): 40 items × 500 = 20,000 tokens (max)\n",
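    "\n",
    "The batch-size arithmetic above can be sketched as follows. The limits are taken from the quota table; average token counts are estimates, since the real count depends on the tokenizer:\n",
    "\n",
    "```python\n",
    "MAX_TEXTS_PER_REQUEST = 250\n",
    "MAX_TOKENS_PER_REQUEST = 20_000\n",
    "\n",
    "def safe_batch_size(avg_tokens_per_item):\n",
    "    # Cap the batch by both the per-request text limit and the token budget\n",
    "    return min(MAX_TEXTS_PER_REQUEST, MAX_TOKENS_PER_REQUEST // avg_tokens_per_item)\n",
    "```\n",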
    "\n",
    "For details, see [Text Embedding Quotas](https://cloud.google.com/vertex-ai/generative-ai/docs/quotas#text_embedding_limits) and [API Limits](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings#api_limits).\n",
    "\n",
    "## Rate Limiting to Avoid Resource Exhausted Errors\n",
    "\n",
    "To stay within the per-minute quotas, you may need to add delays between batches. Calculate the delay based on your token usage:\n",
    "\n",
    "1. Estimate tokens per batch: `batch_size × avg_tokens_per_item` (e.g., 250 × 10 = 2,500 tokens)\n",
    "2. Calculate batches per minute: `5,000,000 / tokens_per_batch` (e.g., 5M / 2,500 = 2,000 batches/min)\n",
    "3. Calculate delay: `60 / batches_per_minute` (e.g., 60 / 2,000 = 0.03 seconds)\n",
    "\n",
    "For short texts like product names, the quota is generous and minimal delays are needed. For longer texts, calculate the appropriate delay based on your average token count.\n",
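    "\n",
    "The three steps above translate directly to code (quota value from the table; token estimates are approximate):\n",
    "\n",
    "```python\n",
    "TOKENS_PER_MINUTE_QUOTA = 5_000_000\n",
    "\n",
    "def delay_between_batches(batch_size, avg_tokens_per_item):\n",
    "    tokens_per_batch = batch_size * avg_tokens_per_item  # step 1\n",
    "    batches_per_minute = TOKENS_PER_MINUTE_QUOTA / tokens_per_batch  # step 2\n",
    "    return 60.0 / batches_per_minute  # step 3: seconds to sleep per batch\n",
    "```\n",
    "\n",
    "For example, `delay_between_batches(250, 10)` returns 0.03 seconds, matching the worked example above.\n",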
    "\n",
    "## Large-Scale Imports (100K+ items)\n",
    "\n",
    "For very large datasets, consider these techniques:\n",
    "- **Multithreading**: Process batches concurrently within quota limits\n",
    "- **Checkpointing**: Save progress to resume after errors\n",
    "- **Queue-based processing**: Decouple batch creation from API calls\n",
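    "\n",
    "A minimal sketch combining multithreading and checkpointing is shown below. Here `send_batch` is a placeholder for your batch-create call, not a real API, and the checkpoint format is a simple JSON list of completed batch indices:\n",
    "\n",
    "```python\n",
    "import json\n",
    "from concurrent.futures import ThreadPoolExecutor, as_completed\n",
    "\n",
    "def import_batches(batches, send_batch, checkpoint_path=\"progress.json\", workers=4):\n",
    "    # Load the set of batch indices already imported, if a checkpoint exists\n",
    "    try:\n",
    "        with open(checkpoint_path) as f:\n",
    "            done = set(json.load(f))\n",
    "    except FileNotFoundError:\n",
    "        done = set()\n",
    "    with ThreadPoolExecutor(max_workers=workers) as pool:\n",
    "        futures = {pool.submit(send_batch, batch): i\n",
    "                   for i, batch in enumerate(batches) if i not in done}\n",
    "        for future in as_completed(futures):\n",
    "            future.result()  # re-raise errors; failed batches retry on the next run\n",
    "            done.add(futures[future])\n",
    "            with open(checkpoint_path, \"w\") as f:\n",
    "                json.dump(sorted(done), f)\n",
    "    return done\n",
    "```\n",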
    "\n",
    "For a complete example, see: [Handling large-scale embedding generation for Vertex AI Vector Search](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/large-embs-generation-for-vvs.ipynb)"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "name": "vector-search-2-intro.ipynb",
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
