{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": 50,
      "metadata": {
        "id": "ur8xi4C7S06n"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Data Curation Pipeline: Semantic Deduplication\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/multimodal-data-curation/semantic-deduplication.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fuse-cases%2Fmultimodal-data-curation%2Fsemantic-deduplication.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/use-cases/multimodal-data-curation/semantic-deduplication.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/multimodal-data-curation/semantic-deduplication.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/multimodal-data-curation/semantic-deduplication.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/multimodal-data-curation/semantic-deduplication.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/multimodal-data-curation/semantic-deduplication.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/multimodal-data-curation/semantic-deduplication.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/multimodal-data-curation/semantic-deduplication.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "84f0f73a0f76"
      },
      "source": [
        "| Author(s) |\n",
        "| --- |\n",
        "| [John Semerdjian](https://github.com/semerj) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "Data deduplication is a critical step in any data curation pipeline. Even after splitting, filtering, and captioning video clips, our dataset will still contain redundant information, especially since many of the clips are derived from the same source material. Imagine that you're building a specialized foundation model for sports videos: do you really need thousands of clips of athletes shooting free throws? While more high-quality data usually leads to better downstream modeling performance, we're likely not spending our compute budget efficiently. Redundant examples in particular do little to enhance a model's ability to generalize to new tasks; instead, they waste precious FLOPs. Depending on the dataset, some researchers have shown that eliminating [50% of a training dataset using straightforward deduplication techniques can lead to a model of similar performance at half the cost and twice the speed](https://arxiv.org/pdf/2303.09540).\n",
        "\n",
        "In this post we'll show one simple approach for detecting semantic duplicates using [multimodal embeddings](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-multimodal-embeddings#video-modes) and [Approximate Nearest Neighbors using BigQuery Vector Search](https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximation_methods)."
      ]
    },
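    {
      "cell_type": "markdown",
      "metadata": {
        "id": "d3a1f2b4c5e6"
      },
      "source": [
        "Before diving in, here is a toy sketch of the core idea in pure Python (the 2-D vectors below are made up for illustration; real video embeddings have 1408 dimensions): keep an embedding only if it is not within a cosine-similarity threshold of one we've already kept.\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "\n",
        "def cosine_sim(a: list[float], b: list[float]) -> float:\n",
        "    # Cosine similarity: dot product over the product of the norms.\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    return dot / (math.hypot(*a) * math.hypot(*b))\n",
        "\n",
        "\n",
        "def dedup(embeddings: list[list[float]], threshold: float = 0.95) -> list[int]:\n",
        "    # Greedily keep indices whose embeddings are not near any already-kept one.\n",
        "    kept: list[int] = []\n",
        "    for i, emb in enumerate(embeddings):\n",
        "        if all(cosine_sim(emb, embeddings[j]) < threshold for j in kept):\n",
        "            kept.append(i)\n",
        "    return kept\n",
        "\n",
        "\n",
        "vectors = [[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]]\n",
        "print(dedup(vectors))  # [0, 2]: the near-duplicate second vector is dropped\n",
        "```\n",
        "\n",
        "The BigQuery approach later in this notebook does the same thing at scale, swapping the brute-force loop for an approximate nearest neighbor index."
      ]
    },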
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61RBz8LLbxCR"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "No17Cw5hgx12"
      },
      "source": [
        "### Install Google Gen AI SDK and other required packages\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tFy3H3aPgx12"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet datasets google-cloud-bigquery sentencepiece pandas-gbq"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DF4l8DTdWgPY"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "Nqwi-5ufWp_B"
      },
      "outputs": [],
      "source": [
        "# Use the environment variable if the user doesn't provide Project ID.\n",
        "import os\n",
        "\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5303c05f7aa6"
      },
      "source": [
        "### Import libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "id": "6fc324893334"
      },
      "outputs": [],
      "source": [
        "from concurrent.futures import ThreadPoolExecutor, as_completed\n",
        "import io\n",
        "import threading\n",
        "\n",
        "from PIL import Image\n",
        "import av\n",
        "from google.cloud import aiplatform, bigquery, storage\n",
        "import pandas as pd\n",
        "import pandas_gbq\n",
        "import torch\n",
        "from torch import nn\n",
        "from tqdm import tqdm\n",
        "from transformers import AutoModel, AutoProcessor\n",
        "import vertexai\n",
        "from vertexai.vision_models import MultiModalEmbeddingModel, Video\n",
        "\n",
        "vertexai.init(project=PROJECT_ID, location=LOCATION)\n",
        "aiplatform.init(project=PROJECT_ID, location=LOCATION)\n",
        "\n",
        "bq_client = bigquery.Client()\n",
        "storage_client = storage.Client()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "f0b05899f92f"
      },
      "source": [
        "## Video Embedding Generation\n",
        "\n",
        "### Load Video Data\n",
        "\n",
        "This notebook will use a subset of videos from the [VidGen-1M dataset](https://arxiv.org/abs/2408.02629). The authors of this dataset have already done some preliminary deduplication, but let's see if we can identify new duplicates within a small sample of records.\n",
        "\n",
        "In order to use the Multimodal Embeddings API the videos must be stored on Google Cloud Storage. You can download the compressed files and transfer them to a bucket here: https://huggingface.co/datasets/Fudan-FUXI/VIDGEN-1M/tree/main\n",
        "\n",
        "Alternatively, you can load a subset of the dataset into memory using the `datasets` library:\n",
        "\n",
        "```python\n",
        "from datasets import load_dataset\n",
        "\n",
        "data = load_dataset(\"Fudan-FUXI/VIDGEN-1M\", data_files=\"VidGen_video_0.zip\")\n",
        "```\n",
        "\n",
        "We'll read the clips directly from Cloud Storage below:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "89779759ac65"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "5000"
            ]
          },
          "execution_count": 4,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "def load_video_paths(\n",
        "    bucket: str,\n",
        "    num_videos: int,\n",
        ") -> list[str]:\n",
        "    video_paths = []\n",
        "    for i, blob in enumerate(storage_client.list_blobs(bucket)):\n",
        "        if i >= num_videos:\n",
        "            break\n",
        "        video_paths.append(f\"gs://{bucket}/\" + blob.name)\n",
        "    return video_paths\n",
        "\n",
        "\n",
        "video_paths = load_video_paths(\"vidgen-1m\", 5000)\n",
        "len(video_paths)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c6e0d65af056"
      },
      "source": [
        "### Vertex AI Multimodal Embeddings\n",
        "\n",
        "We will generate embeddings using the Vertex AI Multimodal Embeddings API, parallelizing the calls to the API (you may need to request a quota increase to get better throughput). To use this API, the videos must be stored on Cloud Storage. Video embeddings from this model have 1408 dimensions that encode a rich amount of semantic information about the content. The API supports common video formats, e.g. mp4, webm, and mov. Our clips are already quite short, so we don't need to worry about chunking the embeddings. There is no maximum video length, but only two minutes of content can be analyzed at a time. Audio is not considered in the embeddings."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "id": "976959ccd898"
      },
      "outputs": [],
      "source": [
        "def get_embedding_for_single_video(\n",
        "    model: vertexai.vision_models.MultiModalEmbeddingModel, video_file: str\n",
        ") -> dict[str, str | vertexai.vision_models.MultiModalEmbeddingResponse]:\n",
        "    \"\"\"Get the initial set of embeddings of a video file.\n",
        "\n",
        "    Args:\n",
        "        model: MultiModalEmbeddingModel instance.\n",
        "        video_file: URI for video on Cloud Storage.\n",
        "\n",
        "    Returns:\n",
        "        Dictionary containing the URI, start and end offsets, and embeddings.\n",
        "\n",
        "    \"\"\"\n",
        "    try:\n",
        "        video = Video.load_from_file(video_file)\n",
        "        embeddings = model.get_embeddings(video=video)\n",
        "        return {\n",
        "            \"uri\": video_file,\n",
        "            \"embedding\": embeddings.video_embeddings[0].embedding,\n",
        "            \"start_offset_sec\": embeddings.video_embeddings[0].start_offset_sec,\n",
        "            \"end_offset_sec\": embeddings.video_embeddings[0].end_offset_sec,\n",
        "        }\n",
        "    except Exception as e:\n",
        "        print(f\"Error processing {video_file}: {e}\")\n",
        "        return {\n",
        "            \"uri\": video_file,\n",
        "            \"embedding\": None,\n",
        "            \"error\": str(e),\n",
        "        }"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "52c10f375a5c"
      },
      "outputs": [],
      "source": [
        "def get_embeddings_with_semaphore(\n",
        "    video_files: list[str],\n",
        "    model_name: str = \"multimodalembedding@001\",\n",
        "    max_workers: int = 2,\n",
        ") -> list[dict[str, str | vertexai.vision_models.MultiModalEmbeddingResponse]]:\n",
        "    \"\"\"Get embeddings for a list of video files.\n",
        "\n",
        "    Args:\n",
        "        video_files: List of URIs for video files on Cloud Storage.\n",
        "        model_name: Name of the Vertex AI Multimodal EmbeddingModel to use.\n",
        "        max_workers: The maximum number of concurrent requests.\n",
        "\n",
        "    Returns:\n",
        "        A list of dictionaries containing the URI and the embedding response or an error.\n",
        "\n",
        "    \"\"\"\n",
        "    model = MultiModalEmbeddingModel.from_pretrained(model_name)\n",
        "    all_embeddings = []\n",
        "    semaphore = threading.Semaphore(max_workers)\n",
        "\n",
        "    def rate_limited_embedding_task(video_file: str):\n",
        "        \"\"\"Acquires the semaphore before running the embedding task.\"\"\"\n",
        "        with semaphore:\n",
        "            return get_embedding_for_single_video(model, video_file)\n",
        "\n",
        "    with ThreadPoolExecutor(max_workers=max_workers) as executor:\n",
        "        futures = [\n",
        "            executor.submit(rate_limited_embedding_task, video_file)\n",
        "            for video_file in video_files\n",
        "        ]\n",
        "\n",
        "        for future in tqdm(\n",
        "            as_completed(futures), total=len(video_files), desc=\"Processing videos\"\n",
        "        ):\n",
        "            all_embeddings.append(future.result())\n",
        "\n",
        "    return all_embeddings"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "id": "1baa554e6f0f"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Processing videos: 100%|██████████| 5000/5000 [13:25<00:00,  6.21it/s]\n"
          ]
        }
      ],
      "source": [
        "embeddings = get_embeddings_with_semaphore(\n",
        "    video_paths,\n",
        "    model_name=\"multimodalembedding@001\",\n",
        "    max_workers=30,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "16a97a42a6ae"
      },
      "source": [
        "### Side Quest: Open Source Embeddings\n",
        "\n",
        "We can also try out open source embedding models designed for images: sample N frames from each video, pool their embeddings, and perform the same deduplication steps. We will use average pooling of frame embeddings as a robust and computationally efficient baseline, though other approaches can also be used. A drawback of average pooling is that it ignores the temporal ordering of the frames. Depending on the sampling approach, the number of frames extracted may not be exact, but since (1) each video has a different duration and (2) we're pooling the embeddings anyway, sampling the exact same number of frames per video is less important."
      ]
    },
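    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7f2e9c1a4b3d"
      },
      "source": [
        "Average pooling itself is trivial; here is a minimal sketch in pure Python (the frame embeddings below are made-up 2-D vectors):\n",
        "\n",
        "```python\n",
        "def mean_pool(frame_embeddings: list[list[float]]) -> list[float]:\n",
        "    # Average the per-frame embeddings into one video-level embedding.\n",
        "    n = len(frame_embeddings)\n",
        "    dim = len(frame_embeddings[0])\n",
        "    return [sum(frame[d] for frame in frame_embeddings) / n for d in range(dim)]\n",
        "\n",
        "\n",
        "frames = [[1.0, 2.0], [3.0, 4.0]]\n",
        "print(mean_pool(frames))  # [2.0, 3.0]\n",
        "```\n",
        "\n",
        "The `get_video_embeddings` helper below does the same thing with a tensor `mean(dim=0)` over real frame embeddings."
      ]
    },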
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "dc02f310e7c3"
      },
      "outputs": [],
      "source": [
        "def sample_frames(video_path: str, num_frames: int = 10) -> list[Image.Image]:\n",
        "    \"\"\"Samples num_frames frames from a video stored on Cloud Storage.\n",
        "\n",
        "    Args:\n",
        "        video_path: Cloud storage URI.\n",
        "        num_frames: Number of frames to sample.\n",
        "\n",
        "    Returns:\n",
        "        A list of PIL Image objects.\n",
        "\n",
        "    \"\"\"\n",
        "    pil_images: list[Image.Image] = []\n",
        "    container = None  # Initialized here so the finally block is safe if av.open fails\n",
        "    video_source_for_av = None\n",
        "    total_frames_reported_by_stream = 0\n",
        "\n",
        "    try:\n",
        "        bucket_name, blob_name = video_path.replace(\"gs://\", \"\").split(\"/\", 1)\n",
        "        bucket = storage_client.bucket(bucket_name)\n",
        "        blob = bucket.blob(blob_name)\n",
        "\n",
        "        video_bytes = blob.download_as_bytes()\n",
        "        video_source_for_av = io.BytesIO(video_bytes)\n",
        "        container = av.open(video_source_for_av)\n",
        "\n",
        "        if not container.streams.video:\n",
        "            print(f\"No video streams found in {video_path}\")\n",
        "            return []\n",
        "\n",
        "        video_stream = container.streams.video[0]\n",
        "        # PyAV's video_stream.frames is the container-reported frame count,\n",
        "        # which can sometimes be 0 or inaccurate.\n",
        "        total_frames_reported_by_stream = video_stream.frames\n",
        "\n",
        "        if total_frames_reported_by_stream == 0 or num_frames == 0:\n",
        "            return []\n",
        "\n",
        "        target_indices_to_sample = []\n",
        "        if num_frames == 1:\n",
        "            # Sample the middle frame index\n",
        "            target_indices_to_sample.append(\n",
        "                round((total_frames_reported_by_stream - 1) / 2)\n",
        "            )\n",
        "        elif num_frames >= total_frames_reported_by_stream:\n",
        "            # If more or equal frames are requested than available, sample all\n",
        "            target_indices_to_sample = list(range(total_frames_reported_by_stream))\n",
        "        else:\n",
        "            # Sample num_frames evenly distributed, aiming to include first and last.\n",
        "            # Using a set to ensure uniqueness if rounding collapses indices, then sort.\n",
        "            _indices = set()\n",
        "            for i in range(num_frames):\n",
        "                idx = round(\n",
        "                    i * (total_frames_reported_by_stream - 1) / (num_frames - 1)\n",
        "                )\n",
        "                _indices.add(int(idx))\n",
        "            target_indices_to_sample = sorted(list(_indices))\n",
        "\n",
        "        # Ensure the list is not empty if logic somehow failed or num_frames was valid\n",
        "        if (\n",
        "            not target_indices_to_sample\n",
        "            and num_frames > 0\n",
        "            and total_frames_reported_by_stream > 0\n",
        "        ):\n",
        "            # This case should ideally not be hit if calculations are correct\n",
        "            # Default to sampling just the first frame if something went wrong with index calculation\n",
        "            target_indices_to_sample.append(0)\n",
        "\n",
        "        current_frame_idx = 0\n",
        "        # Iterate through frames and pick the ones at target_indices_to_sample\n",
        "        for frame in container.decode(video_stream):\n",
        "            if not target_indices_to_sample:\n",
        "                break\n",
        "\n",
        "            if current_frame_idx == target_indices_to_sample[0]:\n",
        "                pil_images.append(frame.to_image())\n",
        "                target_indices_to_sample.pop(0)\n",
        "\n",
        "            current_frame_idx += 1\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"General error processing video '{video_path}': {e}\")\n",
        "        raise\n",
        "    finally:\n",
        "        if container:\n",
        "            try:\n",
        "                container.close()\n",
        "            except Exception as ce:\n",
        "                print(f\"Error closing video container for '{video_path}': {ce}\")\n",
        "        if isinstance(video_source_for_av, io.BytesIO):\n",
        "            video_source_for_av.close()\n",
        "\n",
        "    if not pil_images and num_frames > 0 and total_frames_reported_by_stream > 0:\n",
        "        print(\n",
        "            f\"Warning: No frames were sampled from '{video_path}'. \"\n",
        "            \"Check video integrity, stream content, or frame selection logic.\"\n",
        "        )\n",
        "\n",
        "    return pil_images"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "metadata": {
        "id": "622983c20244"
      },
      "outputs": [],
      "source": [
        "sampled_frames = sample_frames(video_paths[0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "91b7075467ca"
      },
      "source": [
        "Here is the first sample frame for the first video:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "89dd5fc60163"
      },
      "outputs": [],
      "source": [
        "sampled_frames[0]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "48ec86e18016"
      },
      "source": [
        "and the last sample frame:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0eba85742de2"
      },
      "outputs": [],
      "source": [
        "sampled_frames[-1]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 43,
      "metadata": {
        "id": "0da396bbb4d0"
      },
      "outputs": [],
      "source": [
        "class SiglipWithProjection(nn.Module):\n",
        "\n",
        "    def __init__(self, model_name, target_dim):\n",
        "        super().__init__()\n",
        "        self.device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
        "        self.model = AutoModel.from_pretrained(model_name).to(self.device)\n",
        "        for param in self.model.parameters():\n",
        "            param.requires_grad = False\n",
        "        original_dim = self.model.config.vision_config.hidden_size\n",
        "        self.projection = nn.Linear(original_dim, target_dim).to(self.device)\n",
        "\n",
        "    def forward(self, **inputs):\n",
        "        \"\"\"Apply a projection layer to the image features.\"\"\"\n",
        "        image_features = self.model.get_image_features(**inputs)\n",
        "        low_dim_features = self.projection(image_features)\n",
        "        return low_dim_features\n",
        "\n",
        "\n",
        "def get_video_embeddings(\n",
        "    video_paths: list[str],\n",
        "    model_name: str,\n",
        ") -> list[dict[str, str | torch.Tensor]]:\n",
        "    \"\"\"Get the pooled image embeddings of videos from a list of video paths.\n",
        "\n",
        "    Args:\n",
        "        video_paths: Cloud storage video URIs.\n",
        "        model_name: The name of the model to use.\n",
        "\n",
        "    Returns:\n",
        "        A list of dictionaries containing the URI and the mean-pooled frame embedding for each video.\n",
        "\n",
        "    \"\"\"\n",
        "    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
        "\n",
        "    model = SiglipWithProjection(model_name, target_dim=256)\n",
        "    processor = AutoProcessor.from_pretrained(model_name, use_fast=False)\n",
        "\n",
        "    embeddings = []\n",
        "    for video_path in video_paths:\n",
        "        sampled_frames = sample_frames(video_path)\n",
        "        inputs = processor(images=sampled_frames, return_tensors=\"pt\").to(device)\n",
        "        with torch.no_grad():\n",
        "            outputs = model(**inputs)\n",
        "        embeddings.append(\n",
        "            {\n",
        "                \"uri\": video_path,\n",
        "                \"embedding\": outputs.mean(dim=0).tolist(),\n",
        "            },\n",
        "        )\n",
        "    return embeddings"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bb283f61361d"
      },
      "outputs": [],
      "source": [
        "# An alternative open-weight model you can swap in:\n",
        "# \"laion/CLIP-ViT-B-16-laion2B-s34B-b88K\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "03a37437e81d"
      },
      "outputs": [],
      "source": [
        "embeddings = get_video_embeddings(video_paths, \"google/siglip2-base-patch16-512\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1db7a5caacc7"
      },
      "source": [
        "## Storing Embeddings in BigQuery\n",
        "\n",
        "We'll store the embeddings in a BigQuery table, using `pandas_gbq` to upload the DataFrame of embeddings."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "7881ac7f12d7"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "(5000, 4)"
            ]
          },
          "execution_count": 8,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "dataset_id = \"video_data_curation\"\n",
        "table_id = \"clip_embeddings\"\n",
        "\n",
        "df_embedding = pd.DataFrame(embeddings)\n",
        "df_embedding.shape"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "ac72d205625f"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "100%|██████████| 1/1 [00:00<00:00, 5645.09it/s]\n"
          ]
        }
      ],
      "source": [
        "pandas_gbq.to_gbq(\n",
        "    df_embedding,\n",
        "    f\"{PROJECT_ID}.{dataset_id}.{table_id}\",\n",
        "    if_exists=\"replace\",\n",
        "    project_id=PROJECT_ID,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a881ac879601"
      },
      "source": [
        "## Deduplication in BigQuery\n",
        "\n",
        "Once the embeddings are in BigQuery, we will use its Vector Search functionality to perform the deduplication step. While there are more sophisticated strategies that run K-means clustering on the embeddings, perform pairwise comparisons within each cluster, and prune based on similarity, this approach can be done entirely within BigQuery and is easy to understand and execute.\n",
        "\n",
        "First, let's construct the fully qualified ID of the table containing our embeddings."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "d0a9ac076c88"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "genai-scratchpad.video_data_curation.clip_embeddings\n"
          ]
        }
      ],
      "source": [
        "full_table_id = f\"{PROJECT_ID}.{dataset_id}.{table_id}\"\n",
        "print(full_table_id)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "b85526e5f962"
      },
      "source": [
        "We will now create a vector index to accelerate the vector search query. This step is particularly important if the table is large; since our table is small, brute-force search would also work."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "9cf81881f64f"
      },
      "outputs": [],
      "source": [
        "embedding_column = \"embedding\"\n",
        "\n",
        "index_job = bq_client.query(\n",
        "    f\"\"\"\n",
        "CREATE VECTOR INDEX my_index ON `{full_table_id}`({embedding_column})\n",
        "OPTIONS(index_type='TREE_AH', distance_type='COSINE');\n",
        "\"\"\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6f34ddaa5ac7"
      },
      "source": [
        "The vector index creation takes a few minutes. You can monitor the vector index status using the following query. Once the coverage reaches 100%, we can proceed to the next step."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "id": "6a17ab551a0a"
      },
      "outputs": [],
      "source": [
        "index_status_job = bq_client.query(\n",
        "    f\"\"\"\n",
        "SELECT table_name, index_status, coverage_percentage\n",
        "FROM '{PROJECT_ID}.{dataset_id}'.INFORMATION_SCHEMA.VECTOR_INDEXES\n",
        "WHERE table_name = \"{table_id}\";\n",
        "\"\"\"\n",
        ")"
      ]
    },
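    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sketch, the status query above can also be wrapped in a simple polling loop that waits until `coverage_percentage` reaches 100 (this assumes the `bq_client`, `PROJECT_ID`, `dataset_id`, and `table_id` objects defined earlier, and the 30-second interval is arbitrary):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import time\n",
        "\n",
        "# Poll the vector index status until coverage reaches 100% (sketch).\n",
        "while True:\n",
        "    rows = list(\n",
        "        bq_client.query(\n",
        "            f\"\"\"\n",
        "SELECT coverage_percentage\n",
        "FROM `{PROJECT_ID}.{dataset_id}`.INFORMATION_SCHEMA.VECTOR_INDEXES\n",
        "WHERE table_name = \"{table_id}\";\n",
        "\"\"\"\n",
        "        ).result()\n",
        "    )\n",
        "    if rows and rows[0].coverage_percentage == 100:\n",
        "        break\n",
        "    time.sleep(30)  # re-check every 30 seconds"
      ]
    },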
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {
        "id": "f6296f192723"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "pyarrow.Table\n",
              "table_name: string\n",
              "index_status: string\n",
              "coverage_percentage: int64\n",
              "----\n",
              "table_name: [[\"clip_embeddings\"]]\n",
              "index_status: [[\"ACTIVE\"]]\n",
              "coverage_percentage: [[0]]"
            ]
          },
          "execution_count": 16,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "index_status_job.result().to_arrow()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "46b25bea8e9d"
      },
      "source": [
        "We can now perform the deduplication step. For each embedding, we perform an approximate nearest neighbor (ANN) search and retrieve matching embeddings with a cosine distance. For records that are within the provided threshold - which should be tuned for each dataset and use case -  these are the semantic duplicates and we will remove them. While this approach is simple it comes with some tradeoffs, specifically that we ignore any transitive links between the embeddings (e.g. if record A is near both record B and record C, then B and C may be nearby as well), and that we don't intelligently decide which embedding to include or exclude between matches. Nevertheless, this is a straightforward deduplication approach that leverages the foundational power of the [ScaNN algorithm](https://research.google/blog/announcing-scann-efficient-vector-similarity-search/) for semantic search."
      ]
    },
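    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an illustrative sketch (plain NumPy on toy vectors, not the BigQuery implementation), the same threshold-plus-tiebreak logic looks like this: for every pair with cosine distance below the threshold, the record with the larger key is dropped, mirroring the `query.uri > base.uri` condition."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "\n",
        "# Toy embeddings: rows 0 and 1 are near-duplicates; row 2 is distinct.\n",
        "emb = np.array([[1.0, 0.0], [0.999, 0.04], [0.0, 1.0]])\n",
        "emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)\n",
        "\n",
        "dist = 1.0 - emb @ emb.T  # cosine distance matrix\n",
        "\n",
        "threshold = 0.05\n",
        "n = len(emb)\n",
        "# Drop the higher-indexed record of each close pair (mirrors query.uri > base.uri).\n",
        "dupes = {i for i in range(n) for j in range(n) if i > j and dist[i, j] < threshold}\n",
        "kept = [i for i in range(n) if i not in dupes]\n",
        "print(kept)  # [0, 2] - row 1 is removed as a near-duplicate of row 0"
      ]
    },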
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "id": "62e27c32b895"
      },
      "outputs": [],
      "source": [
        "index_column = \"uri\"\n",
        "top_k = 10\n",
        "distance_threshold = 0.05\n",
        "\n",
        "dedupe_job = bq_client.query(\n",
        "    f\"\"\"\n",
        "CREATE OR REPLACE TABLE '{full_table_id}_dedupe' AS\n",
        "WITH dupes AS (\n",
        "    SELECT \n",
        "        DISTINCT query.{index_column}\n",
        "    FROM VECTOR_SEARCH(\n",
        "        Table '{full_table_id}', \"embedding\",\n",
        "        Table '{full_table_id}', top_k => {top_k})\n",
        "    WHERE \n",
        "        distance < {distance_threshold} AND query.{index_column} > base.{index_column}\n",
        ")\n",
        "SELECT * FROM '{full_table_id}'\n",
        "WHERE {index_column} NOT IN (SELECT {index_column} FROM dupes);\n",
        "\"\"\"\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "id": "435e9d2fa246"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "pyarrow.Table\n",
              "f0_: int64\n",
              "----\n",
              "f0_: [[4989]]"
            ]
          },
          "execution_count": 18,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "bq_client.query(f\"SELECT COUNT(*) FROM '{full_table_id}_dedupe'\").result().to_arrow()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "id": "f1530a5c5ac5"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "pyarrow.Table\n",
              "f0_: int64\n",
              "----\n",
              "f0_: [[5000]]"
            ]
          },
          "execution_count": 19,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "bq_client.query(f\"SELECT COUNT(*) FROM '{full_table_id}'\").result().to_arrow()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bd0ccdb6e46e"
      },
      "source": [
        "Let's query the duplicate records"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {
        "id": "58d8dd121475"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Downloading: 100%|\u001b[32m██████████\u001b[0m|\n"
          ]
        }
      ],
      "source": [
        "df_dupes = pandas_gbq.read_gbq(\n",
        "    f\"\"\"\n",
        "SELECT \n",
        "    DISTINCT query.uri,\n",
        "    base.uri,\n",
        "    distance\n",
        "FROM VECTOR_SEARCH(\n",
        "    Table '{full_table_id}', \"embedding\",\n",
        "    Table '{full_table_id}', top_k => 10)\n",
        "WHERE \n",
        "    query.uri > base.uri\n",
        "    AND distance < .05\n",
        "ORDER BY \n",
        "    distance DESC;\n",
        "\"\"\"\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "metadata": {
        "id": "09be1f6545d7"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "['gs://vidgen-1m/VidGen_video_1002/1L2Ib7XbFl0-Scene-0002.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1001/7BdLCNVP3vc-Scene-0018.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1003/EumrLe0lv2o-Scene-0055.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1003/26ez1C6FHTU-Scene-0029.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1005/fGgj5Xwhca0-Scene-0220.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1004/BytDnUzySCc-Scene-0047.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1003/frcOHC7TLdA-Scene-0034.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1001/9yMovThU5Hg-Scene-1171.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1005/CVT7j05IHN4-Scene-0061.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1004/8gDxAwsHZzI-Scene-0151.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_0/Copy of -Beq3x4K-xA-Scene-0001.mp4']"
            ]
          },
          "execution_count": 21,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "df_dupes[\"uri\"].tolist()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "metadata": {
        "id": "a93d421d611d"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "['gs://vidgen-1m/VidGen_video_1001/LVnVfhcFL3g-Scene-0005.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_10/7BdLCNVP3vc-Scene-0030.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1001/8LAfQyTauYo-Scene-0053.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1002/OQWeq_TUMDE-Scene-0012.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1000/ekKb8hixxFM-Scene-0010.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1001/doPS18DtqLU-Scene-0390.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1/xzn_rmla6yU-Scene-0200.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1000/-_agFJmVJXk-Scene-0068.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1/JfkzqohuutQ-Scene-0178.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_1003/06dEjljE80Q-Scene-0840.mp4',\n",
              " 'gs://vidgen-1m/VidGen_video_0/-Beq3x4K-xA-Scene-0001.mp4']"
            ]
          },
          "execution_count": 23,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "df_dupes[\"uri_1\"].tolist()"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "semantic-deduplication.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
