{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WtxoQixAqoNu"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ke8cM4GQln_c"
      },
      "source": [
        "# Unlocking Multimodal Video Transcription with Gemini\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BPRBl_orqoNv"
      },
      "source": [
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/video-analysis/multimodal_video_transcription.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fuse-cases%2Fvideo-analysis%2Fmultimodal_video_transcription.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/use-cases/video-analysis/multimodal_video_transcription.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/video-analysis/multimodal_video_transcription.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<p><div style=\"clear: both;\"></div></p>\n",
        "\n",
        "<p>\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/video-analysis/multimodal_video_transcription.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/video-analysis/multimodal_video_transcription.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/video-analysis/multimodal_video_transcription.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/video-analysis/multimodal_video_transcription.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/video-analysis/multimodal_video_transcription.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n",
        "</p>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "C0AcsmQ5hl9a"
      },
      "source": [
        "| Author                                           |\n",
        "| ------------------------------------------------ |\n",
        "| [Laurent Picard](https://github.com/PicardParis) |\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "b0Hl-l4rhdvV"
      },
      "source": [
        "---\n",
        "\n",
        "## ✨ Overview\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZoN_-ofRhl9b"
      },
      "source": [
        "![intro image](https://storage.googleapis.com/github-repo/generative-ai/gemini/use-cases/video-analysis/multimodal_video_transcription/unlocking-multimodal-video-transcription.gif)\n",
        "\n",
        "Traditional machine learning (ML) perception models typically focus on specific features and single modalities, deriving insights solely from natural language, speech, or vision analysis. Historically, extracting and consolidating information from multiple modalities has been challenging due to siloed processing, complex architectures, and the risk of data being \"lost in translation.\" However, multimodal and long-context large language models (LLMs) like Gemini can overcome these issues by processing all modalities within the same context, opening new possibilities.\n",
        "\n",
        "Moving beyond speech-to-text, this notebook explores how to achieve comprehensive video transcription by leveraging all available modalities. It covers the following topics:\n",
        "\n",
        "- A methodology for addressing new or complex problems with a multimodal LLM\n",
        "- A prompt technique for decoupling data and preserving attention: tabular extraction\n",
        "- Strategies for making the most of Gemini's 1M-token context in a single request\n",
        "- Practical examples of multimodal video transcriptions\n",
        "- Tips & optimizations\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DwKb-__qK02C"
      },
      "source": [
        "---\n",
        "\n",
        "## 🔥 Challenge\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "35E-CpC6qoNw"
      },
      "source": [
        "To fully transcribe a video, we're looking to answer the following questions:\n",
        "\n",
        "- 1️⃣ What was said and when?\n",
        "- 2️⃣ Who are the speakers?\n",
        "- 3️⃣ Who said what?\n",
        "\n",
        "Can we solve this problem in a straightforward and efficient way?\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oArL-WR6qoNx"
      },
      "source": [
        "---\n",
        "\n",
        "## 🌟 State of the art\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JEqIf_8wqoNx"
      },
      "source": [
        "### 1️⃣ What was said and when?\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s8QHpIophj3y"
      },
      "source": [
        "This is a known problem with an existing solution:\n",
        "\n",
        "- **Speech-to-Text** (STT) is a process that takes an audio input and transforms speech into text. STT can provide timestamps at the word level. It is also known as automatic speech recognition (ASR).\n",
        "\n",
        "Over the last decade, task-specific ML models have addressed this most effectively.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ASAjs9XIhj3y"
      },
      "source": [
        "### 2️⃣ Who are the speakers?\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qPVOzTOkhj3y"
      },
      "source": [
        "We can retrieve speaker names in a video from two sources:\n",
        "\n",
        "- **What's written** (e.g., speakers can be introduced with on-screen information when they first speak)\n",
        "- **What's spoken** (e.g., \"Hello Bob! Alice! How are you doing?\")\n",
        "\n",
        "Vision and Natural Language Processing (NLP) models can help with the following features:\n",
        "\n",
        "- Vision: **Optical Character Recognition** (OCR), also called text detection, extracts the text visible in images.\n",
        "- Vision: **Person Detection** identifies if and where people are in an image.\n",
        "- NLP: **Entity Extraction** can identify named entities in text.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1XXLvBn7hj3y"
      },
      "source": [
        "### 3️⃣ Who said what?\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mTCIysHbhj3z"
      },
      "source": [
        "This is another known problem with a partial solution (complementary to Speech-to-Text):\n",
        "\n",
        "- **Speaker Diarization** (also known as speaker turn segmentation) is a process that splits an audio stream into segments for the different detected speakers (\"Speaker A\", \"Speaker B\", etc.).\n",
        "\n",
        "Researchers have made significant progress in this field for decades, particularly with ML models in recent years, but this is still an active field of research. Existing solutions have shortcomings: they often require human supervision and hints (e.g., the minimum and maximum number of speakers, the language spoken) and typically support only a limited set of languages.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-FaQEMUFln_e"
      },
      "source": [
        "---\n",
        "\n",
        "## 🏺 Traditional ML pipeline\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "T23-7OyFln_e"
      },
      "source": [
        "Solving all of 1️⃣, 2️⃣, and 3️⃣ isn't straightforward. This would likely involve setting up an elaborate supervised processing pipeline, based on a few state-of-the-art ML models, such as the following:\n",
        "\n",
        "![a traditional ml pipeline](https://storage.googleapis.com/github-repo/generative-ai/gemini/use-cases/video-analysis/multimodal_video_transcription/traditional-ml-pipeline.png)\n",
        "\n",
        "We might need days or weeks to design and set up such a pipeline. Additionally, at the time of writing, our multimodal-video-transcription challenge is not a solved problem, so there's absolutely no certainty of reaching a viable solution.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "U5D5Mrt9qoNx"
      },
      "source": [
        "---\n",
        "\n",
        "## 💡 A new problem-solving toolbox\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3keeSV5f1EAc"
      },
      "source": [
        "Gemini allows for rapid prompt-based problem solving. With just text instructions, we can extract information and transform it into new insights, through a straightforward and automated workflow.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cebBAgoMhj3z"
      },
      "source": [
        "### 🎬 Multimodal\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YdKTFUhWhj3z"
      },
      "source": [
        "Gemini is natively multimodal, which means it can process different types of inputs:\n",
        "\n",
        "- text\n",
        "- image\n",
        "- audio\n",
        "- video\n",
        "- document\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61K-rN1Whj3z"
      },
      "source": [
        "### 🌐 Multilingual\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Hy3C66nwhj3z"
      },
      "source": [
        "Gemini is also [multilingual](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#languages-gemini):\n",
        "\n",
        "- It can process inputs and generate outputs in 100+ languages\n",
        "- If we can solve the video challenge for one language, that solution should naturally extend to all other languages\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vcyl7KUrhj3z"
      },
      "source": [
        "### 🧰 A natural-language toolbox\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ch5tuuaNhj3z"
      },
      "source": [
        "Multimodal and multilingual understanding in a single model lets us shift from relying on task-specific ML models to using a single versatile LLM.\n",
        "\n",
        "Our challenge now looks a lot simpler:\n",
        "\n",
        "![natural-language toolbox with gemini](https://storage.googleapis.com/github-repo/generative-ai/gemini/use-cases/video-analysis/multimodal_video_transcription/gemini-natural-language-toolbox.png)\n",
        "\n",
        "In other words, let's rephrase our challenge: Can we fully transcribe a video with just the following?\n",
        "\n",
        "- 1 video\n",
        "- 1 prompt\n",
        "- 1 request\n",
        "\n",
        "Let's try with Gemini…\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "W0_VsUthqoNx"
      },
      "source": [
        "---\n",
        "\n",
        "## 🏁 Setup\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5wbID7ORqoNx"
      },
      "source": [
        "### 🐍 Python packages\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MZCjtEXjhj3z"
      },
      "source": [
        "We'll use the following packages:\n",
        "\n",
        "- `google-genai`: the [Google Gen AI Python SDK](https://pypi.org/project/google-genai) lets us call Gemini with a few lines of code\n",
        "- `pandas` for data visualization\n",
        "\n",
        "We'll also use these packages (dependencies of `google-genai`):\n",
        "\n",
        "- `pydantic` for data management\n",
        "- `tenacity` for request management\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "kZBN80r7qtgs"
      },
      "outputs": [],
      "source": [
        "%pip install --quiet \"google-genai>=1.49.0\" \"pandas[output-formatting]\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yGgmHVdQqoNz"
      },
      "source": [
        "### 🔗 Gemini API\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zOJv5DjZhj3z"
      },
      "source": [
        "We have two main options to send requests to Gemini:\n",
        "\n",
        "- [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs): Build enterprise-ready projects on Google Cloud\n",
        "- [Google AI Studio](https://aistudio.google.com): Experiment, prototype, and deploy small projects\n",
        "\n",
        "The Google Gen AI SDK provides a unified interface to both APIs, and we can configure it with environment variables.\n",
        "\n",
        "**Option A - Gemini API via Vertex AI**\n",
        "\n",
        "Requirements:\n",
        "\n",
        "- A Google Cloud project\n",
        "- The [Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) must be enabled for this project\n",
        "\n",
        "Gen AI SDK environment variables:\n",
        "\n",
        "- `GOOGLE_GENAI_USE_VERTEXAI=\"True\"`\n",
        "- `GOOGLE_CLOUD_PROJECT=\"<PROJECT_ID>\"`\n",
        "- `GOOGLE_CLOUD_LOCATION=\"<LOCATION>\"` (see [Google model endpoint locations](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations#google_model_endpoint_locations))\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment).\n",
        "\n",
        "**Option B - Gemini API via Google AI Studio**\n",
        "\n",
        "Requirement:\n",
        "\n",
        "- A Gemini API key\n",
        "\n",
        "Gen AI SDK environment variables:\n",
        "\n",
        "- `GOOGLE_GENAI_USE_VERTEXAI=\"False\"`\n",
        "- `GOOGLE_API_KEY=\"<API_KEY>\"`\n",
        "\n",
        "Learn more about [getting a Gemini API key from Google AI Studio](https://aistudio.google.com/app/apikey).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wZcmLBKqhdvX"
      },
      "source": [
        "💡 You can store your environment configuration outside of the source code:\n",
        "\n",
        "| Environment         | Method                                                      |\n",
        "| ------------------- | ----------------------------------------------------------- |\n",
        "| IDE                 | `.env` file (or equivalent)                                 |\n",
        "| Colab               | Colab Secrets (🗝️ icon in left panel, see code below)       |\n",
        "| Colab Enterprise    | Google Cloud project and location are automatically defined |\n",
        "| Vertex AI Workbench | Google Cloud project and location are automatically defined |\n"
      ]
    },
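    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For example, with the IDE method, a `.env` file for option A could look like this (placeholder values, not real credentials):\n",
        "\n",
        "```\n",
        "GOOGLE_GENAI_USE_VERTEXAI=\"True\"\n",
        "GOOGLE_CLOUD_PROJECT=\"<PROJECT_ID>\"\n",
        "GOOGLE_CLOUD_LOCATION=\"global\"\n",
        "```\n"
      ]
    },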
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ky2Escg3a1E2"
      },
      "source": [
        "Define the following environment detection functions. You can also define your configuration manually if needed.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VTov81qlqoNz"
      },
      "outputs": [],
      "source": [
        "# @title {display-mode: \"form\"}\n",
        "\n",
        "import os\n",
        "import sys\n",
        "from collections.abc import Callable\n",
        "\n",
        "from google import genai\n",
        "\n",
        "# Manual setup (leave unchanged if setup is environment-defined)\n",
        "\n",
        "# @markdown **Which API: Vertex AI or Google AI Studio?**\n",
        "GOOGLE_GENAI_USE_VERTEXAI = True  # @param {type: \"boolean\"}\n",
        "\n",
        "# @markdown **Option A - Google Cloud project [+location]**\n",
        "GOOGLE_CLOUD_PROJECT = \"\"  # @param {type: \"string\"}\n",
        "GOOGLE_CLOUD_LOCATION = \"global\"  # @param {type: \"string\"}\n",
        "\n",
        "# @markdown **Option B - Google AI Studio API key**\n",
        "GOOGLE_API_KEY = \"\"  # @param {type: \"string\"}\n",
        "\n",
        "\n",
        "def check_environment() -> bool:\n",
        "    check_colab_user_authentication()\n",
        "    return check_manual_setup() or check_vertex_ai() or check_colab() or check_local()\n",
        "\n",
        "\n",
        "def check_manual_setup() -> bool:\n",
        "    return check_define_env_vars(\n",
        "        GOOGLE_GENAI_USE_VERTEXAI,\n",
        "        GOOGLE_CLOUD_PROJECT.strip(),  # Might have been pasted with line return\n",
        "        GOOGLE_CLOUD_LOCATION,\n",
        "        GOOGLE_API_KEY,\n",
        "    )\n",
        "\n",
        "\n",
        "def check_vertex_ai() -> bool:\n",
        "    # Workbench and Colab Enterprise\n",
        "    match os.getenv(\"VERTEX_PRODUCT\", \"\"):\n",
        "        case \"WORKBENCH_INSTANCE\":\n",
        "            pass\n",
        "        case \"COLAB_ENTERPRISE\":\n",
        "            if not running_in_colab_env():\n",
        "                return False\n",
        "        case _:\n",
        "            return False\n",
        "\n",
        "    return check_define_env_vars(\n",
        "        True,\n",
        "        os.getenv(\"GOOGLE_CLOUD_PROJECT\", \"\"),\n",
        "        os.getenv(\"GOOGLE_CLOUD_REGION\", \"\"),\n",
        "        \"\",\n",
        "    )\n",
        "\n",
        "\n",
        "def check_colab() -> bool:\n",
        "    if not running_in_colab_env():\n",
        "        return False\n",
        "\n",
        "    # Colab Enterprise was checked before, so this is Colab only\n",
        "    from google.colab import auth as colab_auth  # type: ignore\n",
        "\n",
        "    colab_auth.authenticate_user()\n",
        "\n",
        "    # Use Colab Secrets (🗝️ icon in left panel) to store the environment variables\n",
        "    # Secrets are private, visible only to you and the notebooks that you select\n",
        "    # - Vertex AI: Store your settings as secrets\n",
        "    # - Google AI: Directly import your Gemini API key from the UI\n",
        "    vertexai, project, location, api_key = get_vars(get_colab_secret)\n",
        "\n",
        "    return check_define_env_vars(vertexai, project, location, api_key)\n",
        "\n",
        "\n",
        "def check_local() -> bool:\n",
        "    vertexai, project, location, api_key = get_vars(os.getenv)\n",
        "\n",
        "    return check_define_env_vars(vertexai, project, location, api_key)\n",
        "\n",
        "\n",
        "def running_in_colab_env() -> bool:\n",
        "    # Colab or Colab Enterprise\n",
        "    return \"google.colab\" in sys.modules\n",
        "\n",
        "\n",
        "def check_colab_user_authentication() -> None:\n",
        "    if running_in_colab_env():\n",
        "        from google.colab import auth as colab_auth  # type: ignore\n",
        "\n",
        "        colab_auth.authenticate_user()\n",
        "\n",
        "\n",
        "def get_colab_secret(secret_name: str, default: str) -> str:\n",
        "    from google.colab import errors, userdata  # type: ignore\n",
        "\n",
        "    try:\n",
        "        return userdata.get(secret_name)\n",
        "    except errors.SecretNotFoundError:\n",
        "        return default\n",
        "\n",
        "\n",
        "def get_vars(getenv: Callable[[str, str], str]) -> tuple[bool, str, str, str]:\n",
        "    # Limit getenv calls to the minimum (may trigger UI confirmation for secret access)\n",
        "    vertexai_str = getenv(\"GOOGLE_GENAI_USE_VERTEXAI\", \"\")\n",
        "    if vertexai_str:\n",
        "        vertexai = vertexai_str.lower() in [\"true\", \"1\"]\n",
        "    else:\n",
        "        vertexai = bool(getenv(\"GOOGLE_CLOUD_PROJECT\", \"\"))\n",
        "\n",
        "    project = getenv(\"GOOGLE_CLOUD_PROJECT\", \"\") if vertexai else \"\"\n",
        "    location = getenv(\"GOOGLE_CLOUD_LOCATION\", \"\") if project else \"\"\n",
        "    api_key = getenv(\"GOOGLE_API_KEY\", \"\") if not project else \"\"\n",
        "\n",
        "    return vertexai, project, location, api_key\n",
        "\n",
        "\n",
        "def check_define_env_vars(\n",
        "    vertexai: bool,\n",
        "    project: str,\n",
        "    location: str,\n",
        "    api_key: str,\n",
        ") -> bool:\n",
        "    match (vertexai, bool(project), bool(location), bool(api_key)):\n",
        "        case (True, True, _, _):\n",
        "            # Vertex AI - Google Cloud project [+location]\n",
        "            location = location or \"global\"\n",
        "            define_env_vars(vertexai, project, location, \"\")\n",
        "        case (True, False, _, True):\n",
        "            # Vertex AI - API key\n",
        "            define_env_vars(vertexai, \"\", \"\", api_key)\n",
        "        case (False, _, _, True):\n",
        "            # Google AI Studio - API key\n",
        "            define_env_vars(vertexai, \"\", \"\", api_key)\n",
        "        case _:\n",
        "            return False\n",
        "\n",
        "    return True\n",
        "\n",
        "\n",
        "def define_env_vars(vertexai: bool, project: str, location: str, api_key: str) -> None:\n",
        "    os.environ[\"GOOGLE_GENAI_USE_VERTEXAI\"] = str(vertexai)\n",
        "    os.environ[\"GOOGLE_CLOUD_PROJECT\"] = project\n",
        "    os.environ[\"GOOGLE_CLOUD_LOCATION\"] = location\n",
        "    os.environ[\"GOOGLE_API_KEY\"] = api_key\n",
        "\n",
        "\n",
        "def check_configuration(client: genai.Client) -> None:\n",
        "    service = \"Vertex AI\" if client.vertexai else \"Google AI Studio\"\n",
        "    print(f\"Using the {service} API\", end=\"\")\n",
        "\n",
        "    if client._api_client.project:\n",
        "        print(f' with project \"{client._api_client.project[:7]}…\"', end=\"\")\n",
        "        print(f' in location \"{client._api_client.location}\"')\n",
        "    elif client._api_client.api_key:\n",
        "        api_key = client._api_client.api_key\n",
        "        print(f' with API key \"{api_key[:5]}…{api_key[-5:]}\"', end=\"\")\n",
        "        print(f\" (in case of error, make sure it was created for {service})\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GEYSAFqrqoNz"
      },
      "source": [
        "### 🤖 Gen AI SDK\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1Q_Irs4D1EAf"
      },
      "source": [
        "To send Gemini requests, create a `google.genai` client:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "DR77aUhzqoNz"
      },
      "outputs": [],
      "source": [
        "from google import genai\n",
        "\n",
        "check_environment()\n",
        "\n",
        "client = genai.Client()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NcH5fQGBhdvY"
      },
      "source": [
        "Check your configuration:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ANgm_y6_hdvY"
      },
      "outputs": [],
      "source": [
        "check_configuration(client)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9vk9e4V_qoNz"
      },
      "source": [
        "### 🧠 Gemini model\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Acdi0aoIqoNz"
      },
      "source": [
        "Gemini comes in different [versions](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-models).\n",
        "\n",
        "Let's get started with Gemini 2.0 Flash, as it offers both high performance and low latency:\n",
        "\n",
        "- `GEMINI_2_0_FLASH = \"gemini-2.0-flash\"`\n",
        "\n",
        "> 💡 We select Gemini 2.0 Flash intentionally. The Gemini 2.5 model family is generally available and even more capable, but we want to experiment and understand Gemini's core multimodal behavior. If we complete our challenge with 2.0, this should also work with newer models.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_9_PY-nla1E3"
      },
      "source": [
        "### ⚙️ Gemini configuration\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2AsCht_9a1E3"
      },
      "source": [
        "Gemini can be used in different ways, ranging from factual to creative mode. The problem we're trying to solve is a **data extraction** use case. We want results as factual and deterministic as possible. For this, we can change the [content generation parameters](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/content-generation-parameters).\n",
        "\n",
        "We'll set the `temperature`, `top_p`, and `seed` parameters to minimize randomness:\n",
        "\n",
        "- `temperature=0.0`\n",
        "- `top_p=0.0`\n",
        "- `seed=42` (arbitrary fixed value)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "B5dimkaC-CFe"
      },
      "source": [
        "### 🎞️ Video sources\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NgrGTaPT-CFe"
      },
      "source": [
        "Here are the main video sources that Gemini can analyze:\n",
        "\n",
        "| source               | URI                                          | Vertex AI | Google AI Studio |\n",
        "| -------------------- | -------------------------------------------- | :-------: | :--------------: |\n",
        "| Google Cloud Storage | `gs://bucket/path/to/video.*`                |    ✅     |                  |\n",
        "| Web URL              | `https://path/to/video.*`                    |    ✅     |                  |\n",
        "| YouTube              | `https://www.youtube.com/watch?v=YOUTUBE_ID` |    ✅     |        ✅        |\n",
        "\n",
        "⚠️ Important notes\n",
        "\n",
        "- For simplicity, our video test suite primarily uses public YouTube videos.\n",
        "- When analyzing YouTube sources, Gemini receives raw audio/video streams without any additional metadata, exactly as if processing the corresponding video files from Cloud Storage.\n",
        "- YouTube does offer caption/subtitle/transcript features (user-provided or auto-generated). However, these features focus on word-level speech-to-text and cover a more limited set of languages (40+). Gemini does not receive any of this data, and you'll see that a multimodal transcription with Gemini provides additional benefits.\n",
        "- Furthermore, our challenge also involves identifying speakers and extracting speaker data, a unique new capability.\n"
      ]
    },
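    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an illustration, with the Gen AI SDK, a video source from the table above can be referenced as a `Part` built from its URI (a sketch; the actual request helpers are defined below):\n",
        "\n",
        "```python\n",
        "from google.genai.types import Part\n",
        "\n",
        "# Reference a video by URI (Cloud Storage, web URL, or YouTube)\n",
        "video_part = Part.from_uri(\n",
        "    file_uri=\"gs://cloud-samples-data/video/JaneGoodall.mp4\",\n",
        "    mime_type=\"video/mp4\",\n",
        ")\n",
        "```\n"
      ]
    },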
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eUBe2u8IqoNz"
      },
      "source": [
        "### 🛠️ Helpers\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jsuIT4f1-CFe"
      },
      "source": [
        "Define our helper functions and data:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 43,
      "metadata": {
        "id": "Bus2ODLIK02F"
      },
      "outputs": [],
      "source": [
        "# @title {display-mode: \"form\"}\n",
        "\n",
        "import enum\n",
        "from dataclasses import dataclass\n",
        "from datetime import timedelta\n",
        "\n",
        "import IPython.display\n",
        "import tenacity\n",
        "from google.genai.errors import ClientError\n",
        "from google.genai.types import (\n",
        "    FileData,\n",
        "    FinishReason,\n",
        "    GenerateContentConfig,\n",
        "    GenerateContentResponse,\n",
        "    Part,\n",
        "    VideoMetadata,\n",
        ")\n",
        "\n",
        "\n",
        "class Model(enum.Enum):\n",
        "    # Generally Available (GA)\n",
        "    GEMINI_2_0_FLASH = \"gemini-2.0-flash\"\n",
        "    GEMINI_2_5_FLASH = \"gemini-2.5-flash\"\n",
        "    GEMINI_2_5_PRO = \"gemini-2.5-pro\"\n",
        "    # Default model\n",
        "    DEFAULT = GEMINI_2_0_FLASH\n",
        "\n",
        "\n",
        "# Default configuration for more deterministic outputs\n",
        "DEFAULT_CONFIG = GenerateContentConfig(\n",
        "    temperature=0.0,\n",
        "    top_p=0.0,\n",
        "    seed=42,  # Arbitrary fixed value\n",
        ")\n",
        "\n",
        "YOUTUBE_URL_PREFIX = \"https://www.youtube.com/watch?v=\"\n",
        "CLOUD_STORAGE_URI_PREFIX = \"gs://\"\n",
        "\n",
        "\n",
        "def url_for_youtube_id(youtube_id: str) -> str:\n",
        "    return f\"{YOUTUBE_URL_PREFIX}{youtube_id}\"\n",
        "\n",
        "\n",
        "class Video(enum.Enum):\n",
        "    pass\n",
        "\n",
        "\n",
        "class TestVideo(Video):\n",
        "    # For testing purposes, video duration is statically specified in the enum name\n",
        "    # Suffix (ISO 8601 based): _PT[<h>H][<m>M][<s>S]\n",
        "\n",
        "    # Google DeepMind | The Podcast | Season 3 Trailer | 59s\n",
        "    GDM_PODCAST_TRAILER_PT59S = url_for_youtube_id(\"0pJn3g8dfwk\")\n",
        "    # Google Maps | Walk in the footsteps of Jane Goodall | 2min 42s\n",
        "    JANE_GOODALL_PT2M42S = \"gs://cloud-samples-data/video/JaneGoodall.mp4\"\n",
        "    # Google DeepMind | AlphaFold | The making of a scientific breakthrough | 7min 54s\n",
        "    GDM_ALPHAFOLD_PT7M54S = url_for_youtube_id(\"gg7WjuFs8F4\")\n",
        "    # Brut | French reportage | 8min 28s\n",
        "    BRUT_FR_DOGS_WATER_LEAK_PT8M28S = url_for_youtube_id(\"U_yYkb-ureI\")\n",
        "    # Google DeepMind | The Podcast | AI for science | 54min 23s\n",
        "    GDM_AI_FOR_SCIENCE_FRONTIER_PT54M23S = url_for_youtube_id(\"nQKmVhLIGcs\")\n",
        "    # Google I/O 2025 | Developer Keynote | 1h 10min 03s\n",
        "    GOOGLE_IO_DEV_KEYNOTE_PT1H10M03S = url_for_youtube_id(\"GjvgtwSOCao\")\n",
        "    # Google Cloud | Next 2025 | Opening Keynote | 1h 40min 03s\n",
        "    GOOGLE_CLOUD_NEXT_PT1H40M03S = url_for_youtube_id(\"Md4Fs-Zc3tg\")\n",
        "    # Google I/O 2025 | Keynote | 1h 56min 35s\n",
        "    GOOGLE_IO_KEYNOTE_PT1H56M35S = url_for_youtube_id(\"o8NiE3XMPrM\")\n",
        "\n",
        "\n",
        "class ShowAs(enum.Enum):\n",
        "    DONT_SHOW = enum.auto()\n",
        "    TEXT = enum.auto()\n",
        "    MARKDOWN = enum.auto()\n",
        "\n",
        "\n",
        "@dataclass\n",
        "class VideoSegment:\n",
        "    start: timedelta\n",
        "    end: timedelta\n",
        "\n",
        "\n",
        "def generate_content(\n",
        "    prompt: str,\n",
        "    video: Video | None = None,\n",
        "    video_segment: VideoSegment | None = None,\n",
        "    model: Model | None = None,\n",
        "    config: GenerateContentConfig | None = None,\n",
        "    show_as: ShowAs = ShowAs.TEXT,\n",
        ") -> None:\n",
        "    prompt = prompt.strip()\n",
        "    model = model or Model.DEFAULT\n",
        "    config = config or DEFAULT_CONFIG\n",
        "\n",
        "    model_id = model.value\n",
        "    if video:\n",
        "        if not (video_part := get_video_part(video, video_segment)):\n",
        "            return\n",
        "        contents = [video_part, prompt]\n",
        "        caption = f\"{video.name} / {model_id}\"\n",
        "    else:\n",
        "        contents = prompt\n",
        "        caption = f\"{model_id}\"\n",
        "    print(f\" {caption} \".center(80, \"-\"))\n",
        "\n",
        "    for attempt in get_retrier():\n",
        "        with attempt:\n",
        "            response = client.models.generate_content(\n",
        "                model=model_id,\n",
        "                contents=contents,\n",
        "                config=config,\n",
        "            )\n",
        "            display_response_info(response)\n",
        "            display_response(response, show_as)\n",
        "\n",
        "\n",
        "def get_video_part(\n",
        "    video: Video,\n",
        "    video_segment: VideoSegment | None = None,\n",
        "    fps: float | None = None,\n",
        ") -> Part | None:\n",
        "    video_uri: str = video.value\n",
        "\n",
        "    if not client.vertexai:\n",
        "        video_uri = convert_to_https_url_if_cloud_storage_uri(video_uri)\n",
        "        if not video_uri.startswith(YOUTUBE_URL_PREFIX):\n",
        "            print(\"Google AI Studio API: Only YouTube URLs are currently supported\")\n",
        "            return None\n",
        "\n",
        "    file_data = FileData(file_uri=video_uri, mime_type=\"video/*\")\n",
        "    video_metadata = get_video_part_metadata(video_segment, fps)\n",
        "\n",
        "    return Part(file_data=file_data, video_metadata=video_metadata)\n",
        "\n",
        "\n",
        "def get_video_part_metadata(\n",
        "    video_segment: VideoSegment | None = None,\n",
        "    fps: float | None = None,\n",
        ") -> VideoMetadata:\n",
        "    def offset_as_str(offset: timedelta) -> str:\n",
        "        return f\"{offset.total_seconds()}s\"\n",
        "\n",
        "    if video_segment:\n",
        "        start_offset = offset_as_str(video_segment.start)\n",
        "        end_offset = offset_as_str(video_segment.end)\n",
        "    else:\n",
        "        start_offset = None\n",
        "        end_offset = None\n",
        "\n",
        "    return VideoMetadata(start_offset=start_offset, end_offset=end_offset, fps=fps)\n",
        "\n",
        "\n",
        "def convert_to_https_url_if_cloud_storage_uri(uri: str) -> str:\n",
        "    if uri.startswith(CLOUD_STORAGE_URI_PREFIX):\n",
        "        return f\"https://storage.googleapis.com/{uri.removeprefix(CLOUD_STORAGE_URI_PREFIX)}\"\n",
        "\n",
        "    return uri\n",
        "\n",
        "\n",
        "def get_retrier() -> tenacity.Retrying:\n",
        "    return tenacity.Retrying(\n",
        "        stop=tenacity.stop_after_attempt(7),\n",
        "        wait=tenacity.wait_incrementing(start=10, increment=1),\n",
        "        retry=should_retry_request,\n",
        "        reraise=True,\n",
        "    )\n",
        "\n",
        "\n",
        "def should_retry_request(retry_state: tenacity.RetryCallState) -> bool:\n",
        "    if not retry_state.outcome:\n",
        "        return False\n",
        "    err = retry_state.outcome.exception()\n",
        "    if not isinstance(err, ClientError):\n",
        "        return False\n",
        "    print(f\"❌ ClientError {err.code}: {err.message}\")\n",
        "\n",
        "    retry = False\n",
        "    match err.code:\n",
        "        case 400 if err.message is not None and \" try again \" in err.message:\n",
        "            # Workshop: project accessing Cloud Storage for the first time (service agent provisioning)\n",
        "            retry = True\n",
        "        case 429:\n",
        "            # Workshop: temporary project with 1 QPM quota\n",
        "            retry = True\n",
        "    print(f\"🔄 Retry: {retry}\")\n",
        "\n",
        "    return retry\n",
        "\n",
        "\n",
        "def display_response_info(response: GenerateContentResponse) -> None:\n",
        "    if usage_metadata := response.usage_metadata:\n",
        "        if usage_metadata.prompt_token_count:\n",
        "            print(f\"Input tokens   : {usage_metadata.prompt_token_count:9,d}\")\n",
        "        if usage_metadata.candidates_token_count:\n",
        "            print(f\"Output tokens  : {usage_metadata.candidates_token_count:9,d}\")\n",
        "        if usage_metadata.thoughts_token_count:\n",
        "            print(f\"Thoughts tokens: {usage_metadata.thoughts_token_count:9,d}\")\n",
        "    if not response.candidates:\n",
        "        print(\"❌ No `response.candidates`\")\n",
        "        return\n",
        "    if (finish_reason := response.candidates[0].finish_reason) != FinishReason.STOP:\n",
        "        print(f\"❌ {finish_reason = }\")\n",
        "    if not response.text:\n",
        "        print(\"❌ No `response.text`\")\n",
        "        return\n",
        "\n",
        "\n",
        "def display_response(\n",
        "    response: GenerateContentResponse,\n",
        "    show_as: ShowAs,\n",
        ") -> None:\n",
        "    if show_as == ShowAs.DONT_SHOW:\n",
        "        return\n",
        "    if not (response_text := response.text):\n",
        "        return\n",
        "    response_text = response_text.strip()\n",
        "\n",
        "    print(\" start of response \".center(80, \"-\"))\n",
        "    match show_as:\n",
        "        case ShowAs.TEXT:\n",
        "            print(response_text)\n",
        "        case ShowAs.MARKDOWN:\n",
        "            display_markdown(response_text)\n",
        "    print(\" end of response \".center(80, \"-\"))\n",
        "\n",
        "\n",
        "def display_markdown(markdown: str) -> None:\n",
        "    IPython.display.display(IPython.display.Markdown(markdown))\n",
        "\n",
        "\n",
        "def display_video(video: Video) -> None:\n",
        "    video_url = convert_to_https_url_if_cloud_storage_uri(video.value)\n",
        "    assert video_url.startswith(\"https://\")\n",
        "\n",
        "    video_width = 600\n",
        "    if video_url.startswith(YOUTUBE_URL_PREFIX):\n",
        "        youtube_id = video_url.removeprefix(YOUTUBE_URL_PREFIX)\n",
        "        # Add referrerpolicy to fix video player configuration error 153\n",
        "        extras = ['referrerpolicy=\"strict-origin-when-cross-origin\"']\n",
        "        ipython_video = IPython.display.YouTubeVideo(\n",
        "            youtube_id, width=video_width, extras=extras\n",
        "        )\n",
        "    else:\n",
        "        ipython_video = IPython.display.Video(video_url, width=video_width)\n",
        "\n",
        "    display_markdown(f\"### Video ([source]({video_url}))\")\n",
        "    IPython.display.display(ipython_video)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8B499IGYqoN0"
      },
      "source": [
        "---\n",
        "\n",
        "## 🧪 Prototyping\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Wbw8w44lqoN0"
      },
      "source": [
        "### 🌱 Natural behavior\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZGi_TzFzhj30"
      },
      "source": [
        "Before diving any deeper, it's interesting to see how Gemini responds to simple instructions, to develop some intuition about its natural behavior.\n",
        "\n",
        "Let's first see what we get with minimalistic prompts and a short English video.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "gF3MCXLTqoN0"
      },
      "outputs": [],
      "source": [
        "video = TestVideo.GDM_PODCAST_TRAILER_PT59S\n",
        "display_video(video)\n",
        "\n",
        "prompt = \"Transcribe the video's audio with time information.\"\n",
        "generate_content(prompt, video)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4Jw70MYlqoN0"
      },
      "source": [
        "Results:\n",
        "\n",
        "- Gemini naturally outputs a list of `[time] transcript` lines.\n",
        "- That's Speech-to-Text in one line!\n",
        "- It looks like we can answer \"1️⃣ What was said and when?\".\n",
        "\n",
        "Now, what about \"2️⃣ Who are the speakers?\"\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "jOaSzf4TqoN0"
      },
      "outputs": [],
      "source": [
        "prompt = \"List the speakers identifiable in the video.\"\n",
        "generate_content(prompt, video)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GPX61D20qoN0"
      },
      "source": [
        "Results:\n",
        "\n",
        "- Gemini can consolidate the names visible on title cards during the video.\n",
        "- That's OCR + entity extraction in one line!\n",
        "- \"2️⃣ Who are the speakers?\" looks solved too!\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mQtQ1vNiqoN0"
      },
      "source": [
        "### ⏩ Not so fast!\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FkgHVqt2hj35"
      },
      "source": [
        "The natural next step is to jump straight to the final instructions and solve our problem once and for all.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qF6BM_HfqoN1"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "Transcribe the video's audio including speaker names (use `?` if not found).\n",
        "\n",
        "Format example:\n",
        "[00:02] John Doe - Hello Alice!\n",
        "\"\"\"\n",
        "generate_content(prompt, video)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I68y1ZBUqoN1"
      },
      "source": [
        "This is almost correct. The first segment is not attributed to the host (who is only introduced a bit later), but everything else looks right.\n",
        "\n",
        "Nonetheless, these are not real-world conditions:\n",
        "\n",
        "- The video is very short (less than a minute)\n",
        "- The video is also rather simple (speakers are clearly introduced with on-screen title cards)\n",
        "\n",
        "Let's try with this 8-minute (and more complex) video:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ycy--DT4qoN1"
      },
      "outputs": [],
      "source": [
        "generate_content(prompt, TestVideo.GDM_ALPHAFOLD_PT7M54S)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Jh0XSXOtqoN1"
      },
      "source": [
        "This falls apart: most segments have no identified speaker!\n",
        "\n",
        "Since we are trying to solve a new, complex problem, LLMs haven't been trained on any known solution. This is likely why direct instructions don't yield the expected answer.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "La1_j-f7f0uH"
      },
      "source": [
        "### 🚧 Experiment\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pJEQDPGGhj35"
      },
      "source": [
        "Let's take a few minutes to experiment:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hRPVMKZCf0uH"
      },
      "outputs": [],
      "source": [
        "# Experiment with your own instructions to improve the response\n",
        "prompt = \"\"\"\n",
        "Transcribe the video, including speaker names (use `?` if not found).\n",
        "\n",
        "Format example:\n",
        "[00:02] John Doe - Hello Alice!\n",
        "\"\"\"\n",
        "# Write a more elaborate prompt and uncomment the next line to check it on our short test video\n",
        "# generate_content(prompt, TestVideo.GDM_PODCAST_TRAILER_PT59S)\n",
        "\n",
        "# If it works on the short video, also check your prompt on this more complex video\n",
        "# generate_content(prompt, TestVideo.GDM_ALPHAFOLD_PT7M54S)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8bUYsYBff0uH"
      },
      "source": [
        "Did you find a solution? Does it also work with the more complex video? Did you double-check by watching the whole video? If so, congratulations, you can proceed to \"Structured output\".\n",
        "\n",
        "Otherwise, at this stage:\n",
        "\n",
        "- We might conclude that the problem can't be solved with real-world videos.\n",
        "- Persevering with ever more elaborate prompts for this unsolved problem could waste a lot of time.\n",
        "\n",
        "Let's take a step back and think about what happens under the hood…\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "784a-K1wqoN1"
      },
      "source": [
        "---\n",
        "\n",
        "## ⚛️ Under the hood\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YmvfnIAwa1E4"
      },
      "source": [
        "Modern LLMs are mostly based on the Transformer architecture, a neural network design detailed in the 2017 paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762) by Google researchers. The paper introduced the self-attention mechanism, a key innovation that fundamentally changed the way machines process language.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "APobp3nihj35"
      },
      "source": [
        "### 🪙 Tokens\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e2gSrBCPqoN1"
      },
      "source": [
        "Tokens are the building blocks of LLMs. A token can be thought of as representing a piece of information.\n",
        "\n",
        "Examples of Gemini multimodal tokens (with default parameters):\n",
        "\n",
        "| content                 |   tokens    | details                                               |\n",
        "| ----------------------- | :---------: | ----------------------------------------------------- |\n",
        "| `hello`                 |      1      | 1 token for common words/sequences                    |\n",
        "| `passionately`          |      2      | `passion•ately`                                       |\n",
        "| `passionnément`         |      3      | `passion•né•ment` (same adverb in French)             |\n",
        "| image                   |     258     | per image (or per tile depending on image resolution) |\n",
        "| audio without timecodes | 25 / second | handled by the audio tokenizer                        |\n",
        "| video without audio     | 258 / frame | handled by the video tokenizer at 1 frame per second  |\n",
        "| `MM:SS` timecode        |      5      | audio chunk or video frame temporal reference         |\n",
        "| `H:MM:SS` timecode      |      7      | similarly, for content longer than 1 hour             |\n"
      ]
    },
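    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Based on the per-modality costs above, we can sketch a rough token-budget estimator for a video. This is a back-of-the-envelope sketch only: it ignores timecode tokens and image tiling, and authoritative counts come from the API's `count_tokens` method.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def estimate_video_tokens(\n",
        "    duration_s: float,\n",
        "    fps: float = 1.0,\n",
        "    tokens_per_frame: int = 258,  # default media resolution\n",
        "    audio_tokens_per_s: int = 25,\n",
        ") -> int:\n",
        "    # Frames sampled at `fps`, each costing `tokens_per_frame` tokens,\n",
        "    # plus the audio track at ~25 tokens per second\n",
        "    frames = int(duration_s * fps)\n",
        "    return frames * tokens_per_frame + int(duration_s) * audio_tokens_per_s\n",
        "\n",
        "\n",
        "# Example: a 1-minute video at the default 1 FPS\n",
        "print(f\"{estimate_video_tokens(60):,} tokens\")  # 16,980 tokens"
      ]
    },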
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BE8QUvL2hj35"
      },
      "source": [
        "### 🎞️ Sampling frame rate\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DgYLJiWGhj35"
      },
      "source": [
        "By default, video frames are sampled at 1 frame per second (1 FPS). These frames are included in the context with their corresponding timecodes.\n",
        "\n",
        "You can use a custom sampling frame rate with the `Part.video_metadata.fps` parameter:\n",
        "\n",
        "| video type    | change                  | `fps` range         |\n",
        "| ------------- | ----------------------- | ------------------- |\n",
        "| static, slow  | decrease the frame rate | `0.0 < fps < 1.0`   |\n",
        "| dynamic, fast | increase the frame rate | `1.0 < fps <= 24.0` |\n",
        "\n",
        "> 💡 For `1.0 < fps`, Gemini was trained to understand `MM:SS.sss` and `H:MM:SS.sss` timecodes.\n"
      ]
    },
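    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an illustration, here is how a custom sampling rate could be requested with the `get_video_part` helper defined earlier (a sketch; uncomment to try it with the test videos above):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sample a mostly static video at 1 frame every 2 seconds (0.5 FPS)\n",
        "# video_part = get_video_part(TestVideo.GDM_ALPHAFOLD_PT7M54S, fps=0.5)"
      ]
    },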
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eHomr1bGhj35"
      },
      "source": [
        "### 🔍 Media resolution\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "L2hjOIOkhj35"
      },
      "source": [
        "By default, each sampled frame is represented with 258 tokens.\n",
        "\n",
        "You can specify a medium or low media resolution with the `GenerateContentConfig.media_resolution` parameter:\n",
        "\n",
        "| `media_resolution` for video inputs | tokens/frame | benefit                                                  |\n",
        "| ----------------------------------- | -----------: | -------------------------------------------------------- |\n",
        "| `MEDIA_RESOLUTION_MEDIUM` (default) |          258 | higher precision, allows more detailed understanding     |\n",
        "| `MEDIA_RESOLUTION_LOW`              |           66 | faster and cheaper inference, allowing for longer videos |\n",
        "\n",
        "> 💡 The \"media resolution\" can be seen as the \"image token resolution\": the number of tokens used to represent an image.\n"
      ]
    },
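    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For example, a low-resolution configuration can be built as follows (a sketch based on the `google.genai` types; pass it to `generate_content` via the `config` parameter to trade precision for speed and cost):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from google.genai.types import GenerateContentConfig, MediaResolution\n",
        "\n",
        "# Same deterministic settings as DEFAULT_CONFIG, with low media resolution\n",
        "LOW_RES_CONFIG = GenerateContentConfig(\n",
        "    temperature=0.0,\n",
        "    top_p=0.0,\n",
        "    seed=42,\n",
        "    media_resolution=MediaResolution.MEDIA_RESOLUTION_LOW,  # 66 tokens/frame\n",
        ")\n",
        "\n",
        "# Example usage:\n",
        "# generate_content(prompt, video, config=LOW_RES_CONFIG)"
      ]
    },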
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GpWr_dwDhj36"
      },
      "source": [
        "### 🧮 Probabilities all the way down\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PzkrIwznqoN1"
      },
      "source": [
        "The ability of LLMs to communicate in flawless natural language is very impressive, but it's easy to get carried away and make incorrect assumptions.\n",
        "\n",
        "Keep in mind how LLMs work:\n",
        "\n",
        "- An LLM is trained on a massive tokenized dataset, which represents its knowledge (its long-term memory)\n",
        "- During the training, its neural network learns token patterns\n",
        "- When you send a request to an LLM, your inputs are transformed into tokens (tokenization)\n",
        "- To answer your request, the LLM predicts, token by token, the next likely tokens\n",
        "- Overall, LLMs are exceptional statistical token prediction machines that seem to mimic how some parts of our brain work\n",
        "\n",
        "This has a few consequences:\n",
        "\n",
        "- LLM outputs are just statistically likely follow-ups to your inputs\n",
        "- LLMs show some forms of reasoning: they can match complex patterns but have no actual deep understanding\n",
        "- LLMs have no consciousness: they are designed to generate tokens and will do so based on your instructions\n",
        "- Order matters: tokens that are generated first influence the tokens generated next\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "X-bN1DcQqoN2"
      },
      "source": [
        "For the next step, some methodical prompt crafting might help…\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5w2owFhrqoN2"
      },
      "source": [
        "---\n",
        "\n",
        "## 🏗️ Prompt crafting\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s3lIFIj5qoN2"
      },
      "source": [
        "### 🪜 Methodology\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jMklaeV3qoN2"
      },
      "source": [
        "Prompt crafting, also called prompt engineering, is a relatively new field. It involves designing and refining text instructions to guide LLMs towards generating desired outputs. Like writing, it is both an art and a science, a skill that everyone can develop with practice.\n",
        "\n",
        "We can find countless reference materials about prompt crafting. Some prompts can be very long, complex, and even scary. Crafting prompts with a high-performing LLM like Gemini is much simpler. Here are three key adjectives to keep in mind:\n",
        "\n",
        "- iterative\n",
        "- precise\n",
        "- concise\n",
        "\n",
        "**Iterative**\n",
        "\n",
        "Prompt crafting is typically an iterative process. Here are some recommendations:\n",
        "\n",
        "- Craft your prompt step by step\n",
        "- Keep track of your successive iterations\n",
        "- At every iteration, make sure to measure what's working versus what's not\n",
        "- If you reach a regression, backtrack to a successful iteration\n",
        "\n",
        "**Precise**\n",
        "\n",
        "Precision is key:\n",
        "\n",
        "- Use words as specific as possible\n",
        "- Words with multiple meanings can introduce variability, so use precise expressions\n",
        "- Precision will influence probabilities in your favor\n",
        "\n",
        "**Concise**\n",
        "\n",
        "Concision has additional advantages:\n",
        "\n",
        "- A short prompt is easier for us developers to understand (and maintain!)\n",
        "- The longer your prompt is, the more likely you are to introduce inconsistencies or even contradictions, which results in variable interpretations of your instructions\n",
        "- Test and trust the LLM's knowledge: this knowledge acts as an implicit context and can make your prompt shorter and clearer\n",
        "\n",
        "Overall, though this may seem contradictory, if you take the time to be iterative, precise, and concise, you are likely to save a lot of time.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Fq7RLbwohj36"
      },
      "source": [
        "> 💡 If you want to explore this topic, check out [Prompting strategies](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/prompt-design-strategies) (Google Cloud reference) and [Prompt engineering](https://www.kaggle.com/whitepaper-prompt-engineering) (68-page PDF by Lee Boonstra).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CWcKgdh9hj36"
      },
      "source": [
        "### 📚 Terminology\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EQOEU3wLqoN2"
      },
      "source": [
        "We're not experts in video transcription (yet!), but we want Gemini to behave like one. Consequently, we'd like to write prompts that are as specific as possible for this use case. While LLMs process instructions based on their training knowledge, they can also share this knowledge with us.\n",
        "\n",
        "We can learn a lot by directly asking Gemini:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "K8Q6dKvRqoN2"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "What is the terminology used for video transcriptions?\n",
        "Please show a typical output example.\n",
        "\"\"\"\n",
        "generate_content(prompt, show_as=ShowAs.MARKDOWN)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PpolKTaThj36"
      },
      "source": [
        "### 📝 Tabular extraction\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_KTjxDPoqoN2"
      },
      "source": [
        "So far, we've seen the following:\n",
        "\n",
        "- We didn't manage to get the full transcription with identified speakers all at once\n",
        "- Order matters (because a generated token influences the probabilities for subsequent tokens)\n",
        "\n",
        "To tackle our challenge, we need Gemini to infer from the following multimodal information:\n",
        "\n",
        "- text (our instructions + what may be written in the video)\n",
        "- audio cues (everything said or audible in the video's audio)\n",
        "- visual cues (everything visible in the video)\n",
        "- time (when things happen)\n",
        "\n",
        "That is quite a mixture of information types!\n",
        "\n",
        "As video transcription is a data extraction use case, if we think about the final result as a database, our final goal can be seen as the generation of two related tables (transcripts and speakers). If we write it down, our initial three sub-problems now look decoupled:\n",
        "\n",
        "![transcripts and speakers tables](https://storage.googleapis.com/github-repo/generative-ai/gemini/use-cases/video-analysis/multimodal_video_transcription/tabular-extraction-1.png)\n",
        "\n",
        "> 💡 In computer science, data decoupling enhances data locality, often yielding improved performance across areas such as cache utilization, data access, semantic understanding, or system maintenance. Within the LLM Transformer architecture, core performance relies heavily on the attention mechanism. However, the attention pool is finite and tokens compete for attention. Researchers sometimes refer to \"attention dilution\" for long-context, million-token-scale benchmarks. While we cannot directly debug LLMs as users, intuitively, data decoupling may improve the model's focus, leading to a better attention span.\n",
        "\n",
        "Since Gemini is extremely good with patterns, it can automatically generate identifiers to link our tables. In addition, since we eventually want an automated workflow, we can start reasoning in terms of data and fields:\n",
        "\n",
        "![transcripts and speakers tables with id](https://storage.googleapis.com/github-repo/generative-ai/gemini/use-cases/video-analysis/multimodal_video_transcription/tabular-extraction-2.png)\n",
        "\n",
        "Let's call this approach \"tabular extraction\", split our instructions into two tasks (tables), still in a single request, and arrange them in a meaningful order…\n"
      ]
    },
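    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the decoupling concrete, the two tables can be sketched as plain data structures. The field names below are illustrative only; the prompts in the following sections define the actual fields.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from dataclasses import dataclass\n",
        "\n",
        "\n",
        "@dataclass\n",
        "class Transcript:\n",
        "    start: str  # MM:SS timecode\n",
        "    voice: int  # links to Speaker.voice\n",
        "    text: str  # verbatim speech segment\n",
        "\n",
        "\n",
        "@dataclass\n",
        "class Speaker:\n",
        "    voice: int  # voice ID generated by the model\n",
        "    name: str  # speaker name, or \"?\" if not identified\n",
        "\n",
        "\n",
        "# Example rows linked by the `voice` ID\n",
        "transcript = Transcript(start=\"00:02\", voice=1, text=\"Welcome!\")\n",
        "speaker = Speaker(voice=1, name=\"John Doe\")"
      ]
    },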
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uUSUffpYhj36"
      },
      "source": [
        "### 💬 Transcripts\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6Fw5nQJ9qoN2"
      },
      "source": [
        "First of all, let's focus on getting the audio transcripts:\n",
        "\n",
        "- Gemini has proven to be natively good at audio transcription\n",
        "- This requires less inference than image analysis\n",
        "- It is central and independent information\n",
        "\n",
        "> 💡 Generating an output that starts with correct answers should help to achieve an overall correct output.\n",
        "\n",
        "We've also seen what a typical transcription entry can look like:\n",
        "\n",
        "`00:02 speaker_1: Welcome!`\n",
        "\n",
        "But, right away, there can be some ambiguities in our multimodal use case:\n",
        "\n",
        "- What is a speaker?\n",
        "- Is it someone we see/hear?\n",
        "- What if the person visible in the video is not the one speaking?\n",
        "- What if the person speaking is never seen in the video?\n",
        "\n",
        "How do we unconsciously identify who is speaking in a video?\n",
        "\n",
        "- First, probably by identifying the different voices on the fly?\n",
        "- Then, probably by consolidating additional audio and visual cues?\n",
        "\n",
        "Can Gemini understand voice characteristics?\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "BZpvtyEZqoN2"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "Using only the video's audio, list the following audible characteristics:\n",
        "- Voice tones\n",
        "- Voice pitches\n",
        "- Languages\n",
        "- Accents\n",
        "- Speaking styles\n",
        "\"\"\"\n",
        "video = TestVideo.GDM_PODCAST_TRAILER_PT59S\n",
        "\n",
        "generate_content(prompt, video, show_as=ShowAs.MARKDOWN)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nRy_zzFHqoN3"
      },
      "source": [
        "What about a French video?\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "x6bb7r_SqoN3"
      },
      "outputs": [],
      "source": [
        "video = TestVideo.BRUT_FR_DOGS_WATER_LEAK_PT8M28S\n",
        "\n",
        "generate_content(prompt, video, show_as=ShowAs.MARKDOWN)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "19qIqFbdqoN3"
      },
      "source": [
        "> ⚠️ We have to be cautious here: responses can consolidate multimodal information or even general knowledge. For example, if a person is famous, their name is most likely part of the LLM's knowledge. If they are known to be from the UK, a possible inference is that they have a British accent. This is why we made our prompt more specific by including \"using only the video's audio\".\n",
        "\n",
        "> 💡 If you conduct more tests, for example on private audio files (i.e., not part of common knowledge and with no additional visual cues), you'll see that Gemini's audio tokenizer performs exceptionally well and extracts semantic speech information!\n",
        "\n",
        "After a few iterations, we can arrive at a transcription prompt focusing on the audio and voices:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qWDIQMNmqoN3"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "Task:\n",
        "- Watch the video and listen carefully to the audio.\n",
        "- Identify the distinct voices using a `voice` ID (1, 2, 3, etc.).\n",
        "- Transcribe the video's audio verbatim with voice diarization.\n",
        "- Include the `start` timecode (MM:SS) for each speech segment.\n",
        "- Output a JSON array where each object has the following fields:\n",
        "  - `start`\n",
        "  - `text`\n",
        "  - `voice`\n",
        "\"\"\"\n",
        "video = TestVideo.GDM_PODCAST_TRAILER_PT59S\n",
        "\n",
        "generate_content(prompt, video, show_as=ShowAs.MARKDOWN)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oxiB4dK3qoN3"
      },
      "source": [
        "This is looking good! And if you test these instructions on more complex videos, you'll get similarly promising results.\n",
        "\n",
        "Notice how the prompt reuses cherry-picked terms from the terminology previously provided by Gemini, while aiming for precision and concision:\n",
        "\n",
        "- `verbatim` is unambiguous (unlike \"spoken words\")\n",
        "- `1, 2, 3, etc.` is an ellipsis (Gemini can infer the pattern)\n",
        "- `timecode` is specific (`timestamp` has more meanings)\n",
        "- `MM:SS` clarifies the timecode format\n",
        "\n",
        "> 💡 Gemini 2.0 was trained to understand the specific `MM:SS` timecode format. Gemini 2.5 also supports the `H:MM:SS` format for longer videos. For the latest updates, refer to the [video understanding documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/video-understanding).\n",
        "\n",
        "We're halfway there. Let's complete our database generation with a second task…\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DjSoAYomqoN3"
      },
      "source": [
        "### 🧑 Speakers\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "L4JRfMpOqoN3"
      },
      "source": [
        "The second task is pretty straightforward: we want to extract speaker information into a second table. The two tables are logically linked by the voice ID.\n",
        "\n",
        "After a few iterations, we can reach a two-task prompt like the following:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "52ysk17GqoN3"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "Generate a JSON object with keys `task1_transcripts` and `task2_speakers` for the following tasks.\n",
        "\n",
        "**Task 1 - Transcripts**\n",
        "\n",
        "- Watch the video and listen carefully to the audio.\n",
        "- Identify the distinct voices using a `voice` ID (1, 2, 3, etc.).\n",
        "- Transcribe the video's audio verbatim with voice diarization.\n",
        "- Include the `start` timecode (MM:SS) for each speech segment.\n",
        "- The `task1_transcripts` value is a JSON array where each object has the following fields:\n",
        "  - `start`\n",
        "  - `text`\n",
        "  - `voice`\n",
        "\n",
        "**Task 2 - Speakers**\n",
        "\n",
        "- For each `voice` ID from Task 1, extract the name of the corresponding speaker.\n",
        "- Use visual and audio cues.\n",
        "- If a speaker's name cannot be found, use `?` as the value.\n",
        "- The `task2_speakers` value is a JSON array where each object has the following fields:\n",
        "  - `voice`\n",
        "  - `name`\n",
        "\n",
        "JSON:\n",
        "\"\"\"\n",
        "video = TestVideo.GDM_PODCAST_TRAILER_PT59S\n",
        "\n",
        "generate_content(prompt, video, show_as=ShowAs.MARKDOWN)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BGslUhnAqoN3"
      },
      "source": [
        "Test this prompt on more complex videos: it's still looking good!\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "i68a20ekqoN4"
      },
      "source": [
        "---\n",
        "\n",
        "## 🚀 Finalization\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "slrAAtrwhj37"
      },
      "source": [
        "### 🧩 Structured output\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "h7hHAYumqoN4"
      },
      "source": [
        "We've iterated towards a precise and concise prompt. Now, we can focus on Gemini's response:\n",
        "\n",
        "- The response is plain text containing fenced code blocks\n",
        "- Instead, we'd like a structured output so we receive consistently formatted responses\n",
        "- Ideally, we'd also like to avoid having to parse the response, which can be a maintenance burden\n",
        "\n",
        "Getting structured outputs is an LLM feature also called \"controlled generation\". Since we've already crafted our prompt in terms of data tables and JSON fields, this is now a formality.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3PlFOzZ5GLM7"
      },
      "source": [
        "In our request, we can add the following parameters:\n",
        "\n",
        "- `response_mime_type=\"application/json\"`\n",
        "- `response_schema=\"YOUR_JSON_SCHEMA\"` ([docs](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/control-generated-output#fields))\n",
        "\n",
        "In Python, this gets even easier:\n",
        "\n",
        "- Use the `pydantic` library\n",
        "- Reflect your output structure with classes derived from `pydantic.BaseModel`\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "80OSaujFGcC4"
      },
      "source": [
        "We can simplify the prompt by removing the output specification parts:\n",
        "\n",
        "```markdown\n",
        "Generate a JSON object with keys `task1_transcripts` and `task2_speakers` for the following tasks.\n",
        "\n",
        "…\n",
        "\n",
        "- Output a JSON array where each object has the following fields:\n",
        "  - `start`\n",
        "  - `text`\n",
        "  - `voice`\n",
        "\n",
        "…\n",
        "\n",
        "- Output a JSON array where each object has the following fields:\n",
        "  - `voice`\n",
        "  - `name`\n",
        "```\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iL4_hgRPBxUn"
      },
      "source": [
        "… to move them to matching Python classes instead:\n",
        "\n",
        "```python\n",
        "import pydantic\n",
        "\n",
        "class Transcript(pydantic.BaseModel):\n",
        "    start: str\n",
        "    text: str\n",
        "    voice: int\n",
        "\n",
        "class Speaker(pydantic.BaseModel):\n",
        "    voice: int\n",
        "    name: str\n",
        "\n",
        "class VideoTranscription(pydantic.BaseModel):\n",
        "    task1_transcripts: list[Transcript] = pydantic.Field(default_factory=list)\n",
        "    task2_speakers: list[Speaker] = pydantic.Field(default_factory=list)\n",
        "```\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7Jji8A6iU2lR"
      },
      "source": [
        "… and request a structured response:\n",
        "\n",
        "```python\n",
        "response = client.models.generate_content(\n",
        "    # …\n",
        "    config=GenerateContentConfig(\n",
        "        # …\n",
        "        response_mime_type=\"application/json\",\n",
        "        response_schema=VideoTranscription,\n",
        "        # …\n",
        "    ),\n",
        ")\n",
        "```\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tJyFqav5U_Iz"
      },
      "source": [
        "Finally, retrieving the objects from the response is also straightforward:\n",
        "\n",
        "```python\n",
        "if isinstance(response.parsed, VideoTranscription):\n",
        "    video_transcription = response.parsed\n",
        "else:\n",
        "    video_transcription = VideoTranscription()  # Empty transcription\n",
        "```\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pqXL2ZqxVIUL"
      },
      "source": [
        "The interesting aspects of this approach are the following:\n",
        "\n",
        "- The prompt focuses on the logic and the classes focus on the output\n",
        "- It's easier to update and maintain typed classes\n",
        "- The JSON schema is automatically generated by the Gen AI SDK from the class provided in `response_schema` and dispatched to Gemini\n",
        "- The response is automatically parsed by the Gen AI SDK and deserialized into the corresponding Python objects\n",
        "\n",
        "> ⚠️ If you keep output specifications in your prompt, ensure there are no contradictions between the prompt and the schema (e.g., same field names and order), as this can negatively impact the quality of the responses.\n",
        "\n",
        "> 💡 It's possible to have more structural information directly in the schema (e.g., detailed field definitions). See [Controlled generation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/control-generated-output).\n"
      ]
    },
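    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the schema generation concrete, here's a minimal sketch (reusing the classes above) that inspects the JSON schema `pydantic` derives from the `response_schema` class. This is essentially what the Gen AI SDK dispatches to Gemini:\n",
        "\n",
        "```python\n",
        "import pydantic\n",
        "\n",
        "\n",
        "class Transcript(pydantic.BaseModel):\n",
        "    start: str\n",
        "    text: str\n",
        "    voice: int\n",
        "\n",
        "\n",
        "class Speaker(pydantic.BaseModel):\n",
        "    voice: int\n",
        "    name: str\n",
        "\n",
        "\n",
        "class VideoTranscription(pydantic.BaseModel):\n",
        "    task1_transcripts: list[Transcript] = pydantic.Field(default_factory=list)\n",
        "    task2_speakers: list[Speaker] = pydantic.Field(default_factory=list)\n",
        "\n",
        "\n",
        "# Top-level fields end up in `properties`, nested models in `$defs`\n",
        "schema = VideoTranscription.model_json_schema()\n",
        "print(sorted(schema[\"properties\"]))  # ['task1_transcripts', 'task2_speakers']\n",
        "print(sorted(schema[\"$defs\"]))  # ['Speaker', 'Transcript']\n",
        "```\n"
      ]
    },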
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4_WXb2Lmhj37"
      },
      "source": [
        "### ✨ Implementation\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DaCqQHfhhj37"
      },
      "source": [
        "Let's finalize our code. Now that we have a stable prompt, we can also enrich our solution to extract each speaker's `company`, `position`, and `role_in_video`:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "RePBxIKqqoN4"
      },
      "outputs": [],
      "source": [
        "import re\n",
        "\n",
        "import pydantic\n",
        "from google.genai.types import MediaResolution, ThinkingConfig\n",
        "\n",
        "SamplingFrameRate = float\n",
        "NOT_FOUND = \"?\"\n",
        "VIDEO_TRANSCRIPTION_PROMPT = f\"\"\"\n",
        "**Task 1 - Transcripts**\n",
        "\n",
        "- Watch the video and listen carefully to the audio.\n",
        "- Identify the distinct voices using a `voice` ID (1, 2, 3, etc.).\n",
        "- Transcribe the video's audio verbatim with voice diarization.\n",
        "- Include the `start` timecode ({{timecode_spec}}) for each speech segment.\n",
        "\n",
        "**Task 2 - Speakers**\n",
        "\n",
        "- For each `voice` ID from Task 1, extract information about the corresponding speaker.\n",
        "- Use visual and audio cues.\n",
        "- If a piece of information cannot be found, use `{NOT_FOUND}` as the value.\n",
        "\"\"\"\n",
        "\n",
        "\n",
        "class Transcript(pydantic.BaseModel):\n",
        "    start: str\n",
        "    text: str\n",
        "    voice: int\n",
        "\n",
        "\n",
        "class Speaker(pydantic.BaseModel):\n",
        "    voice: int\n",
        "    name: str\n",
        "    company: str\n",
        "    position: str\n",
        "    role_in_video: str\n",
        "\n",
        "\n",
        "class VideoTranscription(pydantic.BaseModel):\n",
        "    task1_transcripts: list[Transcript] = pydantic.Field(default_factory=list)\n",
        "    task2_speakers: list[Speaker] = pydantic.Field(default_factory=list)\n",
        "\n",
        "\n",
        "def get_generate_content_config(model: Model, video: Video) -> GenerateContentConfig:\n",
        "    media_resolution = get_media_resolution_for_video(video)\n",
        "    thinking_config = get_thinking_config(model)\n",
        "\n",
        "    return GenerateContentConfig(\n",
        "        temperature=DEFAULT_CONFIG.temperature,\n",
        "        top_p=DEFAULT_CONFIG.top_p,\n",
        "        seed=DEFAULT_CONFIG.seed,\n",
        "        response_mime_type=\"application/json\",\n",
        "        response_schema=VideoTranscription,\n",
        "        media_resolution=media_resolution,\n",
        "        thinking_config=thinking_config,\n",
        "    )\n",
        "\n",
        "\n",
        "def get_video_duration(video: Video) -> timedelta | None:\n",
        "    # For testing purposes, video duration is statically specified in the enum name\n",
        "    # Suffix (ISO 8601 based): _PT[<h>H][<m>M][<s>S]\n",
        "    # For production,\n",
        "    # - fetch durations dynamically or store them separately\n",
        "    # - take into account VideoMetadata.start_offset & VideoMetadata.end_offset\n",
        "    regex = r\"_PT(?:(\\d+)H)?(?:(\\d+)M)?(?:(\\d+)S)?$\"\n",
        "    if not (match := re.search(regex, video.name)):\n",
        "        print(f\"⚠️ No duration info in {video.name}. Will use defaults.\")\n",
        "        return None\n",
        "\n",
        "    h_str, m_str, s_str = match.groups()\n",
        "    return timedelta(\n",
        "        hours=int(h_str or 0), minutes=int(m_str or 0), seconds=int(s_str or 0)\n",
        "    )\n",
        "\n",
        "\n",
        "def get_media_resolution_for_video(video: Video) -> MediaResolution | None:\n",
        "    if not (video_duration := get_video_duration(video)):\n",
        "        return None  # Default\n",
        "\n",
        "    # For testing purposes, this is based on video duration, as our short videos tend to be more detailed\n",
        "    less_than_five_minutes = video_duration < timedelta(minutes=5)\n",
        "    if less_than_five_minutes:\n",
        "        media_resolution = MediaResolution.MEDIA_RESOLUTION_MEDIUM\n",
        "    else:\n",
        "        media_resolution = MediaResolution.MEDIA_RESOLUTION_LOW\n",
        "\n",
        "    return media_resolution\n",
        "\n",
        "\n",
        "def get_sampling_frame_rate_for_video(video: Video) -> SamplingFrameRate | None:\n",
        "    sampling_frame_rate = None  # Default (1 FPS for current models)\n",
        "\n",
        "    # [Optional] Define a custom FPS: 0.0 < sampling_frame_rate <= 24.0\n",
        "\n",
        "    return sampling_frame_rate\n",
        "\n",
        "\n",
        "def get_timecode_spec_for_model_and_video(model: Model, video: Video) -> str:\n",
        "    timecode_spec = \"MM:SS\"  # Default\n",
        "\n",
        "    match model:\n",
        "        case Model.GEMINI_2_0_FLASH:  # Supports MM:SS\n",
        "            pass\n",
        "        case Model.GEMINI_2_5_FLASH | Model.GEMINI_2_5_PRO:  # Support MM:SS and H:MM:SS\n",
        "            duration = get_video_duration(video)\n",
        "            one_hour_or_more = duration is not None and timedelta(hours=1) <= duration\n",
        "            if one_hour_or_more:\n",
        "                timecode_spec = \"MM:SS or H:MM:SS\"\n",
        "        case _:\n",
        "            raise NotImplementedError(f\"Undefined timecode spec for {model.name}.\")\n",
        "\n",
        "    return timecode_spec\n",
        "\n",
        "\n",
        "def get_thinking_config(model: Model) -> ThinkingConfig | None:\n",
        "    # Examples of thinking configurations (Gemini 2.5 models)\n",
        "    match model:\n",
        "        case Model.GEMINI_2_5_FLASH:  # Thinking disabled\n",
        "            return ThinkingConfig(thinking_budget=0, include_thoughts=False)\n",
        "        case Model.GEMINI_2_5_PRO:  # Minimum thinking budget and no summarized thoughts\n",
        "            return ThinkingConfig(thinking_budget=128, include_thoughts=False)\n",
        "        case _:\n",
        "            return None  # Default\n",
        "\n",
        "\n",
        "def get_video_transcription_from_response(\n",
        "    response: GenerateContentResponse,\n",
        ") -> VideoTranscription:\n",
        "    if isinstance(response.parsed, VideoTranscription):\n",
        "        return response.parsed\n",
        "\n",
        "    print(\"❌ Could not parse the JSON response\")\n",
        "    return VideoTranscription()  # Empty transcription\n",
        "\n",
        "\n",
        "def get_video_transcription(\n",
        "    video: Video,\n",
        "    video_segment: VideoSegment | None = None,\n",
        "    fps: float | None = None,\n",
        "    prompt: str | None = None,\n",
        "    model: Model | None = None,\n",
        ") -> VideoTranscription:\n",
        "    model = model or Model.DEFAULT\n",
        "    model_id = model.value\n",
        "\n",
        "    fps = fps or get_sampling_frame_rate_for_video(video)\n",
        "    video_part = get_video_part(video, video_segment, fps)\n",
        "    if not video_part:  # Unsupported source, return an empty transcription\n",
        "        return VideoTranscription()\n",
        "    if prompt is None:\n",
        "        timecode_spec = get_timecode_spec_for_model_and_video(model, video)\n",
        "        prompt = VIDEO_TRANSCRIPTION_PROMPT.format(timecode_spec=timecode_spec)\n",
        "    contents = [video_part, prompt.strip()]\n",
        "\n",
        "    config = get_generate_content_config(model, video)\n",
        "\n",
        "    print(f\" {video.name} / {model_id} \".center(80, \"-\"))\n",
        "    response = None\n",
        "    for attempt in get_retrier():\n",
        "        with attempt:\n",
        "            response = client.models.generate_content(\n",
        "                model=model_id,\n",
        "                contents=contents,\n",
        "                config=config,\n",
        "            )\n",
        "            display_response_info(response)\n",
        "\n",
        "    assert isinstance(response, GenerateContentResponse)\n",
        "    return get_video_transcription_from_response(response)"
      ]
    },
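    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check, here's how the duration-suffix convention parsed by `get_video_duration` resolves for one of the test video names (a standalone sketch using only the standard library):\n",
        "\n",
        "```python\n",
        "import re\n",
        "from datetime import timedelta\n",
        "\n",
        "# Suffix (ISO 8601 based): _PT[<h>H][<m>M][<s>S]\n",
        "regex = r\"_PT(?:(\\d+)H)?(?:(\\d+)M)?(?:(\\d+)S)?$\"\n",
        "\n",
        "match = re.search(regex, \"BRUT_FR_DOGS_WATER_LEAK_PT8M28S\")\n",
        "assert match\n",
        "h_str, m_str, s_str = match.groups()  # (None, '8', '28')\n",
        "duration = timedelta(\n",
        "    hours=int(h_str or 0), minutes=int(m_str or 0), seconds=int(s_str or 0)\n",
        ")\n",
        "print(duration)  # 0:08:28\n",
        "```\n"
      ]
    },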
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "j5s3EKrKqoN4"
      },
      "source": [
        "Test it:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "j0_pIQVHqoN4"
      },
      "outputs": [],
      "source": [
        "def test_structured_video_transcription(video: Video) -> None:\n",
        "    transcription = get_video_transcription(video)\n",
        "\n",
        "    print(\"-\" * 80)\n",
        "    print(f\"Transcripts : {len(transcription.task1_transcripts):3d}\")\n",
        "    print(f\"Speakers    : {len(transcription.task2_speakers):3d}\")\n",
        "    for speaker in transcription.task2_speakers:\n",
        "        print(f\"- {speaker}\")\n",
        "\n",
        "\n",
        "test_structured_video_transcription(TestVideo.GDM_PODCAST_TRAILER_PT59S)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Fa1MrRKsqoN4"
      },
      "source": [
        "### 📊 Data visualization\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tcGct03AqoN4"
      },
      "source": [
        "We started prototyping in natural language, crafted a prompt, and generated a structured output. Since reading raw data can be cumbersome, we can now present video transcriptions in a more visually appealing way.\n",
        "\n",
        "Here's a possible orchestrator function:\n",
        "\n",
        "```python\n",
        "def transcribe_video(video: Video, …) -> None:\n",
        "    display_video(video)\n",
        "    transcription = get_video_transcription(video, …)\n",
        "    display_speakers(transcription)\n",
        "    display_transcripts(transcription)\n",
        "```\n",
        "\n",
        "Let's add some data visualization functions…\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "W5u23XdeqoN4"
      },
      "outputs": [],
      "source": [
        "# @title {display-mode: \"form\"}\n",
        "\n",
        "import itertools\n",
        "from collections.abc import Callable, Iterator\n",
        "\n",
        "from pandas import DataFrame, Series\n",
        "from pandas.io.formats.style import Styler\n",
        "from pandas.io.formats.style_render import CSSDict\n",
        "\n",
        "BGCOLOR_COLUMN = \"bg_color\"  # Hidden column to store row background colors\n",
        "\n",
        "\n",
        "def yield_known_speaker_color() -> Iterator[str]:\n",
        "    PAL_40 = (\"#669DF6\", \"#EE675C\", \"#FCC934\", \"#5BB974\")\n",
        "    PAL_30 = (\"#8AB4F8\", \"#F28B82\", \"#FDD663\", \"#81C995\")\n",
        "    PAL_20 = (\"#AECBFA\", \"#F6AEA9\", \"#FDE293\", \"#A8DAB5\")\n",
        "    PAL_10 = (\"#D2E3FC\", \"#FAD2CF\", \"#FEEFC3\", \"#CEEAD6\")\n",
        "    PAL_05 = (\"#E8F0FE\", \"#FCE8E6\", \"#FEF7E0\", \"#E6F4EA\")\n",
        "    return itertools.cycle([*PAL_40, *PAL_30, *PAL_20, *PAL_10, *PAL_05])\n",
        "\n",
        "\n",
        "def yield_unknown_speaker_color() -> Iterator[str]:\n",
        "    GRAYS = [\"#80868B\", \"#9AA0A6\", \"#BDC1C6\", \"#DADCE0\", \"#E8EAED\", \"#F1F3F4\"]\n",
        "    return itertools.cycle(GRAYS)\n",
        "\n",
        "\n",
        "def get_color_for_voice_mapping(speakers: list[Speaker]) -> dict[int, str]:\n",
        "    known_speaker_color = yield_known_speaker_color()\n",
        "    unknown_speaker_color = yield_unknown_speaker_color()\n",
        "\n",
        "    mapping: dict[int, str] = {}\n",
        "    for speaker in speakers:\n",
        "        if speaker.name != NOT_FOUND:\n",
        "            color = next(known_speaker_color)\n",
        "        else:\n",
        "            color = next(unknown_speaker_color)\n",
        "        mapping[speaker.voice] = color\n",
        "\n",
        "    return mapping\n",
        "\n",
        "\n",
        "def get_table_styler(df: DataFrame) -> Styler:\n",
        "    def join_styles(styles: list[str]) -> str:\n",
        "        return \";\".join(styles)\n",
        "\n",
        "    table_css = [\n",
        "        \"color: #202124\",\n",
        "        \"background-color: #BDC1C6\",\n",
        "        \"border: 0\",\n",
        "        \"border-radius: 0.5rem\",\n",
        "        \"border-spacing: 0px\",\n",
        "        \"outline: 0.5rem solid #BDC1C6\",\n",
        "        \"margin: 1rem 0.5rem\",\n",
        "    ]\n",
        "    th_css = [\"background-color: #E8EAED\"]\n",
        "    th_td_css = [\"text-align:left\", \"padding: 0.25rem 1rem\"]\n",
        "    table_styles = [\n",
        "        CSSDict(selector=\"\", props=join_styles(table_css)),\n",
        "        CSSDict(selector=\"th\", props=join_styles(th_css)),\n",
        "        CSSDict(selector=\"th,td\", props=join_styles(th_td_css)),\n",
        "    ]\n",
        "\n",
        "    return df.style.set_table_styles(table_styles).hide()\n",
        "\n",
        "\n",
        "def change_row_bgcolor(row: Series) -> list[str]:\n",
        "    style = f\"background-color:{row[BGCOLOR_COLUMN]}\"\n",
        "    return [style] * len(row)\n",
        "\n",
        "\n",
        "def display_table(yield_rows: Callable[[], Iterator[list[str]]]) -> None:\n",
        "    data = yield_rows()\n",
        "    df = DataFrame(columns=next(data), data=data)\n",
        "    styler = get_table_styler(df)\n",
        "    styler.apply(change_row_bgcolor, axis=1)\n",
        "    styler.hide([BGCOLOR_COLUMN], axis=\"columns\")\n",
        "\n",
        "    html = styler.to_html()\n",
        "    IPython.display.display(IPython.display.HTML(html))\n",
        "\n",
        "\n",
        "def display_speakers(transcription: VideoTranscription) -> None:\n",
        "    def sanitize_field(s: str, symbol_if_unknown: str) -> str:\n",
        "        return symbol_if_unknown if s == NOT_FOUND else s\n",
        "\n",
        "    def yield_rows() -> Iterator[list[str]]:\n",
        "        yield [\"voice\", \"name\", \"company\", \"position\", \"role_in_video\", BGCOLOR_COLUMN]\n",
        "\n",
        "        color_for_voice = get_color_for_voice_mapping(transcription.task2_speakers)\n",
        "        for speaker in transcription.task2_speakers:\n",
        "            yield [\n",
        "                str(speaker.voice),\n",
        "                sanitize_field(speaker.name, NOT_FOUND),\n",
        "                sanitize_field(speaker.company, NOT_FOUND),\n",
        "                sanitize_field(speaker.position, NOT_FOUND),\n",
        "                sanitize_field(speaker.role_in_video, NOT_FOUND),\n",
        "                color_for_voice.get(speaker.voice, \"red\"),\n",
        "            ]\n",
        "\n",
        "    display_markdown(f\"### Speakers ({len(transcription.task2_speakers)})\")\n",
        "    display_table(yield_rows)\n",
        "\n",
        "\n",
        "def display_transcripts(transcription: VideoTranscription) -> None:\n",
        "    def yield_rows() -> Iterator[list[str]]:\n",
        "        yield [\"start\", \"speaker\", \"transcript\", BGCOLOR_COLUMN]\n",
        "\n",
        "        color_for_voice = get_color_for_voice_mapping(transcription.task2_speakers)\n",
        "        speaker_for_voice = {\n",
        "            speaker.voice: speaker for speaker in transcription.task2_speakers\n",
        "        }\n",
        "        previous_voice = None\n",
        "        for transcript in transcription.task1_transcripts:\n",
        "            current_voice = transcript.voice\n",
        "            speaker_label = \"\"\n",
        "            if speaker := speaker_for_voice.get(current_voice):\n",
        "                if speaker.name != NOT_FOUND:\n",
        "                    speaker_label = speaker.name\n",
        "                elif speaker.position != NOT_FOUND:\n",
        "                    speaker_label = f\"[voice {current_voice}][{speaker.position}]\"\n",
        "                elif speaker.role_in_video != NOT_FOUND:\n",
        "                    speaker_label = f\"[voice {current_voice}][{speaker.role_in_video}]\"\n",
        "            if not speaker_label:\n",
        "                speaker_label = f\"[voice {current_voice}]\"\n",
        "            yield [\n",
        "                transcript.start,\n",
        "                speaker_label if current_voice != previous_voice else '\"',\n",
        "                transcript.text,\n",
        "                color_for_voice.get(current_voice, \"red\"),\n",
        "            ]\n",
        "            previous_voice = current_voice\n",
        "\n",
        "    display_markdown(f\"### Transcripts ({len(transcription.task1_transcripts)})\")\n",
        "    display_table(yield_rows)\n",
        "\n",
        "\n",
        "def transcribe_video(\n",
        "    video: Video,\n",
        "    video_segment: VideoSegment | None = None,\n",
        "    fps: float | None = None,\n",
        "    prompt: str | None = None,\n",
        "    model: Model | None = None,\n",
        ") -> None:\n",
        "    display_video(video)\n",
        "    transcription = get_video_transcription(video, video_segment, fps, prompt, model)\n",
        "    display_speakers(transcription)\n",
        "    display_transcripts(transcription)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DSgqZuLDqoN4"
      },
      "source": [
        "---\n",
        "\n",
        "## ✅ Challenge complete\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mJ91H4VWYMQk"
      },
      "source": [
        "### 🎬 Short video\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zj2LutM-hj39"
      },
      "source": [
        "This video is a trailer for the Google DeepMind podcast. It features a fast-paced montage of 6 interviews. The multimodal transcription is excellent:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GPSuRE70Yhuk"
      },
      "outputs": [],
      "source": [
        "transcribe_video(TestVideo.GDM_PODCAST_TRAILER_PT59S)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "josZMk6UqoN5"
      },
      "source": [
        "### 🎬 Narrator-only video\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JyCvdvAMhj39"
      },
      "source": [
        "This video is a documentary that takes viewers on a virtual tour of the Gombe National Park in Tanzania. There's no visible speaker. Jane Goodall is correctly detected as the narrator; her name is extracted from the credits:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9a0squK0qoN5"
      },
      "outputs": [],
      "source": [
        "transcribe_video(TestVideo.JANE_GOODALL_PT2M42S)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ceCW9orEhj39"
      },
      "source": [
        "> 💡 Over the past few years, I have regularly used this video to test specialized ML models, and these tests consistently resulted in various types of errors. Gemini's transcription, including punctuation, is perfect.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YJ_xX8DHqoN5"
      },
      "source": [
        "### 🎬 French video\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vMSycFsdhj39"
      },
      "source": [
        "This French report follows a specialized team that uses trained dogs to detect leaks in underground drinking water pipes. The recording takes place entirely outdoors in a rural setting. The interviewed workers are introduced with on-screen text overlays. The audio, captured live on location, includes ambient noise. There are also some off-screen or unidentified speakers. This video is rather complex. The multimodal transcription provides excellent results with no false positives:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "UYd_6Am0qoN5"
      },
      "outputs": [],
      "source": [
        "transcribe_video(TestVideo.BRUT_FR_DOGS_WATER_LEAK_PT8M28S)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Obb6-g_phj39"
      },
      "source": [
        "> 💡 Our prompt was crafted and tested with English videos, but it works without modification with this French video. It should also work for videos in these [100+ different languages](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#languages-gemini).\n",
        "\n",
        "> 💡 In a multilingual solution, we might ask Gemini to translate our transcriptions into any of those 100+ languages and even perform text cleanup. This can be done in a second request, as the multimodal transcription is complex enough by itself.\n",
        "\n",
        "> 💡 Gemini's audio tokenizer detects more than speech. If you try to list non-speech sounds on audio tracks only (to ensure the response doesn't benefit from any visual cues), you'll see it can detect sounds such as \"dog bark\", \"music\", \"sound effect\", \"footsteps\", \"laughter\", \"applause\"…\n",
        "\n",
        "> 💡 In our data visualization tables, colored rows are inference positives (speakers identified by the model), while gray rows correspond to negatives (unidentified speakers). This makes it easier to understand the results. As the prompt we crafted favors accuracy over recall, colored rows are generally correct, and gray rows correspond either to unnamed/unidentifiable speakers (true negatives) or to speakers that should have been identified (false negatives).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XeC7dVBsqoN5"
      },
      "source": [
        "### 🎬 Complex video\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fDSSSGIphj3-"
      },
      "source": [
        "This Google DeepMind video is quite complex:\n",
        "\n",
        "- It is highly edited and very dynamic\n",
        "- Speakers are often off-screen and other people can be visible instead\n",
        "- The researchers are often in groups and it's not always obvious who's speaking\n",
        "- Some video shots were taken 2 years apart: the same speakers can sound and look different!\n",
        "\n",
        "Gemini 2.0 Flash generates an excellent transcription. However, the complexity of the video can lead to some missed consolidations. Gemini 2.5 Pro demonstrates deeper inference and manages to consolidate the differently-looking-and-sounding speakers:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "64-dl-B7ln_r"
      },
      "outputs": [],
      "source": [
        "transcribe_video(\n",
        "    TestVideo.GDM_ALPHAFOLD_PT7M54S,\n",
        "    model=Model.GEMINI_2_5_PRO,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1584917d8ffe"
      },
      "source": [
        "### 🎬 Long transcription\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6EgD258nln_r"
      },
      "source": [
        "The total length of the transcribed text can quickly reach the maximum number of output tokens. With our current JSON response schema, we can reach 8,192 output tokens (supported by Gemini 2.0) with transcriptions of ~25min videos. Gemini 2.5 models support up to 65,536 output tokens (8x more) and let us transcribe longer videos.\n",
        "\n",
        "For this 54-minute panel discussion, Gemini 2.5 Pro uses only ~30-35% of the input/output token limits:\n"
      ]
    },
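    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a rough sanity check before picking a model, you can estimate whether a transcript fits in the output budget. This is only a sketch with assumed rates (~150 spoken words per minute, ~1.3 tokens per word, plus a JSON field overhead factor), not an official formula:\n",
        "\n",
        "```python\n",
        "OUTPUT_LIMIT_2_0 = 8_192  # Gemini 2.0 max output tokens\n",
        "OUTPUT_LIMIT_2_5 = 65_536  # Gemini 2.5 max output tokens\n",
        "\n",
        "\n",
        "def estimated_output_tokens(\n",
        "    video_minutes: float,\n",
        "    words_per_minute: float = 150.0,  # assumed speech rate\n",
        "    tokens_per_word: float = 1.3,  # assumed tokenization ratio\n",
        "    overhead_ratio: float = 1.3,  # assumed JSON field/timecode overhead\n",
        ") -> int:\n",
        "    # Crude estimate of the transcript size in output tokens\n",
        "    return int(video_minutes * words_per_minute * tokens_per_word * overhead_ratio)\n",
        "\n",
        "\n",
        "# A ~25 min video sits near the Gemini 2.0 limit; longer videos need Gemini 2.5\n",
        "print(estimated_output_tokens(25), estimated_output_tokens(54))\n",
        "```\n"
      ]
    },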
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "oTksIBcvln_r"
      },
      "outputs": [],
      "source": [
        "transcribe_video(\n",
        "    TestVideo.GDM_AI_FOR_SCIENCE_FRONTIER_PT54M23S,\n",
        "    model=Model.GEMINI_2_5_PRO,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6nyN608L1EAz"
      },
      "source": [
        "> 💡 In this long video, the five panelists are correctly transcribed, diarized, and identified. In the second half of the video, unseen attendees ask questions to the panel. They are correctly identified as audience members and, though their names and companies are never written on the screen, Gemini correctly extracts and even consolidates the information from the audio cues.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Z23MVFlAhdvj"
      },
      "source": [
        "### 🎬 1h+ video\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "citR2M-C-CF5"
      },
      "source": [
        "In the latest Google I/O keynote video (1h 10min):\n",
        "\n",
        "- ~35-40% of the token limit is used (383k/1M in, 25/64k out)\n",
        "- The dozen speakers are nicely identified, including the demo \"AI Voices\" (esp. \"Casey\")\n",
        "- Speaker names are extracted from slanted text on the background screen for the live keynote speakers (e.g., Josh Woodward at 0:07) and from lower-third on-screen text in the DolphinGemma reportage (e.g., Dr. Denise Herzing at 1:05:28)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2ZoTbx-lhdvj"
      },
      "outputs": [],
      "source": [
        "transcribe_video(\n",
        "    TestVideo.GOOGLE_IO_DEV_KEYNOTE_PT1H10M03S,\n",
        "    model=Model.GEMINI_2_5_PRO,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Hg3f5Wcrhdvj"
      },
      "source": [
        "### 🎬 40-speaker video\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "O0w3xU-p-CF5"
      },
      "source": [
        "In this 1h 40min Google Cloud Next keynote video:\n",
        "\n",
        "- ~50-70% of the token limit is used (547k/1M in, 45/64k out)\n",
        "- 40 distinct voices are diarized\n",
        "- 29 speakers are identified, connected to their 21 respective companies or divisions\n",
        "- The transcription takes up to 8 minutes (approximately 4 minutes with video tokens cached), which is 13 to 23 times faster than watching the entire video without pauses.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "iLOUQVIJhdvj"
      },
      "outputs": [],
      "source": [
        "transcribe_video(\n",
        "    TestVideo.GOOGLE_CLOUD_NEXT_PT1H40M03S,\n",
        "    model=Model.GEMINI_2_5_PRO,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ymRS4G5yqoN5"
      },
      "source": [
        "### 🎬 Transcribe your videos\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8fK9OcMCqoN5"
      },
      "outputs": [],
      "source": [
        "class MyVideo(Video):\n",
        "    # Templates for supported video sources\n",
        "    # For testing purposes, video duration is statically specified in the enum name\n",
        "    # Examples: MY_VIDEO_PT42S, MY_VIDEO_PT4M56S, MY_VIDEO_PT1H23M45S\n",
        "\n",
        "    # YouTube video (Vertex AI and Google AI Studio)\n",
        "    # A_PTxHyMzS = url_for_youtube_id(\"\")\n",
        "    # Cloud Storage URL (Vertex AI only)\n",
        "    # B_PTxHyMzS = \"gs://bucket/path/to/video.*\"\n",
        "    # HTTPS URL (Vertex AI only)\n",
        "    # C_PTxHyMzS = \"https://path/to/video.*\"\n",
        "\n",
        "    # Add your own videos\n",
        "    ...\n",
        "\n",
        "\n",
        "video = None\n",
        "# video = MyVideo.\n",
        "\n",
        "# Whole video\n",
        "video_segment = None\n",
        "# Only a video segment\n",
        "# video_segment = VideoSegment(start=timedelta(minutes=0), end=timedelta(minutes=2))\n",
        "\n",
        "# Standard 0-25min video (up to 8,192 output tokens)\n",
        "model = Model.GEMINI_2_0_FLASH\n",
        "# Standard 25-60min video (up to 65,536 output tokens)\n",
        "# model=Model.GEMINI_2_5_FLASH\n",
        "# Complex video or 1h+ video (up to 65,536 output tokens)\n",
        "# model=Model.GEMINI_2_5_PRO\n",
        "\n",
        "if video is not None:\n",
        "    transcribe_video(video, video_segment, model=model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_tyq90-4hj38"
      },
      "source": [
        "---\n",
        "\n",
        "## ⚖️ Strengths & weaknesses\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aBf2ywEu-CF6"
      },
      "source": [
        "### 👍 Strengths\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "okiqeIMk-CF6"
      },
      "source": [
        "Overall, Gemini is capable of generating excellent transcriptions that surpass human-generated ones in these aspects:\n",
        "\n",
        "- Consistency of the transcription\n",
        "- Impressive semantic understanding\n",
        "- Highly accurate grammar and punctuation\n",
        "- No typos or transcription system mistakes\n",
        "- Exhaustiveness (every audible word is transcribed)\n",
        "\n",
        "> 💡 As you know, a single incorrect/missing word (or even letter) can completely change the meaning. These strengths help ensure high-quality transcriptions and reduce the risk of misunderstandings.\n",
        "\n",
        "If we compare YouTube's user-provided transcriptions (sometimes by professional caption vendors) to our auto-generated ones, we can observe some significant differences. Here are some examples from the last test:\n",
        "\n",
        "| timecode | ❌ user-provided                        | ✅ our transcription                            |\n",
        "| -------: | --------------------------------------- | ----------------------------------------------- |\n",
        "|     9:47 | research and **models**                 | research and **model**                          |\n",
        "|    13:32 | used **by 100,000** businesses          | used **by over 100,000** businesses             |\n",
        "|    18:19 | infrastructure core **layer**           | infrastructure core **for AI**                  |\n",
        "|    20:21 | hardware **system**                     | hardware **generation**                         |\n",
        "|    23:42 | **I do** deployed ML models             | **Toyota** deployed ML models                   |\n",
        "|    34:17 | Vertex **video**                        | Vertex **Media**                                |\n",
        "|    41:11 | speed up **app** development            | speed up **application coding and** development |\n",
        "|    42:15 | performance **and proven** insights     | performance **improvement** insights            |\n",
        "|    50:20 | across the **milt** agent ecosystem     | across the **multi-agent** ecosystem            |\n",
        "|    52:50 | Salesforce, **and** Dun                 | Salesforce, **or** Dun                          |\n",
        "|  1:22:28 | please **almost**                       | Please **welcome**                              |\n",
        "|  1:31:07 | organizations, **like I say Charles**   | organizations **like Charles**                  |\n",
        "|  1:33:23 | multiple public **LOMs**                | multiple public **LLMs**                        |\n",
        "|  1:33:54 | Gemini's **Agent tech** AI              | Gemini's **agentic** AI                         |\n",
        "|  1:34:24 | mitigated **outsider** risk             | mitigated **insider** risk                      |\n",
        "|  1:35:58 | from **end point**, **viral**, networks | from **endpoint**, **firewall**, networks       |\n",
        "|  1:38:45 | We at **Google** are                    | We at **Google Cloud** are                      |\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "t3Ey-xra-CF6"
      },
      "source": [
        "### 👎 Weaknesses\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MoF5HIth-CF7"
      },
      "source": [
        "The current prompt isn't perfect, though. It focuses first on the audio for transcription and then on all cues for speaker data extraction. Though Gemini natively achieves a high level of consolidation from the context, the prompt can lead to these side effects:\n",
        "\n",
        "- Sensitivity to speakers' pronunciation or accent\n",
        "- Misspellings for proper nouns\n",
        "- Inconsistencies between the transcription and a perfectly identified speaker's name\n",
        "\n",
        "Here are examples from the same test:\n",
        "\n",
        "| timecode | ✅ user-provided  | ❌ our transcription |\n",
        "| -------: | ----------------- | -------------------- |\n",
        "|     3:31 | Bosun             | Boson                |\n",
        "|     3:52 | Imagen            | Imagine              |\n",
        "|     3:52 | Veo               | VO                   |\n",
        "|    11:15 | Berman            | Burman               |\n",
        "|    25:06 | Huang             | Wang                 |\n",
        "|    38:58 | Allegiant Stadium | Allegiance Stadium   |\n",
        "|  1:29:07 | Snyk              | Sneak                |\n",
        "\n",
        "We'll stop our exploration here and leave the fixes as an exercise, but here are possible ways to address these errors, from simplest and cheapest to most involved:\n",
        "\n",
        "- Update the prompt to use visual cues for proper nouns, such as _\"Ensure all proper nouns (people, companies, products, etc.) are spelled correctly and consistently. Prioritize on-screen text for reference.\"_\n",
        "- Enrich the prompt with an additional preliminary table to extract the proper nouns and use them explicitly in the context\n",
        "- Add available video context metadata in the prompt\n",
        "- Split the prompt into two successive requests\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tz9xIZL6-CF7"
      },
      "source": [
        "---\n",
        "\n",
        "## 📈 Tips & optimizations\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jx0m5x62hdvk"
      },
      "source": [
        "### 🔧 Model selection\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PSPZUxeQhdvk"
      },
      "source": [
        "Each model can differ in terms of performance, speed, and cost.\n",
        "\n",
        "Here's a practical summary based on the model specifications, our video test suite, and the current prompt:\n",
        "\n",
        "| Model            | Performance | Speed  |  Cost  | Max. input tokens | Max. output tokens | Video type                  |\n",
        "| ---------------- | :---------: | :----: | :----: | ----------------: | -----------------: | --------------------------- |\n",
        "| Gemini 2.0 Flash |    ⭐⭐     | ⭐⭐⭐ | ⭐⭐⭐ |    1,048,576 = 1M |         8,192 = 8k | Standard video, up to 25min |\n",
        "| Gemini 2.5 Flash |    ⭐⭐     |  ⭐⭐  |  ⭐⭐  |    1,048,576 = 1M |       65,536 = 64k | Standard video, 25min+      |\n",
        "| Gemini 2.5 Pro   |   ⭐⭐⭐    |   ⭐   |   ⭐   |    1,048,576 = 1M |       65,536 = 64k | Complex video or 1h+ video  |\n"
      ]
    },
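    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The table above can be turned into a simple selection helper. Here is a minimal sketch returning model ID strings (the `complex_video` flag is a hypothetical input you would set yourself):\n",
        "\n",
        "```python\n",
        "from datetime import timedelta\n",
        "\n",
        "\n",
        "def pick_model_id(duration: timedelta, complex_video: bool = False) -> str:\n",
        "    # Heuristic mirroring the table above (model IDs assumed current)\n",
        "    if complex_video or timedelta(hours=1) < duration:\n",
        "        return \"gemini-2.5-pro\"\n",
        "    if timedelta(minutes=25) < duration:\n",
        "        return \"gemini-2.5-flash\"\n",
        "    return \"gemini-2.0-flash\"\n",
        "```\n"
      ]
    },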
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zGfKnWAwhj38"
      },
      "source": [
        "### 🔧 Video segment\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ovwE6EXahj38"
      },
      "source": [
        "You don't always need to analyze videos from start to finish. You can indicate a video segment with start and/or end offsets in the [VideoMetadata](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/rpc/google.cloud.aiplatform.v1#videometadata) structure.\n",
        "\n",
        "In this example, Gemini will only analyze the 30:00-50:00 segment of the video:\n",
        "\n",
        "```python\n",
        "video_metadata = VideoMetadata(\n",
        "    start_offset=\"1800.0s\",\n",
        "    end_offset=\"3000.0s\",\n",
        "    …\n",
        ")\n",
        "```\n"
      ]
    },
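    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Since offsets are plain strings, a small helper can avoid manual minute-to-second conversions. A sketch (the `offset_str` name is ours), assuming one-decimal second precision is sufficient:\n",
        "\n",
        "```python\n",
        "from datetime import timedelta\n",
        "\n",
        "\n",
        "def offset_str(td: timedelta) -> str:\n",
        "    # Format a timedelta as the '<seconds>s' offset string shown above\n",
        "    return f\"{td.total_seconds():.1f}s\"\n",
        "\n",
        "\n",
        "print(offset_str(timedelta(minutes=30)))  # → 1800.0s\n",
        "```\n"
      ]
    },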
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hx8Yw-4Qhj38"
      },
      "source": [
        "### 🔧 Media resolution\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FvSo6URZhj38"
      },
      "source": [
        "In our test suite, the videos are fairly standard. We got excellent results by using a \"low\" media resolution (\"medium\" being the default), specified with the `GenerateContentConfig.media_resolution` parameter.\n",
        "\n",
        "> 💡 This provides faster and cheaper inferences, while also enabling the analysis of videos that are three times as long.\n",
        "\n",
        "We used a simple heuristic based on video duration, but you might want to make it dynamic on a per-video basis:\n",
        "\n",
        "```python\n",
        "def get_media_resolution_for_video(video: Video) -> MediaResolution | None:\n",
        "    if not (video_duration := get_video_duration(video)):\n",
        "        return None  # Default\n",
        "\n",
        "    # For testing purposes, this is based on video duration, as our short videos tend to be more detailed\n",
        "    less_than_five_minutes = video_duration < timedelta(minutes=5)\n",
        "    if less_than_five_minutes:\n",
        "        media_resolution = MediaResolution.MEDIA_RESOLUTION_MEDIUM\n",
        "    else:\n",
        "        media_resolution = MediaResolution.MEDIA_RESOLUTION_LOW\n",
        "\n",
        "    return media_resolution\n",
        "```\n",
        "\n",
        "> ⚠️ If you select a \"low\" media resolution and experience an apparent loss of understanding, you might be losing important details in the sampled video frames. This is easy to fix: switch back to the default media resolution.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fq3xWHSdhj38"
      },
      "source": [
        "### 🔧 Sampling frame rate\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JdYdSjiihj38"
      },
      "source": [
        "The default sampling frame rate of 1 FPS worked fine in our tests. You might want to customize it for each video:\n",
        "\n",
        "```python\n",
        "SamplingFrameRate = float\n",
        "\n",
        "def get_sampling_frame_rate_for_video(video: Video) -> SamplingFrameRate | None:\n",
        "    sampling_frame_rate = None  # Default (1 FPS for current models)\n",
        "\n",
        "    # [Optional] Define a custom FPS: 0.0 < sampling_frame_rate <= 24.0\n",
        "\n",
        "    return sampling_frame_rate\n",
        "```\n",
        "\n",
        "> 💡 You can mix the parameters. In this extreme example, assuming the input video has a 24fps frame rate, all frames will be sampled for a 10s segment:\n",
        "\n",
        "```python\n",
        "video_metadata = VideoMetadata(\n",
        "    start_offset=\"42.0s\",\n",
        "    end_offset=\"52.0s\",\n",
        "    fps=24.0,\n",
        ")\n",
        "```\n",
        "\n",
        "> ⚠️ If you use a higher sampling rate, this multiplies the number of frames (and tokens) accordingly, increasing latency and cost. As `10s × 24fps = 240 frames = 4×60s × 1fps`, this 10-second analysis at 24 FPS is equivalent to a 4-minute default analysis at 1 FPS.\n"
      ]
    },
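    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The frame arithmetic from the warning above can be checked in a couple of lines:\n",
        "\n",
        "```python\n",
        "SEGMENT_SECONDS, CUSTOM_FPS, DEFAULT_FPS = 10, 24.0, 1.0\n",
        "\n",
        "frames = SEGMENT_SECONDS * CUSTOM_FPS  # 240 sampled frames\n",
        "equivalent_minutes = frames / DEFAULT_FPS / 60  # same frame count at the default rate\n",
        "\n",
        "print(f\"{frames:.0f} frames = {equivalent_minutes:.0f} min of default analysis\")\n",
        "```\n"
      ]
    },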
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FTzErz6Ahj38"
      },
      "source": [
        "### 🎯 Precision vs recall\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3xdMa78Lhj38"
      },
      "source": [
        "The prompt can influence the precision and recall of our data extractions, especially when using explicit versus implicit wording. If you want more qualitative results, favor precision using explicit wording; if you want more quantitative results, favor recall using implicit wording:\n",
        "\n",
        "| wording  | favors    | generates fewer | LLM behavior                                                                   |\n",
        "| -------- | --------- | --------------- | ------------------------------------------------------------------------------ |\n",
        "| explicit | precision | false positives | relies more (or only) on the provided context                                  |\n",
        "| implicit | recall    | false negatives | relies on the overall context, infers more, and can use its training knowledge |\n",
        "\n",
        "Here are examples that can lead to subtly different results:\n",
        "\n",
        "| wording  | verbs                | qualifiers                                   |\n",
        "| -------- | -------------------- | -------------------------------------------- |\n",
        "| explicit | \"extract\", \"quote\"   | \"stated\", \"direct\", \"exact\", \"verbatim\"      |\n",
        "| implicit | \"identify\", \"deduce\" | \"found\", \"indirect\", \"possible\", \"potential\" |\n",
        "\n",
        "> 💡 Different models can also behave differently for the same prompt. In particular, more performant models might seem more \"confident\" and make more implicit inferences or consolidations.\n",
        "\n",
        "> 💡 As an example, in this [AlphaFold video](https://youtu.be/gg7WjuFs8F4?t=297), at the 04:57 timecode, \"Spring 2020\" is first displayed as context. Then, a short declaration from \"The Prime Minister\" is heard in the background (\"You must stay at home\") without any other hints. When asked to \"identify\" (rather than \"extract\") the speaker, Gemini is likely to infer more and attribute the voice to \"Boris Johnson\". There's absolutely no explicit mention of Boris Johnson; his identity is correctly inferred from the context (\"UK\", \"Spring 2020\", and \"The Prime Minister\").\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "g6IMWexM1EAr"
      },
      "source": [
        "### 🏷️ Metadata\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0q8Eqps61EAr"
      },
      "source": [
        "In our current tests, Gemini only uses audio and frame tokens, tokenized from sources on Google Cloud Storage or YouTube. If you have additional video metadata, this can be a goldmine; try to add it to your prompt and enrich the video context for better results upfront.\n",
        "\n",
        "Potentially helpful metadata:\n",
        "\n",
        "- Video description: This can provide a better understanding of where and when the video was shot.\n",
        "- Speaker info: This can help auto-correct names that are only heard and not obvious to spell.\n",
        "- Entity info: Overall, this can help get better transcriptions for custom or private data.\n",
        "\n",
        "> 💡 For YouTube videos, no additional metadata or transcript is fetched. Gemini only receives the raw audio and video streams. You can check this yourself by comparing your results with YouTube's automatic captioning (no punctuation, audio only) or user-provided transcripts (cleaned up), when available.\n",
        "\n",
        "> 💡 If you know your video concerns a team or a company, adding internal data in the context can help correct or complete the requested speaker names (provided there are no homonyms in the same context), companies, and job titles.\n",
        "\n",
        "> 💡 In this [French reportage](https://youtu.be/U_yYkb-ureI?t=376), in the 06:16-06:31 segment, there are two dogs: Arnold and Rio. \"Arnold\" is clearly audible, repeated three times, and correctly transcribed. \"Rio\" is called only once, audible for a fraction of a second in a noisy environment, and the audio transcription can vary. Providing the names of the whole team (owners & dogs, even if they are not all in the video) can help in transcribing this short name consistently.\n",
        "\n",
        "> 💡 It should also be possible to ground the results with Google Search, Google Maps, or your own RAG system. See [Grounding overview](https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e5xUX4Xrhj38"
      },
      "source": [
        "### 🔬 Debugging & evidence\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QCV4Rz41hj38"
      },
      "source": [
        "Iterating through successive prompts and debugging LLM outputs can be challenging, especially when trying to understand the reasons for the results.\n",
        "\n",
        "It's possible to ask Gemini to provide evidence in the response. In our video transcription solution, we could request a timecoded \"evidence\" for each speaker's identified name, company, or role. This enables linking results to their sources, discovering and understanding unexpected insights, checking potential false positives…\n",
        "\n",
        "> 💡 In the tested videos, when trying to understand where the insights came from, requesting evidence yielded very insightful explanations, for example:\n",
        ">\n",
        "> - Person names could be extracted from various sources (video conference captions, badges, unseen participants introducing themselves when asking questions during a conference panel…)\n",
        "> - Company names could be found from text on uniforms, backpacks, vehicles…\n",
        "\n",
        "> 💡 In a document data extraction solution, we could request to provide an \"excerpt\" as evidence, including page number, chapter number, or any other relevant location information.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jscy22A1hj38"
      },
      "source": [
        "### 🐘 Verbose JSON\n",
        "\n",
        "The JSON format is currently the most common way to generate structured outputs with LLMs. However, JSON is a rather verbose data format, as field names are repeated for each object. For example, an output can look like the following, with many repeated underlying tokens:\n",
        "\n",
        "```jsonc\n",
        "{\n",
        "  \"task1_transcripts\": [\n",
        "    { \"start\": \"00:02\", \"text\": \"We've…\", \"voice\": 1 },\n",
        "    { \"start\": \"00:07\", \"text\": \"But we…\", \"voice\": 1 }\n",
        "    // …\n",
        "  ],\n",
        "  \"task2_speakers\": [\n",
        "    {\n",
        "      \"voice\": 1,\n",
        "      \"name\": \"John Moult\",\n",
        "      \"company\": \"University of Maryland\",\n",
        "      \"position\": \"Co-Founder, CASP\",\n",
        "      \"role_in_video\": \"Expert\"\n",
        "    },\n",
        "    // …\n",
        "    {\n",
        "      \"voice\": 3,\n",
        "      \"name\": \"Demis Hassabis\",\n",
        "      \"company\": \"DeepMind\",\n",
        "      \"position\": \"Founder and CEO\",\n",
        "      \"role_in_video\": \"Team Leader\"\n",
        "    }\n",
        "    // …\n",
        "  ]\n",
        "}\n",
        "```\n",
        "\n",
        "To optimize output size, an interesting possibility is to ask Gemini to generate an XML block containing a CSV for each of your tabular extractions. The field names are specified once in the header, and by using tab separators, for example, we can achieve more compact outputs like the following:\n",
        "\n",
        "```xml\n",
        "<TASK1_TRANSCRIPT_CSV>\n",
        "start  text     voice\n",
        "00:02  We've…   1\n",
        "00:07  But we…  1\n",
        "…\n",
        "</TASK1_TRANSCRIPT_CSV>\n",
        "<TASK2_SPEAKER_CSV>\n",
        "voice  name            company                 position          role_in_video\n",
        "1      John Moult      University of Maryland  Co-Founder, CASP  Expert\n",
        "…\n",
        "3      Demis Hassabis  DeepMind                Founder and CEO   Team Leader\n",
        "…\n",
        "</TASK2_SPEAKER_CSV>\n",
        "```\n",
        "\n",
        "> 💡 Gemini excels at patterns and formats. Depending on your needs, feel free to experiment with JSON, XML, CSV, YAML, and any custom structured formats. It's likely that the industry will evolve to allow even more elaborate structured outputs.\n"
      ]
    },
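    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "On the consumption side, these blocks are easy to parse back into rows. Here is a sketch (the `parse_tsv_blocks` helper is ours) that assumes the model emits real tab separators (the example above uses aligned spaces only for readability) and the `<NAME_CSV>` tag convention shown above:\n",
        "\n",
        "```python\n",
        "import csv\n",
        "import io\n",
        "import re\n",
        "\n",
        "\n",
        "def parse_tsv_blocks(text: str) -> dict[str, list[dict[str, str]]]:\n",
        "    # Extract each <NAME_CSV>…</NAME_CSV> block and parse its tab-separated rows\n",
        "    blocks: dict[str, list[dict[str, str]]] = {}\n",
        "    for match in re.finditer(r\"<(\\w+_CSV)>\\n(.*?)</\\1>\", text, re.DOTALL):\n",
        "        name, body = match.groups()\n",
        "        blocks[name] = list(csv.DictReader(io.StringIO(body), delimiter=\"\\t\"))\n",
        "    return blocks\n",
        "```\n",
        "\n",
        "With a response in this format, something like `parse_tsv_blocks(response.text)[\"TASK2_SPEAKER_CSV\"]` would then yield one dictionary per speaker.\n"
      ]
    },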
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4wSb4Zdihj39"
      },
      "source": [
        "### 🐿️ Context caching\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Hc3FzyLUhj39"
      },
      "source": [
        "Context caching optimizes the cost and the latency of repeated requests using the same base inputs.\n",
        "\n",
        "There are two ways requests can benefit from context caching:\n",
        "\n",
        "- **Implicit caching**: By default, upon the first request, input tokens are cached, to accelerate responses for subsequent requests with the same base inputs. This is fully automated and no code change is required.\n",
        "- **Explicit caching**: You place specific inputs into the cache and reuse this cached content as a base for your requests. This provides full control but requires managing the cache manually.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "41HF84eshj39"
      },
      "source": [
        "Example of implicit caching:\n",
        "\n",
        "```python\n",
        "model_id = \"gemini-2.0-flash\"\n",
        "video_file_data = FileData(\n",
        "    file_uri=\"gs://bucket/path/to/my-video.mp4\",\n",
        "    mime_type=\"video/mp4\",\n",
        ")\n",
        "video = Part(file_data=video_file_data)\n",
        "prompt_1 = \"List the people visible in the video.\"\n",
        "prompt_2 = \"Summarize what happens to John Smith.\"\n",
        "\n",
        "# ✅ Request A1: static data (video) placed first\n",
        "response = client.models.generate_content(\n",
        "    model=model_id,\n",
        "    contents=[video, prompt_1],\n",
        ")\n",
        "\n",
        "# ✅ Request A2: likely cache hit for the video tokens\n",
        "response = client.models.generate_content(\n",
        "    model=model_id,\n",
        "    contents=[video, prompt_2],\n",
        ")\n",
        "```\n",
        "\n",
        "> 💡 Implicit caching can be disabled at the project level (see [data governance](https://cloud.google.com/vertex-ai/generative-ai/docs/data-governance#customer_data_retention_and_achieving_zero_data_retention)).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lZYDfxz1hj39"
      },
      "source": [
        "Implicit caching is prefix-based, meaning it only works if you put static data first and variable data last.\n",
        "\n",
        "Example of requests preventing implicit caching:\n",
        "\n",
        "```python\n",
        "# ❌ Request B1: variable input placed first\n",
        "response = client.models.generate_content(\n",
        "    model=model_id,\n",
        "    contents=[prompt_1, video],\n",
        ")\n",
        "\n",
        "# ❌ Request B2: no cache hit\n",
        "response = client.models.generate_content(\n",
        "    model=model_id,\n",
        "    contents=[prompt_2, video],\n",
        ")\n",
        "```\n",
        "\n",
        "> 💡 This explains why the data-plus-instructions input order is preferred, for performance (not LLM-related) reasons.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CwcqJtZ-hj39"
      },
      "source": [
        "Cost-wise, the input tokens retrieved with a cache hit benefit from a 90% discount in the following cases:\n",
        "\n",
        "- **Implicit caching**: With all Gemini models, cache hits are automatically discounted (without any control on the cache or cache-hit guarantee).\n",
        "- **Explicit caching**: With all Gemini models and supported models in Model Garden, you control your cached inputs and their lifespans to ensure cache hits.\n"
      ]
    },
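    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a rough sketch of the savings, here is the arithmetic with an illustrative per-token price (a placeholder, not an actual Gemini rate), using the token counts from the explicit caching example below:\n",
        "\n",
        "```python\n",
        "price_per_token = 1e-7  # hypothetical $/input token (placeholder rate)\n",
        "input_tokens = 46_178  # total prompt tokens\n",
        "cached_tokens = 46_171  # tokens served from cache (90% discount)\n",
        "\n",
        "full_cost = input_tokens * price_per_token\n",
        "discounted_cost = (\n",
        "    (input_tokens - cached_tokens) * price_per_token\n",
        "    + cached_tokens * price_per_token * 0.1\n",
        ")\n",
        "savings = 1 - discounted_cost / full_cost\n",
        "print(f\"{savings:.1%}\")  # → 90.0%\n",
        "```\n",
        "\n",
        "Since the video accounts for nearly all the input tokens, the overall input cost approaches the full 90% discount.\n"
      ]
    },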
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wpd76xLlhj39"
      },
      "source": [
        "Example of explicit caching:\n",
        "\n",
        "```python\n",
        "from google.genai.types import (\n",
        "    Content,\n",
        "    CreateCachedContentConfig,\n",
        "    FileData,\n",
        "    GenerateContentConfig,\n",
        "    Part,\n",
        ")\n",
        "\n",
        "model_id = \"gemini-2.0-flash-001\"\n",
        "\n",
        "# Input video\n",
        "video_file_data = FileData(\n",
        "    file_uri=\"gs://cloud-samples-data/video/JaneGoodall.mp4\",\n",
        "    mime_type=\"video/mp4\",\n",
        ")\n",
        "video_part = Part(file_data=video_file_data)\n",
        "video_contents = [Content(role=\"user\", parts=[video_part])]\n",
        "\n",
        "# Video explicitly put in cache, with time-to-live (TTL) before automatic deletion\n",
        "cached_content = client.caches.create(\n",
        "    model=model_id,\n",
        "    config=CreateCachedContentConfig(\n",
        "        ttl=\"1800s\",\n",
        "        display_name=\"video-cache\",\n",
        "        contents=video_contents,\n",
        "    ),\n",
        ")\n",
        "if cached_content.usage_metadata:\n",
        "    print(f\"Cached tokens: {cached_content.usage_metadata.total_token_count or 0:,}\")\n",
        "    # Cached tokens: 46,171\n",
        "    # ✅ Video tokens are cached (standard tokenization rate + storage cost for TTL duration)\n",
        "\n",
        "cache_config = GenerateContentConfig(cached_content=cached_content.name)\n",
        "\n",
        "# Request #1\n",
        "response = client.models.generate_content(\n",
        "    model=model_id,\n",
        "    contents=\"List the people mentioned in the video.\",\n",
        "    config=cache_config,\n",
        ")\n",
        "if response.usage_metadata:\n",
        "    print(f\"Input tokens : {response.usage_metadata.prompt_token_count or 0:,}\")\n",
        "    print(f\"Cached tokens: {response.usage_metadata.cached_content_token_count or 0:,}\")\n",
        "    # Input tokens : 46,178\n",
        "    # Cached tokens: 46,171\n",
        "    # ✅ Cache hit (90% discount)\n",
        "\n",
        "# Request #i (within the TTL period)\n",
        "# …\n",
        "\n",
        "# Request #n (within the TTL period)\n",
        "response = client.models.generate_content(\n",
        "    model=model_id,\n",
        "    contents=\"List all the timecodes when Jane Goodall is mentioned.\",\n",
        "    config=cache_config,\n",
        ")\n",
        "if response.usage_metadata:\n",
        "    print(f\"Input tokens : {response.usage_metadata.prompt_token_count or 0:,}\")\n",
        "    print(f\"Cached tokens: {response.usage_metadata.cached_content_token_count or 0:,}\")\n",
        "    # Input tokens : 46,182\n",
        "    # Cached tokens: 46,171\n",
        "    # ✅ Cache hit (90% discount)\n",
        "```\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "goWu_e99hj39"
      },
      "source": [
        "> 💡 Explicit caching needs a specific model version (like `…-001` in this example) to ensure the cache remains valid and is not affected by a model update.\n",
        "\n",
        "> ℹ️ Learn more about [Context caching](https://cloud.google.com/vertex-ai/generative-ai/docs/context-cache/context-cache-overview).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ased6OeIhj39"
      },
      "source": [
        "### ⏳ Batch prediction\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rQ1tJrE9hj39"
      },
      "source": [
        "If you need to process a large volume of videos and don't need synchronous responses, you can use a single batch request and reduce your cost.\n",
        "\n",
        "> 💡 Batch requests for Gemini models get a 50% discount compared to standard requests.\n",
        "\n",
        "> ℹ️ Learn more about [Batch prediction](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/batch-prediction-gemini#generative-ai-batch-text-python_genai_sdk).\n"
      ]
    },
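    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sketch, each line of the batch input JSONL file wraps a standard request; the bucket paths and prompt below are placeholders:\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "# Placeholder video URIs and prompt\n",
        "video_uris = [\n",
        "    \"gs://bucket/videos/video-1.mp4\",\n",
        "    \"gs://bucket/videos/video-2.mp4\",\n",
        "]\n",
        "prompt = \"Transcribe the video.\"\n",
        "\n",
        "# One JSONL line per video, each wrapping a generate_content-style request\n",
        "lines = [\n",
        "    json.dumps(\n",
        "        {\n",
        "            \"request\": {\n",
        "                \"contents\": [\n",
        "                    {\n",
        "                        \"role\": \"user\",\n",
        "                        \"parts\": [\n",
        "                            {\"fileData\": {\"fileUri\": uri, \"mimeType\": \"video/mp4\"}},\n",
        "                            {\"text\": prompt},\n",
        "                        ],\n",
        "                    }\n",
        "                ]\n",
        "            }\n",
        "        }\n",
        "    )\n",
        "    for uri in video_uris\n",
        "]\n",
        "jsonl_content = \"\\n\".join(lines)\n",
        "```\n",
        "\n",
        "You would then upload this JSONL file to Cloud Storage and submit it as the batch job's input source.\n"
      ]
    },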
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Sp2NMQQpa1E8"
      },
      "source": [
        "### ♾️ To production… and beyond\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "G7b5PlyKa1E8"
      },
      "source": [
        "A few additional notes:\n",
        "\n",
        "- The current prompt is not perfect and can be improved. It has been preserved in its current state to illustrate its development starting with Gemini 2.0 Flash and a simple video test suite.\n",
        "- The Gemini 2.5 models are more capable and intrinsically provide a better video understanding. However, the current prompt has not been optimized for them. Writing optimal prompts for different models is another challenge.\n",
        "- If you test transcribing your own videos, especially different types of videos, you may run into new or specific issues. They can probably be addressed by enriching the prompt.\n",
        "- Future models will likely support more output features. This should allow for richer structured outputs and simpler prompts.\n",
        "- As models keep learning, it's also possible that multimodal video transcription will become a one-liner prompt.\n",
        "- Gemini's image and audio tokenizers are truly impressive and enable many other use cases. To fully grasp the extent of the possibilities, you can run unit tests on images or audio files.\n",
        "- We constrained our challenge to using a single request, which optimizes the solution both for speed and cost.\n",
        "- For applications demanding the absolute highest transcription accuracy, we could isolate the audio-only transcription in a first request before performing speaker identification on the video frames in a second request. It might produce many more voice identifiers than actual speakers, but it should minimize false positives. In the second step, we'd reinject the transcription to focus on extracting and consolidating speaker data from the video frames. This two-step approach would also be a viable strategy to process very long videos, even those several hours in duration.\n"
      ]
    },
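    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The transcript-reinjection step of this two-step approach can be sketched as a prompt-assembly helper (the helper name, prompt wording, and sample transcript are illustrative, not from this notebook):\n",
        "\n",
        "```python\n",
        "def build_speaker_id_prompt(transcript: str) -> str:\n",
        "    \"\"\"Build the second-step prompt by reinjecting the first-step transcript.\n",
        "\n",
        "    Hypothetical helper: in the real pipeline, `transcript` would come from\n",
        "    an audio-focused transcription request (e.g. response.text).\n",
        "    \"\"\"\n",
        "    return (\n",
        "        \"Here is the audio-only transcription, with one label per voice:\\n\\n\"\n",
        "        f\"{transcript}\\n\\n\"\n",
        "        \"Using the video frames, identify each labeled voice's speaker and\\n\"\n",
        "        \"consolidate duplicate voice labels belonging to the same person.\"\n",
        "    )\n",
        "\n",
        "# Illustrative transcript from a hypothetical first request\n",
        "sample = \"[00:05] Voice A: Welcome back!\\n[00:09] Voice B: Thanks.\"\n",
        "prompt_2 = build_speaker_id_prompt(sample)\n",
        "```\n",
        "\n",
        "The second request would then send `prompt_2` along with the video, so the model focuses on mapping voices to on-screen speakers rather than re-transcribing the audio.\n"
      ]
    },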
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RFO_u1-8hj3-"
      },
      "source": [
        "---\n",
        "\n",
        "## 🏁 Conclusion\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nubJgw0Ahj3-"
      },
      "source": [
        "Multimodal video transcription, which requires the complex synthesis of audio and visual data, is a true challenge for ML practitioners, without mainstream solutions. A traditional approach, involving an elaborate pipeline of specialized models, would be engineering-intensive without any guarantee of success. In contrast, Gemini proved to be a versatile toolbox for reaching a powerful and straightforward solution based on a single prompt:\n",
        "\n",
        "![multimodal video transcription solutions](https://storage.googleapis.com/github-repo/generative-ai/gemini/use-cases/video-analysis/multimodal_video_transcription/multimodal-video-transcription-solutions.gif)\n",
        "\n",
        "We managed to address this complex problem with the following techniques:\n",
        "\n",
        "- Prototyping with open prompts to develop intuition about Gemini's natural strengths\n",
        "- Taking into account how LLMs work under the hood\n",
        "- Crafting increasingly specific prompts using a tabular extraction strategy\n",
        "- Generating structured outputs to move towards production-ready code\n",
        "- Adding data visualization for easier interpretation of responses and smoother iterations\n",
        "- Adapting default parameters to optimize the results\n",
        "- Conducting more tests, iterating, and even enriching the extracted data\n",
        "\n",
        "These principles should apply to many other data extraction domains and allow you to solve your own complex problems. Have fun and happy solving!\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "huldfEHkhj3-"
      },
      "source": [
        "---\n",
        "\n",
        "## ➕ More!\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "H_-odeqEln_s"
      },
      "source": [
        "- Explore additional use cases in the [Vertex AI Prompt Gallery](https://console.cloud.google.com/vertex-ai/studio/prompt-gallery)\n",
        "- Stay updated by following the [Vertex AI Release Notes](https://cloud.google.com/vertex-ai/generative-ai/docs/release-notes)\n"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "multimodal_video_transcription.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
