{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bZKaz0oSwAx-"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UdL4uvQQs76x"
      },
      "source": [
        "# Veo 3 Reference to Video\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo3_reference_to_video.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Run in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fvision%2Fgetting-started%2Fveo3_reference_to_video.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Run in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/vision/getting-started/veo3_reference_to_video.ipynb\">\n",
        "      <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>    \n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo3_reference_to_video.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo3_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo3_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo3_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo3_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo3_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NnMaDH8jwReT"
      },
      "source": [
        "| | |\n",
        "|-|-|\n",
        "|Author(s) | [Katie Nguyen](https://github.com/katiemn) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vIDE4FhjwW67"
      },
      "source": [
        "## Overview\n",
        "\n",
        "### Veo 3\n",
        "\n",
        "Veo 3 on Vertex AI gives application developers access to Google's cutting-edge video generation model. It creates videos with stunning detail and realistic physics across a wide array of visual styles, enhances video quality from text and image prompts, and now includes dialogue and audio generation.\n",
        "\n",
        "In this tutorial, you will learn how to use the Google Gen AI SDK for Python to interact with Veo 3.1 to:\n",
        "- Generate a video from asset images, including subjects, objects, and scenes\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dEPqvne0w4qx"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s8p3AOlALGpj"
      },
      "source": [
        "### Install Google Gen AI SDK for Python"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "LWGj2AmpLJ2D"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet google-genai"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aDvFfD83w7iL"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you are running this notebook on Google Colab, run the following cell to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "iTfXlEVQw9xV"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PYCYpliKxFES"
      },
      "source": [
        "### Import libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "id": "RsIFsBN1xIEz"
      },
      "outputs": [],
      "source": [
        "import time\n",
        "import urllib.request\n",
        "\n",
        "from IPython.display import Video, display\n",
        "from PIL import Image as PIL_Image\n",
        "from google import genai\n",
        "from google.genai import types\n",
        "import matplotlib.image as img\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ffYy0e81xAV6"
      },
      "source": [
        "### Set Google Cloud project information and create client\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "PMz0sZASxCTU"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
        "\n",
        "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "br8QTmuyxL5R"
      },
      "source": [
        "### Define helper functions"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "TgkK6Vr4xN5j"
      },
      "outputs": [],
      "source": [
        "def show_video(video):\n",
        "    if isinstance(video, str):\n",
        "        file_name = video.split(\"/\")[-1]\n",
        "        !gsutil cp {video} {file_name}\n",
        "        display(Video(file_name, embed=True, width=600))\n",
        "    else:\n",
        "        with open(\"sample.mp4\", \"wb\") as out_file:\n",
        "            out_file.write(video)\n",
        "        display(Video(\"sample.mp4\", embed=True, width=600))\n",
        "\n",
        "\n",
        "def show_images(\n",
        "    images: list[str],\n",
        "):\n",
        "    fig, axes = plt.subplots(1, len(images), figsize=(12, 6))\n",
        "    if len(images) == 1:\n",
        "        axes = np.array([axes])\n",
        "    for i, ax in enumerate(axes):\n",
        "        image = img.imread(images[i])\n",
        "        ax.imshow(image)\n",
        "        ax.axis(\"off\")\n",
        "    plt.show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5a6UxKZfxQoH"
      },
      "source": [
        "### Load the video model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "id": "H6K66dOfxSmr"
      },
      "outputs": [],
      "source": [
        "video_model = \"veo-3.1-generate-preview\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UMANSB1YN11I"
      },
      "source": [
        "## Reference images to videos\n",
        "\n",
        "With Reference-to-Video in Veo 3.1, you can use reference images to generate videos. The reference images are `asset` images of subjects, objects, or scenes that will be included in the final video output.\n",
        "\n",
        "**NOTE:** You can include up to 3 `asset` images in a request."
      ]
    },
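    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The three-image limit can be enforced client-side before you build a request. The following helper is a hypothetical sketch (it is not part of the Gen AI SDK) that fails fast locally instead of waiting for a server-side error:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def check_reference_images(image_paths: list[str], max_images: int = 3) -> list[str]:\n",
        "    \"\"\"Validate a list of asset reference image paths before building a request.\"\"\"\n",
        "    if not image_paths:\n",
        "        raise ValueError(\"Provide at least one reference image.\")\n",
        "    if len(image_paths) > max_images:\n",
        "        raise ValueError(\n",
        "            f\"Got {len(image_paths)} reference images; the limit is {max_images}.\"\n",
        "        )\n",
        "    return image_paths"
      ]
    },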
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UBPpj5BTZaY9"
      },
      "source": [
        "### Asset references\n",
        "\n",
        "Download and display the asset images that you'll use in the following requests. To use your own local images, modify the URLs in the `wget` command and update the `first_image`, `second_image`, and/or `third_image` variables accordingly."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "X81WV5lIpSEe"
      },
      "source": [
        "#### Subject reference images\n",
        "\n",
        "In this example, you'll use two subject reference images of different people. You'll generate a new scene for them based on a text prompt."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "E2vrXKXdu39e"
      },
      "outputs": [],
      "source": [
        "# Download subject images from Cloud Storage\n",
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/man-in-field.png\n",
        "\n",
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/woman.jpeg"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "twURgtG16oUu"
      },
      "source": [
        "Set the `first_image` and `second_image` variables."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "e_aPJ5dzvCnS"
      },
      "outputs": [],
      "source": [
        "first_image = \"man-in-field.png\"  # @param {type: 'string'}\n",
        "second_image = \"woman.jpeg\"  # @param {type: 'string'}\n",
        "\n",
        "show_images([first_image, second_image])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LcrdbeRE9oW5"
      },
      "source": [
        "Now, you'll send a request to generate a video. With Veo 3.1, you can generate videos with audio from a text prompt, input image(s), or both. To generate a video in the following sample, specify the following:\n",
        "\n",
        "  - **Prompt:** A description of the video you would like to see with the reference images.\n",
        "  - **Reference images:** Up to three `asset` images.\n",
        "  - **Aspect ratio:** 16:9\n",
        "  - **Number of videos:** Set this value to 1, 2, 3, or 4.\n",
        "  - **Video duration:** 8 seconds\n",
        "  - **Resolution:** 720p\n",
        "  - **Person generation:** Set to `allow_adult` or `dont_allow`.\n",
        "  - **Generate audio:** Set to `True` if you'd like audio in your generated video."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Ym4PP2AvvPW2"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "a woman and a man drinking a cup of coffee in a cafe, chatting about the rainy weather\n",
        "\"\"\"\n",
        "\n",
        "operation = client.models.generate_videos(\n",
        "    model=video_model,\n",
        "    prompt=prompt,\n",
        "    config=types.GenerateVideosConfig(\n",
        "        reference_images=[\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=first_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=second_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "        ],\n",
        "        aspect_ratio=\"16:9\",\n",
        "        number_of_videos=1,\n",
        "        duration_seconds=8,\n",
        "        resolution=\"720p\",\n",
        "        person_generation=\"allow_adult\",\n",
        "        generate_audio=True,\n",
        "    ),\n",
        ")\n",
        "\n",
        "while not operation.done:\n",
        "    time.sleep(15)\n",
        "    operation = client.operations.get(operation)\n",
        "    print(operation)\n",
        "\n",
        "if operation.response:\n",
        "    show_video(operation.result.generated_videos[0].video.video_bytes)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZSPqF01yySYl"
      },
      "source": [
        "#### Scene reference image\n",
        "\n",
        "Now, you'll use a single scene reference image and a text prompt to generate a video with different subjects and actions."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "h5Jz6bYKyby5"
      },
      "outputs": [],
      "source": [
        "# Download the image from Cloud Storage\n",
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/room.png"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IotQjwiP7Og2"
      },
      "source": [
        "Set the `first_image` variable."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ULTrObxdylip"
      },
      "outputs": [],
      "source": [
        "first_image = \"room.png\"  # @param {type: 'string'}\n",
        "\n",
        "show_images([first_image])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tqEjvGNJ7TD4"
      },
      "source": [
        "Run the request. Update the `prompt` if you'd like to see different content within the scene."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "w7KnIJMYyv66"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "a Corgi walks around in a living room, jumps on the couch and begins to bark\n",
        "\"\"\"\n",
        "\n",
        "operation = client.models.generate_videos(\n",
        "    model=video_model,\n",
        "    prompt=prompt,\n",
        "    config=types.GenerateVideosConfig(\n",
        "        reference_images=[\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=first_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "        ],\n",
        "        aspect_ratio=\"16:9\",\n",
        "        number_of_videos=1,\n",
        "        duration_seconds=8,\n",
        "        resolution=\"720p\",\n",
        "        person_generation=\"allow_adult\",\n",
        "        generate_audio=True,\n",
        "    ),\n",
        ")\n",
        "\n",
        "while not operation.done:\n",
        "    time.sleep(15)\n",
        "    operation = client.operations.get(operation)\n",
        "    print(operation)\n",
        "\n",
        "if operation.response:\n",
        "    show_video(operation.result.generated_videos[0].video.video_bytes)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0ybB1oG05W4X"
      },
      "source": [
        "#### Product reference image\n",
        "\n",
        "Next, you'll use a product reference image and a text prompt to generate a video. This demonstrates how Veo maintains product consistency while the product is in motion."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nnfezu0-6aAz"
      },
      "outputs": [],
      "source": [
        "# Download the image from Cloud Storage\n",
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/mug.png"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zEqhYMQG8C5w"
      },
      "source": [
        "Set the `first_image` variable."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fl9hn5oT78Wx"
      },
      "outputs": [],
      "source": [
        "first_image = \"mug.png\"  # @param {type: 'string'}\n",
        "\n",
        "show_images([first_image])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4ivJgSH68RE3"
      },
      "source": [
        "Run the request. Update the `prompt` if you'd like to visualize the product in a different manner."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hvFkg-vzfcY-"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "slowly rotate this coffee mug in a 360 degree circle\n",
        "\"\"\"\n",
        "\n",
        "operation = client.models.generate_videos(\n",
        "    model=video_model,\n",
        "    prompt=prompt,\n",
        "    config=types.GenerateVideosConfig(\n",
        "        reference_images=[\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=first_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "        ],\n",
        "        aspect_ratio=\"16:9\",\n",
        "        number_of_videos=1,\n",
        "        duration_seconds=8,\n",
        "        resolution=\"720p\",\n",
        "        person_generation=\"allow_adult\",\n",
        "        generate_audio=True,\n",
        "    ),\n",
        ")\n",
        "\n",
        "while not operation.done:\n",
        "    time.sleep(15)\n",
        "    operation = client.operations.get(operation)\n",
        "    print(operation)\n",
        "\n",
        "if operation.response:\n",
        "    show_video(operation.result.generated_videos[0].video.video_bytes)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FOqUuJRGoQtO"
      },
      "source": [
        "#### Three distinct reference images\n",
        "\n",
        "In this example, you'll use three different reference images (a scene, a product, and a subject) from Google Cloud Storage. Instead of downloading them, you'll reference their Cloud Storage URIs directly. To use your own images, replace the `first_image_gcs`, `second_image_gcs`, and `third_image_gcs` variables below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5m1xhMENZrYx"
      },
      "outputs": [],
      "source": [
        "first_image = PIL_Image.open(\n",
        "    urllib.request.urlopen(\n",
        "        \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/flowers.png\"\n",
        "    )\n",
        ")\n",
        "first_image_gcs = \"gs://cloud-samples-data/generative-ai/image/flowers.png\"\n",
        "\n",
        "second_image = PIL_Image.open(\n",
        "    urllib.request.urlopen(\n",
        "        \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/suitcase.png\"\n",
        "    )\n",
        ")\n",
        "second_image_gcs = \"gs://cloud-samples-data/generative-ai/image/suitcase.png\"\n",
        "\n",
        "third_image = PIL_Image.open(\n",
        "    urllib.request.urlopen(\n",
        "        \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/woman.jpg\"\n",
        "    )\n",
        ")\n",
        "third_image_gcs = \"gs://cloud-samples-data/generative-ai/image/woman.jpg\"\n",
        "\n",
        "# Display the images\n",
        "fig, axis = plt.subplots(1, 3, figsize=(18, 6))\n",
        "axis[0].imshow(first_image)\n",
        "axis[1].imshow(second_image)\n",
        "axis[2].imshow(third_image)\n",
        "for ax in axis:\n",
        "    ax.axis(\"off\")\n",
        "plt.show()"
      ]
    },
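    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The request below passes a `mime_type` alongside each Cloud Storage URI, and it must match the file. If you substitute your own images, you can infer the MIME type from the file extension with Python's standard `mimetypes` module; the helper below is a hypothetical convenience, not part of the SDK:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import mimetypes\n",
        "\n",
        "\n",
        "def image_mime_type(uri: str) -> str:\n",
        "    \"\"\"Infer an image MIME type from a file path or Cloud Storage URI.\"\"\"\n",
        "    mime_type, _ = mimetypes.guess_type(uri)\n",
        "    if mime_type is None or not mime_type.startswith(\"image/\"):\n",
        "        raise ValueError(f\"Cannot infer an image MIME type from {uri!r}.\")\n",
        "    return mime_type"
      ]
    },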
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5PAnNvt79xNU"
      },
      "source": [
        "Rather than outputting `video_bytes` in this section, you'll save your video to Cloud Storage. To do this, set your Cloud Storage bucket path in the `output_gcs` variable.\n",
        "\n",
        "**Safety:** All Veo videos include [SynthID](https://deepmind.google/science/synthid/), which embeds a digital watermark directly into the AI-generated video."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0Jq-yKddAFJw"
      },
      "outputs": [],
      "source": [
        "prompt = \"a wide shot of a woman wheeling a blue suitcase through a flower field\"  # @param {type: 'string'}\n",
        "output_gcs = \"gs://[your-bucket-path]\"  # @param {type: 'string'}\n",
        "\n",
        "operation = client.models.generate_videos(\n",
        "    model=video_model,\n",
        "    prompt=prompt,\n",
        "    config=types.GenerateVideosConfig(\n",
        "        reference_images=[\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image(gcs_uri=first_image_gcs, mime_type=\"image/png\"),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image(gcs_uri=second_image_gcs, mime_type=\"image/png\"),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image(gcs_uri=third_image_gcs, mime_type=\"image/jpeg\"),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "        ],\n",
        "        output_gcs_uri=output_gcs,\n",
        "        aspect_ratio=\"16:9\",\n",
        "        number_of_videos=1,\n",
        "        duration_seconds=8,\n",
        "        resolution=\"720p\",\n",
        "        person_generation=\"allow_adult\",\n",
        "        generate_audio=True,\n",
        "    ),\n",
        ")\n",
        "\n",
        "while not operation.done:\n",
        "    time.sleep(15)\n",
        "    operation = client.operations.get(operation)\n",
        "    print(operation)\n",
        "\n",
        "if operation.response:\n",
        "    show_video(operation.result.generated_videos[0].video.uri)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "veo3_reference_to_video.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
