{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bZKaz0oSwAx-"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UdL4uvQQs76x"
      },
      "source": [
        "# Veo 2 Reference to Video\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo2_reference_to_video.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Run in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fvision%2Fgetting-started%2Fveo2_reference_to_video.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Run in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/vision/getting-started/veo2_reference_to_video.ipynb\">\n",
        "      <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>    \n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo2_reference_to_video.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo2_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo2_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo2_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo2_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/veo2_reference_to_video.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NnMaDH8jwReT"
      },
      "source": [
        "| | |\n",
        "|-|-|\n",
        "|Author(s) | [Katie Nguyen](https://github.com/katiemn) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vIDE4FhjwW67"
      },
      "source": [
        "## Overview\n",
        "\n",
        "### Veo 2\n",
        "\n",
        "Veo 2 on Vertex AI brings Google's video generation capabilities to application developers. It's capable of creating videos with astonishing detail that simulate real-world physics across a wide range of visual styles.\n",
        "\n",
        "In this tutorial, you will learn how to use the Google Gen AI SDK for Python to interact with Veo 2 to:\n",
        "- Generate a video from asset images, including subjects and scenes\n",
        "- Generate a video from a reference style\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dEPqvne0w4qx"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s8p3AOlALGpj"
      },
      "source": [
        "### Install Google Gen AI SDK for Python"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "LWGj2AmpLJ2D"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet google-genai"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aDvFfD83w7iL"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you are running this notebook on Google Colab, run the following cell to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "iTfXlEVQw9xV"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PYCYpliKxFES"
      },
      "source": [
        "### Import libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "id": "RsIFsBN1xIEz"
      },
      "outputs": [],
      "source": [
        "import time\n",
        "import urllib.request\n",
        "\n",
        "from IPython.display import Video, display\n",
        "from PIL import Image as PIL_Image\n",
        "from google import genai\n",
        "from google.genai import types\n",
        "import matplotlib.image as img\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ffYy0e81xAV6"
      },
      "source": [
        "### Set Google Cloud project information and create client\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "PMz0sZASxCTU"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
        "\n",
        "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "br8QTmuyxL5R"
      },
      "source": [
        "### Define helper functions"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "id": "TgkK6Vr4xN5j"
      },
      "outputs": [],
      "source": [
        "def show_video(video):\n",
        "    if isinstance(video, str):\n",
        "        file_name = video.split(\"/\")[-1]\n",
        "        !gsutil cp {video} {file_name}\n",
        "        display(Video(file_name, embed=True, width=600))\n",
        "    else:\n",
        "        with open(\"sample.mp4\", \"wb\") as out_file:\n",
        "            out_file.write(video)\n",
        "        display(Video(\"sample.mp4\", embed=True, width=600))\n",
        "\n",
        "\n",
        "def show_images(images: list[str]) -> None:\n",
        "    fig, axes = plt.subplots(1, len(images), figsize=(12, 6))\n",
        "    if len(images) == 1:\n",
        "        axes = np.array([axes])\n",
        "    for i, ax in enumerate(axes):\n",
        "        image = img.imread(images[i])\n",
        "        ax.imshow(image)\n",
        "        ax.axis(\"off\")\n",
        "    plt.show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5a6UxKZfxQoH"
      },
      "source": [
        "### Load the video model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "H6K66dOfxSmr"
      },
      "outputs": [],
      "source": [
        "video_model = \"veo-2.0-generate-exp\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UMANSB1YN11I"
      },
      "source": [
        "## Reference images to videos\n",
        "\n",
        "With Reference-to-Video in Veo 2, you can use reference images to generate videos. A reference image can either be a `style` image, where the output video is generated in the same aesthetic as the reference, or an `asset` image of a subject, object, or scene that will be included in the final video output.\n",
        "\n",
        "**NOTE:** You can include up to 3 `asset` images or 1 `style` image, but you can't combine `asset` and `style` reference images in a request."
      ]
    },
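    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Since mixing reference types is rejected, it can help to validate a request's reference images on the client side before sending it. The helper below is a minimal sketch of that check; it assumes only that each reference image object exposes the `reference_type` attribute used throughout this notebook."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def validate_reference_images(reference_images) -> None:\n",
        "    \"\"\"Raise ValueError for an invalid reference image combination.\n",
        "\n",
        "    Veo 2 accepts up to 3 asset images or 1 style image per request,\n",
        "    and the two reference types can't be combined.\n",
        "    \"\"\"\n",
        "    asset_count = sum(1 for r in reference_images if r.reference_type == \"asset\")\n",
        "    style_count = sum(1 for r in reference_images if r.reference_type == \"style\")\n",
        "    if asset_count and style_count:\n",
        "        raise ValueError(\"Can't combine asset and style reference images.\")\n",
        "    if asset_count > 3:\n",
        "        raise ValueError(\"At most 3 asset reference images are allowed.\")\n",
        "    if style_count > 1:\n",
        "        raise ValueError(\"At most 1 style reference image is allowed.\")"
      ]
    },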
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UBPpj5BTZaY9"
      },
      "source": [
        "### Asset references\n",
        "\n",
        "Download and display the asset images that you'll use in the following requests. To use your own local images, modify the URLs in the `wget` commands and update the `first_image`, `second_image`, and/or `third_image` variables accordingly."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "X81WV5lIpSEe"
      },
      "source": [
        "#### Subject reference images\n",
        "\n",
        "In this example, you'll use two subject reference images of different people. You'll generate a new scene for them based on a text prompt."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "E2vrXKXdu39e"
      },
      "outputs": [],
      "source": [
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/man-in-field.png\n",
        "\n",
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/woman.jpeg"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "e_aPJ5dzvCnS"
      },
      "outputs": [],
      "source": [
        "first_image = \"man-in-field.png\"  # @param {type: 'string'}\n",
        "second_image = \"woman.jpeg\"  # @param {type: 'string'}\n",
        "\n",
        "show_images([first_image, second_image])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LcrdbeRE9oW5"
      },
      "source": [
        "Now, you'll send a request to generate a video. With Veo 2, you can generate videos from a text prompt, input images, or both. To generate a video in the following sample, specify the following:\n",
        "\n",
        "  - **Prompt:** A description of the video you would like to see with the reference images.\n",
        "  - **Reference images:** Up to three `asset` images.\n",
        "  - **Aspect ratio:** 16:9\n",
        "  - **Number of videos:** Set this value to 1, 2, 3, or 4.\n",
        "  - **Video duration:** 8 seconds\n",
        "  - **Resolution:** 720p\n",
        "  - **Person generation:** Set to `allow_adult` or `dont_allow`.\n",
        "  - **Prompt enhancement:** The `veo-2.0-generate-exp` model offers the option to enhance your provided prompt. To use this feature, set `enhance_prompt` to `True`. A new, more detailed prompt will be created from your original one to help generate higher-quality videos that better adhere to your prompt's intent."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Ym4PP2AvvPW2"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "a woman and a man drinking a cup of coffee in a cafe\n",
        "\"\"\"\n",
        "\n",
        "operation = client.models.generate_videos(\n",
        "    model=video_model,\n",
        "    prompt=prompt,\n",
        "    config=types.GenerateVideosConfig(\n",
        "        reference_images=[\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=first_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=second_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "        ],\n",
        "        aspect_ratio=\"16:9\",\n",
        "        number_of_videos=1,\n",
        "        duration_seconds=8,\n",
        "        resolution=\"720p\",\n",
        "        person_generation=\"allow_adult\",\n",
        "        enhance_prompt=True,\n",
        "    ),\n",
        ")\n",
        "\n",
        "while not operation.done:\n",
        "    time.sleep(15)\n",
        "    operation = client.operations.get(operation)\n",
        "    print(operation)\n",
        "\n",
        "if operation.response:\n",
        "    show_video(operation.result.generated_videos[0].video.video_bytes)"
      ]
    },
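    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The polling loop above waits indefinitely for the operation to finish. As an optional variant, a small helper with a timeout avoids leaving the notebook blocked; this sketch reuses the same `client.operations.get` call from the cell above, and the `timeout_s` and `poll_s` parameters are illustrative defaults."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def wait_for_operation(client, operation, timeout_s=600, poll_s=15):\n",
        "    \"\"\"Poll a long-running video generation operation until done or timeout.\"\"\"\n",
        "    deadline = time.monotonic() + timeout_s\n",
        "    while not operation.done:\n",
        "        if time.monotonic() > deadline:\n",
        "            raise TimeoutError(f\"Operation not done after {timeout_s} seconds\")\n",
        "        time.sleep(poll_s)\n",
        "        operation = client.operations.get(operation)\n",
        "    return operation"
      ]
    },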
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZSPqF01yySYl"
      },
      "source": [
        "#### Scene reference image\n",
        "\n",
        "Now, you'll use just one scenery reference image. You'll then specify different subjects and actions in the video through a text prompt."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "h5Jz6bYKyby5"
      },
      "outputs": [],
      "source": [
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/room.png"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ULTrObxdylip"
      },
      "outputs": [],
      "source": [
        "first_image = \"room.png\"  # @param {type: 'string'}\n",
        "\n",
        "show_images([first_image])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "w7KnIJMYyv66"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "a Corgi walks around in a living room\n",
        "\"\"\"\n",
        "\n",
        "operation = client.models.generate_videos(\n",
        "    model=video_model,\n",
        "    prompt=prompt,\n",
        "    config=types.GenerateVideosConfig(\n",
        "        reference_images=[\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=first_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "        ],\n",
        "        aspect_ratio=\"16:9\",\n",
        "        number_of_videos=1,\n",
        "        duration_seconds=8,\n",
        "        resolution=\"720p\",\n",
        "        person_generation=\"allow_adult\",\n",
        "        enhance_prompt=True,\n",
        "    ),\n",
        ")\n",
        "\n",
        "while not operation.done:\n",
        "    time.sleep(15)\n",
        "    operation = client.operations.get(operation)\n",
        "    print(operation)\n",
        "\n",
        "if operation.response:\n",
        "    show_video(operation.result.generated_videos[0].video.video_bytes)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FOqUuJRGoQtO"
      },
      "source": [
        "#### Three distinct reference images\n",
        "\n",
        "In this next example, you'll use three different reference images in the request: a product, a subject, and a scene."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5m1xhMENZrYx"
      },
      "outputs": [],
      "source": [
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/flowers.png\n",
        "\n",
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/suitcase.png\n",
        "\n",
        "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/woman.jpg"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "g-7Yj7e_aWEz"
      },
      "outputs": [],
      "source": [
        "first_image = \"flowers.png\"  # @param {type: 'string'}\n",
        "second_image = \"suitcase.png\"  # @param {type: 'string'}\n",
        "third_image = \"woman.jpg\"  # @param {type: 'string'}\n",
        "\n",
        "show_images([first_image, second_image, third_image])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0Jq-yKddAFJw"
      },
      "outputs": [],
      "source": [
        "prompt = \"a wide shot of a woman wheeling a blue suitcase through a flower field\"  # @param {type: 'string'}\n",
        "\n",
        "operation = client.models.generate_videos(\n",
        "    model=video_model,\n",
        "    prompt=prompt,\n",
        "    config=types.GenerateVideosConfig(\n",
        "        reference_images=[\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=first_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=second_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image.from_file(location=third_image),\n",
        "                reference_type=\"asset\",\n",
        "            ),\n",
        "        ],\n",
        "        aspect_ratio=\"16:9\",\n",
        "        number_of_videos=1,\n",
        "        duration_seconds=8,\n",
        "        resolution=\"720p\",\n",
        "        person_generation=\"allow_adult\",\n",
        "        enhance_prompt=True,\n",
        "    ),\n",
        ")\n",
        "\n",
        "while not operation.done:\n",
        "    time.sleep(15)\n",
        "    operation = client.operations.get(operation)\n",
        "    print(operation)\n",
        "\n",
        "if operation.response:\n",
        "    show_video(operation.result.generated_videos[0].video.video_bytes)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "B9t9UjSHN5rN"
      },
      "source": [
        "### Style references\n",
        "\n",
        "In this next example, you'll use a `style` image stored in Google Cloud Storage. If you'd like to use a different Cloud Storage image, replace the image URL and the `style_image_gcs` URI below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Y-F3QcvqOD5e"
      },
      "outputs": [],
      "source": [
        "style_image = PIL_Image.open(\n",
        "    urllib.request.urlopen(\n",
        "        \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/clay.jpg\"\n",
        "    )\n",
        ")\n",
        "style_image_gcs = \"gs://cloud-samples-data/generative-ai/image/clay.jpg\"\n",
        "\n",
        "# Display the image\n",
        "fig, axis = plt.subplots(figsize=(9, 6))\n",
        "axis.imshow(style_image)\n",
        "axis.axis(\"off\")\n",
        "plt.show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XLv-8e-MC-mF"
      },
      "source": [
        "Send the request with the `style` image. As in the previous requests, you'll include a prompt, this time alongside your **one** style image.\n",
        "\n",
        "Rather than outputting video bytes in this section, you'll save your video to Cloud Storage. To do this, set your Cloud Storage bucket path in `output_gcs`.\n",
        "\n",
        "**Safety:** All Veo videos include [SynthID](https://deepmind.google/science/synthid/), which embeds a digital watermark directly into the AI-generated video."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4PBCIZx60Arq"
      },
      "outputs": [],
      "source": [
        "prompt = \"a person working at a bakery\"  # @param {type: 'string'}\n",
        "output_gcs = \"gs://[your-bucket-path]\"  # @param {type: 'string'}\n",
        "\n",
        "operation = client.models.generate_videos(\n",
        "    model=video_model,\n",
        "    prompt=prompt,\n",
        "    config=types.GenerateVideosConfig(\n",
        "        reference_images=[\n",
        "            types.VideoGenerationReferenceImage(\n",
        "                image=types.Image(gcs_uri=style_image_gcs, mime_type=\"image/jpeg\"),\n",
        "                reference_type=\"style\",\n",
        "            )\n",
        "        ],\n",
        "        output_gcs_uri=output_gcs,\n",
        "        aspect_ratio=\"16:9\",\n",
        "        number_of_videos=1,\n",
        "        duration_seconds=8,\n",
        "        resolution=\"720p\",\n",
        "        person_generation=\"allow_adult\",\n",
        "        enhance_prompt=True,\n",
        "    ),\n",
        ")\n",
        "\n",
        "while not operation.done:\n",
        "    time.sleep(15)\n",
        "    operation = client.operations.get(operation)\n",
        "    print(operation)\n",
        "\n",
        "if operation.response:\n",
        "    show_video(operation.result.generated_videos[0].video.uri)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "veo2_reference_to_video.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
