{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ikIep-HBcvvC"
   },
   "outputs": [],
   "source": [
    "# Copyright 2025 Google LLC\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Qw6ttkOtrQ_D"
   },
   "source": [
    "# Gemini 3 Pro Image (Nano Banana Pro 🍌) Generation on Vertex AI\n",
    "\n",
    "<table align=\"left\">\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_3_image_gen.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fgetting-started%2Fintro_gemini_3_image_gen.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/getting-started/intro_gemini_3_image_gen.ipynb\">\n",
    "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_3_image_gen.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
    "    </a>\n",
    "  </td>\n",
    "</table>\n",
    "\n",
    "<div style=\"clear: both;\"></div>\n",
    "\n",
    "<b>Share to:</b>\n",
    "\n",
    "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_3_image_gen.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_3_image_gen.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_3_image_gen.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_3_image_gen.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_3_image_gen.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
    "</a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "uDN8B4CBdMNs"
   },
   "source": [
    "| Author |\n",
    "| --- |\n",
    "| [Katie Nguyen](https://github.com/katiemn) |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "b0e4d036833c"
   },
   "source": [
    "## Overview\n",
    "\n",
     "This notebook shows you how to use the Nano Banana Pro image model, a powerful generalist multimodal model that offers state-of-the-art image generation and conversational image editing. Nano Banana Pro can also show its work, letting you see the 'thought process' behind the generated output.\n",
    "\n",
    "In this tutorial, you'll learn how to use the model in Vertex AI using the Google Gen AI SDK to try out the following scenarios:\n",
    "\n",
    "- Image generation:\n",
    "  - Text-to-image generation\n",
    "  - Model thoughts\n",
    "  - Grounding with search\n",
    "  - Image sizes\n",
    "- Image editing:\n",
    "  - Localization\n",
    "  - Multi-turn image editing (chat)\n",
    "  - Editing with multiple reference images\n",
    "\n",
     "**NOTE:** Expect higher latency when using this model compared to Gemini 2.5 Flash Image (Nano Banana) due to its more advanced capabilities."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Mfk6YY3G5kqp"
   },
   "source": [
    "## Get started"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "uJqUEH_mg6kb"
   },
   "source": [
    "### Install Google Gen AI SDK for Python\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {
    "id": "-VBT2jIXLD7h"
   },
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet google-genai"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "eLIrxLFihSoE"
   },
   "source": [
    "### Authenticate your notebook environment (Colab only)\n",
    "\n",
    "If you are running this notebook on Google Colab, run the following cell to authenticate your environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {
    "id": "hP-_lnBZhUjZ"
   },
   "outputs": [],
   "source": [
    "import sys\n",
    "\n",
    "if \"google.colab\" in sys.modules:\n",
    "    from google.colab import auth\n",
    "\n",
    "    auth.authenticate_user()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "oukaeL9Thgy4"
   },
   "source": [
    "### Import libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {
    "id": "227VoQtmhjRa"
   },
   "outputs": [],
   "source": [
     "import warnings\n",
     "\n",
     "from IPython.display import Image, Markdown, display\n",
     "from google import genai\n",
     "from google.genai import types\n",
     "\n",
     "warnings.filterwarnings(\"ignore\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "VdO2n52RhwBG"
   },
   "source": [
    "### Set Google Cloud project information and create client\n",
    "\n",
    "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
    "\n",
    "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {
    "id": "lpI4Mo0phyq8"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
    "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
    "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
    "\n",
    "LOCATION = \"global\"\n",
    "\n",
    "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "QOov6dpG99rY"
   },
   "source": [
    "### Load the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {
    "id": "27Fikag0xSaB"
   },
   "outputs": [],
   "source": [
    "MODEL_ID = \"gemini-3-pro-image-preview\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "xuHBu3aRiYYv"
   },
   "source": [
    "## Image generation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "M2i8O36nTHI1"
   },
   "source": [
    "### Text-to-image\n",
    "\n",
    "In the cell below, you'll call the `generate_content` method and modify the following arguments:\n",
    "\n",
     "  - `prompt`: A text-only user message describing the image to be generated.\n",
    "  - `config`: A config for specifying content settings.\n",
    "    - `response_modalities`: To generate an image, you must include `IMAGE` in the `response_modalities` list. To get both text and images, specify `IMAGE` and `TEXT`.\n",
    "    - `ImageConfig`: Set the `aspect_ratio`. Valid ratios are: 1:1, 3:2, 2:3, 3:4, 4:3, 4:5, 5:4, 9:16, 16:9, 21:9\n",
    "\n",
    "All generated images include a [SynthID watermark](https://deepmind.google/technologies/synthid/), which can be verified via the Media Studio in [Vertex AI Studio](https://cloud.google.com/generative-ai-studio?hl=en)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "NZsZMcA-iPSj"
   },
   "outputs": [],
   "source": [
    "prompt = \"\"\"\n",
    "Generate an infographic of a seasonal produce guide. Include the months and a fun category name for each season as well as detailed illustrations of the produce.\n",
    "\"\"\"\n",
    "response = client.models.generate_content(\n",
    "    model=MODEL_ID,\n",
    "    contents=prompt,\n",
    "    config=types.GenerateContentConfig(\n",
    "        response_modalities=['IMAGE', 'TEXT'],\n",
    "        image_config=types.ImageConfig(\n",
    "            aspect_ratio=\"16:9\",\n",
    "        ),\n",
    "    ),\n",
    ")\n",
    "\n",
    "# Check for errors if an image is not generated\n",
    "if response.candidates[0].finish_reason != types.FinishReason.STOP:\n",
    "    reason = response.candidates[0].finish_reason\n",
    "    raise ValueError(f\"Prompt Content Error: {reason}\")\n",
    "\n",
    "for part in response.candidates[0].content.parts:\n",
     "    if part.thought:\n",
     "        continue  # Skip displaying thoughts\n",
    "    if part.inline_data:\n",
    "        display(Image(data=part.inline_data.data, width=1000))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "mbpegA7YhkxI"
   },
   "source": [
    "### See the thoughts\n",
    "\n",
     "Because this is a thinking model, you can inspect the thoughts that led to the generated image."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "KzKLlFCYhzov"
   },
   "outputs": [],
   "source": [
     "for part in response.parts:\n",
     "    if part.thought:\n",
     "        if part.text:\n",
     "            display(Markdown(part.text))\n",
     "        elif part.inline_data:\n",
     "            display(Image(data=part.inline_data.data, width=500))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Ys_8nuIENJP9"
   },
   "source": [
    "### Grounding with search results\n",
    "\n",
     "With this model, you can also generate responses that are grounded in Google Search results. Note that the model is grounded only on text results from Google Search, not on images.\n",
    "\n",
    "To display the grounding data, use the helper function in the following cell."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {
    "id": "6yQDpb-zTl3H"
   },
   "outputs": [],
   "source": [
    "def print_grounding_data(response: types.GenerateContentResponse) -> None:\n",
    "    \"\"\"Prints Gemini response with grounding citations in Markdown format.\"\"\"\n",
    "    grounding_metadata = response.candidates[0].grounding_metadata\n",
    "    lines = []\n",
    "\n",
    "    if response.text:\n",
    "        # Citation indexes are in bytes\n",
    "        ENCODING = \"utf-8\"\n",
    "        text_bytes = response.text.encode(ENCODING)\n",
    "        last_byte_index = 0\n",
    "\n",
    "        if grounding_metadata.grounding_supports:\n",
    "            for support in grounding_metadata.grounding_supports:\n",
    "                lines.append(\n",
    "                    text_bytes[last_byte_index : support.segment.end_index].decode(ENCODING)\n",
    "                )\n",
    "\n",
    "                # Generate and append citation footnotes (e.g., \"[1][2]\")\n",
    "                footnotes = \"\".join([f\"[{i + 1}]\" for i in support.grounding_chunk_indices])\n",
    "                lines.append(f\" {footnotes}\")\n",
    "\n",
    "                # Update index for the next segment\n",
    "                last_byte_index = support.segment.end_index\n",
    "\n",
    "        # Append any remaining text after the last citation\n",
    "        if last_byte_index < len(text_bytes):\n",
    "            lines.append(text_bytes[last_byte_index:].decode(ENCODING))\n",
    "\n",
    "    lines.append(\"\\n\\n----\\n## Grounding Sources\\n\")\n",
    "\n",
    "    if grounding_metadata.grounding_chunks:\n",
    "        # Build Grounding Sources Section\n",
    "        lines.append(\"### Grounding Chunks\\n\")\n",
    "        for i, chunk in enumerate(grounding_metadata.grounding_chunks, start=1):\n",
    "            context = chunk.web or chunk.retrieved_context or chunk.maps\n",
    "            if not context:\n",
    "                continue\n",
    "\n",
    "            uri = context.uri\n",
    "            title = context.title or \"Source\"\n",
    "\n",
    "            # Convert GCS URIs to public HTTPS URLs\n",
    "            if uri:\n",
    "                uri = uri.replace(\" \", \"%20\")\n",
    "                if uri.startswith(\"gs://\"):\n",
    "                    uri = uri.replace(\n",
    "                        \"gs://\", \"https://storage.googleapis.com/\", 1\n",
    "                    )\n",
    "\n",
    "            lines.append(f\"{i}. [{title}]({uri})\\n\")\n",
    "            if hasattr(context, \"place_id\") and context.place_id:\n",
    "                lines.append(f\"    - Place ID: `{context.place_id}`\\n\\n\")\n",
    "            if hasattr(context, \"text\") and context.text:\n",
    "                lines.append(f\"{context.text}\\n\\n\")\n",
    "\n",
    "    # Add Search/Retrieval Queries\n",
    "    if grounding_metadata.web_search_queries:\n",
    "        lines.append(\n",
    "            f\"\\n**Web Search Queries:** {grounding_metadata.web_search_queries}\\n\"\n",
    "        )\n",
    "        if grounding_metadata.search_entry_point:\n",
    "            lines.append(\n",
    "                f\"\\n**Search Entry Point:**\\n{grounding_metadata.search_entry_point.rendered_content}\\n\"\n",
    "            )\n",
    "    elif grounding_metadata.retrieval_queries:\n",
    "        lines.append(\n",
    "            f\"\\n**Retrieval Queries:** {grounding_metadata.retrieval_queries}\\n\"\n",
    "        )\n",
    "\n",
    "    display(Markdown(\"\".join(lines)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Jj_HlbJpv_zx"
   },
   "source": [
    "Next, you'll create a Google Search tool and include it in the `tools` parameter of the following request."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "yWHgXTHldU_N"
   },
   "outputs": [],
   "source": [
    "prompt = \"\"\"\n",
    "Search for and visualize the current weather forecast for the next 5 days in San Francisco in a clean, modern weather chart. Add a visual of what I could wear each day.\n",
    "\"\"\"\n",
    "google_search = types.Tool(google_search=types.GoogleSearch())\n",
    "\n",
    "response = client.models.generate_content(\n",
    "    model=MODEL_ID,\n",
    "    contents=prompt,\n",
    "    config=types.GenerateContentConfig(\n",
    "        response_modalities=['TEXT', 'IMAGE'],\n",
    "        image_config=types.ImageConfig(\n",
    "            aspect_ratio=\"21:9\",\n",
    "        ),\n",
    "        tools=[google_search],\n",
    "    )\n",
    ")\n",
    "\n",
     "for part in response.parts:\n",
     "    if part.thought:\n",
     "        continue  # Skip displaying thoughts\n",
     "    if part.text:\n",
     "        display(Markdown(part.text))\n",
     "    elif part.inline_data:\n",
     "        display(Image(data=part.inline_data.data, width=500))\n",
    "\n",
    "print_grounding_data(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5Zk2GXsoaS1u"
   },
   "source": [
    "### Image sizes\n",
    "\n",
    "Nano Banana Pro supports the following image sizes: `1K`, `2K`, or `4K`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "rHQvvEkDaehC"
   },
   "outputs": [],
   "source": [
    "prompt = \"\"\"\n",
    "Generate a close up headshot of a person.\n",
    "\"\"\"\n",
    "\n",
    "response = client.models.generate_content(\n",
    "    model=MODEL_ID,\n",
    "    contents=prompt,\n",
    "    config=types.GenerateContentConfig(\n",
    "        response_modalities=['TEXT', 'IMAGE'],\n",
    "        image_config=types.ImageConfig(\n",
    "            aspect_ratio=\"1:1\",\n",
    "            image_size=\"2K\",\n",
    "        ),\n",
    "    )\n",
    ")\n",
    "\n",
    "for part in response.candidates[0].content.parts:\n",
    "    if part.text:\n",
    "        display(Markdown(part.text))\n",
    "    if part.inline_data:\n",
    "        display(Image(data=part.inline_data.data, width=500))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5nlhVQCuT6DS"
   },
   "source": [
    "## Image editing\n",
    "\n",
     "You can also edit images with this model; simply pass the original image as part of the prompt."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Y9FuL02EdvuN"
   },
   "source": [
    "### Localization\n",
    "\n",
    "You can also translate the text in images through image editing. Start by downloading the image and displaying it below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Jp0X8wEjhjLd"
   },
   "outputs": [],
   "source": [
    "!wget https://storage.googleapis.com/cloud-samples-data/generative-ai/image/flying-sneakers.png\n",
    "\n",
    "starting_image = \"flying-sneakers.png\"\n",
    "display(Image(filename=starting_image, width=500))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "HjIBLjr-il1y"
   },
   "outputs": [],
   "source": [
    "with open(starting_image, \"rb\") as f:\n",
    "    image = f.read()\n",
    "\n",
    "response = client.models.generate_content(\n",
    "    model=MODEL_ID,\n",
    "    contents=[\n",
    "        types.Part.from_bytes(\n",
    "            data=image,\n",
    "            mime_type=\"image/png\",\n",
    "        ),\n",
    "        \"Change the text in this infographic from English to Spanish.\",\n",
    "    ],\n",
    "    config=types.GenerateContentConfig(\n",
    "        response_modalities=['TEXT', 'IMAGE'],\n",
    "        image_config=types.ImageConfig(\n",
    "            image_size=\"1K\",\n",
    "        ),\n",
    "    )\n",
    ")\n",
    "\n",
    "for part in response.candidates[0].content.parts:\n",
    "    if part.text:\n",
    "        display(Markdown(part.text))\n",
    "    if part.inline_data:\n",
    "        display(Image(data=part.inline_data.data, width=500))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0sIquv-1lAzn"
   },
   "source": [
    "### Multi-turn image editing (chat)\n",
    "\n",
    "In this next section, you'll generate a starting image and iteratively alter certain aspects of the image by chatting with the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "05m25YRrB9Wg"
   },
   "outputs": [],
   "source": [
    "chat = client.chats.create(\n",
    "    model=MODEL_ID,\n",
    "    config=types.GenerateContentConfig(\n",
    "        response_modalities=['TEXT', 'IMAGE']\n",
    "    )\n",
    ")\n",
    "\n",
    "message = \"Create an image of a clear perfume bottle sitting on a vanity.\"\n",
    "response = chat.send_message(message)\n",
    "\n",
    "# Save the image data to pass in the next chat message\n",
    "data = b''\n",
    "for part in response.candidates[0].content.parts:\n",
    "    if part.text:\n",
    "        display(Markdown(part.text))\n",
    "    if part.inline_data:\n",
    "        data = part.inline_data.data\n",
    "        display(Image(data=data, width=500))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "mggna_5UPTsu"
   },
   "source": [
    "Now, you'll include the previous image data in a new message in the existing chat, along with a new text prompt, to update the previously generated image."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "bMp_cFHplh-Z"
   },
   "outputs": [],
   "source": [
    "response = chat.send_message(\n",
    "    message=[\n",
    "        types.Part.from_bytes(\n",
    "            data=data,\n",
    "            mime_type=\"image/png\",\n",
    "        ),\n",
    "        \"Make the perfume bottle purple and add a vase of hydrangeas next to the bottle.\",\n",
    "    ],\n",
    ")\n",
    "\n",
    "for part in response.candidates[0].content.parts:\n",
    "    if part.text:\n",
    "        display(Markdown(part.text))\n",
    "    if part.inline_data:\n",
    "        display(Image(data=part.inline_data.data, width=500))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "hQ_uiJOY5Sy9"
   },
   "source": [
    "### Multiple reference images\n",
    "\n",
    "With Nano Banana Pro, you can include multiple reference images in a request to generate a new image that preserves the content of the original images.\n",
    "\n",
    "Run the following cell to visualize the starting images stored in Cloud Storage."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "cUsResOwmBuS"
   },
   "outputs": [],
   "source": [
     "from io import BytesIO\n",
     "\n",
     "import matplotlib.pyplot as plt\n",
     "import requests\n",
     "from PIL import Image as PIL_Image\n",
    "\n",
    "image_urls = [\n",
    "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/woman.jpg\",\n",
    "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/suitcase.png\",\n",
    "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/armchair.png\",\n",
    "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/man-in-field.png\",\n",
    "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/shoes.jpg\",\n",
    "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/living-room.png\",\n",
    "]\n",
    "\n",
    "fig, axes = plt.subplots(2, 3, figsize=(12, 8))\n",
    "for i, ax in enumerate(axes.flatten()):\n",
    "    ax.imshow(PIL_Image.open(BytesIO(requests.get(image_urls[i]).content)))\n",
    "    ax.axis(\"off\")\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ErSjXfZ2qg7F"
   },
   "source": [
    "The process for sending the request is similar to previous image editing calls. The main difference is that you will provide multiple `Part.from_uri` instances, one for each reference image."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_93g7aAeoyNP"
   },
   "outputs": [],
   "source": [
    "response = client.models.generate_content(\n",
    "    model=MODEL_ID,\n",
    "    contents=[\n",
    "        types.Part.from_uri(\n",
    "            file_uri=\"gs://cloud-samples-data/generative-ai/image/woman.jpg\",\n",
    "            mime_type=\"image/jpeg\",\n",
    "        ),\n",
    "        types.Part.from_uri(\n",
    "            file_uri=\"gs://cloud-samples-data/generative-ai/image/suitcase.png\",\n",
    "            mime_type=\"image/png\",\n",
    "        ),\n",
    "        types.Part.from_uri(\n",
    "            file_uri=\"gs://cloud-samples-data/generative-ai/image/armchair.png\",\n",
    "            mime_type=\"image/png\",\n",
    "        ),\n",
    "        types.Part.from_uri(\n",
    "            file_uri=\"gs://cloud-samples-data/generative-ai/image/man-in-field.png\",\n",
    "            mime_type=\"image/png\",\n",
    "        ),\n",
    "        types.Part.from_uri(\n",
    "            file_uri=\"gs://cloud-samples-data/generative-ai/image/shoes.jpg\",\n",
    "            mime_type=\"image/jpeg\",\n",
    "        ),\n",
    "        types.Part.from_uri(\n",
    "            file_uri=\"gs://cloud-samples-data/generative-ai/image/living-room.png\",\n",
    "            mime_type=\"image/png\",\n",
    "        ),\n",
    "        \"Generate an image of a woman sitting in a living room with a man, both wearing sneakers. The woman is sitting in a white armchair with a blue suitcase next to her.\",\n",
    "    ],\n",
    "    config=types.GenerateContentConfig(\n",
    "        response_modalities=[\"TEXT\", \"IMAGE\"],\n",
    "        image_config=types.ImageConfig(\n",
    "            aspect_ratio=\"16:9\",\n",
    "        ),\n",
    "    ),\n",
    ")\n",
    "\n",
    "\n",
    "for part in response.candidates[0].content.parts:\n",
    "    if part.text:\n",
    "        display(Markdown(part.text))\n",
    "    if part.inline_data:\n",
    "        display(Image(data=part.inline_data.data, width=500))"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
