{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ijGzTHJJUCPY"
      },
      "outputs": [],
      "source": [
        "# Copyright 2023 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VEqbX8OhE8y9"
      },
      "source": [
        "# Gemini: An Overview of Multimodal Use Cases\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/intro_multimodal_use_cases.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Run in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fuse-cases%2Fintro_multimodal_use_cases.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Run in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/use-cases/intro_multimodal_use_cases.ipynb\">\n",
        "      <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/intro_multimodal_use_cases.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://goo.gle/3DUssjz\">\n",
        "      <img width=\"32px\" src=\"https://cdn.qwiklabs.com/assets/gcp_cloud-e3a77215f0b8bfa9b3f611c0d2208c7e8708ed31.svg\" alt=\"Google Cloud logo\"><br> Open in Cloud Skills Boost\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/intro_multimodal_use_cases.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/intro_multimodal_use_cases.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/intro_multimodal_use_cases.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/intro_multimodal_use_cases.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/intro_multimodal_use_cases.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8HKLOuOlJutv"
      },
      "source": [
        "| Authors |\n",
        "| --- |\n",
        "| [Katie Nguyen](https://github.com/katiemn) |\n",
        "| [Saeed Aghabozorgi](https://github.com/saeedaghabozorgi) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VK1Q5ZYdVL4Y"
      },
      "source": [
        "## Overview\n",
        "\n",
        "**YouTube Video: Multimodal AI in action**\n",
        "\n",
        "<a href=\"https://www.youtube.com/watch?v=pEmCgIGpIoo&list=PLIivdWyY5sqJio2yeg1dlfILOUO2FoFRx\" target=\"_blank\">\n",
        "  <img src=\"https://img.youtube.com/vi/pEmCgIGpIoo/maxresdefault.jpg\" alt=\"Multimodal AI in action\" width=\"500\">\n",
        "</a>\n",
        "\n",
        "In this notebook, you will explore a variety of different use cases enabled by multimodality with Gemini.\n",
        "\n",
        "Gemini is a family of generative AI models developed by [Google DeepMind](https://deepmind.google/) that is designed for multimodal use cases. [Gemini 3](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/3-pro) is the latest model version.\n",
        "\n",
        "### Objectives\n",
        "\n",
        "This notebook demonstrates a variety of multimodal use cases with Gemini.\n",
        "\n",
        "In this tutorial, you will learn how to use Gemini with the Gen AI SDK for Python to:\n",
        "\n",
        "  - Process and generate text\n",
        "  - Parse and summarize PDF documents\n",
        "  - Reason across multiple images\n",
        "  - Generate a video description\n",
        "  - Combine video data with external knowledge\n",
        "  - Understand audio\n",
        "  - Analyze a codebase\n",
        "  - Combine modalities\n",
        "  - Make recommendations based on user preferences for e-commerce\n",
        "  - Understand charts and diagrams\n",
        "  - Compare images for similarities, anomalies, or differences"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZhsUe0fyc-ER"
      },
      "source": [
        "### Costs\n",
        "\n",
        "This tutorial uses billable components of Google Cloud:\n",
        "\n",
        "- Vertex AI\n",
        "\n",
        "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QDU0XJ1xRDlL"
      },
      "source": [
        "## Getting Started\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "N5afkyDMSBW5"
      },
      "source": [
        "### Install Google Gen AI SDK for Python"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "kc4WxYmLSBW5"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet google-genai gitingest"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6Fom0ZkMSBW6"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you are running this notebook on Google Colab, run the following cell to authenticate your environment. This step is not required if you are using [Vertex AI Workbench](https://cloud.google.com/vertex-ai-workbench).\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "LCaCx6PLSBW6"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "# Additional authentication is required for Google Colab\n",
        "if \"google.colab\" in sys.modules:\n",
        "    # Authenticate user to Google Cloud\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QGB8Txa_e4V0"
      },
      "source": [
        "### Set Google Cloud project information and create client\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "JGOJHtgDe5-r"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "from google import genai\n",
        "\n",
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "# fmt: on\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"global\")\n",
        "\n",
        "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BuQwwRiniVFG"
      },
      "source": [
        "### Import libraries\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "JTk488WDPBtQ"
      },
      "outputs": [],
      "source": [
        "import nest_asyncio\n",
        "from IPython.display import Audio, Image, Markdown, Video, display\n",
        "from gitingest import ingest\n",
        "from google.genai.types import CreateCachedContentConfig, GenerateContentConfig, Part\n",
        "\n",
        "nest_asyncio.apply()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eTNnM-lqfQRo"
      },
      "source": [
        "### Load Gemini 2.5 Flash model\n",
        "\n",
        "Learn more about all [Gemini models on Vertex AI](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2998506fe6d1"
      },
      "outputs": [],
      "source": [
        "MODEL_ID = \"gemini-2.5-flash\"  # @param {type: \"string\"}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "22c7363baeb0"
      },
      "source": [
        "## Individual Modalities"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a052fcef47ea"
      },
      "source": [
        "### Textual understanding\n",
        "\n",
        "Gemini can parse text questions and retain that context across subsequent prompts."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "577574234d7e"
      },
      "outputs": [],
      "source": [
        "question = \"What is the average weather in Mountain View, CA in the middle of May?\"\n",
        "prompt = \"\"\"\n",
        "Considering the weather, please provide some outfit suggestions.\n",
        "\n",
        "Give examples for the daytime and the evening.\n",
        "\"\"\"\n",
        "\n",
        "contents = [question, prompt]\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7d5586dd92ed"
      },
      "source": [
        "### Document Understanding (Question Answering & Summarization)\n",
        "\n",
        "You can use Gemini to process PDF documents: analyze their content, retain the information, and answer questions about the documents.\n",
        "\n",
        "The PDF document example used here is the [Gemini 2.5 paper](https://arxiv.org/pdf/2507.06261)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "d5af46d4da0c"
      },
      "outputs": [],
      "source": [
        "pdf_file = Part.from_uri(\n",
        "    file_uri=\"https://arxiv.org/pdf/2507.06261\", mime_type=\"application/pdf\"\n",
        ")\n",
        "\n",
        "prompt = \"How many tokens can the model process?\"\n",
        "\n",
        "contents = [pdf_file, prompt]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "25658ef8dcec"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "  You are a professional document summarization specialist.\n",
        "  Please summarize the given document.\n",
        "\"\"\"\n",
        "\n",
        "contents = [pdf_file, prompt]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5OWurhO4mu4J"
      },
      "source": [
        "### Image understanding across multiple images\n",
        "\n",
        "One of the capabilities of Gemini is being able to reason across multiple images.\n",
        "\n",
        "This is an example of using Gemini to reason about which glasses would be more suitable for an oval face shape."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "80048d6d0123"
      },
      "outputs": [],
      "source": [
        "image_glasses1_url = \"https://storage.googleapis.com/github-repo/img/gemini/multimodality_usecases_overview/glasses1.jpg\"\n",
        "image_glasses2_url = \"https://storage.googleapis.com/github-repo/img/gemini/multimodality_usecases_overview/glasses2.jpg\"\n",
        "\n",
        "display(Image(image_glasses1_url, width=150))\n",
        "display(Image(image_glasses2_url, width=150))\n",
        "\n",
        "prompt = \"\"\"\n",
        "I have an oval face. Given my face shape, which glasses would be more suitable?\n",
        "\n",
        "Explain how you reached this decision.\n",
        "Provide your recommendation based on my face shape, and please give an explanation for each.\n",
        "\"\"\"\n",
        "\n",
        "contents = [\n",
        "    prompt,\n",
        "    Part.from_uri(file_uri=image_glasses1_url, mime_type=\"image/jpeg\"),\n",
        "    Part.from_uri(file_uri=image_glasses2_url, mime_type=\"image/jpeg\"),\n",
        "]\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "96b21923035e"
      },
      "source": [
        "### Generating a video description\n",
        "\n",
        "Gemini can generate a description of a video and extract relevant tags from it:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3cef00d36cde"
      },
      "outputs": [],
      "source": [
        "video_url = \"https://storage.googleapis.com/github-repo/img/gemini/multimodality_usecases_overview/mediterraneansea.mp4\"\n",
        "display(Video(video_url, width=350))\n",
        "\n",
        "prompt = \"\"\"\n",
        "What is shown in this video?\n",
        "Where should I go to see it?\n",
        "What are the top 5 places in the world that look like this?\n",
        "Provide the 10 best tags for this video.\n",
        "\"\"\"\n",
        "\n",
        "video = Part.from_uri(\n",
        "    file_uri=video_url,\n",
        "    mime_type=\"video/mp4\",\n",
        ")\n",
        "contents = [prompt, video]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ca8100e88501"
      },
      "source": [
        "> You can confirm that the location is indeed Antalya, Turkey by visiting the Wikipedia page: https://en.wikipedia.org/wiki/Antalya"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2547a8887702"
      },
      "source": [
        "You can also use Gemini to retrieve extra information beyond the video contents."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "c978cf9f8c71"
      },
      "outputs": [],
      "source": [
        "video_url = \"https://storage.googleapis.com/github-repo/img/gemini/multimodality_usecases_overview/ottawatrain3.mp4\"\n",
        "display(Video(video_url, width=350))\n",
        "\n",
        "prompt = \"\"\"\n",
        "Which train line is this?\n",
        "Where does it go?\n",
        "What are the stations/stops?\n",
        "Which river is being crossed?\n",
        "\"\"\"\n",
        "\n",
        "video = Part.from_uri(\n",
        "    file_uri=video_url,\n",
        "    mime_type=\"video/mp4\",\n",
        ")\n",
        "contents = [prompt, video]\n",
        "\n",
        "response = client.models.generate_content(\n",
        "    model=MODEL_ID, contents=contents, config=GenerateContentConfig(temperature=0)\n",
        ")\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8a614b7f5284"
      },
      "source": [
        "> You can confirm that this is indeed the Confederation Line on Wikipedia here: https://en.wikipedia.org/wiki/Confederation_Line"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8fb9a30ff4c5"
      },
      "source": [
        "### Audio understanding\n",
        "\n",
        "Gemini can directly process audio for long-context understanding."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5424dbe4c7e1"
      },
      "outputs": [],
      "source": [
        "audio_url = (\n",
        "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/audio/pixel.mp3\"\n",
        ")\n",
        "display(Audio(audio_url))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9ed4ea9d696f"
      },
      "source": [
        "#### Summarization"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "c889e8db2aca"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "  Please provide a short summary and title for the audio.\n",
        "  Provide chapter titles, be concise and short, no need to provide chapter summaries.\n",
        "  Provide each of the chapter titles in a numbered list.\n",
        "  Do not make up any information that is not part of the audio and do not be verbose.\n",
        "\"\"\"\n",
        "\n",
        "audio_file = Part.from_uri(file_uri=audio_url, mime_type=\"audio/mpeg\")\n",
        "contents = [audio_file, prompt]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e64eb3061613"
      },
      "source": [
        "#### Transcription"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4486a25573a0"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "    Transcribe this interview, in the format of timecode, speaker, caption.\n",
        "    Use speaker A, speaker B, etc. to identify the speakers.\n",
        "    Provide each piece of information on a separate bullet point.\n",
        "\"\"\"\n",
        "\n",
        "audio_file = Part.from_uri(file_uri=audio_url, mime_type=\"audio/mpeg\")\n",
        "contents = [audio_file, prompt]\n",
        "\n",
        "response = client.models.generate_content(\n",
        "    model=MODEL_ID,\n",
        "    contents=contents,\n",
        "    config=GenerateContentConfig(max_output_tokens=8192, audio_timestamp=True),\n",
        ")\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "96c026a7f2ce"
      },
      "source": [
        "### Reason across a codebase\n",
        "\n",
        "You will use the [Online Boutique repository](https://github.com/GoogleCloudPlatform/microservices-demo) as an example in this notebook. Online Boutique is a cloud-first microservices demo application. The application is a web-based e-commerce app where users can browse items, add them to the cart, and purchase them. This application consists of 11 microservices across multiple languages."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ad22725252dc"
      },
      "outputs": [],
      "source": [
        "# The GitHub repository URL\n",
        "# fmt: off\n",
        "repo_url = \"https://github.com/GoogleCloudPlatform/microservices-demo\"  # @param {type:\"string\"}\n",
        "# fmt: on"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1acc008e3d83"
      },
      "source": [
        "#### Create an index and extract the contents of a codebase\n",
        "\n",
        "Clone the repo, then create an index and extract the contents of its code and text files."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7f7e3cce09c7"
      },
      "outputs": [],
      "source": [
        "exclude_patterns = {\n",
        "    \"*.png\",\n",
        "    \"*.jpg\",\n",
        "    \"*.jpeg\",\n",
        "    \"*.gif\",\n",
        "    \"*.svg\",\n",
        "    \"*.ico\",\n",
        "    \"*.webp\",\n",
        "    \"*.jar\",\n",
        "    \".git/\",\n",
        "    \"*.gitkeep\",\n",
        "}\n",
        "_, code_index, code_text = ingest(repo_url, exclude_patterns=exclude_patterns)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9572811ab5b2"
      },
      "source": [
        "#### Create a content cache for the codebase\n",
        "\n",
        "The codebase prompt will be quite large once all of this data is included.\n",
        "Gemini supports [context caching](https://cloud.google.com/vertex-ai/generative-ai/docs/context-cache/context-cache-overview), which lets you store frequently used input tokens in a dedicated cache and reference them in subsequent requests, eliminating the need to repeatedly send the same set of tokens to the model.\n",
        "\n",
        "**Note**: Context caching is only available for the models listed in the [context caching documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/context-cache/context-cache-overview)."
      ]
    },
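    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before creating the cache, you can optionally check how large the codebase content is. The following cell is a minimal sketch using the Gen AI SDK's `count_tokens` method to count the tokens in the extracted codebase text, so you can confirm it meets the minimum size required for context caching."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional: count the tokens in the extracted codebase text before caching it.\n",
        "# This counts tokens without generating any content.\n",
        "token_response = client.models.count_tokens(model=MODEL_ID, contents=code_text)\n",
        "print(f\"Total tokens in codebase text: {token_response.total_tokens}\")"
      ]
    },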
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "daeff13cca4a"
      },
      "outputs": [],
      "source": [
        "prompt = f\"\"\"\n",
        "Context:\n",
        "- The entire codebase is provided below.\n",
        "- Here is an index of all of the files in the codebase:\n",
        "    \\n\\n{code_index}\\n\\n.\n",
        "- Then each of the files is concatenated together. You will find all of the code you need:\n",
        "    \\n\\n{code_text}\\n\\n\n",
        "\"\"\"\n",
        "\n",
        "cached_content = client.caches.create(\n",
        "    model=MODEL_ID,\n",
        "    config=CreateCachedContentConfig(\n",
        "        contents=prompt,\n",
        "        ttl=\"3600s\",\n",
        "    ),\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3d519f3b1763"
      },
      "source": [
        "#### Create a developer getting started guide"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fad43c9b32de"
      },
      "outputs": [],
      "source": [
        "question = \"\"\"\n",
        "  Provide a getting started guide to onboard new developers to the codebase.\n",
        "\"\"\"\n",
        "\n",
        "response = client.models.generate_content(\n",
        "    model=MODEL_ID,\n",
        "    contents=question,\n",
        "    config=GenerateContentConfig(\n",
        "        cached_content=cached_content.name,\n",
        "    ),\n",
        ")\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "442e5eedc6dc"
      },
      "source": [
        "#### Finding bugs in the code"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "95e1def33199"
      },
      "outputs": [],
      "source": [
        "question = \"\"\"\n",
        "    Find the top 3 most severe issues in the codebase.\n",
        "\"\"\"\n",
        "\n",
        "response = client.models.generate_content(\n",
        "    model=MODEL_ID,\n",
        "    contents=question,\n",
        "    config=GenerateContentConfig(\n",
        "        cached_content=cached_content.name,\n",
        "    ),\n",
        ")\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ae2123dfb85d"
      },
      "source": [
        "#### Summarizing the codebase"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "807ab5b69ea8"
      },
      "outputs": [],
      "source": [
        "question = \"\"\"\n",
        "  Give me a summary of this codebase, and tell me the top 3 things that I can learn from it.\n",
        "\"\"\"\n",
        "\n",
        "response = client.models.generate_content(\n",
        "    model=MODEL_ID,\n",
        "    contents=question,\n",
        "    config=GenerateContentConfig(\n",
        "        cached_content=cached_content.name,\n",
        "    ),\n",
        ")\n",
        "display(Markdown(response.text))"
      ]
    },
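    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "#### Delete the content cache\n",
        "\n",
        "The cache expires automatically once its TTL elapses, but you can delete it explicitly when you are done with the codebase questions to avoid further storage charges. This is a minimal sketch using the Gen AI SDK's `caches.delete` method."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Delete the cached codebase content now that you are finished querying it.\n",
        "client.caches.delete(name=cached_content.name)"
      ]
    },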
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "06354c53e1a8"
      },
      "source": [
        "## Combining multiple modalities"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3c00e534189b"
      },
      "source": [
        "### Video and audio understanding\n",
        "\n",
        "Try out Gemini's native multimodal and long-context capabilities on a video that interleaves visual and audio content."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "41b5bb1b04c2"
      },
      "outputs": [],
      "source": [
        "video_url = (\n",
        "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/video/pixel8.mp4\"\n",
        ")\n",
        "display(Video(video_url, width=350))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "a29e43974ca9"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "  Provide a detailed description of the video.\n",
        "  The description should also contain any important dialogue from the video and key features of the phone.\n",
        "\"\"\"\n",
        "\n",
        "video = Part.from_uri(\n",
        "    file_uri=video_url,\n",
        "    mime_type=\"video/mp4\",\n",
        ")\n",
        "contents = [prompt, video]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8b8eec259f62"
      },
      "source": [
        "### All modalities (images, video, audio, text) at once\n",
        "\n",
        "Gemini is natively multimodal and supports interleaving of data from different modalities. It can support a mix of audio, visual, text, and code inputs in the same input sequence."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ff9b20835ada"
      },
      "outputs": [],
      "source": [
        "video_url = \"gs://cloud-samples-data/generative-ai/video/behind_the_scenes_pixel.mp4\"\n",
        "display(Video(video_url.replace(\"gs://\", \"https://storage.googleapis.com/\"), width=350))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ecec6597c63e"
      },
      "outputs": [],
      "source": [
        "image_url = \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/a-man-and-a-dog.png\"\n",
        "display(Image(image_url, width=350))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "a3f2ae2fb517"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "  Look through each frame in the video carefully and answer the questions.\n",
        "  Only base your answers strictly on what information is available in the video attached.\n",
        "  Do not make up any information that is not part of the video and do not be too\n",
        "  verbose, be straightforward.\n",
        "\n",
        "  Questions:\n",
        "  - When is the moment in the image happening in the video? Provide a timestamp.\n",
        "  - What is the context of the moment and what does the narrator say about it?\n",
        "\"\"\"\n",
        "\n",
        "contents = [\n",
        "    prompt,\n",
        "    Part.from_uri(file_uri=video_url, mime_type=\"video/mp4\"),\n",
        "    Part.from_uri(file_uri=image_url, mime_type=\"image/png\"),\n",
        "]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c4de7efa9637"
      },
      "source": [
        "## Use Case: Retail / E-commerce\n",
        "\n",
        "Suppose a customer shows you their living room and wants to find appropriate furniture and choose between four wall art options for the room.\n",
        "\n",
        "How can you use Gemini to help the customer choose the best option?"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aaaac1880330"
      },
      "source": [
        "### Generating open recommendations\n",
        "\n",
        "Using an image of the customer's living room, you can first ask the model to describe the space and then recommend a piece of furniture that would fit in it.\n",
        "\n",
        "Note that in this case the model is free to recommend any furniture, drawing only on its built-in knowledge."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "60ca92c5776c"
      },
      "outputs": [],
      "source": [
        "room_image_url = \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/living-room.png\"\n",
        "display(Image(room_image_url, width=350))\n",
        "\n",
        "room_image = Part.from_uri(file_uri=room_image_url, mime_type=\"image/png\")\n",
        "\n",
        "prompt = \"Describe this room\"\n",
        "contents = [prompt, room_image]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "e28323d5679a"
      },
      "outputs": [],
      "source": [
        "prompt1 = \"Recommend a new piece of furniture for this room\"\n",
        "prompt2 = \"Explain the reason in detail\"\n",
        "contents = [prompt1, room_image, prompt2]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0a4d4c38c8af"
      },
      "source": [
        "### Generating recommendations based on provided images\n",
        "\n",
        "Instead of leaving the recommendation open-ended, you can provide a list of items for the model to choose from. Here, you will load a few wall art images for the Gemini model to evaluate. This is particularly useful for retail companies that want to provide product recommendations based on a customer's current setup."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "434725a7c58f"
      },
      "outputs": [],
      "source": [
        "art_image_urls = [\n",
        "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/room-art-1.png\",\n",
        "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/room-art-2.png\",\n",
        "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/room-art-3.png\",\n",
        "    \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/room-art-4.png\",\n",
        "]\n",
        "\n",
        "md_content = f\"\"\"\n",
        "|Customer photo |\n",
        "|:-----:|\n",
        "| <img src=\"{room_image_url}\" width=\"50%\"> |\n",
        "\n",
        "|Art 1| Art 2 | Art 3 | Art 4 |\n",
        "|:-----:|:----:|:-----:|:----:|\n",
        "| <img src=\"{art_image_urls[0]}\" width=\"60%\">|<img src=\"{art_image_urls[1]}\" width=\"100%\">|<img src=\"{art_image_urls[2]}\" width=\"60%\">|<img src=\"{art_image_urls[3]}\" width=\"60%\">|\n",
        "\"\"\"\n",
        "\n",
        "display(Markdown(md_content))\n",
        "\n",
        "# Load wall art images as Part objects\n",
        "art_images = [\n",
        "    Part.from_uri(file_uri=url, mime_type=\"image/png\") for url in art_image_urls\n",
        "]\n",
        "\n",
        "# To recommend an item from a selection, you need to label each item within the prompt.\n",
        "# That way you give the model a way to reference each image as you pose a question.\n",
        "# Labeling images within your prompt also helps reduce hallucinations and produces better results.\n",
        "prompt = \"\"\"\n",
        "  You are an interior designer.\n",
        "  For each piece of wall art, explain whether it would be appropriate for the style of the room.\n",
        "  Rank the pieces according to how compatible they would be with the room.\n",
        "\"\"\"\n",
        "\n",
        "contents = [\n",
        "    \"Consider the following art pieces:\",\n",
        "    \"art 1:\",\n",
        "    art_images[0],\n",
        "    \"art 2:\",\n",
        "    art_images[1],\n",
        "    \"art 3:\",\n",
        "    art_images[2],\n",
        "    \"art 4:\",\n",
        "    art_images[3],\n",
        "    \"room:\",\n",
        "    room_image,\n",
        "    prompt,\n",
        "]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },
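    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As the number of labeled inputs grows, building the interleaved `contents` list by hand gets error-prone. Below is a minimal sketch of a helper that interleaves labels and parts; the `interleave_labeled` function is illustrative, not part of the Google Gen AI SDK:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def interleave_labeled(labeled_items, trailing):\n",
        "    # Build a contents list of alternating \"label:\" strings and parts,\n",
        "    # followed by any trailing items (such as the instruction prompt).\n",
        "    contents = []\n",
        "    for label, item in labeled_items:\n",
        "        contents.append(f\"{label}:\")\n",
        "        contents.append(item)\n",
        "    contents.extend(trailing)\n",
        "    return contents\n",
        "\n",
        "\n",
        "# Rebuild the same contents list as above programmatically.\n",
        "contents = interleave_labeled(\n",
        "    [(f\"art {i + 1}\", img) for i, img in enumerate(art_images)]\n",
        "    + [(\"room\", room_image)],\n",
        "    [prompt],\n",
        ")"
      ]
    },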
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4437b7608c8e"
      },
      "source": [
        "## Use Case: Entity relationships in technical diagrams\n",
        "\n",
        "Gemini has multimodal capabilities that enable it to understand diagrams and take actionable steps, such as optimization or code generation. This example demonstrates how Gemini can decipher an entity relationship (ER) diagram, understand the relationships between tables, identify requirements for optimization in a specific environment like BigQuery, and even generate corresponding code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "klY4yBEiKmET"
      },
      "outputs": [],
      "source": [
        "image_er_url = \"https://storage.googleapis.com/github-repo/img/gemini/multimodality_usecases_overview/er.png\"\n",
        "display(Image(image_er_url, width=350))\n",
        "\n",
        "prompt = \"Document the entities and relationships in this ER diagram.\"\n",
        "\n",
        "contents = [prompt, Part.from_uri(file_uri=image_er_url, mime_type=\"image/png\")]\n",
        "\n",
        "response = client.models.generate_content(\n",
        "    model=MODEL_ID,\n",
        "    contents=contents,\n",
        ")\n",
        "display(Markdown(response.text))"
      ]
    },
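    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The section above also mentions generating corresponding code. As a follow-up sketch, you could ask the model to turn the documented schema into BigQuery DDL; the prompt wording here is only an example, and results will vary:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "Based on this ER diagram, generate BigQuery DDL (CREATE TABLE statements)\n",
        "for each entity, and suggest partitioning or clustering where it would help.\n",
        "\"\"\"\n",
        "\n",
        "contents = [prompt, Part.from_uri(file_uri=image_er_url, mime_type=\"image/png\")]\n",
        "\n",
        "response = client.models.generate_content(model=MODEL_ID, contents=contents)\n",
        "display(Markdown(response.text))"
      ]
    },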
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZBrdsvIU7Zkf"
      },
      "source": [
        "## Use Case: Similarity/Differences\n",
        "\n",
        "Gemini can compare images and identify similarities or differences between objects.\n",
        "\n",
        "The following example shows two scenes from [Marienplatz in Munich, Germany](https://en.wikipedia.org/wiki/Marienplatz) that are slightly different."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "JUSJduLh8457"
      },
      "outputs": [],
      "source": [
        "image_landmark1_url = \"https://storage.googleapis.com/github-repo/img/gemini/multimodality_usecases_overview/landmark1.jpg\"\n",
        "image_landmark2_url = \"https://storage.googleapis.com/github-repo/img/gemini/multimodality_usecases_overview/landmark2.jpg\"\n",
        "\n",
        "md_content = f\"\"\"\n",
        "| Image 1 | Image 2 |\n",
        "|:-----:|:----:|\n",
        "| <img src=\"{image_landmark1_url}\" width=\"350\"> | <img src=\"{image_landmark2_url}\" width=\"350\"> |\n",
        "\"\"\"\n",
        "\n",
        "display(Markdown(md_content))\n",
        "\n",
        "prompt1 = \"\"\"\n",
        "Consider the following two images:\n",
        "Image 1:\n",
        "\"\"\"\n",
        "prompt2 = \"\"\"\n",
        "Image 2:\n",
        "\"\"\"\n",
        "prompt3 = \"\"\"\n",
        "1. What is shown in Image 1? Where is it?\n",
        "2. What is similar between the two images?\n",
        "3. What is different between Image 1 and Image 2 in terms of the contents or people shown?\n",
        "\"\"\"\n",
        "\n",
        "contents = [\n",
        "    prompt1,\n",
        "    Part.from_uri(file_uri=image_landmark1_url, mime_type=\"image/jpeg\"),\n",
        "    prompt2,\n",
        "    Part.from_uri(file_uri=image_landmark2_url, mime_type=\"image/jpeg\"),\n",
        "    prompt3,\n",
        "]\n",
        "\n",
        "config = GenerateContentConfig(\n",
        "    temperature=0.0,\n",
        "    top_p=0.8,\n",
        "    top_k=40,\n",
        "    candidate_count=1,\n",
        "    max_output_tokens=2048,\n",
        ")\n",
        "\n",
        "response = client.models.generate_content(\n",
        "    model=MODEL_ID,\n",
        "    contents=contents,\n",
        "    config=config,\n",
        ")\n",
        "display(Markdown(response.text))"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "intro_multimodal_use_cases.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
