{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Tce3stUlHN0L"
      },
      "source": [
        "##### Copyright 2024 Google LLC."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "cellView": "form",
        "id": "tuOe1ymfHZPu"
      },
      "outputs": [],
      "source": [
        "# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dfsDR_omdNea"
      },
      "source": [
        "# Integrating PaliGemma with Mesop\n",
        "This notebook demonstrates how to use [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) models with [Mesop](https://google.github.io/mesop/) to create a simple GUI application.\n",
        "<table align=\"left\">\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/PaliGemma/[PaliGemma_1]Using_with_Mesop.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
        "  </td>\n",
        "</table>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FaqZItBdeokU"
      },
      "source": [
        "## Setup\n",
        "\n",
        "### Select the Colab runtime\n",
        "To complete this tutorial, you'll need a Colab runtime with sufficient resources to run the PaliGemma model. In this case, you can use a T4 GPU:\n",
        "\n",
        "1. In the upper-right of the Colab window, select **▾ (Additional connection options)**.\n",
        "2. Select **Change runtime type**.\n",
        "3. Under **Hardware accelerator**, select **T4 GPU**.\n",
        "\n",
        "### Gemma setup\n",
        "\n",
        "To complete this tutorial, you'll first need to complete the setup instructions at [Gemma setup](https://ai.google.dev/gemma/docs/setup). The Gemma setup instructions show you how to do the following:\n",
        "\n",
        "* Get access to Gemma on kaggle.com.\n",
        "* Select a Colab runtime with sufficient resources to run\n",
        "  the PaliGemma 3B model.\n",
        "* Generate and configure a Kaggle username and an API key as Colab secrets.\n",
        "\n",
        "After you've completed the Gemma setup, move on to the next section, where you'll set environment variables for your Colab environment.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CY2kGtsyYpHF"
      },
      "source": [
        "### Configure your credentials\n",
        "\n",
        "Add your Kaggle credentials to the Colab Secrets manager to store them securely.\n",
        "\n",
        "1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
        "2. Create new secrets: `KAGGLE_USERNAME` and `KAGGLE_KEY`.\n",
        "3. Copy/paste your username into `KAGGLE_USERNAME`.\n",
        "4. Copy/paste your key into `KAGGLE_KEY`.\n",
        "5. Toggle the buttons on the left to allow notebook access to the secrets.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "A9sUQ4WrP-Yr"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "from google.colab import userdata\n",
        "\n",
        "# Note: `userdata.get` is a Colab API. If you're not using Colab, set the env\n",
        "# vars as appropriate for your system.\n",
        "os.environ[\"KAGGLE_USERNAME\"] = userdata.get(\"KAGGLE_USERNAME\")\n",
        "os.environ[\"KAGGLE_KEY\"] = userdata.get(\"KAGGLE_KEY\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iwjo5_Uucxkw"
      },
      "source": [
        "### Install dependencies\n",
        "Run the cell below to install all the required dependencies."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "id": "r_nXPEsF7UWQ"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.1/1.1 MB\u001b[0m \u001b[31m5.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m571.8/571.8 kB\u001b[0m \u001b[31m37.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m347.7/347.7 kB\u001b[0m \u001b[31m26.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.2/5.2 MB\u001b[0m \u001b[31m46.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m590.6/590.6 MB\u001b[0m \u001b[31m2.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.3/5.3 MB\u001b[0m \u001b[31m69.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.2/2.2 MB\u001b[0m \u001b[31m73.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.5/5.5 MB\u001b[0m \u001b[31m69.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
            "tf-keras 2.15.1 requires tensorflow<2.16,>=2.15, but you have tensorflow 2.16.2 which is incompatible.\u001b[0m\u001b[31m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.9/77.9 kB\u001b[0m \u001b[31m1.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m43.2/43.2 kB\u001b[0m \u001b[31m3.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Building wheel for ml_collections (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
          ]
        }
      ],
      "source": [
        "!pip install -q -U keras keras-nlp\n",
        "!pip install -q overrides ml_collections \"einops~=0.7\" sentencepiece"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pOAEiJmnBE0D"
      },
      "source": [
        "## Exploring prompting capabilities"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HdcJ0WgI_tb7"
      },
      "source": [
        "### PaliGemma"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DfMHtnStiIh-"
      },
      "source": [
        "PaliGemma is a lightweight open vision-language model (VLM) inspired by PaLI-3, and based on open components like the SigLIP vision model and the Gemma language model. PaliGemma takes both images and text as inputs and can answer questions about images with detail and context, meaning that PaliGemma can perform deeper analysis of images and provide useful insights, such as captioning for images and short videos, object detection, and reading text embedded within images.\n",
        "\n",
        "PaliGemma supports the following task prompts:\n",
        "\n",
        "* `cap {lang}\\n`: Very raw short caption (from WebLI-alt)\n",
        "* `caption {lang}\\n`: Nice, COCO-like short captions\n",
        "* `describe {lang}\\n`: Somewhat longer, more descriptive captions\n",
        "* `ocr`: Optical character recognition\n",
        "* `answer en {question}\\n`: Question answering about the image contents\n",
        "* `question {lang} {answer}\\n`: Question generation for a given answer\n",
        "* `detect {object} ; {object}\\n`: Count objects in a scene and return the bounding boxes for the objects\n",
        "* `segment {object}\\n`: Do image segmentation of the object in the scene"
      ]
    },
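    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "These task prefixes are plain strings: the task keyword, any arguments filled in, and a trailing newline. As a quick sketch (the object name and question below are illustrative values only):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Assembling a few example task prompts (illustrative values only).\n",
        "vqa_prompt = \"answer en what bird is this?\\n\"\n",
        "detect_prompt = \"detect bird\\n\"\n",
        "segment_prompt = \"segment bird\\n\"\n",
        "print([vqa_prompt, detect_prompt, segment_prompt])"
      ]
    },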
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "twvbOLey_3tW"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "import sys\n",
        "import keras\n",
        "import keras_nlp\n",
        "\n",
        "keras.config.set_floatx(\"bfloat16\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "id": "IuQlLU09_qb3"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/model.safetensors...\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/model.safetensors.index.json...\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/metadata.json...\n",
            "100%|██████████| 143/143 [00:00<00:00, 112kB/s]\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/task.json...\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/config.json...\n",
            "100%|██████████| 861/861 [00:00<00:00, 1.19MB/s]\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/model.safetensors...\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/model.safetensors.index.json...\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/model.weights.h5...\n",
            "100%|██████████| 5.45G/5.45G [01:15<00:00, 77.7MB/s]\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/model.safetensors...\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/model.safetensors.index.json...\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/preprocessor.json...\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/tokenizer.json...\n",
            "100%|██████████| 410/410 [00:00<00:00, 471kB/s]\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/paligemma/keras/pali_gemma_3b_mix_224/1/download/assets/tokenizer/vocabulary.spm...\n",
            "100%|██████████| 4.07M/4.07M [00:00<00:00, 15.8MB/s]\n"
          ]
        }
      ],
      "source": [
        "# Load PaliGemma\n",
        "paligemma = keras_nlp.models.PaliGemmaCausalLM.from_preset(\"pali_gemma_3b_mix_224\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "ewNkh5lE-UAt"
      },
      "outputs": [],
      "source": [
        "if not os.path.exists(\"big_vision_repo\"):\n",
        "  !git clone --quiet --branch=main --depth=1 \\\n",
        "     https://github.com/google-research/big_vision big_vision_repo\n",
        "\n",
        "# Append big_vision code to python import path\n",
        "if \"big_vision_repo\" not in sys.path:\n",
        "  sys.path.append(\"big_vision_repo\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "id": "ubRH-t77AJIx"
      },
      "outputs": [],
      "source": [
        "import io\n",
        "import re\n",
        "import PIL\n",
        "import requests\n",
        "import numpy as np\n",
        "from PIL import Image\n",
        "import matplotlib.pyplot as plt\n",
        "import matplotlib.patches as patches\n",
        "import big_vision.evaluators.proj.paligemma.transfers.segmentation as segeval\n",
        "\n",
        "# Helpers:\n",
        "\n",
        "\n",
        "def crop_and_resize(image, target_size):\n",
        "    \"\"\"Center-crops and resizes the given image to the target size\"\"\"\n",
        "    width, height = image.size\n",
        "    source_size = min(image.size)\n",
        "    left = width // 2 - source_size // 2\n",
        "    top = height // 2 - source_size // 2\n",
        "    right, bottom = left + source_size, top + source_size\n",
        "    return image.resize(target_size, box=(left, top, right, bottom))\n",
        "\n",
        "\n",
        "def read_image(url, target_size=(224, 224)):\n",
        "    \"\"\"Loads an image from a URL and preprocesses it to the target size\"\"\"\n",
        "    headers = {\"User-Agent\": \"My User Agent 1.0\"}\n",
        "    contents = io.BytesIO(requests.get(url, headers=headers, stream=True).content)\n",
        "    image = Image.open(contents)\n",
        "    image = crop_and_resize(image, target_size)\n",
        "    image = np.array(image)\n",
        "\n",
        "    # Remove alpha channel if necessary.\n",
        "    if image.shape[2] == 4:\n",
        "        image = image[:, :, :3]\n",
        "    return image\n",
        "\n",
        "\n",
        "def parse_bbox_and_labels(detokenized_output: str):\n",
        "    \"\"\"Parses model output to extract bounding boxes\"\"\"\n",
        "    matches = re.finditer(\n",
        "        r\"<loc(?P<y0>\\d\\d\\d\\d)><loc(?P<x0>\\d\\d\\d\\d)><loc(?P<y1>\\d\\d\\d\\d)><loc(?P<x1>\\d\\d\\d\\d)>\"\n",
        "        \" (?P<label>.+?)( ;|$)\",\n",
        "        detokenized_output,\n",
        "    )\n",
        "    labels, boxes = [], []\n",
        "    fmt = lambda x: float(x) / 1024.0\n",
        "    for m in matches:\n",
        "        d = m.groupdict()\n",
        "        boxes.append([fmt(d[\"y0\"]), fmt(d[\"x0\"]), fmt(d[\"y1\"]), fmt(d[\"x1\"])])\n",
        "        labels.append(d[\"label\"])\n",
        "    return np.array(boxes), np.array(labels)\n",
        "\n",
        "\n",
        "def display_boxes(image, boxes, labels, target_image_size):\n",
        "    \"\"\"Draws bounding boxes on the given image\"\"\"\n",
        "    h, l = target_image_size\n",
        "    fig, ax = plt.subplots()\n",
        "    ax.imshow(image)\n",
        "\n",
        "    for i in range(boxes.shape[0]):\n",
        "        y, x, y2, x2 = boxes[i] * h\n",
        "        width = x2 - x\n",
        "        height = y2 - y\n",
        "        # Create a Rectangle patch\n",
        "        rect = patches.Rectangle(\n",
        "            (x, y), width, height, linewidth=1, edgecolor=\"r\", facecolor=\"none\"\n",
        "        )\n",
        "        # Add label\n",
        "        plt.text(x, y, labels[i], color=\"red\", fontsize=12)\n",
        "        # Add the patch to the Axes\n",
        "        ax.add_patch(rect)\n",
        "\n",
        "    plt.show()\n",
        "\n",
        "\n",
        "def parse_segments(detokenized_output: str) -> tuple[np.ndarray, np.ndarray]:\n",
        "    \"\"\"Parses model output to extract bounding boxes and segmentation masks\"\"\"\n",
        "    reconstruct_masks = segeval.get_reconstruct_masks(\"oi\")\n",
        "    matches = re.finditer(\n",
        "        r\"<loc(?P<y0>\\d\\d\\d\\d)><loc(?P<x0>\\d\\d\\d\\d)><loc(?P<y1>\\d\\d\\d\\d)><loc(?P<x1>\\d\\d\\d\\d)>\"\n",
        "        + \"\".join(rf\"<seg(?P<s{i}>\\d\\d\\d)>\" for i in range(16)),\n",
        "        detokenized_output,\n",
        "    )\n",
        "    boxes, segs = [], []\n",
        "    fmt_box = lambda x: float(x) / 1024.0\n",
        "    for m in matches:\n",
        "        d = m.groupdict()\n",
        "        boxes.append(\n",
        "            [fmt_box(d[\"y0\"]), fmt_box(d[\"x0\"]), fmt_box(d[\"y1\"]), fmt_box(d[\"x1\"])]\n",
        "        )\n",
        "        segs.append([int(d[f\"s{i}\"]) for i in range(16)])\n",
        "\n",
        "    if len(boxes) == 0 or len(segs) == 0:\n",
        "        return np.array(boxes), np.array(segs)\n",
        "\n",
        "    return np.array(boxes), np.array(reconstruct_masks(np.array(segs)))"
      ]
    },
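    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see the format `parse_bbox_and_labels` expects, here is a quick sanity check on a hand-written `detect` output string (the `<loc...>` values below are made up, not real model output):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Each <locNNNN> token encodes a coordinate in [0, 1024), normalized to [0, 1).\n",
        "demo_output = \"<loc0256><loc0128><loc0512><loc0384> bird\"\n",
        "demo_boxes, demo_labels = parse_bbox_and_labels(demo_output)\n",
        "print(demo_boxes, demo_labels)"
      ]
    },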
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Z4-eFxD0CAyr"
      },
      "source": [
        "#### Prompting example: Visual Question Answering"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "e9tOWyAoAm37"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "<style>\n",
              "      .ndarray_repr .ndarray_raw_data {\n",
              "        display: none;\n",
              "      }\n",
              "      .ndarray_repr.show_array .ndarray_raw_data {\n",
              "        display: block;\n",
              "      }\n",
              "      .ndarray_repr.show_array .ndarray_image_preview {\n",
              "        display: none;\n",
              "      }\n",
              "      </style>\n",
              "      <div id=\"id-fd0c2108-49de-4854-8d13-9a9a34f21c83\" class=\"ndarray_repr\"><pre>ndarray (224, 224, 3) <button style=\"padding: 0 2px;\">show data</button></pre><img src=\"\" class=\"ndarray_image_preview\" /><pre class=\"ndarray_raw_data\">array([[[123, 122, 113],\n",
              "        [ 91,  91,  69],\n",
              "        [ 89,  92,  65],\n",
              "        ...,\n",
              "        [ 67,  76,  40],\n",
              "        [ 54,  64,  28],\n",
              "        [ 66,  76,  41]],\n",
              "\n",
              "       [[ 90,  88,  76],\n",
              "        [ 83,  83,  65],\n",
              "        [ 89,  92,  73],\n",
              "        ...,\n",
              "        [ 60,  69,  34],\n",
              "        [ 49,  60,  26],\n",
              "        [ 61,  72,  38]],\n",
              "\n",
              "       [[112, 121, 112],\n",
              "        [143, 152, 152],\n",
              "        [152, 163, 165],\n",
              "        ...,\n",
              "        [ 74,  89,  49],\n",
              "        [ 70,  85,  44],\n",
              "        [ 74,  89,  50]],\n",
              "\n",
              "       ...,\n",
              "\n",
              "       [[127, 175,  86],\n",
              "        [129, 179,  87],\n",
              "        [123, 171,  80],\n",
              "        ...,\n",
              "        [117, 165,  74],\n",
              "        [133, 179,  87],\n",
              "        [131, 176,  92]],\n",
              "\n",
              "       [[157, 200, 107],\n",
              "        [149, 195,  99],\n",
              "        [146, 195,  97],\n",
              "        ...,\n",
              "        [154, 200, 112],\n",
              "        [131, 177,  90],\n",
              "        [133, 183,  99]],\n",
              "\n",
              "       [[172, 217, 124],\n",
              "        [152, 200, 105],\n",
              "        [150, 202, 105],\n",
              "        ...,\n",
              "        [168, 218, 125],\n",
              "        [148, 196, 110],\n",
              "        [139, 190, 101]]], dtype=uint8)</pre></div><script>\n",
              "      (() => {\n",
              "      const titles = ['show data', 'hide data'];\n",
              "      let index = 0\n",
              "      document.querySelector('#id-fd0c2108-49de-4854-8d13-9a9a34f21c83 button').onclick = (e) => {\n",
              "        document.querySelector('#id-fd0c2108-49de-4854-8d13-9a9a34f21c83').classList.toggle('show_array');\n",
              "        index = (++index) % 2;\n",
              "        document.querySelector('#id-fd0c2108-49de-4854-8d13-9a9a34f21c83 button').textContent = titles[index];\n",
              "        e.preventDefault();\n",
              "        e.stopPropagation();\n",
              "      }\n",
              "      })();\n",
              "    </script>"
            ],
            "text/plain": [
              "array([[[123, 122, 113],\n",
              "        [ 91,  91,  69],\n",
              "        [ 89,  92,  65],\n",
              "        ...,\n",
              "        [ 67,  76,  40],\n",
              "        [ 54,  64,  28],\n",
              "        [ 66,  76,  41]],\n",
              "\n",
              "       [[ 90,  88,  76],\n",
              "        [ 83,  83,  65],\n",
              "        [ 89,  92,  73],\n",
              "        ...,\n",
              "        [ 60,  69,  34],\n",
              "        [ 49,  60,  26],\n",
              "        [ 61,  72,  38]],\n",
              "\n",
              "       [[112, 121, 112],\n",
              "        [143, 152, 152],\n",
              "        [152, 163, 165],\n",
              "        ...,\n",
              "        [ 74,  89,  49],\n",
              "        [ 70,  85,  44],\n",
              "        [ 74,  89,  50]],\n",
              "\n",
              "       ...,\n",
              "\n",
              "       [[127, 175,  86],\n",
              "        [129, 179,  87],\n",
              "        [123, 171,  80],\n",
              "        ...,\n",
              "        [117, 165,  74],\n",
              "        [133, 179,  87],\n",
              "        [131, 176,  92]],\n",
              "\n",
              "       [[157, 200, 107],\n",
              "        [149, 195,  99],\n",
              "        [146, 195,  97],\n",
              "        ...,\n",
              "        [154, 200, 112],\n",
              "        [131, 177,  90],\n",
              "        [133, 183,  99]],\n",
              "\n",
              "       [[172, 217, 124],\n",
              "        [152, 200, 105],\n",
              "        [150, 202, 105],\n",
              "        ...,\n",
              "        [168, 218, 125],\n",
              "        [148, 196, 110],\n",
              "        [139, 190, 101]]], dtype=uint8)"
            ]
          },
          "execution_count": 8,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# Image by: Wainuio, CC BY-SA 4.0, via Wikimedia Commons\n",
        "image_url = \"https://upload.wikimedia.org/wikipedia/commons/8/8b/Bird-8077%2C_Kapiti%2C_North_Island%2C_New_Zealand.jpg\"\n",
        "image = read_image(image_url)\n",
        "image"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "Zq14V5PEU8aH"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "In this image we can see a bird on the ground. In the background there is water.\n"
          ]
        }
      ],
      "source": [
        "# Describing the image\n",
        "prompt = \"describe en\\n\"\n",
        "output = paligemma.generate(\n",
        "    inputs={\n",
        "        \"images\": image,\n",
        "        \"prompts\": prompt,\n",
        "    }\n",
        ")\n",
        "print(output[len(prompt) :])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cZUYYAqrmQ_F"
      },
      "source": [
        "### Integration with Mesop\n",
        "\n",
        "We will create a simple GUI application that lets users upload an image, select a prompt style, and supply any additional arguments (such as a question or an object type) needed to generate the answer.\n",
        "\n",
        "The application can return both text and images as output."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "Di3Rn5JQZvPp"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.0/5.0 MB\u001b[0m \u001b[31m18.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.6/76.6 kB\u001b[0m \u001b[31m12.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m83.0/83.0 kB\u001b[0m \u001b[31m13.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "!pip install -q mesop"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "UKBYCRE6jk5v"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "\u001b[32mRunning server on: http://localhost:32123\u001b[0m\n",
            " * Serving Flask app 'mesop.server.server'\n",
            " * Debug mode: off\n"
          ]
        }
      ],
      "source": [
        "import mesop as me\n",
        "import mesop.labs as mel\n",
        "\n",
        "me.colab_run()"
      ]
    },
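    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal sketch of Mesop's structure before building the full app: a `@me.stateclass` holds per-session data, and a `@me.page`-decorated function renders components. The `/hello` route and `HelloState` below are illustrative only and are not used by the application that follows."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "@me.stateclass\n",
        "class HelloState:\n",
        "    \"\"\"Per-session data for the sketch page\"\"\"\n",
        "\n",
        "    name: str = \"PaliGemma\"\n",
        "\n",
        "\n",
        "@me.page(path=\"/hello\")\n",
        "def hello_page():\n",
        "    # Read the session state and render a single text component.\n",
        "    state = me.state(HelloState)\n",
        "    me.text(f\"Hello from {state.name}!\")"
      ]
    },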
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "id": "HeR8TAtBAWul"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "INFO:werkzeug:\u001b[31m\u001b[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.\u001b[0m\n",
            " * Running on all addresses (::)\n",
            " * Running on http://[::1]:32123\n",
            " * Running on http://[::1]:32123\n",
            "INFO:werkzeug:\u001b[33mPress CTRL+C to quit\u001b[0m\n"
          ]
        }
      ],
      "source": [
        "from typing import Any, Dict, Literal\n",
        "\n",
        "\n",
        "@me.stateclass\n",
        "class State:\n",
        "    \"\"\"Stores the application's per-session data\"\"\"\n",
        "\n",
        "    image: list = None\n",
        "    selected_task: str = None\n",
        "    additional_parameter: str = None\n",
        "    output: str = None\n",
        "    output_type: Literal[\"text\", \"object_detection\", \"object_segmentation\"] = \"text\"\n",
        "    # Declared here so that the assignments in update_state_by_task are tracked.\n",
        "    prompt_template: str = None\n",
        "    requires_additional_parameter: bool = False\n",
        "    additional_param_label: str = None\n",
        "\n",
        "\n",
        "def me_image_to_array(data, target_size=(224, 224)):\n",
        "    \"\"\"Transforms image represented by bytes to an array\"\"\"\n",
        "    contents = io.BytesIO(data)\n",
        "    image = Image.open(contents)\n",
        "    image = crop_and_resize(image, target_size)\n",
        "    image = np.array(image)\n",
        "\n",
        "    # removes alpha channel if present\n",
        "    if image.shape[2] == 4:\n",
        "        image = image[:, :, :3]\n",
        "    return image\n",
        "\n",
        "\n",
        "def update_state_by_task(name: str) -> None:\n",
        "    \"\"\"Updates the state with the prompt settings for the selected task\"\"\"\n",
        "    _task_to_params = {\n",
        "        # <no params> -> Text\n",
        "        \"raw_short_caption\": {\n",
        "            \"prompt\": \"cap en\\n\",\n",
        "        },\n",
        "        \"coco_like_short_caption\": {\n",
        "            \"prompt\": \"caption en\\n\",\n",
        "        },\n",
        "        \"descriptive_caption\": {\n",
        "            \"prompt\": \"describe en\\n\",\n",
        "        },\n",
        "        \"ocr\": {\n",
        "            \"prompt\": \"ocr\",\n",
        "        },\n",
        "        # Text -> Text\n",
        "        \"qa\": {\n",
        "            \"prompt\": \"answer en {}\\n\",\n",
        "            \"requires_additional_parameter\": True,\n",
        "            \"additional_param_label\": \"What would you like to ask the model?\",\n",
        "        },\n",
        "        \"qg\": {\n",
        "            \"prompt\": \"question en {}\\n\",\n",
        "            \"requires_additional_parameter\": True,\n",
        "            \"additional_param_label\": \"What would be the answer for the questions?\",\n",
        "        },\n",
        "        # Text -> Image\n",
        "        \"object_detection\": {\n",
        "            \"prompt\": \"detect {}\\n\",\n",
        "            \"output_type\": \"object_detection\",\n",
        "            \"requires_additional_parameter\": True,\n",
        "            \"additional_param_label\": \"What object would you like to detect?\",\n",
        "        },\n",
        "        \"object_segmentation\": {\n",
        "            \"prompt\": \"segment {}\\n\",\n",
        "            \"output_type\": \"object_segmentation\",\n",
        "            \"requires_additional_parameter\": True,\n",
        "            \"additional_param_label\": \"What object would you like to segment?\",\n",
        "        },\n",
        "    }\n",
        "\n",
        "    params = _task_to_params[name]\n",
        "    state = me.state(State)\n",
        "    state.prompt_template = params[\"prompt\"]\n",
        "    state.output_type = params.get(\"output_type\", \"text\")\n",
        "    state.requires_additional_parameter = params.get(\n",
        "        \"requires_additional_parameter\", False\n",
        "    )\n",
        "    state.additional_param_label = params.get(\n",
        "        \"additional_param_label\", \"Additional parameter:\"\n",
        "    )\n",
        "\n",
        "\n",
        "def handle_image_upload(event: me.UploadEvent):\n",
        "    state = me.state(State)\n",
        "    image = me_image_to_array(event.file.getvalue())\n",
        "    state.image = image.tolist()\n",
        "    state.output = None\n",
        "\n",
        "\n",
        "def handle_select_task(event: me.SelectSelectionChangeEvent):\n",
        "    state = me.state(State)\n",
        "    state.output = None\n",
        "    state.selected_task = event.value\n",
        "\n",
        "\n",
        "def handle_additional_parameter_input(e: me.InputEvent):\n",
        "    state = me.state(State)\n",
        "    state.additional_parameter = e.value\n",
        "\n",
        "\n",
        "def display_object_detection(boxes, labels, target_image_size=(224, 224)):\n",
        "    \"\"\"Displays the image with boxes around detected objects\"\"\"\n",
        "    state = me.state(State)\n",
        "    h, l = target_image_size\n",
        "    fig, ax = plt.subplots()\n",
        "    ax.imshow(state.image)\n",
        "    ax.get_xaxis().set_visible(False)\n",
        "    ax.get_yaxis().set_visible(False)\n",
        "\n",
        "    for i in range(boxes.shape[0]):\n",
        "        y, x, y2, x2 = boxes[i] * h\n",
        "        width = x2 - x\n",
        "        height = y2 - y\n",
        "        # Create a Rectangle patch\n",
        "        rect = patches.Rectangle(\n",
        "            (x, y), width, height, linewidth=1, edgecolor=\"r\", facecolor=\"none\"\n",
        "        )\n",
        "        # Add label\n",
        "        plt.text(x, y, labels[i], color=\"red\", fontsize=12)\n",
        "        # Add the patch to the Axes\n",
        "        ax.add_patch(rect)\n",
        "    me.plot(fig, style=me.Style(width=\"100%\"))\n",
        "\n",
        "\n",
        "def display_object_segmentation(\n",
        "    bounding_box, segment_mask, target_image_size=(224, 224)\n",
        "):\n",
        "    \"\"\"Displays the image with the segmentation masks blended on top.\"\"\"\n",
        "    state = me.state(State)\n",
        "    image = np.array(state.image)\n",
        "    # Initialize a full (height, width) mask with the target size\n",
        "    full_mask = np.zeros(target_image_size, dtype=np.uint8)\n",
        "    target_height, target_width = target_image_size\n",
        "\n",
        "    for bbox, mask in zip(bounding_box, segment_mask):\n",
        "        y1, x1, y2, x2 = bbox\n",
        "        x1 = int(x1 * target_width)\n",
        "        y1 = int(y1 * target_height)\n",
        "        x2 = int(x2 * target_width)\n",
        "        y2 = int(y2 * target_height)\n",
        "\n",
        "        # Ensure mask is 2D before converting to Image\n",
        "        if mask.ndim == 3:\n",
        "            mask = mask.squeeze(axis=-1)\n",
        "        mask = Image.fromarray(mask)\n",
        "        mask = mask.resize((x2 - x1, y2 - y1), resample=Image.NEAREST)\n",
        "        mask = np.array(mask)\n",
        "        binary_mask = (mask > 0.5).astype(np.uint8)\n",
        "\n",
        "        # Place the binary mask onto the full mask\n",
        "        full_mask[y1:y2, x1:x2] = np.maximum(full_mask[y1:y2, x1:x2], binary_mask)\n",
        "\n",
        "    cmap = plt.get_cmap(\"jet\")\n",
        "    colored_mask = cmap(full_mask.astype(float))\n",
        "    colored_mask = (colored_mask[:, :, :3] * 255).astype(np.uint8)\n",
        "    blended_image = image.copy()\n",
        "    mask_indices = full_mask > 0\n",
        "    alpha = 0.5\n",
        "\n",
        "    for c in range(3):\n",
        "        blended_image[:, :, c] = np.where(\n",
        "            mask_indices,\n",
        "            (1 - alpha) * image[:, :, c] + alpha * colored_mask[:, :, c],\n",
        "            image[:, :, c],\n",
        "        )\n",
        "    fig, ax = plt.subplots()\n",
        "    ax.imshow(blended_image)\n",
        "    me.plot(fig, style=me.Style(width=\"100%\"))\n",
        "\n",
        "\n",
        "def generate_content(e: me.ClickEvent):\n",
        "    \"\"\"Generates an answer to the user's query.\"\"\"\n",
        "    state = me.state(State)\n",
        "    prompt = (\n",
        "        state.prompt_template.format(state.additional_parameter)\n",
        "        if state.requires_additional_parameter\n",
        "        else state.prompt_template\n",
        "    )\n",
        "    image = np.array(state.image)\n",
        "    output = paligemma.generate(\n",
        "        inputs={\n",
        "            \"images\": image,\n",
        "            \"prompts\": prompt,\n",
        "        }\n",
        "    )\n",
        "    state.output = output[len(prompt) :]\n",
        "\n",
        "\n",
        "@me.page(path=\"/app\")\n",
        "def app():\n",
        "    \"\"\"Renders the main page of the application.\"\"\"\n",
        "    state = me.state(State)\n",
        "\n",
        "    with me.box(\n",
        "        style=me.Style(\n",
        "            display=\"flex\",\n",
        "            flex_direction=\"column\",\n",
        "            gap=10,\n",
        "            padding=me.Padding.symmetric(horizontal=\"30%\"),\n",
        "            margin=me.Margin.symmetric(vertical=48),\n",
        "        )\n",
        "    ):\n",
        "\n",
        "        # Image\n",
        "        me.text(\n",
        "            \"Upload an image to get started:\",\n",
        "            style=me.Style(width=\"100%\", text_align=\"center\"),\n",
        "            type=\"headline-5\",\n",
        "        )\n",
        "        with me.box(\n",
        "            style=me.Style(\n",
        "                display=\"flex\", flex_direction=\"column\", align_items=\"center\"\n",
        "            )\n",
        "        ):\n",
        "            me.uploader(\n",
        "                label=\"Upload Image\",\n",
        "                accepted_file_types=[\"image/jpeg\", \"image/png\"],\n",
        "                on_upload=handle_image_upload,\n",
        "            )\n",
        "\n",
        "        # Task\n",
        "        if state.image:\n",
        "            me.text(\n",
        "                \"Choose a task:\",\n",
        "                style=me.Style(width=\"100%\", text_align=\"center\"),\n",
        "                type=\"headline-5\",\n",
        "            )\n",
        "            me.select(\n",
        "                label=\"Choose PaliGemma task\",\n",
        "                options=[\n",
        "                    me.SelectOption(\n",
        "                        label=\"Raw short caption\", value=\"raw_short_caption\"\n",
        "                    ),\n",
        "                    me.SelectOption(\n",
        "                        label=\"COCO-like short caption\", value=\"coco_like_short_caption\"\n",
        "                    ),\n",
        "                    me.SelectOption(\n",
        "                        label=\"Longer, descriptive caption\", value=\"descriptive_caption\"\n",
        "                    ),\n",
        "                    me.SelectOption(label=\"Optical character recognition\", value=\"ocr\"),\n",
        "                    me.SelectOption(label=\"Question answering\", value=\"qa\"),\n",
        "                    me.SelectOption(label=\"Question generation\", value=\"qg\"),\n",
        "                    me.SelectOption(label=\"Object detection\", value=\"object_detection\"),\n",
        "                    me.SelectOption(\n",
        "                        label=\"Object segmentation\", value=\"object_segmentation\"\n",
        "                    ),\n",
        "                ],\n",
        "                on_selection_change=handle_select_task,\n",
        "                style=me.Style(width=\"100%\"),\n",
        "            )\n",
        "\n",
        "        # Generation\n",
        "        if state.image and state.selected_task:\n",
        "\n",
        "            update_state_by_task(state.selected_task)\n",
        "            if state.requires_additional_parameter:\n",
        "                me.text(\n",
        "                    state.additional_param_label,\n",
        "                    style=me.Style(width=\"100%\", text_align=\"center\"),\n",
        "                    type=\"headline-5\",\n",
        "                )\n",
        "                me.input(\n",
        "                    label=\"Define an object (e.g. car, tree, building...)\",\n",
        "                    on_blur=handle_additional_parameter_input,\n",
        "                    style=me.Style(width=\"100%\"),\n",
        "                )\n",
        "\n",
        "            with me.box(\n",
        "                style=me.Style(\n",
        "                    display=\"flex\", flex_direction=\"column\", align_items=\"center\"\n",
        "                )\n",
        "            ):\n",
        "                me.button(\"Generate!\", on_click=generate_content, type=\"flat\")\n",
        "\n",
        "            if state.output is not None:\n",
        "                if state.output_type == \"text\":\n",
        "                    me.textarea(\n",
        "                        label=\"output\",\n",
        "                        readonly=True,\n",
        "                        value=state.output,\n",
        "                        style=me.Style(width=\"100%\"),\n",
        "                    )\n",
        "                elif state.output_type == \"object_detection\":\n",
        "                    boxes, labels = parse_bbox_and_labels(state.output)\n",
        "                    display_object_detection(boxes, labels)\n",
        "                elif state.output_type == \"object_segmentation\":\n",
        "                    bboxes, seg_masks = parse_segments(state.output)\n",
        "                    if len(bboxes) and len(seg_masks):\n",
        "                        display_object_segmentation(bboxes, seg_masks)\n",
        "                    else:\n",
        "                        me.text(\n",
        "                            \"Sorry, the specified object could not be found in the image.\",\n",
        "                            style=me.Style(width=\"100%\", text_align=\"center\"),\n",
        "                            type=\"headline-5\",\n",
        "                        )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "id": "tGngB24yk2vX"
      },
      "outputs": [
        {
          "data": {
            "application/javascript": "(async (port, path, width, height, cache, element) => {\n    if (!google.colab.kernel.accessAllowed && !cache) {\n      return;\n    }\n    element.appendChild(document.createTextNode(''));\n    const url = await google.colab.kernel.proxyPort(port, {cache});\n    const iframe = document.createElement('iframe');\n    iframe.src = new URL(path, url).toString();\n    iframe.height = height;\n    iframe.width = width;\n    iframe.style.border = 0;\n    iframe.allow = [\n        'accelerometer',\n        'autoplay',\n        'camera',\n        'clipboard-read',\n        'clipboard-write',\n        'gyroscope',\n        'magnetometer',\n        'microphone',\n        'serial',\n        'usb',\n        'xr-spatial-tracking',\n    ].join('; ');\n    element.appendChild(iframe);\n  })(32123, \"/app\", \"100%\", 800, false, window.element)",
            "text/plain": [
              "<IPython.core.display.Javascript object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "me.colab_show(path=\"/app\", height=800)"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "[PaliGemma_1]Using_with_Mesop.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
