{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wR53lePHuiP-"
      },
      "source": [
        "# Finetune PaliGemma\n",
        "\n",
        "> *These models and code are not official Google products and were trained and released for research purposes.*\n",
        "\n",
        "\n",
        "**This notebook shows how to finetune PaliGemma 2 on a vision-language task.**\n",
        "The training data consists of 90 pairs of images and long captions describing them.\n",
        "To make it runnable on a T4 colab runtime with 16GB HBM and 12GB RAM, we opt to only finetune the attention layers of the language model and freeze the other parameters.\n",
        "\n",
        " **This setup is illustrative**. In a real use case, the amount of data, the set of trainable parameters, the number of training steps, the hyper-parameters, and the obtained results could be significantly different.\n",
        "\n",
        "This notebook uses the model reference implementation from [big_vision](https://github.com/google-research/big_vision)\n",
        "and shows how to:\n",
        "\n",
        " * Install deps, download model checkpoint and training data.\n",
        " * Load the model onto GPU devices.\n",
        " * Prepare the input to the model for training and inference.\n",
        " * Finetune the model and inspect output in validation split."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6U0QUFveqSP2"
      },
      "source": [
        "## Setup"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "DfxKb3F839Ks"
      },
      "outputs": [],
      "source": [
        "# @title Fetch big_vision code and install dependencies.\n",
        "import os\n",
        "import sys\n",
        "\n",
        "# This notebook targets GPU runtimes; Colab remote TPU runtimes are not supported.\n",
        "if \"COLAB_TPU_ADDR\" in os.environ:\n",
        "  raise RuntimeError(\"It seems you are using Colab with remote TPUs, which is not supported.\")\n",
        "\n",
        "# Fetch big_vision repository if python doesn't know about it and install\n",
        "# dependencies needed for this notebook.\n",
        "if not os.path.exists(\"big_vision_repo\"):\n",
        "  !git clone --quiet --branch=main --depth=1 \\\n",
        "     https://github.com/google-research/big_vision big_vision_repo\n",
        "\n",
        "# Append big_vision code to python import path\n",
        "if \"big_vision_repo\" not in sys.path:\n",
        "  sys.path.append(\"big_vision_repo\")\n",
        "\n",
        "# Install missing dependencies. Assume jax~=0.4.25 with GPU available.\n",
        "!pip3 install -q \"overrides\" \"ml_collections\" \"einops~=0.7\" \"sentencepiece\"\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "azmRZvgGyhAb"
      },
      "source": [
        "### Configure your API key to access Kaggle\n",
        "\n",
        "To use PaliGemma, you must provide your Kaggle username and a Kaggle API key.\n",
        "\n",
        "1. To generate a Kaggle API key, go to the **Account** tab of your Kaggle user profile and select **Create New Token**. This will trigger the download of a `kaggle.json` file containing your API credentials.\n",
        "1. In Colab, select **Secrets** (🔑) in the left pane and add your Kaggle username and Kaggle API key. Store your username under the name `KAGGLE_USERNAME` and your API key under the name `KAGGLE_KEY`.\n",
        "\n",
        "To be able to download the model, you will also need to acknowledge the PaliGemma Terms and Conditions at:\n",
        "\n",
        "* https://www.kaggle.com/models/google/paligemma/\n",
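        "\n",
        "If you are not running in Colab, the `google.colab.userdata` API used below will not be available. A minimal alternative (the placeholder values here are hypothetical; substitute your own credentials) is to export them directly before importing `kagglehub`:\n",
        "\n",
        "```python\n",
        "import os\n",
        "\n",
        "# Hypothetical placeholders: substitute your own Kaggle credentials.\n",
        "os.environ[\"KAGGLE_USERNAME\"] = \"your-kaggle-username\"\n",
        "os.environ[\"KAGGLE_KEY\"] = \"your-kaggle-api-key\"\n",
        "```\n",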
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "zGLIp1Cx3_CX"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "from google.colab import userdata\n",
        "\n",
        "# Note: `userdata.get` is a Colab API. If you're not using Colab, set the env\n",
        "# vars as appropriate or make your credentials available in ~/.kaggle/kaggle.json\n",
        "\n",
        "os.environ[\"KAGGLE_USERNAME\"] = userdata.get('KAGGLE_USERNAME')\n",
        "os.environ[\"KAGGLE_KEY\"] = userdata.get('KAGGLE_KEY')\n",
        "\n",
        "# The T4 runtime has barely enough memory to finetune this model. Preallocate\n",
        "# all GPU memory ahead of time to avoid OOMs due to fragmentation.\n",
        "os.environ[\"XLA_PYTHON_CLIENT_MEM_FRACTION\"] = \"1.0\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "gQNOTfF24AV4",
        "outputId": "5241dd5b-d5c2-473c-a5e0-0ad72db288d8"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Downloading the checkpoint from Kaggle, this could take a few minutes....\n",
            "Model path: /root/.cache/kagglehub/models/google/paligemma-2/jax/paligemma2-3b-pt-224/1/./paligemma2-3b-pt-224.b16.npz\n"
          ]
        }
      ],
      "source": [
        "# @title Download checkpoint, tokenizer and dataset to local filesystem.\n",
        "#\n",
        "import os\n",
        "import kagglehub\n",
        "\n",
        "# Use these for PaliGemma 2 3B at 224x224 resolution:\n",
        "LLM_VARIANT = \"gemma2_2b\"\n",
        "MODEL_PATH = \"./paligemma2-3b-pt-224.b16.npz\"\n",
        "KAGGLE_HANDLE = \"google/paligemma-2/jax/paligemma2-3b-pt-224\"  # Path to fetch from Kaggle.\n",
        "\n",
        "# Use these for PaliGemma 1:\n",
        "# LLM_VARIANT = \"gemma_2b\"\n",
        "# MODEL_PATH = \"./paligemma-3b-pt-224.f16.npz\"\n",
        "# KAGGLE_HANDLE = \"google/paligemma/jax/paligemma-3b-pt-224\"\n",
        "\n",
        "if not os.path.exists(MODEL_PATH):\n",
        "  print(\"Downloading the checkpoint from Kaggle, this could take a few minutes....\")\n",
        "  MODEL_PATH = kagglehub.model_download(KAGGLE_HANDLE, MODEL_PATH)\n",
        "  print(f\"Model path: {MODEL_PATH}\")\n",
        "\n",
        "TOKENIZER_PATH = \"./paligemma_tokenizer.model\"\n",
        "if not os.path.exists(TOKENIZER_PATH):\n",
        "  print(\"Downloading the model tokenizer...\")\n",
        "  !gsutil cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n",
        "  print(f\"Tokenizer path: {TOKENIZER_PATH}\")\n",
        "\n",
        "DATA_DIR = \"./longcap100\"\n",
        "if not os.path.exists(DATA_DIR):\n",
        "  print(\"Downloading the dataset...\")\n",
        "  !gsutil -m -q cp -n -r gs://longcap100/ .\n",
        "  print(f\"Data path: {DATA_DIR}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zDoq0O77GF30"
      },
      "source": [
        "## Notebook"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "dTfe2k8J4Bw0",
        "outputId": "51956b7f-8b7d-4565-cb11-1287595b054a"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "JAX version:  0.4.33\n",
            "JAX platform: gpu\n",
            "JAX devices:  1\n"
          ]
        }
      ],
      "source": [
        "import base64\n",
        "import functools\n",
        "import html\n",
        "import io\n",
        "import os\n",
        "import warnings\n",
        "\n",
        "import jax\n",
        "import jax.numpy as jnp\n",
        "import numpy as np\n",
        "import ml_collections\n",
        "\n",
        "import tensorflow as tf\n",
        "import sentencepiece\n",
        "\n",
        "from IPython.display import display, HTML\n",
        "from PIL import Image\n",
        "\n",
        "# Import model definition from big_vision\n",
        "from big_vision.models.proj.paligemma import paligemma\n",
        "from big_vision.trainers.proj.paligemma import predict_fns\n",
        "\n",
        "# Import big vision utilities\n",
        "import big_vision.datasets.jsonl\n",
        "import big_vision.utils\n",
        "import big_vision.sharding\n",
        "\n",
        "# Don't let TF use the GPU or TPUs\n",
        "tf.config.set_visible_devices([], \"GPU\")\n",
        "tf.config.set_visible_devices([], \"TPU\")\n",
        "\n",
        "backend = jax.extend.backend.get_backend()\n",
        "print(f\"JAX version:  {jax.__version__}\")\n",
        "print(f\"JAX platform: {backend.platform}\")\n",
        "print(f\"JAX devices:  {jax.device_count()}\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "id": "1aghcULcEdtv"
      },
      "outputs": [],
      "source": [
        "# @title Construct model and load params into RAM.\n",
        "\n",
        "# Define model\n",
        "# IMPORTANT: Gemma-2 has a \"final_logits_softcap\" property, we set it to 0.0\n",
        "# for better transfer results.\n",
        "model_config = ml_collections.FrozenConfigDict({\n",
        "    \"llm\": {\"vocab_size\": 257_152, \"variant\": LLM_VARIANT, \"final_logits_softcap\": 0.0},\n",
        "    \"img\": {\"variant\": \"So400m/14\", \"pool_type\": \"none\", \"scan\": True, \"dtype_mm\": \"float16\"}\n",
        "})\n",
        "model = paligemma.Model(**model_config)\n",
        "tokenizer = sentencepiece.SentencePieceProcessor(TOKENIZER_PATH)\n",
        "\n",
        "# Load params - this can take up to 1 minute in T4 colabs.\n",
        "params = paligemma.load(None, MODEL_PATH, model_config)\n",
        "\n",
        "# Define `decode` function to sample outputs from the model.\n",
        "decode_fn = predict_fns.get_all(model)['decode']\n",
        "decode = functools.partial(decode_fn, devices=jax.devices(), eos_token=tokenizer.eos_id())"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "RWOdf_fw2SAO"
      },
      "outputs": [],
      "source": [
        "# @title Move params to GPU/TPU memory.\n",
        "#\n",
        "# To keep HBM usage low and fit in a T4 GPU (16GB HBM) we opt to only finetune\n",
        "# a subset of the parameters. Additionally we keep the frozen params in float16\n",
        "# and cast the trainable ones to float32.\n",
        "\n",
        "# Create a pytree mask of the trainable params.\n",
        "def is_trainable_param(name, param):  # pylint: disable=unused-argument\n",
        "  if name.startswith(\"llm/layers/attn/\"):  return True\n",
        "  if name.startswith(\"llm/\"):              return False\n",
        "  if name.startswith(\"img/\"):              return False\n",
        "  raise ValueError(f\"Unexpected param name {name}\")\n",
        "trainable_mask = big_vision.utils.tree_map_with_names(is_trainable_param, params)\n",
        "\n",
        "# If more than one device is available (e.g. multiple GPUs) the parameters can\n",
        "# be sharded across them to reduce HBM usage per device.\n",
        "mesh = jax.sharding.Mesh(jax.devices(), (\"data\",))\n",
        "\n",
        "data_sharding = jax.sharding.NamedSharding(\n",
        "    mesh, jax.sharding.PartitionSpec(\"data\"))\n",
        "\n",
        "params_sharding = big_vision.sharding.infer_sharding(\n",
        "    params, strategy=[('.*', 'fsdp(axis=\"data\")')], mesh=mesh)\n",
        "\n",
        "# Silence a known benign warning: some donated buffers are not usable.\n",
        "warnings.filterwarnings(\n",
        "    \"ignore\", message=\"Some donated buffers were not usable\")\n",
        "\n",
        "@functools.partial(jax.jit, donate_argnums=(0,), static_argnums=(1,))\n",
        "def maybe_cast_to_f32(params, trainable):\n",
        "  # Cast trainable params to float32 and the rest to float16 (some GPUs don't support bf16).\n",
        "  return jax.tree.map(lambda p, m: p.astype(jnp.float32)\n",
        "                      if m else p.astype(jnp.float16),\n",
        "                      params, trainable)"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# Loading all params at once, albeit much faster and more succinct, requires\n",
        "# more RAM than the T4 colab runtimes have by default (12GB RAM).\n",
        "# Instead we do it param by param.\n",
        "params, treedef = jax.tree.flatten(params)\n",
        "sharding_leaves = jax.tree.leaves(params_sharding)\n",
        "trainable_leaves = jax.tree.leaves(trainable_mask)\n",
        "for idx, (sharding, trainable) in enumerate(zip(sharding_leaves, trainable_leaves)):\n",
        "  params[idx] = big_vision.utils.reshard(params[idx], sharding)\n",
        "  params[idx] = maybe_cast_to_f32(params[idx], trainable)\n",
        "  params[idx].block_until_ready()\n",
        "params = jax.tree.unflatten(treedef, params)\n",
        "\n",
        "# Print params to show what the model is made of.\n",
        "def parameter_overview(params):\n",
        "  for path, arr in big_vision.utils.tree_flatten_with_names(params)[0]:\n",
        "    print(f\"{path:80s} {str(arr.shape):22s} {arr.dtype}\")\n",
        "\n",
        "print(\" == Model params == \")\n",
        "parameter_overview(params)"
      ],
      "metadata": {
        "id": "ipJehqguO3T9",
        "outputId": "bbb5c58b-243d-4172-fb35-df9ff25c159b",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 7,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            " == Model params == \n",
            "img/Transformer/encoder_norm/bias                                                (1152,)                float16\n",
            "img/Transformer/encoder_norm/scale                                               (1152,)                float16\n",
            "img/Transformer/encoderblock/LayerNorm_0/bias                                    (27, 1152)             float16\n",
            "img/Transformer/encoderblock/LayerNorm_0/scale                                   (27, 1152)             float16\n",
            "img/Transformer/encoderblock/LayerNorm_1/bias                                    (27, 1152)             float16\n",
            "img/Transformer/encoderblock/LayerNorm_1/scale                                   (27, 1152)             float16\n",
            "img/Transformer/encoderblock/MlpBlock_0/Dense_0/bias                             (27, 4304)             float16\n",
            "img/Transformer/encoderblock/MlpBlock_0/Dense_0/kernel                           (27, 1152, 4304)       float16\n",
            "img/Transformer/encoderblock/MlpBlock_0/Dense_1/bias                             (27, 1152)             float16\n",
            "img/Transformer/encoderblock/MlpBlock_0/Dense_1/kernel                           (27, 4304, 1152)       float16\n",
            "img/Transformer/encoderblock/MultiHeadDotProductAttention_0/key/bias             (27, 16, 72)           float16\n",
            "img/Transformer/encoderblock/MultiHeadDotProductAttention_0/key/kernel           (27, 1152, 16, 72)     float16\n",
            "img/Transformer/encoderblock/MultiHeadDotProductAttention_0/out/bias             (27, 1152)             float16\n",
            "img/Transformer/encoderblock/MultiHeadDotProductAttention_0/out/kernel           (27, 16, 72, 1152)     float16\n",
            "img/Transformer/encoderblock/MultiHeadDotProductAttention_0/query/bias           (27, 16, 72)           float16\n",
            "img/Transformer/encoderblock/MultiHeadDotProductAttention_0/query/kernel         (27, 1152, 16, 72)     float16\n",
            "img/Transformer/encoderblock/MultiHeadDotProductAttention_0/value/bias           (27, 16, 72)           float16\n",
            "img/Transformer/encoderblock/MultiHeadDotProductAttention_0/value/kernel         (27, 1152, 16, 72)     float16\n",
            "img/embedding/bias                                                               (1152,)                float16\n",
            "img/embedding/kernel                                                             (14, 14, 3, 1152)      float16\n",
            "img/head/bias                                                                    (2304,)                float16\n",
            "img/head/kernel                                                                  (1152, 2304)           float16\n",
            "img/pos_embedding                                                                (1, 256, 1152)         float16\n",
            "llm/embedder/input_embedding                                                     (257152, 2304)         float16\n",
            "llm/final_norm/scale                                                             (2304,)                float16\n",
            "llm/layers/attn/attn_vec_einsum/w                                                (26, 8, 256, 2304)     float32\n",
            "llm/layers/attn/kv_einsum/w                                                      (26, 2, 4, 2304, 256)  float32\n",
            "llm/layers/attn/q_einsum/w                                                       (26, 8, 2304, 256)     float32\n",
            "llm/layers/mlp/gating_einsum                                                     (26, 2, 2304, 9216)    float16\n",
            "llm/layers/mlp/linear                                                            (26, 9216, 2304)       float16\n",
            "llm/layers/post_attention_norm/scale                                             (26, 2304)             float16\n",
            "llm/layers/post_ffw_norm/scale                                                   (26, 2304)             float16\n",
            "llm/layers/pre_attention_norm/scale                                              (26, 2304)             float16\n",
            "llm/layers/pre_ffw_norm/scale                                                    (26, 2304)             float16\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "8SRW0NuU4UcW"
      },
      "outputs": [],
      "source": [
        "# @title Define preprocess functions to create inputs to the model.\n",
        "\n",
        "def preprocess_image(image, size=224):\n",
        "  # The model has been trained to handle images of different aspect ratios\n",
        "  # resized to 224x224 in the range [-1, 1]. Bilinear and antialias resize\n",
        "  # options are helpful to improve quality in some tasks.\n",
        "  image = np.asarray(image)\n",
        "  if image.ndim == 2:  # Grayscale image: replicate it into 3 channels.\n",
        "    image = np.stack((image,)*3, axis=-1)\n",
        "  image = image[..., :3]  # Drop the alpha channel, if any.\n",
        "  assert image.shape[-1] == 3\n",
        "\n",
        "  image = tf.constant(image)\n",
        "  image = tf.image.resize(image, (size, size), method='bilinear', antialias=True)\n",
        "  return image.numpy() / 127.5 - 1.0  # [0, 255]->[-1,1]\n",
        "\n",
        "def preprocess_tokens(prefix, suffix=None, seqlen=None):\n",
        "  # Model has been trained to handle tokenized text composed of a prefix with\n",
        "  # full attention and a suffix with causal attention.\n",
        "  separator = \"\\n\"\n",
        "  tokens = tokenizer.encode(prefix, add_bos=True) + tokenizer.encode(separator)\n",
        "  mask_ar = [0] * len(tokens)    # 0 to use full attention for prefix.\n",
        "  mask_loss = [0] * len(tokens)  # 0 to not use prefix tokens in the loss.\n",
        "\n",
        "  if suffix:\n",
        "    suffix = tokenizer.encode(suffix, add_eos=True)\n",
        "    tokens += suffix\n",
        "    mask_ar += [1] * len(suffix)    # 1 to use causal attention for suffix.\n",
        "    mask_loss += [1] * len(suffix)  # 1 to use suffix tokens in the loss.\n",
        "\n",
        "  mask_input = [1] * len(tokens)    # 1 if it's a real token, 0 if padding.\n",
        "  if seqlen:\n",
        "    padding = [0] * max(0, seqlen - len(tokens))\n",
        "    tokens = tokens[:seqlen] + padding\n",
        "    mask_ar = mask_ar[:seqlen] + padding\n",
        "    mask_loss = mask_loss[:seqlen] + padding\n",
        "    mask_input = mask_input[:seqlen] + padding\n",
        "\n",
        "  return jax.tree.map(np.array, (tokens, mask_ar, mask_loss, mask_input))\n",
        "\n",
        "def postprocess_tokens(tokens):\n",
        "  tokens = tokens.tolist()  # np.array to list[int]\n",
        "  try:  # Remove tokens at and after EOS if any.\n",
        "    eos_pos = tokens.index(tokenizer.eos_id())\n",
        "    tokens = tokens[:eos_pos]\n",
        "  except ValueError:\n",
        "    pass\n",
        "  return tokenizer.decode(tokens)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "whzWOojGOtzi"
      },
      "outputs": [],
      "source": [
        "# @title Function to iterate over train and validation examples.\n",
        "SEQLEN = 128\n",
        "\n",
        "# TODO: Consider data iterators skipping big_vision and tf.data?\n",
        "train_dataset = big_vision.datasets.jsonl.DataSource(\n",
        "    os.path.join(DATA_DIR, \"data_train90.jsonl\"),\n",
        "    fopen_keys={\"image\": DATA_DIR})\n",
        "\n",
        "val_dataset = big_vision.datasets.jsonl.DataSource(\n",
        "    os.path.join(DATA_DIR, \"data_val10.jsonl\"),\n",
        "    fopen_keys={\"image\": DATA_DIR})\n",
        "\n",
        "\n",
        "def train_data_iterator():\n",
        "  \"\"\"Never ending iterator over training examples.\"\"\"\n",
        "  # Shuffle examples and repeat so one can train for many epochs.\n",
        "  dataset = train_dataset.get_tfdata().shuffle(1_000).repeat()\n",
        "  for example in dataset.as_numpy_iterator():\n",
        "    image = Image.open(io.BytesIO(example[\"image\"]))\n",
        "    image = preprocess_image(image)\n",
        "\n",
        "    prefix = \"caption en\"  # Could also be a different prefix per example.\n",
        "    suffix = example[\"suffix\"].decode().lower()\n",
        "    tokens, mask_ar, mask_loss, _ = preprocess_tokens(prefix, suffix, SEQLEN)\n",
        "\n",
        "    yield {\n",
        "        \"image\": np.asarray(image),\n",
        "        \"text\": np.asarray(tokens),\n",
        "        \"mask_ar\": np.asarray(mask_ar),\n",
        "        \"mask_loss\": np.asarray(mask_loss),\n",
        "    }\n",
        "\n",
        "\n",
        "def validation_data_iterator():\n",
        "  \"\"\"Single iterator over validation examples.\"\"\"\n",
        "  for example in val_dataset.get_tfdata(ordered=True).as_numpy_iterator():\n",
        "    image = Image.open(io.BytesIO(example[\"image\"]))\n",
        "    image = preprocess_image(image)\n",
        "\n",
        "    prefix = \"caption en\"  # Could also be a different prefix per example.\n",
        "    tokens, mask_ar, _, mask_input = preprocess_tokens(prefix, seqlen=SEQLEN)\n",
        "\n",
        "    yield {\n",
        "        \"image\": np.asarray(image),\n",
        "        \"text\": np.asarray(tokens),\n",
        "        \"mask_ar\": np.asarray(mask_ar),\n",
        "        \"mask_input\": np.asarray(mask_input),\n",
        "    }\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 397
        },
        "id": "BzJfb5t0nsLq",
        "outputId": "8920b068-c56a-4f99-848d-810fd0cd0068"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Training examples\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ],
            "text/html": [
              "\n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a person stands on a tennis court, holding a racquet and a bunch of pink balls. the court is surrounded by a white wall with a brown circle on it. the person is wearing pink socks, pink shoes, and a pink skirt. the balls are on the ground. the lights are on.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a large city with a towering clock tower and numerous buildings. the sky is cloudy, and the sun shines through the clouds. the clock tower is tall and imposing, and the steeple on top of the building is a prominent feature. the buildings are clustered together, and the trees are tall and green. the overall atmosphere is serene and peaceful.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a lily pad and a lily pad flower float effortlessly on the ripples, while a brown leaf and a brown lily pad provide a contrast to the white flower. the lily pad flower&#x27;s yellow center and orange center add a splash of color to the water&#x27;s surface. the water reflects the flower&#x27;s beauty, creating a serene and tranquil atmosphere.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a glass jar filled with a variety of colored pencils. the jar is clear and the pencils are arranged in a haphazard fashion. there are many different colored pencils in the jar, including a blue pencil, a green pencil, a red pencil, a pink pencil, and a purple pencil. the pencils are all different, with different colored tips and erasers. the jar is made of glass and has a clear rim. the background is gray.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a tool box filled with a variety of tools, including a wrench with a silver head, a screwdriver with a gray handle,a wrench with a gray head, a screwdriver with a gray handle, a metal socket with a silver head...</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">two puffins stand on a grassy hill, their beaks open. the puffin on the right has its mouth open, revealing its orange beak and black eye. the other puffin on the left has its mouth closed, showcasing its white chest and black and white plumage. the hill is covered in green grass, and the flowers bloom in a variety of colors, including pink, purple, and yellow. the puffins&#x27; feet are orange, and their wings are black. the background is blue, and the overall scene is serene and peaceful.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a bowl of steaming instant noodle soup with a spoon resting in the center. the broth is clear and the vegetables, including carrots, peas, and green beans, are floating gently in the liquid. the spoon is long and silver, with a reflection of light on its handle. the overall image is simple and straightforward, with a focus on the deliciousness of the soup.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a brown and white cat with a red collar looks to the left, its eyes shining yellow. the cat&#x27;s fur is long and silky, and its whiskers are long and prominent. the cat&#x27;s nose is pink, and its ears are pointy. the cat&#x27;s eyes are yellow, and its fur is brown and white. the cat is standing in the dark, and its head is turned to the side.</p>\n",
              "    </div>\n",
              "    "
            ]
          },
          "metadata": {}
        }
      ],
      "source": [
        "# @title Inspect training examples.\n",
        "def render_inline(image, resize=(128, 128)):\n",
        "  \"\"\"Convert image into inline html.\"\"\"\n",
        "  image = Image.fromarray(image)\n",
        "  image = image.resize(resize)\n",
        "  with io.BytesIO() as buffer:\n",
        "    image.save(buffer, format='jpeg')\n",
        "    image_b64 = str(base64.b64encode(buffer.getvalue()), \"utf-8\")\n",
        "    return f\"data:image/jpeg;base64,{image_b64}\"\n",
        "\n",
        "def render_example(image, caption):\n",
        "  image = ((image + 1)/2 * 255).astype(np.uint8)  # [-1,1] -> [0, 255]\n",
        "  return f\"\"\"\n",
        "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
        "        <img style=\"width:128px; height:128px;\" src=\"{render_inline(image, resize=(64,64))}\" />\n",
        "        <p style=\"width:256px; margin:10px; font-size:small;\">{html.escape(caption)}</p>\n",
        "    </div>\n",
        "    \"\"\"\n",
        "\n",
        "html_out = \"\"\n",
        "for idx, example in zip(range(8), train_data_iterator()):\n",
        "  caption = postprocess_tokens(example[\"text\"])  # detokenize model input.\n",
        "  caption = caption[len(\"caption en\\n\"):]        # strip prefix\n",
        "  html_out += render_example(example[\"image\"], caption)\n",
        "\n",
        "print(\"Training examples\")\n",
        "display(HTML(html_out))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "dwUV_imW3WQJ"
      },
      "outputs": [],
      "source": [
        "# @title Define the training step and evaluation loop.\n",
        "#\n",
        "# The main update_fn using simple SGD.\n",
        "#\n",
        "@functools.partial(jax.jit, donate_argnums=(0,))\n",
        "def update_fn(params, batch, learning_rate):\n",
        "  imgs, txts, mask_ar = batch[\"image\"], batch[\"text\"], batch[\"mask_ar\"]\n",
        "\n",
        "  def loss_fn(params):\n",
        "    text_logits, _ = model.apply({\"params\": params}, imgs, txts[:, :-1], mask_ar[:, :-1], train=True)\n",
        "    logp = jax.nn.log_softmax(text_logits, axis=-1)\n",
        "\n",
        "    # The model takes as input txts[:, :-1] but the loss is defined as predicting\n",
        "    # next tokens txts[:, 1:]. Additionally, mask_loss[:, 1:] indicates which tokens\n",
        "    # are part of the loss (e.g. prefix and padded tokens are not included).\n",
        "    mask_loss = batch[\"mask_loss\"][:, 1:]\n",
        "    targets = jax.nn.one_hot(txts[:, 1:], text_logits.shape[-1])\n",
        "\n",
        "    # Compute the loss per example, i.e. the mean per-token log-perplexity.\n",
        "    # Since each example has a different number of tokens, we normalize by\n",
        "    # the number of tokens that contribute to the loss.\n",
        "    token_pplx = jnp.sum(logp * targets, axis=-1)  # sum across vocab_size.\n",
        "    example_loss = -jnp.sum(token_pplx * mask_loss, axis=-1)  # sum across seq_len.\n",
        "    example_loss /= jnp.clip(jnp.sum(mask_loss, -1), 1)  # normalize by token count (at least 1).\n",
        "\n",
        "    # batch_loss: mean of per example loss.\n",
        "    return jnp.mean(example_loss)\n",
        "\n",
        "  loss, grads = jax.value_and_grad(loss_fn)(params)\n",
        "\n",
        "  # Apply gradients to trainable params using SGD.\n",
        "  def apply_grad(param, gradient, trainable):\n",
        "    if not trainable: return param\n",
        "    return param - learning_rate * gradient\n",
        "\n",
        "  params = jax.tree_util.tree_map(apply_grad, params, grads, trainable_mask)\n",
        "\n",
        "  return params, loss\n",
        "\n",
        "# Evaluation/inference loop.\n",
        "def make_predictions(data_iterator, *, num_examples=None,\n",
        "                     batch_size=4, seqlen=SEQLEN, sampler=\"greedy\"):\n",
        "  outputs = []\n",
        "  while True:\n",
        "    # Construct a list of examples in the batch.\n",
        "    examples = []\n",
        "    try:\n",
        "      for _ in range(batch_size):\n",
        "        examples.append(next(data_iterator))\n",
        "        examples[-1][\"_mask\"] = np.array(True)  # Indicates true example.\n",
        "    except StopIteration:\n",
        "      if len(examples) == 0:\n",
        "        return outputs\n",
        "\n",
        "    # Not enough examples to complete a batch. Pad by repeating the last example.\n",
        "    while len(examples) % batch_size:\n",
        "      examples.append(dict(examples[-1]))\n",
        "      examples[-1][\"_mask\"] = np.array(False)  # Indicates padding example.\n",
        "\n",
        "    # Convert list of examples into a dict of np.arrays and load onto devices.\n",
        "    batch = jax.tree.map(lambda *x: np.stack(x), *examples)\n",
        "    batch = big_vision.utils.reshard(batch, data_sharding)\n",
        "\n",
        "    # Make model predictions.\n",
        "    tokens = decode({\"params\": params}, batch=batch,\n",
        "                    max_decode_len=seqlen, sampler=sampler)\n",
        "\n",
        "    # Fetch model predictions back to the host and detokenize.\n",
        "    tokens, mask = jax.device_get((tokens, batch[\"_mask\"]))\n",
        "    tokens = tokens[mask]  # remove padding examples.\n",
        "    responses = [postprocess_tokens(t) for t in tokens]\n",
        "\n",
        "    # Collect (image, response) pairs.\n",
        "    for example, response in zip(examples, responses):\n",
        "      outputs.append((example[\"image\"], response))\n",
        "      if num_examples and len(outputs) >= num_examples:\n",
        "        return outputs"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "id": "067wj_6bZAG3",
        "outputId": "c3393bab-9c89-410c-9cf6-9e477e194a03"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "step:  1/64   lr: 0.00500   loss: 3.2343\n",
            "Model predictions at step 1\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ],
            "text/html": [
              "\n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman&#x27;s hand on a white wall.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a white dress is standing by the water.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a black belt bag with the words &quot; loading &quot; on the front and the word &quot; loading &quot; on the back. the bag is black and the word &quot; loading &quot; is on the front. the word &quot; loading &quot; is on the back. the bag is black and the word &quot; loading &quot; is on the front. the word &quot; loading &quot; is on the back. the bag is black and the word &quot; loading &quot; is on the front. the word &quot; loading &quot; is on the back. the bag is black and the word &quot; loading &quot; is on the front. the word &quot; loading &quot; is on the back. the bag is black and</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a pink shirt and a pink bag is walking on the street.</p>\n",
              "    </div>\n",
              "    "
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "step:  2/64   lr: 0.01000   loss: 1.8795\n",
            "step:  3/64   lr: 0.01500   loss: 1.6227\n",
            "step:  4/64   lr: 0.02000   loss: 1.7171\n",
            "step:  5/64   lr: 0.02500   loss: 1.8039\n",
            "step:  6/64   lr: 0.03000   loss: 1.8505\n",
            "step:  7/64   lr: 0.02998   loss: 2.4128\n",
            "step:  8/64   lr: 0.02992   loss: 1.7908\n",
            "step:  9/64   lr: 0.02981   loss: 1.6596\n",
            "step: 10/64   lr: 0.02966   loss: 1.4779\n",
            "step: 11/64   lr: 0.02947   loss: 1.5320\n",
            "step: 12/64   lr: 0.02924   loss: 1.1854\n",
            "step: 13/64   lr: 0.02897   loss: 1.1401\n",
            "step: 14/64   lr: 0.02866   loss: 1.0224\n",
            "step: 15/64   lr: 0.02831   loss: 1.1550\n",
            "step: 16/64   lr: 0.02792   loss: 1.1351\n",
            "Model predictions at step 16\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ],
            "text/html": [
              "\n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman&#x27;s hand on a white wall, with a pink dress and white wall. the hand is white, with a pink nail. the dress is pink, with a white collar and white cuffs. the wall is white, with a white wall. the dress is long, with a white collar and white cuffs. the nail is white, with a pink nail. the hand is white, with a pink nail. the dress is pink, with a white collar and white cuffs. the wall is white, with a white wall. the dress is long, with a white collar and white cuffs. the nail is white, with a pink</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a white dress with red flowers on it is standing on a white wall, holding a white woven bag. the dress is white and has a red flower on it. the bag is white and has a white handle. the woman is wearing a white shirt with a red flower on it, and the dress is white with red flowers on it. the dress is long and has a slit in the middle. the bag is white and has a white handle. the woman is wearing a white shirt with a red flower on it, and the dress is white with red flowers on it. the dress is white and has a slit in the</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a person wearing a red blazer and black pants, with a black belt and a black bag on their waist. the bag has a white label and a silver zipper. the person is wearing a white shirt and has a white nail on their finger. the bag is black and has a white label. the person is wearing a red blazer and has a black belt on their waist. the bag is black and has a white label. the person is wearing a white shirt and has a white nail on their finger. the bag is black and has a white label. the person is wearing a red blazer and has a black belt on their waist. the</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a pink shirt and blue jeans stands on a stone staircase, her hand on the bag. the jeans are blue, and the shirt is pink. the bag is pink, and the strap is white. the woman is wearing a silver bracelet and a silver bracelet on her hand. the bag is small and pink, and the strap is white. the shirt is pink, and the buttons are silver. the jeans are blue, and the belt is white. the woman is wearing a silver bracelet and a silver bracelet on her hand. the bag is pink, and the strap is white. the shirt is pink, and the buttons are</p>\n",
              "    </div>\n",
              "    "
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "step: 17/64   lr: 0.02750   loss: 1.3412\n",
            "step: 18/64   lr: 0.02704   loss: 1.1616\n",
            "step: 19/64   lr: 0.02655   loss: 0.9532\n",
            "step: 20/64   lr: 0.02602   loss: 1.1109\n",
            "step: 21/64   lr: 0.02546   loss: 1.0477\n",
            "step: 22/64   lr: 0.02488   loss: 1.0161\n",
            "step: 23/64   lr: 0.02426   loss: 0.7687\n",
            "step: 24/64   lr: 0.02362   loss: 0.6345\n",
            "step: 25/64   lr: 0.02296   loss: 0.6716\n",
            "step: 26/64   lr: 0.02227   loss: 0.6888\n",
            "step: 27/64   lr: 0.02156   loss: 0.6420\n",
            "step: 28/64   lr: 0.02083   loss: 0.7701\n",
            "step: 29/64   lr: 0.02009   loss: 0.6571\n",
            "step: 30/64   lr: 0.01933   loss: 0.7011\n",
            "step: 31/64   lr: 0.01856   loss: 0.6353\n",
            "step: 32/64   lr: 0.01778   loss: 0.6679\n",
            "Model predictions at step 32\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ],
            "text/html": [
              "\n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman&#x27;s hand rests on a white wall, her fingers curled around the edge. the wall is made of concrete, and the shadow of her hand is cast on the wall. the woman&#x27;s arm is extended out, and her hand is curled. the dress is pink, and the shirt is white.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a long white dress with a floral pattern stands on a stone wall overlooking the ocean. the dress is flowing in the wind, and the sky is clear. the woman is holding a white woven bag and a white hat. the water is calm and blue.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a person wears a red blazer with a black belt and a black skirt. the blazer has a button on the front and a pocket on the side. the person&#x27;s hand is on the jacket&#x27;s sleeve. the jacket is open. the belt is black and has a silver buckle. the bag is a fanny pack and has a zipper. the pants are black and have a white stripe on the back. the grass is green.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman stands on a stone staircase, her hand on the railing. her jeans are blue, and her shirt is pink. the bag is pink. the woman is wearing a white cardigan and a silver bracelet. the stairs are made of stone.</p>\n",
              "    </div>\n",
              "    "
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "step: 33/64   lr: 0.01699   loss: 0.8104\n",
            "step: 34/64   lr: 0.01620   loss: 0.6509\n",
            "step: 35/64   lr: 0.01540   loss: 0.2895\n",
            "step: 36/64   lr: 0.01460   loss: 0.3786\n",
            "step: 37/64   lr: 0.01380   loss: 0.4475\n",
            "step: 38/64   lr: 0.01301   loss: 0.4427\n",
            "step: 39/64   lr: 0.01222   loss: 0.3183\n",
            "step: 40/64   lr: 0.01144   loss: 0.3254\n",
            "step: 41/64   lr: 0.01067   loss: 0.3482\n",
            "step: 42/64   lr: 0.00991   loss: 0.2905\n",
            "step: 43/64   lr: 0.00917   loss: 0.4441\n",
            "step: 44/64   lr: 0.00844   loss: 0.3120\n",
            "step: 45/64   lr: 0.00773   loss: 0.4281\n",
            "step: 46/64   lr: 0.00704   loss: 0.1778\n",
            "step: 47/64   lr: 0.00638   loss: 0.1648\n",
            "step: 48/64   lr: 0.00574   loss: 0.1446\n",
            "Model predictions at step 48\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ],
            "text/html": [
              "\n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a pink dress leans against a white wall, her hand delicately resting on the ledge. the dress is flowing and sheer, with long sleeves that drape gracefully. the woman&#x27;s hand is visible, and her fingers are curled. the wall is white and the shadow on the wall is long and dark.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a long white floral dress stands on a pier overlooking the ocean. the dress is flowing gracefully, showcasing the woman&#x27;s curves and the intricate design of the dress. the sky is clear and blue, with fluffy white clouds drifting above. the water is calm and blue, reflecting the sky. the woman&#x27;s hand is on her dress, and her other hand is on the bag. the dress is long and flowing, and the flowers are red and white.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a person wears a red blazer with a black belt and a black skirt. the blazer has a single button and a single vent. the belt is black and has a silver zipper. the person&#x27;s hand is on the jacket&#x27;s sleeve. the jacket has a single vent and a single button. the skirt has a single vent and a single button. the bag is a fanny pack and has a silver zipper.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman stands on a stone staircase, her hand on her bag. the jeans are blue, and the shirt is pink. the bag is pink, and the belt is white. the woman is wearing a white cardigan and a silver bracelet. the stairs are made of stone, and the wall is made of concrete.</p>\n",
              "    </div>\n",
              "    "
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "step: 49/64   lr: 0.00512   loss: 0.1304\n",
            "step: 50/64   lr: 0.00454   loss: 0.1808\n",
            "step: 51/64   lr: 0.00398   loss: 0.1156\n",
            "step: 52/64   lr: 0.00345   loss: 0.2323\n",
            "step: 53/64   lr: 0.00296   loss: 0.1531\n",
            "step: 54/64   lr: 0.00250   loss: 0.1296\n",
            "step: 55/64   lr: 0.00208   loss: 0.2341\n",
            "step: 56/64   lr: 0.00169   loss: 0.1462\n",
            "step: 57/64   lr: 0.00134   loss: 0.1332\n",
            "step: 58/64   lr: 0.00103   loss: 0.0951\n",
            "step: 59/64   lr: 0.00076   loss: 0.1676\n",
            "step: 60/64   lr: 0.00053   loss: 0.0734\n",
            "step: 61/64   lr: 0.00034   loss: 0.0867\n",
            "step: 62/64   lr: 0.00019   loss: 0.0694\n",
            "step: 63/64   lr: 0.00008   loss: 0.1235\n",
            "step: 64/64   lr: 0.00002   loss: 0.0907\n",
            "Model predictions at step 64\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ],
            "text/html": [
              "\n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a pink dress leans against a white wall, her hand delicately resting on the ledge. the dress drapes gracefully, showcasing the contours of her body. the wall is tall and imposing, with a crack running along its length. the woman&#x27;s hand is poised on the ledge, offering a warm greeting. the dress is long and flowing, draping gracefully.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a long white floral dress stands on a stone wall overlooking the ocean. the dress is flowing gracefully, showcasing the woman&#x27;s curves and the intricate design of the fabric. her hand is on the side of the dress, offering a gentle caress. the sky is clear and blue, with fluffy white clouds drifting above. the water is calm and blue, reflecting the sky. the woman&#x27;s legs are long and slender, showcasing her slender physique.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a person wears a red blazer with a black belt and a black skirt. the blazer has a single button and a single vent. the person wears a white tank top and a black belt. the bag is a fanny pack and has a zipper. the jacket is loose and the blazer has a belt loop. the person is standing next to a green plant and is wearing a black pants. the bag is a fanny pack and has a zipper.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman stands on a stone staircase, her hand on her bag. the stairs are made of stone, and the wall is made of concrete. the woman is wearing blue jeans, a pink shirt, and a white cardigan. the bag is pink, and the strap is pink. the sky is visible through the leaves on the tree.</p>\n",
              "    </div>\n",
              "    "
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 9min 6s, sys: 2min 43s, total: 11min 50s\n",
            "Wall time: 17min 51s\n"
          ]
        }
      ],
      "source": [
        "# @title Run training loop.\n",
        "#\n",
        "# Run a short training loop with cosine learning rate schedule.\n",
        "#\n",
        "# Note: the first step can be quite slow on some machines (up to several minutes)\n",
        "# due to XLA compilation of the jax.jit'd function.\n",
        "#\n",
        "%%time\n",
        "\n",
        "BATCH_SIZE = 8\n",
        "TRAIN_EXAMPLES = 512\n",
        "LEARNING_RATE = 0.03\n",
        "\n",
        "TRAIN_STEPS = TRAIN_EXAMPLES // BATCH_SIZE\n",
        "EVAL_STEPS = TRAIN_STEPS // 4\n",
        "\n",
        "train_data_it = train_data_iterator()\n",
        "\n",
        "sched_fn = big_vision.utils.create_learning_rate_schedule(\n",
        "    total_steps=TRAIN_STEPS+1, base=LEARNING_RATE,\n",
        "    decay_type=\"cosine\", warmup_percent=0.10)\n",
        "\n",
        "for step in range(1, TRAIN_STEPS+1):\n",
        "  # Make a list of BATCH_SIZE training examples.\n",
        "  examples = [next(train_data_it) for _ in range(BATCH_SIZE)]\n",
        "\n",
        "  # Convert list of examples into a dict of np.arrays and load onto devices.\n",
        "  batch = jax.tree.map(lambda *x: np.stack(x), *examples)\n",
        "  batch = big_vision.utils.reshard(batch, data_sharding)\n",
        "\n",
        "  # Run one training step and report the training loss.\n",
        "  learning_rate = sched_fn(step)\n",
        "  params, loss = update_fn(params, batch, learning_rate)\n",
        "\n",
        "  loss = jax.device_get(loss)\n",
        "  print(f\"step: {step:2d}/{TRAIN_STEPS:2d}   lr: {learning_rate:.5f}   loss: {loss:.4f}\")\n",
        "\n",
        "  if step == 1 or (step % EVAL_STEPS) == 0:\n",
        "    print(f\"Model predictions at step {step}\")\n",
        "    html_out = \"\"\n",
        "    for image, caption in make_predictions(\n",
        "        validation_data_iterator(), num_examples=4, batch_size=4):\n",
        "      html_out += render_example(image, caption)\n",
        "    display(HTML(html_out))\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "id": "hgUhEKjzPdMQ",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 447
        },
        "outputId": "e0f9bfaf-7688-42a9-b1d8-22f7ad5743e6"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Model predictions\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ],
            "text/html": [
              "\n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a pink dress leans against a white wall, her hand delicately resting on the ledge. the dress drapes gracefully, showcasing the contours of her body. the wall is tall and imposing, with a crack running along its length. the woman&#x27;s hand is poised on the ledge, offering a warm greeting. the dress is long and flowing, draping gracefully.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman in a long white floral dress stands on a stone wall overlooking the ocean. the dress is flowing gracefully, showcasing the woman&#x27;s curves and the intricate design of the fabric. her hand is on the side of the dress, offering a gentle caress. the sky is clear and blue, with fluffy white clouds drifting above. the water is calm and blue, reflecting the sky. the woman&#x27;s legs are long and slender, showcasing her slender physique.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a person wears a red blazer with a black belt and a black skirt. the blazer has a single button and a single vent. the person wears a white tank top and a black belt. the bag is a fanny pack and has a zipper. the jacket is loose and the blazer has a belt loop. the person is standing next to a green plant and is wearing a black pants. the bag is a fanny pack and has a zipper.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman stands on a stone staircase, her hand on her bag. the stairs are made of stone, and the wall is made of concrete. the woman is wearing blue jeans, a pink shirt, and a white cardigan. the bag is pink, and the strap is pink. the sky is visible through the leaves on the tree.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a person is laying in bed, and their hand is on the pillow. the bed is white, and the pillow is gray. the jeans are blue, and the shoes are white. the sweater is pink, and the writing on the sweater is red. the person is wearing a sweater and jeans.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a man stands with his hand in his hair, his eyes closed. he wears a black sweater and a checked shirt. the sweater is long-sleeved and has a collar. the man&#x27;s hair is long and blonde. the background is pink.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a row of white hangers on a white clothes rack, with a white wall behind them. the hangers are made of wood and have silver hooks. the rack is tall and has a white pole running through it. the wall is white and has a gray shadow on it.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a white hoodie and white pants hang on a wooden hanger. the hoodie has a white drawstring and a white drawstring on the pants. the pants have a white drawstring and a white tag on the pants. the hanger is black. the wall is white.</p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a woman stands on the sidewalk, showcasing her black knee-high boots and a black bag with a gold chain. the boots are black, and the bag is black. the woman is wearing blue jeans and a black belt. the bag is on her shoulder, and the chain is on her bag. the boots are on her legs, and the bag is on her. </p>\n",
              "    </div>\n",
              "    \n",
              "    <div style=\"display: inline-flex; align-items: center; justify-content: center;\">\n",
              "        <img style=\"width:128px; height:128px;\" src=\"\" />\n",
              "        <p style=\"width:256px; margin:10px; font-size:small;\">a man stands on a road, his hands in his pockets. his pants are brown, and his shirt is white. he wears a denim jacket and white shoes. the road is gray and the trees are green. the man is standing tall and straight, with his arms akimbo. the jacket is open, and his shirt is tucked in. the pants are loose, and the hem of his shirt is rolled up. the man is wearing a white t-shirt and a pair of white shoes.</p>\n",
              "    </div>\n",
              "    "
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 33.5 s, sys: 246 ms, total: 33.7 s\n",
            "Wall time: 56.7 s\n"
          ]
        }
      ],
      "source": [
        "# @title Evaluate the model on all examples.\n",
        "#\n",
        "# The validation data consists of 10 images from a different domain than\n",
        "# the training data.\n",
        "%%time\n",
        "\n",
        "print(\"Model predictions\")\n",
        "html_out = \"\"\n",
        "for image, caption in make_predictions(validation_data_iterator(), batch_size=4):\n",
        "  html_out += render_example(image, caption)\n",
        "display(HTML(html_out))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ai0NMbAwsr0j"
      },
      "source": [
        "## Save the final checkpoint"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {
        "id": "5H_3CV33_JkV"
      },
      "outputs": [],
      "source": [
        "def npsave(pytree, path):\n",
        "  names_and_vals, _ = big_vision.utils.tree_flatten_with_names(pytree)\n",
        "  with open(path, \"wb\") as f:\n",
        "    np.savez(f, **dict(names_and_vals))\n",
        "\n",
        "# Saving the checkpoint takes around 4 minutes.\n",
        "npsave(params, 'my-custom-paligemma-ckpt.npz')"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "T4",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}