{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "60KmTK7o6ppd"
      },
      "source": [
        "##### Copyright 2024 Google LLC."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "GoAJlrr6LUyj"
      },
      "outputs": [],
      "source": [
        "# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xAd05sKCIoJL"
      },
      "source": [
        "# Gemma 7B - chat and distributed pirate-tuning\n",
        "\n",
        "This notebook demonstrates how you can chat with the Gemma 7B model and finetune it so that it generates responses in the tone of pirates.\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Gemma/[Gemma_1]Finetune_distributed.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
        "  </td>\n",
        "</table>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AO7a1Q4Yyc9Z"
      },
      "source": [
        "# Installation\n",
        "\n",
        "Install Keras and KerasNLP, which provides the Gemma model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WWEzVJR4Fx9g"
      },
      "outputs": [],
      "source": [
        "# Install Keras 3 last. See https://keras.io/getting_started/ for more details.\n",
        "!pip install -q -U tensorflow-cpu\n",
        "!pip install -q -U keras-nlp\n",
        "!pip install -q -U \"keras>=3\"\n",
        "!pip install -q -U arrr datasets\n",
        "!pip install -q -U matplotlib"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lbZsUvfhwL2D"
      },
      "source": [
        "**How to run this demo:**\n",
        "* On Kaggle, you can use a TPUv3-8, which has 8 cores with 16GB of memory each.\n",
        "* On GCP, you can [provision a TPUv3-8 backend for Colab](https://docs.google.com/document/d/13DjWGxqbqAyoEcJjtmenmaMya9lxTsFn0jJP-UUH1jc/edit?usp=sharing).\n",
        "* On GCP, you can [provision a multi-GPU 8xV100 backend for Colab](https://docs.google.com/document/d/1MNINJCk6vp0gCr4XqvmaTiwN8GKJH9_SFceXNvKjRw0/edit?usp=sharing).\n",
        "\n",
        "Check what accelerators are available:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "BK4MpHLKGujb"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0),\n",
              " TpuDevice(id=1, process_index=0, coords=(0,0,0), core_on_chip=1),\n",
              " TpuDevice(id=2, process_index=0, coords=(1,0,0), core_on_chip=0),\n",
              " TpuDevice(id=3, process_index=0, coords=(1,0,0), core_on_chip=1),\n",
              " TpuDevice(id=4, process_index=0, coords=(0,1,0), core_on_chip=0),\n",
              " TpuDevice(id=5, process_index=0, coords=(0,1,0), core_on_chip=1),\n",
              " TpuDevice(id=6, process_index=0, coords=(1,1,0), core_on_chip=0),\n",
              " TpuDevice(id=7, process_index=0, coords=(1,1,0), core_on_chip=1)]"
            ]
          },
          "execution_count": 1,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import jax\n",
        "\n",
        "jax.devices()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WEgg_OVIL2HY"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# The Keras 3 distribution API is only implemented for the JAX backend for now\n",
        "os.environ[\"KERAS_BACKEND\"] = \"jax\"\n",
        "# Pre-allocate 100% of TPU memory to minimize memory fragmentation\n",
        "os.environ[\"XLA_PYTHON_CLIENT_MEM_FRACTION\"] = \"1.0\"\n",
        "\n",
        "import keras\n",
        "import keras_nlp\n",
        "\n",
        "# for reproducibility\n",
        "keras.utils.set_random_seed(42)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "h5H7oE9MtnBA"
      },
      "outputs": [],
      "source": [
        "# Access to Gemma checkpoints on Kaggle\n",
        "\n",
        "# On Colab, set up your Kaggle credentials as Colab secrets and use this:\n",
        "\n",
        "# from google.colab import userdata\n",
        "# os.environ[\"KAGGLE_USERNAME\"] = userdata.get('KAGGLE_USERNAME')\n",
        "# os.environ[\"KAGGLE_KEY\"] = userdata.get('KAGGLE_KEY')\n",
        "\n",
        "# On Kaggle, set up your Kaggle credentials as Kaggle secrets and use this:\n",
        "\n",
        "# from kaggle_secrets import UserSecretsClient\n",
        "# secret_label = \"your-secret-label\"\n",
        "# os.environ[\"KAGGLE_USERNAME\"] = UserSecretsClient().get_secret(\"KAGGLE_USERNAME\")\n",
        "# os.environ[\"KAGGLE_KEY\"] = UserSecretsClient().get_secret(\"KAGGLE_KEY\")\n",
        "\n",
        "# On a GCP TPU/GPU VM, set the environment variables before running this notebook."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mzhbIuGotgda"
      },
      "outputs": [],
      "source": [
        "# formatting utility\n",
        "from IPython.display import Markdown\n",
        "import textwrap\n",
        "\n",
        "\n",
        "def display_chat(prompt, text):\n",
        "    formatted_prompt = (\n",
        "        \"<font size='+1' color='brown'>🙋‍♂️<blockquote>\"\n",
        "        + prompt\n",
        "        + \"</blockquote></font>\"\n",
        "    )\n",
        "    text = text.replace(\"•\", \"  *\")\n",
        "    text = textwrap.indent(text, \"> \", predicate=lambda _: True)\n",
        "    formatted_text = \"<font size='+1' color='teal'>🤖\\n\\n\" + text + \"\\n</font>\"\n",
        "    return Markdown(formatted_prompt + formatted_text)\n",
        "\n",
        "\n",
        "def to_markdown(text):\n",
        "    text = text.replace(\"•\", \"  *\")\n",
        "    return Markdown(textwrap.indent(text, \"> \", predicate=lambda _: True))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wo1xkzr62hXN"
      },
      "source": [
        "# Load the 7B model, with model parallelism"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xrR8TpVS6uPs"
      },
      "source": [
        "The code below specifies how the weights of the model get partitioned across multiple accelerators.\n",
        "\n",
        "Detailed documentation is available in the [Keras 3 distribution API guide](https://keras.io/guides/distribution/)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8Wgh8h0qQCcu"
      },
      "outputs": [],
      "source": [
        "# Create a device mesh with shape (1, 8) to partition weights across all 8 TPU cores.\n",
        "devices = keras.distribution.list_devices()\n",
        "device_mesh = keras.distribution.DeviceMesh((1, 8), [\"batch\", \"model\"], devices)\n",
        "\n",
        "# Create a LayoutMap to partition relevant weights\n",
        "layout_map = keras.distribution.LayoutMap(device_mesh)\n",
        "# Weights that match 'token_embedding/embeddings' will be sharded across the 8 TPU cores\n",
        "layout_map[\"token_embedding/embeddings\"] = (\"model\", None)\n",
        "# Regex to match against the query, key and value matrices in attention layers\n",
        "layout_map[\"decoder_block.*attention.*(query|key|value).kernel\"] = (\"model\", None, None)\n",
        "layout_map[\"decoder_block.*attention_output.kernel\"] = (\"model\", None, None)\n",
        "layout_map[\"decoder_block.*ffw_gating.kernel\"] = (None, \"model\")\n",
        "layout_map[\"decoder_block.*ffw_linear.kernel\"] = (\"model\", None)\n",
        "\n",
        "# Make ModelParallel loading with this LayoutMap the default\n",
        "keras.distribution.set_distribution(\n",
        "    keras.distribution.ModelParallel(device_mesh, layout_map, batch_dim_name=\"batch\")\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dTULEQ29qidE"
      },
      "source": [
        "Load the model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bu48vUnbQj0p"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Downloading from https://www.kaggle.com/api/v1/models/keras/gemma/keras/gemma_1.1_instruct_7b_en/3/download/task.json...\n",
            "Downloading from https://www.kaggle.com/api/v1/models/keras/gemma/keras/gemma_1.1_instruct_7b_en/3/download/preprocessor.json...\n",
            "normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
          ]
        }
      ],
      "source": [
        "gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset(\"gemma_1.1_instruct_7b_en\")\n",
        "gemma_lm.compile(sampler=keras_nlp.samplers.GreedySampler())  # this is the default"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ORCOIawAvpZ1"
      },
      "source": [
        "Verify that the model has been partitioned correctly. Let's take `decoder_block_1` as an example."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "DqT7TRHKvoMK"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "<class 'keras_nlp.src.models.gemma.gemma_decoder_block.GemmaDecoderBlock'>\n",
            "decoder_block_1/pre_attention_norm/scale                    (3072,)           PartitionSpec(None,)\n",
            "decoder_block_1/attention/query/kernel                      (16, 3072, 256)   PartitionSpec('model', None, None)\n",
            "decoder_block_1/attention/key/kernel                        (16, 3072, 256)   PartitionSpec('model', None, None)\n",
            "decoder_block_1/attention/value/kernel                      (16, 3072, 256)   PartitionSpec('model', None, None)\n",
            "decoder_block_1/attention/attention_output/kernel           (16, 256, 3072)   PartitionSpec('model', None, None)\n",
            "decoder_block_1/pre_ffw_norm/scale                          (3072,)           PartitionSpec(None,)\n",
            "decoder_block_1/ffw_gating/kernel                           (3072, 24576)     PartitionSpec(None, 'model')\n",
            "decoder_block_1/ffw_gating_2/kernel                         (3072, 24576)     PartitionSpec(None, None)\n",
            "decoder_block_1/ffw_linear/kernel                           (24576, 3072)     PartitionSpec('model', None)\n"
          ]
        }
      ],
      "source": [
        "decoder_block_1 = gemma_lm.backbone.get_layer(\"decoder_block_1\")\n",
        "print(type(decoder_block_1))\n",
        "for variable in decoder_block_1.weights:\n",
        "    print(\n",
        "        f\"{variable.path:<58}  {str(variable.shape):<16}  {str(variable.value.sharding.spec)}\"\n",
        "    )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jc0ZzYIW0TSN"
      },
      "source": [
        "# Let's chat\n",
        "\n",
        "We loaded the instruction-tuned model gemma_1.1_instruct_7b_en, which understands the following turn tokens:\n",
        "```\n",
        "<start_of_turn>user\\n  ... <end_of_turn>\\n\n",
        "<start_of_turn>model\\n ... <end_of_turn>\\n\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1CPDXJ-z-Ydv"
      },
      "source": [
        "### Chat helper (to manage discussion state)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "N0koS7C_MgQ-"
      },
      "outputs": [],
      "source": [
        "class ChatState:\n",
        "\n",
        "    __START_TURN_USER__ = \"<start_of_turn>user\\n\"\n",
        "    __START_TURN_MODEL__ = \"<start_of_turn>model\\n\"\n",
        "    __END_TURN__ = \"<end_of_turn>\\n\"\n",
        "\n",
        "    def __init__(self, model, system=\"\"):\n",
        "        self.model = model\n",
        "        self.system = system\n",
        "        self.history = []\n",
        "\n",
        "    def add_to_history_as_user(self, message):\n",
        "        self.history.append(self.__START_TURN_USER__ + message + self.__END_TURN__)\n",
        "\n",
        "    def add_to_history_as_model(self, message):\n",
        "        self.history.append(self.__START_TURN_MODEL__ + message + self.__END_TURN__)\n",
        "\n",
        "    def get_history(self):\n",
        "        return \"\".join([*self.history])\n",
        "\n",
        "    def get_full_prompt(self):\n",
        "        prompt = self.get_history() + self.__START_TURN_MODEL__\n",
        "        if len(self.system) > 0:\n",
        "            prompt = self.system + \"\\n\" + prompt\n",
        "        return prompt\n",
        "\n",
        "    def send_message(self, message):\n",
        "        self.add_to_history_as_user(message)\n",
        "        prompt = self.get_full_prompt()\n",
        "        response = self.model.generate(prompt, max_length=1024)\n",
        "        result = response.replace(prompt, \"\")\n",
        "        self.add_to_history_as_model(result)\n",
        "        return result"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "eI1p0iZMh3ef"
      },
      "outputs": [
        {
          "data": {
            "text/markdown": [
              "<font size='+1' color='brown'>🙋‍♂️<blockquote>Tell me, in a few words,  how to compute all prime numbers up to 1000?</blockquote></font><font size='+1' color='teal'>🤖\n",
              "\n",
              "> Use the Sieve of Eratosthenes algorithm.\n",
              "</font>"
            ],
            "text/plain": [
              "<IPython.core.display.Markdown object>"
            ]
          },
          "execution_count": 9,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "chat = ChatState(gemma_lm)\n",
        "message = \"Tell me, in a few words,  how to compute all prime numbers up to 1000?\"\n",
        "display_chat(message, chat.send_message(message))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0drcbj-66wCA"
      },
      "outputs": [
        {
          "data": {
            "text/markdown": [
              "<font size='+1' color='brown'>🙋‍♂️<blockquote>Now in Python. And don't forget to print the result.</blockquote></font><font size='+1' color='teal'>🤖\n",
              "\n",
              "> ```python\n",
              "> def sieve(n):\n",
              ">     is_prime = [True] * (n+1)\n",
              ">     is_prime[0] = is_prime[1] = False\n",
              ">     for i in range(2, int(n**0.5) + 1):\n",
              ">         if is_prime[i]:\n",
              ">             for j in range(i*i, n+1, i):\n",
              ">                 is_prime[j] = False\n",
              ">     primes = [i for i, is_prime in enumerate(is_prime) if is_prime]\n",
              ">     return primes\n",
              "> \n",
              "> primes = sieve(1000)\n",
              "> print(primes)\n",
              "> ```\n",
              "</font>"
            ],
            "text/plain": [
              "<IPython.core.display.Markdown object>"
            ]
          },
          "execution_count": 10,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "message = \"Now in Python. And don't forget to print the result.\"\n",
        "display_chat(message, chat.send_message(message))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mtr5ZEJMHp-l"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997]\n"
          ]
        }
      ],
      "source": [
        "def sieve(n):\n",
        "    is_prime = [True] * (n + 1)\n",
        "    is_prime[0] = is_prime[1] = False\n",
        "    for i in range(2, int(n**0.5) + 1):\n",
        "        if is_prime[i]:\n",
        "            for j in range(i * i, n + 1, i):\n",
        "                is_prime[j] = False\n",
        "    primes = [i for i, is_prime in enumerate(is_prime) if is_prime]\n",
        "    return primes\n",
        "\n",
        "\n",
        "primes = sieve(1000)\n",
        "print(primes)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "KMHW4chPjk8E"
      },
      "outputs": [
        {
          "data": {
            "text/markdown": [
              "<font size='+1' color='brown'>🙋‍♂️<blockquote>Thank you, it works! Can you explain the code in plain English?</blockquote></font><font size='+1' color='teal'>🤖\n",
              "\n",
              "> **Step 1: Initialization**\n",
              "> \n",
              "> - We create a list of 1001 boolean values (True/False), where each index represents a number from 0 to 1000.\n",
              "> - We set the first two values to False because 0 and 1 are not prime numbers.\n",
              "> \n",
              "> **Step 2: Iterating over numbers**\n",
              "> \n",
              "> - We loop through numbers from 2 to the square root of 1000.\n",
              "> - For each prime number we encounter, we mark all its multiples as non-prime.\n",
              "> \n",
              "> **Step 3: Identifying primes**\n",
              "> \n",
              "> - After the loop, we iterate over the list and collect the indices of values that are still True, which indicates they are prime numbers.\n",
              "> \n",
              "> **Step 4: Outputting results**\n",
              "> \n",
              "> - Finally, we print the list of prime numbers identified.\n",
              "> \n",
              "> **How it works:**\n",
              "> \n",
              "> - The Sieve of Eratosthenes is an efficient algorithm for finding prime numbers.\n",
              "> - By marking multiples of prime numbers as non-prime, it reduces the number of candidates we need to check.\n",
              "> - The loop stops at the square root of 1000 because any number greater than the square root would have already been marked as a multiple of a smaller prime.\n",
              "> \n",
              "> **Output:**\n",
              "> \n",
              "> The code will output a list of all prime numbers up to 1000, like this:\n",
              "> \n",
              "> ```\n",
              "> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ... ]\n",
              "> ```\n",
              "</font>"
            ],
            "text/plain": [
              "<IPython.core.display.Markdown object>"
            ]
          },
          "execution_count": 12,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "message = \"Thank you, it works! Can you explain the code in plain English?\"\n",
        "display_chat(message, chat.send_message(message))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2G39JfiQidlc"
      },
      "outputs": [
        {
          "data": {
            "text/markdown": [
              "<font size='+1' color='brown'>🙋‍♂️<blockquote>Great! Now add those explanations as comments in the code, line by line.</blockquote></font><font size='+1' color='teal'>🤖\n",
              "\n",
              "> ```python\n",
              "> # This function implements the Sieve of Eratosthenes algorithm.\n",
              "> def sieve(n):\n",
              ">     # Create a list of boolean values, where each index represents a number from 0 to n.\n",
              ">     is_prime = [True] * (n+1)\n",
              ">     \n",
              ">     # Set the first two values to False because 0 and 1 are not prime numbers.\n",
              ">     is_prime[0] = is_prime[1] = False\n",
              ">     \n",
              ">     # Iterate over numbers from 2 to the square root of n.\n",
              ">     for i in range(2, int(n**0.5) + 1):\n",
              ">         # If the current number is prime, mark all its multiples as non-prime.\n",
              ">         if is_prime[i]:\n",
              ">             for j in range(i*i, n+1, i):\n",
              ">                 is_prime[j] = False\n",
              ">     \n",
              ">     # Identify prime numbers by iterating over the list and collecting indices of values that are still True.\n",
              ">     primes = [i for i, is_prime in enumerate(is_prime) if is_prime]\n",
              ">     \n",
              ">     # Return the list of prime numbers.\n",
              ">     return primes\n",
              "> \n",
              "> # Get prime numbers up to 1000.\n",
              "> primes = sieve(1000)\n",
              "> \n",
              "> # Print the results.\n",
              "> print(primes)\n",
              "> ```\n",
              "</font>"
            ],
            "text/plain": [
              "<IPython.core.display.Markdown object>"
            ]
          },
          "execution_count": 13,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "message = \"Great! Now add those explanations as comments in the code, line by line.\"\n",
        "display_chat(message, chat.send_message(message))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UaBq9PoXvcgF"
      },
      "source": [
        "Copy-pasted code from Gemma's answer:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7hm8EO64Hsgq"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997]\n"
          ]
        }
      ],
      "source": [
        "# This function implements the Sieve of Eratosthenes algorithm.\n",
        "def sieve(n):\n",
        "    # Create a list of boolean values, where each index represents a number from 0 to n.\n",
        "    is_prime = [True] * (n + 1)\n",
        "\n",
        "    # Set the first two values to False because 0 and 1 are not prime numbers.\n",
        "    is_prime[0] = is_prime[1] = False\n",
        "\n",
        "    # Iterate over numbers from 2 to the square root of n.\n",
        "    for i in range(2, int(n**0.5) + 1):\n",
        "        # If the current number is prime, mark all its multiples as non-prime.\n",
        "        if is_prime[i]:\n",
        "            for j in range(i * i, n + 1, i):\n",
        "                is_prime[j] = False\n",
        "\n",
        "    # Identify prime numbers by iterating over the list and collecting indices of values that are still True.\n",
        "    primes = [i for i, is_prime in enumerate(is_prime) if is_prime]\n",
        "\n",
        "    # Return the list of prime numbers.\n",
        "    return primes\n",
        "\n",
        "\n",
        "# Get prime numbers up to 1000.\n",
        "primes = sieve(1000)\n",
        "\n",
        "# Print the results.\n",
        "print(primes)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IcPCXCwvXC7t"
      },
      "source": [
        "# Finetune to speak like a pirate"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xiLW0SpI1PfC"
      },
      "source": [
        "## Enable LoRA\n",
        "\n",
        "Perform finetuning using [Low Rank Adaptation](https://arxiv.org/abs/2106.09685) (LoRA). LoRA is a fine-tuning technique that greatly reduces the number of trainable parameters for downstream tasks by freezing the full weights of the model and inserting a small number of new trainable weights. In effect, LoRA reparameterizes each large weight matrix as the frozen original plus the product of two much smaller low-rank matrices, A and B, and trains only those, which makes training much faster and more memory-efficient."
      ]
    },
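    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the low-rank idea concrete, here is a minimal NumPy sketch of the reparameterization. The shapes and rank below are illustrative assumptions for a single dense layer, not the actual KerasNLP internals:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "d_in, d_out, rank = 3072, 3072, 8\n",
        "W = np.random.randn(d_in, d_out)  # frozen full weight matrix\n",
        "A = np.random.randn(d_in, rank)   # trainable low-rank factor A\n",
        "B = np.zeros((rank, d_out))       # trainable low-rank factor B, initialized to zero\n",
        "\n",
        "x = np.random.randn(2, d_in)\n",
        "y = x @ W + x @ A @ B             # effective weight is W + A @ B\n",
        "\n",
        "# Trainable parameter count drops from d_in*d_out to rank*(d_in + d_out)\n",
        "print(d_in * d_out, rank * (d_in + d_out))\n",
        "```\n",
        "\n",
        "Because B starts at zero, the adapted layer initially behaves exactly like the frozen one; training then updates only A and B."
      ]
    },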
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3o_Gi3v_jp7s"
      },
      "outputs": [],
      "source": [
        "# Enable LoRA for the model and set the LoRA rank to 8.\n",
        "gemma_lm.backbone.enable_lora(rank=8)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "yAdaFfI4izR9"
      },
      "outputs": [],
      "source": [
        "# The model still works normally\n",
        "gemma_lm.generate(\"Hello\", max_length=1024)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Wpxw7PNt-_Nx"
      },
      "source": [
        "## Pirate dataset\n",
        "\n",
        "A text dataset, piratified using the arrr Python package :-)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Gv3mqm307YJ4"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Virgin australia commenced services on 31 august 2000 as virgin blue, with two aircraft on a single route. Arrr!\n",
            "Tope\n",
            "Camels use th' fat in their humps to keep them filled with energy and hydration for long periods of time.\n",
            "Th' name of th' third daughter be alice\n",
            "Tomoaki komorida was born on july 10,1981.\n",
            "No. Stalemate be a drawn position. It doesn't matter who has captured more pieces or be in a winning position\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "2024-04-08 23:06:33.102440: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
          ]
        }
      ],
      "source": [
        "from datasets import load_dataset\n",
        "\n",
        "ds = load_dataset(\n",
        "    \"databricks/databricks-dolly-15k\",\n",
        "    split=\"train\",\n",
        ")\n",
        "eng = ds.with_format(\"np\", columns=\"response\", output_all_columns=False)\n",
        "eng = eng[:2048]  # Use a subset of the dataset for faster training\n",
        "\n",
        "pirate = []\n",
        "\n",
        "# Use Arrr.py to translate response into pirate speak\n",
        "from arrr import translate\n",
        "\n",
        "for x in eng[\"response\"]:\n",
        "    pirate.append(translate(x))\n",
        "\n",
        "import tensorflow as tf  # for tf.data\n",
        "\n",
        "ds = tf.data.Dataset.from_tensor_slices(pirate)\n",
        "ds = ds.batch(2, drop_remainder=True)\n",
        "\n",
        "for batch in ds.take(3):\n",
        "    for text in batch:\n",
        "        print(text.numpy().decode(\"utf-8\"))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5mlNw2S5AFqU"
      },
      "source": [
        "## Pirate-tune!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1-hQFy7hXWRl"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">Preprocessor: \"gemma_causal_lm_preprocessor\"</span>\n",
              "</pre>\n"
            ],
            "text/plain": [
              "\u001b[1mPreprocessor: \"gemma_causal_lm_preprocessor\"\u001b[0m\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
              "┃<span style=\"font-weight: bold\"> Tokenizer (type)                                   </span>┃<span style=\"font-weight: bold\">                                             Vocab # </span>┃\n",
              "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n",
              "│ gemma_tokenizer (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">GemmaTokenizer</span>)                   │                                             <span style=\"color: #00af00; text-decoration-color: #00af00\">256,000</span> │\n",
              "└────────────────────────────────────────────────────┴─────────────────────────────────────────────────────┘\n",
              "</pre>\n"
            ],
            "text/plain": [
              "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
              "┃\u001b[1m \u001b[0m\u001b[1mTokenizer (type)                                  \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m                                            Vocab #\u001b[0m\u001b[1m \u001b[0m┃\n",
              "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n",
              "│ gemma_tokenizer (\u001b[38;5;33mGemmaTokenizer\u001b[0m)                   │                                             \u001b[38;5;34m256,000\u001b[0m │\n",
              "└────────────────────────────────────────────────────┴─────────────────────────────────────────────────────┘\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">Model: \"gemma_causal_lm\"</span>\n",
              "</pre>\n"
            ],
            "text/plain": [
              "\u001b[1mModel: \"gemma_causal_lm\"\u001b[0m\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
              "┃<span style=\"font-weight: bold\"> Layer (type)                  </span>┃<span style=\"font-weight: bold\"> Output Shape              </span>┃<span style=\"font-weight: bold\">         Param # </span>┃<span style=\"font-weight: bold\"> Connected to               </span>┃\n",
              "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n",
              "│ padding_mask (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">InputLayer</span>)     │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>)              │               <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │ -                          │\n",
              "├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤\n",
              "│ token_ids (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">InputLayer</span>)        │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>)              │               <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │ -                          │\n",
              "├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤\n",
              "│ gemma_backbone                │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">3072</span>)        │   <span style=\"color: #00af00; text-decoration-color: #00af00\">8,559,815,680</span> │ padding_mask[<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>][<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>],        │\n",
              "│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">GemmaBackbone</span>)               │                           │                 │ token_ids[<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>][<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>]            │\n",
              "├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤\n",
              "│ token_embedding               │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">256000</span>)      │     <span style=\"color: #00af00; text-decoration-color: #00af00\">786,432,000</span> │ gemma_backbone[<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>][<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>]       │\n",
              "│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">ReversibleEmbedding</span>)         │                           │                 │                            │\n",
              "└───────────────────────────────┴───────────────────────────┴─────────────────┴────────────────────────────┘\n",
              "</pre>\n"
            ],
            "text/plain": [
              "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
              "┃\u001b[1m \u001b[0m\u001b[1mLayer (type)                 \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape             \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m        Param #\u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mConnected to              \u001b[0m\u001b[1m \u001b[0m┃\n",
              "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n",
              "│ padding_mask (\u001b[38;5;33mInputLayer\u001b[0m)     │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m)              │               \u001b[38;5;34m0\u001b[0m │ -                          │\n",
              "├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤\n",
              "│ token_ids (\u001b[38;5;33mInputLayer\u001b[0m)        │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m)              │               \u001b[38;5;34m0\u001b[0m │ -                          │\n",
              "├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤\n",
              "│ gemma_backbone                │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m3072\u001b[0m)        │   \u001b[38;5;34m8,559,815,680\u001b[0m │ padding_mask[\u001b[38;5;34m0\u001b[0m][\u001b[38;5;34m0\u001b[0m],        │\n",
              "│ (\u001b[38;5;33mGemmaBackbone\u001b[0m)               │                           │                 │ token_ids[\u001b[38;5;34m0\u001b[0m][\u001b[38;5;34m0\u001b[0m]            │\n",
              "├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤\n",
              "│ token_embedding               │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m256000\u001b[0m)      │     \u001b[38;5;34m786,432,000\u001b[0m │ gemma_backbone[\u001b[38;5;34m0\u001b[0m][\u001b[38;5;34m0\u001b[0m]       │\n",
              "│ (\u001b[38;5;33mReversibleEmbedding\u001b[0m)         │                           │                 │                            │\n",
              "└───────────────────────────────┴───────────────────────────┴─────────────────┴────────────────────────────┘\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Total params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">8,559,815,680</span> (31.89 GB)\n",
              "</pre>\n"
            ],
            "text/plain": [
              "\u001b[1m Total params: \u001b[0m\u001b[38;5;34m8,559,815,680\u001b[0m (31.89 GB)\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">22,134,784</span> (84.44 MB)\n",
              "</pre>\n"
            ],
            "text/plain": [
              "\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m22,134,784\u001b[0m (84.44 MB)\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Non-trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">8,537,680,896</span> (31.81 GB)\n",
              "</pre>\n"
            ],
            "text/plain": [
              "\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m8,537,680,896\u001b[0m (31.81 GB)\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Epoch 1/8\n",
            "\u001b[1m1024/1024\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 109ms/step - loss: 1.6315 - sparse_categorical_accuracy: 0.4726"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "2024-04-08 16:07:22.208332: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001b[1m1024/1024\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m157s\u001b[0m 109ms/step - loss: 1.6309 - sparse_categorical_accuracy: 0.4726\n",
            "Epoch 2/8\n",
            "\u001b[1m1024/1024\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m105s\u001b[0m 103ms/step - loss: 0.6997 - sparse_categorical_accuracy: 0.5312\n",
            "Epoch 3/8\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "2024-04-08 16:09:07.695223: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[1m1024/1024\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m105s\u001b[0m 103ms/step - loss: 0.6243 - sparse_categorical_accuracy: 0.5671\n",
            "Epoch 4/8\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "2024-04-08 16:10:52.996947: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[1m1024/1024\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m105s\u001b[0m 103ms/step - loss: 0.5289 - sparse_categorical_accuracy: 0.6214\n",
            "Epoch 5/8\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "2024-04-08 16:12:38.196787: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[1m1024/1024\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m105s\u001b[0m 103ms/step - loss: 0.4240 - sparse_categorical_accuracy: 0.6893\n",
            "Epoch 6/8\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "2024-04-08 16:14:23.414510: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[1m1024/1024\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m105s\u001b[0m 103ms/step - loss: 0.3281 - sparse_categorical_accuracy: 0.7559\n",
            "Epoch 7/8\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "2024-04-08 16:16:08.598979: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[1m1024/1024\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m105s\u001b[0m 103ms/step - loss: 0.2489 - sparse_categorical_accuracy: 0.8148\n",
            "Epoch 8/8\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "2024-04-08 16:17:53.797181: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[1m1024/1024\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m105s\u001b[0m 103ms/step - loss: 0.1876 - sparse_categorical_accuracy: 0.8632\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "2024-04-08 16:19:38.968305: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
          ]
        }
      ],
      "source": [
        "# Limit the input sequence length to 256 to control memory usage.\n",
        "gemma_lm.preprocessor.sequence_length = 256\n",
        "# Use AdamW (a common optimizer for transformer models).\n",
        "optimizer = keras.optimizers.AdamW(\n",
        "    learning_rate=1e-4,\n",
        "    weight_decay=0.01,\n",
        ")\n",
        "# Exclude layernorm scale and bias terms from weight decay.\n",
        "optimizer.exclude_from_weight_decay(var_names=[\"bias\", \"scale\"])\n",
        "\n",
        "gemma_lm.compile(\n",
        "    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
        "    optimizer=optimizer,\n",
        "    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],\n",
        "    sampler=keras_nlp.samplers.GreedySampler(),\n",
        ")\n",
        "gemma_lm.summary()\n",
        "history = gemma_lm.fit(ds, epochs=8)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CnpeavB4fZ7Y"
      },
      "source": [
        "Note that enabling LoRA reduces the number of trainable parameters significantly: as the summary above shows, only about 22 million of the model's roughly 8.5 billion total parameters are trainable."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bn7nknTcT52b"
      },
      "outputs": [
        {
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAiMAAAGzCAYAAAD9pBdvAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8fJSN1AAAACXBIWXMAAA9hAAAPYQGoP6dpAAA9mUlEQVR4nO3dd3hUVeLG8XdmUoEUQkhCQkIIXUroAUJTI6iIZS1YYVFEEZXi7iqugvpTcYusutKxra6KumJDQEAJXRBE6SUFQklCwBQCaTPz+yMYlwUkwWROJvP9PM88j97cO/e9EZ3XM/fcY3E6nU4BAAAYYjUdAAAAeDbKCAAAMIoyAgAAjKKMAAAAoygjAADAKMoIAAAwijICAACMoowAAACjKCMAAMAoygiAavHmm2/KYrEoPT3ddBQAboYyAgAAjKKMAAAAoygjAADAKMoIgBozY8YMtW/fXr6+voqMjNTYsWOVm5t7xj579+7VjTfeqIiICPn5+alp06a69dZblZeXV7HP0qVL1bdvXwUHB6tBgwZq06aNHn/8cRdfDYCa4mU6AIC66amnntLTTz+tpKQkjRkzRrt379bMmTO1ceNGrVmzRt7e3iopKdHgwYNVXFyshx56SBERETp06JC++OIL5ebmKigoSNu3b9c111yjTp066ZlnnpGvr6/27dunNWvWmL5EANWEMgKg2h09elRTp07VoEGDtGjRIlmt5YOwbdu21YMPPqh33nlHI0eO1I4dO5SWlqYPP/xQN910U8XxkydPrvjrpUuXqqSkRIsWLVJoaKjLrwVAzeNrGgDVbtmyZSopKdH48eMriogk3XvvvQoMDNTChQslSUFBQZKkJUuW6OTJk+d8r+DgYEnSp59+KofDUbPBARhBGQFQ7fbv3y9JatOmzRnbfXx8FBcXV/Hz5s2ba+LEiZo3b55CQ0M1ePBgTZ8+/Yz7RYYNG6bExESNGjVK4eHhuvXWW/XBBx9QTIA6hDICwKgXX3xRP/74ox5//HGdOnVKDz/8sNq3b6+DBw9Kkvz9/bVy5UotW7ZMd911l3788UcNGzZMV1xxhex2u+H0AKoDZQRAtWvWrJkkaffu3WdsLykpUVpaWsXPf9axY0c98cQTWrlypVatWqVDhw5p1qxZFT+3Wq26/PLLNW3aNO3YsUPPPfecvv76a33zzTc1fzEAahxlBEC1S0pKko+Pj1555RU5nc6K7a+99pry8vI0ZMgQSVJ+fr7KysrOOLZjx46yWq0qLi6WJB0/fvys9+/cubMkVewDwL0xmwZAtWvcuLEmTZqkp59+WldeeaWuvfZa7d69WzNmzFCPHj105513SpK+/vprPfjgg7r55pvVunVrlZWV6e2335bNZtONN94oSXrmmWe0cuVKDRkyRM2aNVN2drZmzJihpk2bqm/fviYvE0A1oYwAqBFPPfWUGjdurFdffVUTJkxQSEiIRo8ereeff17e3t6SpPj4eA0ePFiff/65Dh06pHr16ik+Pl6LFi1Sr169JEnXXnut0tPT9frrrysnJ0ehoaEaMGCAnn766YrZOADcm8X532OoAAAALsY9IwAAwCjKCAAAMIoyAgAAjKKMAAAAoygjAADAKMoIAAAwyi2eM+JwOHT48GEFBATIYrGYjgMAACrB6XSqoKBAkZGRZ6zg/b/coowcPnxY0dHRpmMAAICLkJGRoaZNm573525RRgICAiSVX0xgYKDhNAAAoDLy8/MVHR1d8Tl+Pm5RRn7+aiYwMJAyAgCAm7nQLRbcwAoAAIyijAAAAKMoIwAAwCjKCAAAMIoyAgAAjKKMAAAAoygjAADAKMoIAAAwijICAACMoowAAACjKCMAAMAoyggAADDKY8uIw+HU4m2ZGvH6Bp0sKTMdBwAAj+W5ZcTp1PNf7lTynqOavzHDdBwAADxWlcvIypUrNXToUEVGRspiseiTTz654DErVqxQ165d5evrq5
YtW+rNN9+8iKjVy8tm1X0D4iRJc1emqqTMYTgRAACeqcplpLCwUPHx8Zo+fXql9k9LS9OQIUN06aWXasuWLRo/frxGjRqlJUuWVDlsdbuxa1M1DvDV4bwifbrlkOk4AAB4JIvT6XRe9MEWixYsWKDrr7/+vPs8+uijWrhwobZt21ax7dZbb1Vubq4WL15cqfPk5+crKChIeXl5CgwMvNi45zQrOUUvLNqlFo3ra+mEAbJaLdX6/gAAeKrKfn7X+D0j69atU1JS0hnbBg8erHXr1p33mOLiYuXn55/xqil3JMQo0M9LKUcL9dWOzBo7DwAAOLcaLyOZmZkKDw8/Y1t4eLjy8/N16tSpcx4zdepUBQUFVbyio6NrLF+An7eG946VJM1YkaLfMFAEAAAuQq2cTTNp0iTl5eVVvDIyana2y8jEWPl5W/XjwTytTTlWo+cCAABnqvEyEhERoaysrDO2ZWVlKTAwUP7+/uc8xtfXV4GBgWe8alKjBr66tUeMJGnGin01ei4AAHCmGi8jvXv31vLly8/YtnTpUvXu3bumT10lo/o1l5fVojX7jumHjFzTcQAA8BhVLiMnTpzQli1btGXLFknlU3e3bNmiAwcOSCr/imX48OEV+99///1KTU3Vn/70J+3atUszZszQBx98oAkTJlTPFVSTpg3r6drOkZIYHQEAwJWqXEa+++47denSRV26dJEkTZw4UV26dNHkyZMlSUeOHKkoJpLUvHlzLVy4UEuXLlV8fLxefPFFzZs3T4MHD66mS6g+Ywa0kCQt2Z6lfdkFhtMAAOAZftNzRlylJp8z8r9G/+s7fbUjSzd2baoXb4mv0XMBAFCX1ZrnjLibBy5tKUn6dMshHco999RjAABQfSgj/6NzdLD6tGikModTc1emmo4DAECdRxk5hwcGlo+OvL/xgI6dKDacBgCAuo0ycg6JLRupY1SQikodenNtuuk4AADUaZSRc7BYLHpgYPnMmrfWpqugqNRwIgAA6i7KyHkMbh+huMb1lV9Upne/PXDhAwAAwEWhjJyH1WrR/aefOzJvdZqKSu2GEwEAUDdRRn7F9Z2j1CTIT0cLivXx5kOm4wAAUCdRRn6Fj5dV9/aLkyTNXpmiMrvDcCIAAOoeysgF3NozWg3reWv/sZP6clum6TgAANQ5lJELqOfjpd/3aS5JmrkiRW7w9HwAANwKZaQSRvRppvo+Nu08kq8Vu4+ajgMAQJ1CGamE4Ho+uj0hRpI0Y8U+w2kAAKhbKCOVNKpfnHxsVm1M/0kb04+bjgMAQJ1BGamk8EA/3dgtSlL5vSMAAKB6UEaq4L7+LWS1SF/vytbOI/mm4wAAUCdQRqogNrS+rurYRBKjIwAAVBfKSBWNOf2I+C9+PKz9xwoNpwEAwP1RRqqoQ1SQBrRuLIdTmr0y1XQcAADcHmXkIjwwsHx05KPvDio7v8hwGgAA3Btl5CL0bB6ibs0aqsTu0Gtr0kzHAQDArVFGLoLFYqkYHfn3+gPKO1VqOBEAAO6LMnKRLm0TpjbhATpRXKa316WbjgMAgNuijFwkq9WiMadHR15fk65TJXbDiQAAcE+Ukd/gmk5NFB3ir+OFJZq/8YDpOAAAuCXKyG/gZbNqdP/y0ZG5q9JUancYTgQAgPuhjPxGN3drqtAGvjqUe0qfbTlsOg4AAG6HMvIb+XnbdE/f5pKkmckpcjichhMBAOBeKCPV4M5eMQrw89K+7BNaujPLdBwAANwKZaQaBPh5665ezSRJM1akyOlkdAQAgMqijFSTu/s2l6+XVT9k5GpdyjHTcQAAcBuUkWoS2sBXw3pESyofHQEAAJVDGalG9/aLk81q0ep9OfrxYK7pOAAAuAXKSDWKDqmn6+IjJUkzGR0BAKBSKCPV7P7Tj4hfvD1T+7JPGE4DAEDtRxmpZq3DA5TULlxOpzQ7mdERAAAuhDJSAx64tHx0ZMH3h3Q495ThNAAA1G6UkRrQNaahesWFqMzh1NxVqa
bjAABQq1FGasgDA1tKkt7fkKHjhSWG0wAAUHtRRmpIv1ah6hAVqFOldr25Nt10HAAAai3KSA2xWCwVoyNvrU3XieIyw4kAAKidKCM1aHD7CMWF1lfeqVK99+0B03EAAKiVKCM1yGa16L4BcZKkeatTVVxmN5wIAIDahzJSw27o0lQRgX7Kyi/Wx5sPmY4DAECtQxmpYT5eVo3q11xS+UPQ7A6n4UQAANQulBEXuK1njILreSv92Ekt2nbEdBwAAGoVyogL1Pf10u/7xEqSZnyTIqeT0REAAH5GGXGREb1jVc/Hph1H8pW856jpOAAA1BqUERdpWN9Ht/WMkSTNWMECegAA/Iwy4kKj+jWXt82iDWnHtWn/cdNxAACoFSgjLtQkyF+/69JUUvm9IwAAgDLicvcNiJPFIi3fla1dmfmm4wAAYBxlxMXiGjfQ1R2aSJJmce8IAACUERPGDGwhSfr8xyPKOH7ScBoAAMyijBjQISpI/VqFyu5wavZKRkcAAJ6NMmLIAwNbSpI++O6gsguKDKcBAMAcyoghveJC1CUmWCVlDr2+Ot10HAAAjKGMGGKxWCpGR95Zv195p0oNJwIAwAzKiEGXtw1T6/AGOlFcpnfW7zcdBwAAIygjBlmtloqZNW+sSVNRqd1wIgAAXI8yYtg1nSIVFeyvnBMl+uC7DNNxAABwOcqIYd42q+4bECdJmp2cqlK7w3AiAABcizJSC9zSPVqhDXx0KPeUPv/hsOk4AAC4FGWkFvDztmlkYnNJ0swVKXI4nIYTAQDgOpSRWuKu3s0U4OulvdkntHxXtuk4AAC4DGWklgj089advZtJkmas2Cenk9ERAIBnoIzUIncnNpePl1XfH8jV+tTjpuMAAOASlJFapHGAr27p3lRS+egIAACegDJSy9zXv4VsVotW7c3R1oN5puMAAFDjKCO1THRIPQ3t1ESSNDOZ0REAQN13UWVk+vTpio2NlZ+fnxISErRhw4Zf3f+ll15SmzZt5O/vr+joaE2YMEFFRUUXFdgTjDm9gN6ibZlKPXrCcBoAAGpWlcvI/PnzNXHiRE2ZMkWbN29WfHy8Bg8erOzsc09Hfffdd/XYY49pypQp2rlzp1577TXNnz9fjz/++G8OX1e1iQhQUrswOZ3lT2UFAKAuq3IZmTZtmu69916NHDlSl1xyiWbNmqV69erp9ddfP+f+a9euVWJiom6//XbFxsZq0KBBuu222y44muLpfh4d+fj7gzqSd8pwGgAAak6VykhJSYk2bdqkpKSkX97AalVSUpLWrVt3zmP69OmjTZs2VZSP1NRUffnll7r66qvPe57i4mLl5+ef8fI03Zo1VM/mISq1OzVvVZrpOAAA1JgqlZGcnBzZ7XaFh4efsT08PFyZmZnnPOb222/XM888o759+8rb21stWrTQwIEDf/VrmqlTpyooKKjiFR0dXZWYdcYDA1tIkt7bcEA/FZYYTgMAQM2o8dk0K1as0PPPP68ZM2Zo8+bN+vjjj7Vw4UL93//933mPmTRpkvLy8ipeGRkZNR2zVhrQurHaRwbqZIldb65NNx0HAIAaUaUyEhoaKpvNpqysrDO2Z2VlKSIi4pzHPPnkk7rrrrs0atQodezYUTfccIOef/55TZ06VQ6H45zH+Pr6KjAw8IyXJ7JYLBpzenTkzbXpKiwuM5wIAIDqV6Uy4uPjo27dumn58uUV2xwOh5YvX67evXuf85iTJ0/Kaj3zNDabTZJYf6USrurQRM1D6yvvVKne23DAdBwAAKpdlb+mmThxoubOnau33npLO3fu1JgxY1RYWKiRI0dKkoYPH65JkyZV7D906FDNnDlT77//vtLS0rR06VI9+eSTGjp0aEUpwfnZrBbd1z9OkjRvVZqKy+yGEwEAUL28qnrAsGHDdPToUU2ePFmZmZnq3LmzFi9eXHFT64EDB84YCXniiSdksVj0xBNP6NChQ2rcuLGGDh2q5557rvquoo67oWuU/rFsjz
Lzi/TJ94c0rEeM6UgAAFQbi9MNvivJz89XUFCQ8vLyPPb+kbkrU/XclzvVPLS+lk0cIJvVYjoSAAC/qrKf36xN4yZuS4hRkL+30nIKtXjbuadRAwDgjigjbqKBr5dG9ImVJM1YsY+bfwEAdQZlxI2M7BMrf2+bth/O16q9OabjAABQLSgjbqRhfR/d1rP85tUZK/YZTgMAQPWgjLiZe/s3l7fNovWpx7X5wE+m4wAA8JtRRtxMkyB/Xd85SpI045sUw2kAAPjtKCNu6P6BLWSxSMt2Zml3ZoHpOAAA/CaUETfUonEDXdm+fC2gWcmMjgAA3BtlxE09MLClJOmzHw4r4/hJw2kAALh4lBE31bFpkPq1CpXd4dTcVamm4wAAcNEoI25szMAWkqT5GzN0tKDYcBoAAC4OZcSN9Y5rpPjoYBWXOfTGmjTTcQAAuCiUETdmsVj0wOnRkbfX7Vd+UanhRAAAVB1lxM1d0S5crcIaqKC4TO+s3286DgAAVUYZcXNWq0X3DygfHXl9dZqKSu2GEwEAUDWUkTrg2s6Rigr2V86JEn246aDpOAAAVAllpA7wtlk1un+cJGnOyhSV2R2GEwEAUHmUkTrilu7RalTfRxnHT+mLH4+YjgMAQKVRRuoIfx+bRibGSpJmrkiRw+E0GwgAgEqijNQhd/WOVQNfL+3OKtDXu7JNxwEAoFIoI3VIkL+37ugVI0masWKfnE5GRwAAtR9lpI65p29z+XhZtflArjakHTcdBwCAC6KM1DFhAX66uVtTSdKMFSmG0wAAcGGUkTrovv4tZLVIyXuOatuhPNNxAAD4VZSROiimUT1d0ylSkjQzmdERAEDtRhmpo8acXkBv0dYjSsspNJwGAIDzo4zUUe2aBOqytmFyOKXZjI4AAGoxykgd9sDp0ZH/bD6ozLwiw2kAADg3ykgd1j02RD1jQ1Rqd+q11amm4wAAcE6UkTpuzKXloyP//vaAck+WGE4DAMDZKCN13MDWjdWuSaBOltj11tr9puMAAHAWykgdZ7FYKmbWvLk2TSdLygwnAgDgTJQRD3B1hwg1a1RPP50s1XsbMkzHAQDgDJQRD+Bls+q+/uWjI/NWpaqkzGE4EQAAv6CMeIgbu0UpLMBXR/KK9MmWQ6bjAABQgTLiIXy9bBrVr7kkaVZyiuwOp+FEAACUo4x4kNsTminQz0upRwv11fZM03EAAJBEGfEoDXy9NKJPrCRpxooUOZ2MjgAAzKOMeJjf94mVn7dVWw/lafW+HNNxAACgjHiaRg18dWuPGEnSjG9YQA8AYB5lxAPd2z9OXlaL1qUe0/cHfjIdBwDg4SgjHigq2F/Xd4mSJM1cwegIAMAsyoiHun9AnCwW6asdWdqbVWA6DgDAg1FGPFTLsAANuiRckjTm35u1eNsROXj2CADAAMqIB3tkUBsF+nlpX/YJ3f/OZg3552ot3pZJKQEAuJTF6QYPm8jPz1dQUJDy8vIUGBhoOk6dknuyRK+tTtMba9J1orh8Rd9LmgRqXFIrDbokXBaLxXBCAIC7quznN2UEkiglAIDqRxnBRfmp8OdSkqbCErskqX1koMZd3kpXUEoAAFVAGcFv8lNhieatTtWba9LPKCXjk1orqV0YpQQAcEGUEVQLSgkA4GJRRlCtjheWaN6qVL25Nl0nT5eSDlGBGn95a11OKQEAnANlBDXieGGJ5q5K1Vv/VUo6RgVpfFIrXdaWUgIA+AVlBDXqeGGJ5qxM1b/W/VJKOjUtLyWXtqGUAAAoI3CRYyeKNXdVGqUEAHAWyghc6tiJYs1Zlap/rd2vU6XlpSS+aZDGJ7XWwDaNKSUA4IEoIzDi2Ini01/f/FcpiQ7W+KRWGtiaUgIAnoQyAqNyThRrLqUEADwaZQS1Qk7FSEm6ikodkqTOp0vJAEoJANRplBHUKkcLijVnZYreXr+/opR0iQnW+KTW6t8qlFICAHUQZQS1EqUEADwHZQS1WnZBkeYkp+qdb38pJV
1Pl5J+lBIAqBMoI3AL2QVFmp2cqnfW71dxWXkp6dasocYntVLflpQSAHBnlBG4FUoJANQ9lBG4pez8Is1KTtW/v/2llHRv1lDjk1orsWUjSgkAuBHKCNxadn6RZian6N/fHlDJ6VLSI7a8lPRpQSkBAHdAGUGdkJVfpFmUEgBwS5QR1ClZ+UWauSJF7274pZT0jA3R+KRW6k0pAYBaiTKCOikzr3yk5IxS0ry8lPRpEWo4HQDgv1FGUKedq5QkNA/R+KTW6t2ikeF0AACp8p/f1ot58+nTpys2NlZ+fn5KSEjQhg0bfnX/3NxcjR07Vk2aNJGvr69at26tL7/88mJODUiSIoL89NS17bXyj5dqRO9m8rFZ9W3acd02d72GzV6ndSnHTEcEAFRSlUdG5s+fr+HDh2vWrFlKSEjQSy+9pA8//FC7d+9WWFjYWfuXlJQoMTFRYWFhevzxxxUVFaX9+/crODhY8fHxlTonIyO4kCN5pzRzRYre35ChEnv5SEmvuPKRkl5xjJQAgAk19jVNQkKCevTooVdffVWS5HA4FB0drYceekiPPfbYWfvPmjVLf/vb37Rr1y55e3tX8TLKUUZQWecqJb3jGml8UislUEoAwKVqpIyUlJSoXr16+uijj3T99ddXbB8xYoRyc3P16aefnnXM1VdfrZCQENWrV0+ffvqpGjdurNtvv12PPvqobDbbOc9TXFys4uLiMy4mOjqaMoJKO5xbXkrmb6SUAIApNXLPSE5Ojux2u8LDw8/YHh4erszMzHMek5qaqo8++kh2u11ffvmlnnzySb344ot69tlnz3ueqVOnKigoqOIVHR1dlZiAIoP99X/Xd9CKPw7Unb1i5G2zaF3qMQ2bs163z12vDWnHTUcEAJx2UTewVoXD4VBYWJjmzJmjbt26adiwYfrzn/+sWbNmnfeYSZMmKS8vr+KVkZFR0zFRR0UG++vZ6ztqxR8v1R0J5aVkbcox3TJ7ne6Yt14b0yklAGCaV1V2Dg0Nlc1mU1ZW1hnbs7KyFBERcc5jmjRpIm9v7zO+kmnXrp0yMzNVUlIiHx+fs47x9fWVr69vVaIBvyoq2F/P3dBRD1zaUjO+2acPvsvQmn3HtGbfOvVrFaqnrm2vFo0bmI4JAB6pSiMjPj4+6tatm5YvX16xzeFwaPny5erdu/c5j0lMTNS+ffvkcDgqtu3Zs0dNmjQ5ZxEBatLPpeSbPwzU7adHSlbtzdFVL63StKV7VFRqNx0RADxOlb+mmThxoubOnau33npLO3fu1JgxY1RYWKiRI0dKkoYPH65JkyZV7D9mzBgdP35c48aN0549e7Rw4UI9//zzGjt2bPVdBVBFTRvW0/M3dNTyiQN1aZvGKrE79Mryvbrq5VVavTfHdDwA8ChV+ppGkoYNG6ajR49q8uTJyszMVOfOnbV48eKKm1oPHDggq/WXjhMdHa0lS5ZowoQJ6tSpk6KiojRu3Dg9+uij1XcVwEWKaVRPr/++hxZty9TTn29XWk6h7nztW13fOVJ/HnKJGgfwdSEA1DQeBw+cVlBUqhe/2qO31qXL6ZQC/bz02FXtdGuPaFmtLMQHAFXF2jTARfrxYK4eX7BV2w7lS5K6NWuo527ooLYR/NkDgKqo0bVpgLqsU9NgffJAoiZfc4nq+9i0af9PuuaV1Zq6aKdOlpSZjgcAdQ5lBDgHL5tVd/dtrmWPDNCV7SNU5nBqdnKqrpi2Ul/vyrrwGwAAKo0yAvyKJkH+mnVXN702oruigv11KPeU7n7zO415Z5My84pMxwOAOoEyAlTC5e3CtXRif43uHyeb1aJF2zKVNC1Zb6xJk91R62+7AoBajTICVFI9Hy89fnU7ff5gX3WJCdaJ4jI9/fkOXT99jbYezDMdDwDcFmUEqKJLIgP1n/v76LkbOijQz0tbD+Xpuumr9fTn23WimBtcAaCqKCPARbBaLbojoZmWPzJQ13WOlMMpvbEmXUkvJmvR1iNygxnzAF
BrUEaA36BxgK9evrWL3r6np2Ib1VNmfpHG/Huz7nnrO2UcP2k6HgC4BcoIUA36tWqsxeP76+HLWsrbZtHXu7I16B8rNSs5RaV2x4XfAAA8GGUEqCZ+3jZNHNRGi8b1U0LzEJ0qteuFRbs09J+rtWn/T6bjAUCtRRkBqlnLsAC9P7qX/nZTJzWs561dmQW6ceZaTfp4q/JOlpqOBwC1DmUEqAEWi0U3d4/W8kcG6uZuTSVJ7204oMunrdCnWw5xgysA/BfKCFCDQur76G83x2v+6F5qGdZAOSdKNO79LbrrtQ1Kzyk0HQ8AagXKCOACCXGN9OXD/fSHQa3l62XV6n05GvTSSr2yfK+Ky+ym4wGAUZQRwEV8vKx68LJW+mpCf/VrFaqSMoemLd2jq15epXUpx0zHAwBjKCOAizVrVF//urunXrmti0Ib+Cr1aKFum7tej3zwg44XlpiOBwAuRxkBDLBYLLo2PlLLHxmgO3vFyGKR/rP5oC57cYU+2JghB4vvAfAglBHAoCB/bz17fUd9PKaP2jUJVO7JUv3pPz/q1jnrtTerwHQ8AHAJyghQC3SJaajPH0zUn69uJ39vmzakH9fVr6zS35bsUlEpN7gCqNsoI0At4WWz6t7+cVr2yAAltQtXqd2p6d+kaNA/Vip5z1HT8QCgxlBGgFomKthf80Z01+y7uqlJkJ8OHD+pEa9v0IPvblZ2fpHpeABQ7SgjQC01uH2Elk4coHv6NpfVIn3x4xFd/mKy3l6XLjs3uAKoQyxON3gudX5+voKCgpSXl6fAwEDTcQCX23YoT39esFU/HMyTJMVHB+v5GzqofWSQ4WQAcH6V/fxmZARwAx2igvTxA4l65rr2auDrpR8ycnXtq2v07Bc7VFhcZjoeAPwmlBHATdisFg3vHavljwzQkI5NZHc4NW91mq6YlqyvtmeajgcAF40yAriZ8EA/Tb+jq94Y2UNNG/rrcF6RRr+9Sff+6zsdzj1lOh4AVBllBHBTl7YJ09IJA/TAwBbyslq0dEeWkqYla96qVJXZHabjAUClUUYAN+bvY9OfrmyrL8f1U4/YhjpZYtezC3fq2lfXaEtGrul4AFAplBGgDmgdHqD5o3vrLzd2VJC/t3YcydcNM9boyU+2Kb+o1HQ8APhVlBGgjrBaLRrWI0bLHxmg33WJktMpvb1+vy5/MVmf/3BYbjCLH4CHoowAdUxoA19NG9ZZ745KUFxofR0tKNZD732v37+xUQeOnTQdDwDOQhkB6qg+LUO1aHw/jU9qJR+bVcl7juqKfyRr+jf7VFLGDa4Aag/KCFCH+XrZND6ptRaP76c+LRqpuMyhvy3ZrSGvrNLG9OOm4wGAJMoI4BHiGjfQv0cl6B/D4tWovo/2Zp/QzbPW6dGPftRPhSWm4wHwcJQRwENYLBbd0KWplj8yQLf1jJYkzf8uQ5dPS9aC7w9ygysAYygjgIcJruejqb/rpI/u76024QE6XliiCfN/0Ig3NirjODe4AnA9ygjgobrHhuiLh/vqj4PbyMfLqpV7jmrQP1byBFcALkcZATyYt82qsZe21OJx/dQrLkSnSsuf4HrDjLXafjjPdDwAHoIyAkBxjRvovXt76S83dlSgn5e2HsrTta+u0dRFO3WqxG46HoA6jjICQFL5Da7DesRo2SMDNKRjE9kdTs1OTtWVL6/Umn05puMBqMMoIwDOEBbgp+l3dNW84d3VJMhP+4+d1B3zvtUfP/xBuSeZBgyg+lFGAJxT0iXh+mpCfw3v3UwWi/ThpoNKmpasz1jnBkA1o4wAOK8AP289c10HfXR/b7UKa6CcEyV6+L3vdfebG3Uo95TpeADqCMoIgAvq1ixECx/upwlJreVjs+qb3Ud1xbRkvbEmTXYHoyQAfhvKCIBK8fGyalxSK305rq96xDbUyRK7nv58h26cuVa7MvNNxwPgxigjAKqkZViA5o/urWev76AAXy9tycjVNa+s1t+X7FZRKdOAAVQdZQ
RAlVmtFt3Zq5mWThygwe3DVeZw6tVv9unql1dpfeox0/EAuBnKCICLFhHkp9l3ddesO7sqLMBXqTmFunXOek36+EflnSo1HQ+Am6CMAPjNruzQREsnDtDtCTGSpPc2ZChpWrK+3HqEacAALogyAqBaBPl76/kbOmr+6F6Ka1xfRwuK9cC/N2v025uUmVdkOh6AWowyAqBaJcQ10pcP99PDl7WUt82ipTuylDQtWW+v3y8H04ABnANlBEC18/O2aeKgNvrioX7qEhOsE8VlevKTbbpl9jrtzSowHQ9ALUMZAVBj2kQE6KP7++jpa9urvo9N3+3/SVe/skr/WLpHxWVMAwZQjjICoEbZrBaN6BOrpRMH6PK2YSq1O/Xy8r0a8spqfZd+3HQ8ALUAZQSAS0QG+2veiO569fYuCm3go33ZJ3TTrHV64pOtyi9iGjDgySgjAFzGYrHomk6RWjZxgG7p3lSS9M76A7piWrK+2p5pOB0AUygjAFwuuJ6P/npTvN69N0GxjeopK79Yo9/epDHvbFJ2PtOAAU9DGQFgTJ8WoVo8vr8eGNhCNqtFi7Zl6vJpyXr32wNMAwY8CGUEgFF+3jb96cq2+vzBvurUNEgFRWV6fMFW3Tp3vVKOnjAdD4ALUEYA1AqXRAZqwQOJemJIO/l727Qh7biuenmVXv16r0rKHKbjAahBlBEAtYbNatGofnH6akJ/DWjdWCVlDv39qz0a+s/V+v7AT6bjAaghlBEAtU50SD29ObKHXr61s0Lq+2h3VoF+N3Otnvpsu04Ul5mOB6CaUUYA1EoWi0XXdY7SsokD9LuuUXI6pTfXpmvQtGR9vSvLdDwA1YgyAqBWC6nvo2m3dNa/7u6p6BB/Hc4r0t1vfqeH3vteRwuKTccDUA0oIwDcQv/WjbVkfH+N7h8nq0X6/IfDSpqWrA++y5DTyTRgwJ1RRgC4jXo+Xnr86nb6dGxftY8MVN6pUv3pox91x7xvlZ5TaDoegItEGQHgdjo2DdKnYxM16aq28vO2am3KMQ1+aaVmrkhRqZ1pwIC7oYwAcEteNqvuG9BCS8b3V9+WoSouc+gvi3fp2lfX6MeDuabjAaiCiyoj06dPV2xsrPz8/JSQkKANGzZU6rj3339fFotF119//cWcFgDO0qxRfb19T0/9/eZ4Bdfz1s4j+bp++ho9+8UOnSxhGjDgDqpcRubPn6+JEydqypQp2rx5s+Lj4zV48GBlZ2f/6nHp6en6wx/+oH79+l10WAA4F4vFopu6NdWyiQN0bXykHE5p3uo0DfrHSiXvOWo6HoALsDireBt6QkKCevTooVdffVWS5HA4FB0drYceekiPPfbYOY+x2+3q37+/7r77bq1atUq5ubn65JNPKn3O/Px8BQUFKS8vT4GBgVWJC8ADfbMrW098sk2Hck9Jkm7oEqUnhrRTowa+hpMBnqWyn99VGhkpKSnRpk2blJSU9MsbWK1KSkrSunXrznvcM888o7CwMN1zzz2VOk9xcbHy8/PPeAFAZV3aNkxfTeivuxOby2KRFnx/SEnTkvXx5oNMAwZqoSqVkZycHNntdoWHh5+xPTw8XJmZmec8ZvXq1Xrttdc0d+7cSp9n6tSpCgoKqnhFR0dXJSYAqL6vlyYPvUQLHkhU24gA/XSyVBM/+EHDX9+gjOMnTccD8F9qdDZNQUGB7rrrLs2dO1ehoaGVPm7SpEnKy8ureGVkZNRgSgB1WefoYH3+UF/9cXAb+XhZtWpvjgb9Y6XmrUpVGdOAgVrBqyo7h4aGymazKSvrzHUhsrKyFBERcdb+KSkpSk9P19ChQyu2ORzl//J7eXlp9+7datGixVnH+fr6yteX73YBVA9vm1VjL22pqzpE6PEFW7U+9bieXbhTn245rBdu7Kj2kUGmIwIerUojIz4+PurWrZuWL19esc3hcGj58uXq3bv3Wfu3bdtWW7du1ZYtWype1157rS699FJt2bKFr18AuFRc4wZ6795e+s
uNHRXo56Wth/J07atr9NzCHcovKjUdD/BYVRoZkaSJEydqxIgR6t69u3r27KmXXnpJhYWFGjlypCRp+PDhioqK0tSpU+Xn56cOHTqccXxwcLAknbUdAFzBYrFoWI8YXdo2TE9/tkMLtx7R3FVp+s/mQxqf1Eq39YyRt43nQQKuVOUyMmzYMB09elSTJ09WZmamOnfurMWLF1fc1HrgwAFZrfyLDKB2Cwvw0/Q7uuqmXdl6duEOpRwt1ORPt+uttel6/Op2uqxtmCwWi+mYgEeo8nNGTOA5IwBqUqndofc3HNA/lu3V8cISSVJiy0b689WX6JJI/psDXKzKfn5TRgDgtPyiUs34JkWvr05Tid0hi0W6qWtT/WFwG4UH+pmOB7gdyggAXKSM4yf11yW79fkPhyVJ/t423TcgTqP7x6meT5W/3QY8FmUEAH6jzQd+0rNf7NDmA7mSpPBAXz0yqI1u7NpUNiv3kwAXQhkBgGrgdDq1cOsR/WXxLmUcL1/r5pImgXpiSDv1aVn5hzkCnogyAgDVqLjMrrfWpuufX+9TQVGZJCmpXZgeu6qdWoY1MJwOqJ0oIwBQA44XlujlZXv0zrcHZHc4ZbNadEdCjMYntVZIfR/T8YBahTICADVoX/YJvbBop5btzJYkBfh56cFLW+r3ibHy9bIZTgfUDpQRAHCBtSk5evaLndpxJF+SFB3ir0evbKshHZvw0DR4PMoIALiI3eHUx5sP6u9f7VZWfrEkqWtMsJ645hJ1jWloOB1gDmUEAFzsZEmZ5q5M06zkFJ0qtUuSrunURI9e2VbRIfUMpwNcjzICAIZk5Rfpxa9268NNB+V0Sj42q0b2jdXYS1sq0M/bdDzAZSgjAGDY9sN5ev7LnVqz75gkKaS+DysDw6NQRgCgFnA6nfpmd7aeW7hTKUcLJUktGtdnZWB4BMoIANQirAwMT0QZAYBaiJWB4UkoIwBQi7EyMDwBZQQA3MC5Vgb+w+mVga2sDAw3RxkBADfBysCoqygjAOBmWBkYdQ1lBADcFCsDo66gjACAm2NlYLg7yggA1BGsDAx3RRkBgDqElYHhjigjAFAHsTIw3AllBADqsLNWBvayamQiKwOjdqGMAIAHONfKwBNOrwzsxcrAMIwyAgAegpWBUVtRRgDAw7AyMGobyggAeChWBkZtQRkBAA/HysAwjTICAJDEysAwhzICAKjAysAwgTICADjL+VYGfuiyVoqPDjYbDnUOZQQAcF7/uzKwJPWKC9F9A1poYOvGTAdGtaCMAAAuaF/2Cc1YsU+fbTmsstOlpE14gEb3j9PQ+Ej5ePHgNFw8yggAoNIO557S66vT9N6GAyosKV/zpkmQn+5ObK5be0YrgEfM4yJQRgAAVZZ3qlT//na/3liTrqMF5asDB/h56Y6EZhqZGMtzSlAllBEAwEUrLrPr0+8Pa/bKlIpHzHvbLLqhS5RG949Ty7AAwwnhDigjAIDfzOFw6utd2Zq9MkUb03+q2J7ULkz3DWih7s0acrMrzosyAgCoVpv2/6Q5K1P01Y4s/fzJ0SUmWPf1j9MVl0TIxgPU8D8oIwCAGpF69ITmrkrTfzYfVEmZQ5LUPLS+7u0Xp991jZKft81wQtQWlBEAQI06WlCst9am61/r0pV/+gFqoQ189Ps+sbqzVzMF1/MxnBCmUUYAAC5RWFym+Rsz9NrqNB3KLX/UfD0fm4b1iNY9fZuracN6hhPCFMoIAMClSu0Ofbn1iGYlp2rnkXxJks1q0TWdmmh0/zi1jwwynBCuRhkBABjhdDq1el+OZienavW+nIrt/VqFanT/OPVtGcoMHA9BGQEAGLftUJ7mrEzVwq1HKtbAuaRJoO4bEKchHZvIy8bj5usyyggAoNbIOH5Sr61O0/yNGTpVWv64+ahgf93Tt7mG9YhWfV8vwwlREygjAIBa56fCEr2zfr/eWpeunBMlkqQgf2/d1auZRvSJVeMAX8MJUZ0oIwCAWquo1K7/bD
6oeavSlJZT/rh5Hy+rburWVPf2i1Pz0PqGE6I6UEYAALWe3eHU0h2ZmpWcqi0ZuZIki0UafEmERg+IU9eYhmYD4jehjAAA3IbT6dTG9PLHzS/bmV2xvWdsiEb3j9NlbcNk5XHzbocyAgBwS3uzCjR3VaoWfH9Ipfbyj6iWYQ00ul+crusSKV8vHjfvLigjAAC3lpVfpDfWpOvf6/eroLj8cfNhAb4amdhctyfEKMjf23BCXAhlBABQJxQUler9DeWPm8/ML5IkNfD10m09o3V33+ZqEuRvOCHOhzICAKhTSsoc+vyHw5q9MkV7sk5IkrysFl3bOVL39W+hNhEBhhPif1FGAAB1ktPp1IrdRzV7ZYrWpx6v2D6wTWPd17+FesWF8Lj5WoIyAgCo837IyNWclalatO2ITj9tXp2aBum+/i10ZYcI2ZiBYxRlBADgMfYfK9S8VWn6cFOGikodkqSYkHq6t19z3dQtWv4+zMAxgTICAPA4x04U61/r9utf69L108lSSVJIfR8N791Mw3vHKqS+j+GEnoUyAgDwWKdK7PpwU4bmrkpVxvFTkiQ/b6tu6R6tUX3jFNOonuGEnoEyAgDweGV2hxZvz9Ts5FRtPZQnSbJapKs6NtF9/ePUqWmw2YB1HGUEAIDTnE6n1qUe0+zkVCXvOVqxvXdcI903IE4DWjdmBk4NoIwAAHAOO4/ka+7KVH32w2GVnZ6CExXsr8HtI3Rlhwh1a9aQWTjVhDICAMCvOJx7Sq+vTtN7Gw6osMResT20gY+uuCRcg9pHqE+LRqyF8xtQRgAAqIRTJXat3HtUS7ZnatmOLOUXlVX8LMDXS5e2DdOVHSI0oHVj1ff1MpjU/VBGAACoolK7Q9+mHtfi7Uf01fYsZRcUV/zMx8uq/q1CNbh9hJLahash04QviDICAMBv4HA4teVgrpZsy9SS7ZlKP3ay4mc2q0UJzUM0uH2EBrUPZ7G+86CMAABQTZxOp3ZnFWjJtiwt3p6pnUfyz/h5fHSwBrcP15XtIxTXuIGhlLUPZQQAgBpy4NhJLdlePmKy6cBP+u9P0lZhDXRlhwgNbh+h9pGBHj1lmDICAIALZBcUaemOLC3ZnqW1+3IqpgtLv0wZHtw+XN1jQzxuynBlP7+tF/Pm06dPV2xsrPz8/JSQkKANGzacd9+5c+eqX79+atiwoRo2bKikpKRf3R8AAHcSFuCnOxKa6V9399SmJ6/QS8M668r2EfL3tulQ7im9viZNw+asV8/nlumx//yob3Znq7jMfuE39iBVHhmZP3++hg8frlmzZikhIUEvvfSSPvzwQ+3evVthYWFn7X/HHXcoMTFRffr0kZ+fn/7yl79owYIF2r59u6Kioip1TkZGAADu5r+nDC/fma28U6UVP/t5yvDg9hEa2KbuThmusa9pEhIS1KNHD7366quSJIfDoejoaD300EN67LHHLni83W5Xw4YN9eqrr2r48OGVOidlBADgzn6eMvzzfSbnmjI8qH2ErqhjU4Yr+/ldpSpWUlKiTZs2adKkSRXbrFarkpKStG7dukq9x8mTJ1VaWqqQkJDz7lNcXKzi4l/+QeXn5593XwAAajtvm1V9W4Wqb6tQPX1t+/Ipw9sztWRb+ZThZTuztWxntmxWi3rGhujKDp41ZbhKZSQnJ0d2u13h4eFnbA8PD9euXbsq9R6PPvqoIiMjlZSUdN59pk6dqqeffroq0QAAcAtWq0VdYxqqa0xDPXZlW+3JOqHFp59lsuNIvtalHtO61GOa8tl2j5ky7NIvqV544QW9//77WrFihfz8/M6736RJkzRx4sSKv8/Pz1d0dLQrIgIA4DIWi0VtIgLUJiJA45Ja6cCxk/pqR6YWbyufMvxDRq5+yMjVXxfvrtNThqtURkJDQ2Wz2ZSVlXXG9qysLEVERPzqsX//+9/1wgsvaNmyZerUqdOv7uvr6ytfX9+qRAMAwO3FNKqnUf3iNKpf3BlThtel5Ghv9gnt/X
qf/vn1vjo3ZfiibmDt2bOn/vnPf0oqv4E1JiZGDz744HlvYP3rX/+q5557TkuWLFGvXr2qHJIbWAEAnizvVKm+2ZWtJdsztWL3UZ0q/WVqcKP65asMD+5Q+1YZrrHZNPPnz9eIESM0e/Zs9ezZUy+99JI++OAD7dq1S+Hh4Ro+fLiioqI0depUSdJf/vIXTZ48We+++64SExMr3qdBgwZq0KBy339RRgAAKHeqxK5Ve49q8TmmDDfw9dJltWjKcI3MppGkYcOG6ejRo5o8ebIyMzPVuXNnLV68uOKm1gMHDshq/eVZajNnzlRJSYluuummM95nypQpeuqpp6p6egAAPJq/j02D2kdoUPuIM6YMf7UjU1n5xfrsh8P67IfDZ0wZTmoXrpBaPGWYx8EDAFAHVKwy/F9Thn9masowa9MAAOChnE7nWVOG/9vPU4YHt49QixqcMkwZAQAAkqSM47+sMvzd/rNXGR7cPkLDekQrOqRetZ6XMgIAAM6SXVCkZTuytXh7ptal5KjUXl4D/j0qQYktQ6v1XDV2AysAAHBfYQF+uj0hRrcnxCjvVKlW7M7Wit1H1bP5+ZdpqWmUEQAAPFSQv7eu6xyl6zpHGc1hvfAuAAAANYcyAgAAjKKMAAAAoygjAADAKMoIAAAwijICAACMoowAAACjKCMAAMAoyggAADCKMgIAAIyijAAAAKMoIwAAwCjKCAAAMMotVu11Op2SpPz8fMNJAABAZf38uf3z5/j5uEUZKSgokCRFR0cbTgIAAKqqoKBAQUFB5/25xXmhulILOBwOHT58WAEBAbJYLNX2vvn5+YqOjlZGRoYCAwOr7X3diaf/Djz9+iV+B1y/Z1+/xO+gJq/f6XSqoKBAkZGRslrPf2eIW4yMWK1WNW3atMbePzAw0CP/AP43T/8dePr1S/wOuH7Pvn6J30FNXf+vjYj8jBtYAQCAUZQRAABglEeXEV9fX02ZMkW+vr6moxjj6b8DT79+id8B1+/Z1y/xO6gN1+8WN7ACAIC6y6NHRgAAgHmUEQAAYBRlBAAAGEUZAQAARlFGAACAUR5dRqZPn67Y2Fj5+fkpISFBGzZsMB3JZVauXKmhQ4cqMjJSFotFn3zyielILjV16lT16NFDAQEBCgsL0/XXX6/du3ebjuUyM2fOVKdOnSqeuNi7d28tWrTIdCxjXnjhBVksFo0fP950FJd56qmnZLFYzni1bdvWdCyXOnTokO688041atRI/v7+6tixo7777jvTsVwmNjb2rD8DFotFY8eOdXkWjy0j8+fP18SJEzVlyhRt3rxZ8fHxGjx4sLKzs01Hc4nCwkLFx8dr+vTppqMYkZycrLFjx2r9+vVaunSpSktLNWjQIBUWFpqO5hJNmzbVCy+8oE2bNum7777TZZddpuuuu07bt283Hc3lNm7cqNmzZ6tTp06mo7hc+/btdeTIkYrX6tWrTUdymZ9++kmJiYny9vbWokWLtGPHDr344otq2LCh6Wgus3HjxjP++S9dulSSdPPNN7s+jNND9ezZ0zl27NiKv7fb7c7IyEjn1KlTDaYyQ5JzwYIFpmMYlZ2d7ZTkTE5ONh3FmIYNGzrnzZtnOoZLFRQUOFu1auVcunSpc8CAAc5x48aZjuQyU6ZMccbHx5uOYcyjjz7q7Nu3r+kYtcq4ceOcLVq0cDocDpef2yNHRkpKSrRp0yYlJSVVbLNarUpKStK6desMJoMpeXl5kqSQkBDDSVzPbrfr/fffV2FhoXr37m06jkuNHTtWQ4YMOeO/BZ5k7969ioyMVFxcnO644w4dOHDAdCSX+eyzz9S9e3fdfPPNCgsLU5cuXTR37lzTsYwpKSnRO++8o7vvvlsWi8Xl5/fIMpKTkyO73a7w8PAztoeHhyszM9NQKpjicDg0fvx4JSYmqkOHDqbjuMzWrVvVoEED+fr66v7779eCBQt0ySWXmI7lMu+//742b96sqVOnmo5iREJCgt58800tXrxYM2
fOVFpamvr166eCggLT0VwiNTVVM2fOVKtWrbRkyRKNGTNGDz/8sN566y3T0Yz45JNPlJubq9///vdGzu9l5KxALTJ27Fht27bNo74vl6Q2bdpoy5YtysvL00cffaQRI0YoOTnZIwpJRkaGxo0bp6VLl8rPz890HCOuuuqqir/u1KmTEhIS1KxZM33wwQe65557DCZzDYfDoe7du+v555+XJHXp0kXbtm3TrFmzNGLECMPpXO+1117TVVddpcjISCPn98iRkdDQUNlsNmVlZZ2xPSsrSxEREYZSwYQHH3xQX3zxhb755hs1bdrUdByX8vHxUcuWLdWtWzdNnTpV8fHxevnll03HcolNmzYpOztbXbt2lZeXl7y8vJScnKxXXnlFXl5estvtpiO6XHBwsFq3bq19+/aZjuISTZo0Oat4t2vXzqO+qvrZ/v37tWzZMo0aNcpYBo8sIz4+PurWrZuWL19esc3hcGj58uUe9525p3I6nXrwwQe1YMECff3112revLnpSMY5HA4VFxebjuESl19+ubZu3aotW7ZUvLp376477rhDW7Zskc1mMx3R5U6cOKGUlBQ1adLEdBSXSExMPGs6/549e9SsWTNDicx54403FBYWpiFDhhjL4LFf00ycOFEjRoxQ9+7d1bNnT7300ksqLCzUyJEjTUdziRMnTpzxf0BpaWnasmWLQkJCFBMTYzCZa4wdO1bvvvuuPv30UwUEBFTcKxQUFCR/f3/D6WrepEmTdNVVVykmJkYFBQV69913tWLFCi1ZssR0NJcICAg46/6g+vXrq1GjRh5z39Af/vAHDR06VM2aNdPhw4c1ZcoU2Ww23XbbbaajucSECRPUp08fPf/887rlllu0YcMGzZkzR3PmzDEdzaUcDofeeOMNjRgxQl5eBiuBy+fv1CL//Oc/nTExMU4fHx9nz549nevXrzcdyWW++eYbp6SzXiNGjDAdzSXOde2SnG+88YbpaC5x9913O5s1a+b08fFxNm7c2Hn55Zc7v/rqK9OxjPK0qb3Dhg1zNmnSxOnj4+OMiopyDhs2zLlv3z7TsVzq888/d3bo0MHp6+vrbNu2rXPOnDmmI7nckiVLnJKcu3fvNprD4nQ6nWZqEAAAgIfeMwIAAGoPyggAADCKMgIAAIyijAAAAKMoIwAAwCjKCAAAMIoyAgAAjKKMAAAAoygjAADAKMoIAAAwijICAACM+n8XJBjtZv7TUwAAAABJRU5ErkJggg==",
            "text/plain": [
              "<Figure size 640x480 with 1 Axes>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from matplotlib import pyplot as plt\n",
        "\n",
        "plt.title(\"loss\")\n",
        "plt.plot(history.history[\"loss\"])\n",
        "plt.show()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "eQa-h43R7-Vy"
      },
      "outputs": [],
      "source": [
        "gemma_lm.backbone.save_lora_weights(\"pirate.gemma1.1_7bi.lora.h5\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fi-za4uMyh_H"
      },
      "outputs": [],
      "source": [
        "# Load from previously trained LoRA checkpoint\n",
        "#!gsutil cp gs://gemma-pirate/pirate.gemma7bi.lora.h5 .\n",
        "#!gsutil cp gs://gemma-pirate/pirate.gemma1.1_7bi.lora.h5 .\n",
        "# gemma_lm.backbone.enable_lora(rank=8)\n",
        "# gemma_lm.backbone.load_lora_weights(\"pirate.gemma7bi.lora.h5\")\n",
        "# gemma_lm.backbone.load_lora_weights(\"pirate.gemma1.1_7bi.lora.h5\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lBiOKlAy2MAe"
      },
      "source": [
        "## Let's chat as pirates. Arrrr!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "539ZjgJimaCO"
      },
      "outputs": [
        {
          "data": {
            "text/markdown": [
              "<font size='+1' color='brown'>🙋‍♂️<blockquote>Ahoy there</blockquote></font><font size='+1' color='teal'>🤖\n",
              "\n",
              "> Ahoy! It's nice to meet ye. What would ye like to talk about?\n",
              "</font>"
            ],
            "text/plain": [
              "<IPython.core.display.Markdown object>"
            ]
          },
          "execution_count": 17,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "chat = ChatState(gemma_lm)\n",
        "message = \"Ahoy there\"\n",
        "display_chat(message, chat.send_message(message))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9yNyJ8CLXfw0"
      },
      "outputs": [
        {
          "data": {
            "text/markdown": [
              "<font size='+1' color='brown'>🙋‍♂️<blockquote>Prime numbers, for sure!?</blockquote></font><font size='+1' color='teal'>🤖\n",
              "\n",
              "> Arrr! That be a very interesting topic. Do ye have any specific questions about them?\n",
              "</font>"
            ],
            "text/plain": [
              "<IPython.core.display.Markdown object>"
            ]
          },
          "execution_count": 18,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "message = \"Prime numbers, for sure!?\"\n",
        "display_chat(message, chat.send_message(message))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "S0dKbvvnl6lW"
      },
      "outputs": [
        {
          "data": {
            "text/markdown": [
              "<font size='+1' color='brown'>🙋‍♂️<blockquote>Yes, give my Python computing them primes up to 1000!</blockquote></font><font size='+1' color='teal'>🤖\n",
              "\n",
              "> Here be some python code that prints all th' prime numbers from 1 to 1000:\n",
              "> \n",
              "> ```python\n",
              "> print('Prime numbers from 1 to 1000:\\n')\n",
              "> i = 2\n",
              "> while i <= 1000:\n",
              ">     is_prime = True\n",
              ">     for j in range(2, i):\n",
              ">         if i % j == 0:\n",
              ">             is_prime = False\n",
              ">             break\n",
              ">     if is_prime:\n",
              ">         print(i)\n",
              ">     i += 1\n",
              "> ```\n",
              "</font>"
            ],
            "text/plain": [
              "<IPython.core.display.Markdown object>"
            ]
          },
          "execution_count": 19,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "message = \"Yes, give my Python computing them primes up to 1000!\"\n",
        "display_chat(message, chat.send_message(message))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "F7YOuhxYvV7d"
      },
      "source": [
        "Copy-pasted code from Gemma's answer:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "L8cYF42hH2id"
      },
      "outputs": [],
      "source": [
        "print(\"Prime numbers from 1 to 1000:\\n\")\n",
        "i = 2\n",
        "while i <= 1000:\n",
        "    is_prime = True\n",
        "    for j in range(2, i):\n",
        "        if i % j == 0:\n",
        "            is_prime = False\n",
        "            break\n",
        "    if is_prime:\n",
        "        print(i)\n",
        "    i += 1"
      ]
    },
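    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Gemma's loop tests every divisor `j` below `i`, which is roughly O(n^2) work overall. As a sketch of our own (not part of Gemma's answer), trial division only needs to check divisors up to the square root of each candidate, since any factor pair `(a, b)` with `a * b == n` has `min(a, b) <= sqrt(n)`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "def is_prime(n: int) -> bool:\n",
        "    # Trial division up to isqrt(n): any factor pair (a, b)\n",
        "    # with a * b == n has min(a, b) <= sqrt(n).\n",
        "    if n < 2:\n",
        "        return False\n",
        "    for j in range(2, math.isqrt(n) + 1):\n",
        "        if n % j == 0:\n",
        "            return False\n",
        "    return True\n",
        "\n",
        "primes = [i for i in range(2, 1001) if is_prime(i)]\n",
        "print(f\"{len(primes)} primes, first few: {primes[:5]}\")"
      ]
    },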
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bzKsCGIN0yX5"
      },
      "source": [
        "# What's next\n",
        "\n",
        "In this tutorial, you learned how to chat with the Gemma 7B model and fine-tune it to speak like a pirate, using Keras on JAX. You also learned how to load and train the large model in a distributed manner, on powerful TPUs, using model parallelism.\n",
        "\n",
        "Here are a few suggestions for what else to learn, about Keras and JAX:\n",
        "* [Distributed training with Keras 3](https://keras.io/guides/distribution/).\n",
        "* [Writing a custom training loop for a Keras model in JAX](https://keras.io/guides/writing_a_custom_training_loop_in_jax/).\n",
        "\n",
        "And a couple of more basic Gemma tutorials:\n",
        "\n",
        "* [Get started with Keras Gemma](https://ai.google.dev/gemma/docs/get_started).\n",
        "* [Finetune the Gemma model on GPU](https://ai.google.dev/gemma/docs/lora_tuning)."
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "[Gemma_1]Finetune_distributed.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
