{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_3ba_pruning_llama_instruct_optipfair.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DT0FAsUnncFY"
      },
      "source": [
        "<div>\n",
        "    <h1>Large Language Models Projects</a></h1>\n",
        "    <h3>Apply and Implement Strategies for Large Language Models</h3>\n",
        "    <h2>Width Pruning Llama 3.2. with OptiPFair (Adapted to Instruct models)</h2>\n",
        "    <h3>Using OptiPFair to prune MLP Layers with GLU structure.</h3>\n",
        "</div>\n",
        "\n",
        "by [Pere Martra](https://www.linkedin.com/in/pere-martra/)\n",
        "\n",
        "________\n",
        "Models: meta-llama/Llama-3.2-1B\n",
        "\n",
        "Colab Environment: GPU T4.\n",
        "\n",
        "Keys:\n",
        "* Pruning\n",
        "* Structured pruning\n",
        "* optiPfair\n",
        "\n",
        "_______\n",
        "**disclaimer: The pruning section was created after the first edition of the book was published. They are not included in the book’s original content but are intended to supplement and expand on the topics covered.**\n",
        "\n",
        "This is the unofficial repository for the book:\n",
        "        <a href=\"https://amzn.to/4eanT1g\"> <b>Large Language Models:</b> Apply and Implement Strategies for Large Language Models</a> (Apress).\n",
        "        The book is based on the content of this repository, but the notebooks are being updated, and I am incorporating new examples and chapters.\n",
        "        If you are looking for the official repository for the book, with the original notebooks, you should visit the\n",
        "        <a href=\"https://github.com/Apress/Large-Language-Models-Projects\">Apress repository</a>, where you can find all the notebooks in their original format as they appear in the book.\n",
        "\n",
        "This notebook serves as a demonstration code for the paper [Exploring GLU Expansion Ratios: Structured Pruning in Llama-3.2 Models.](https://doi.org/10.31219/osf.io/qgxea)\n",
        "\n",
        "The paper studies how the % of expansion produced in the GLU layers influences performance and consumption. For this purpose, seven different models have been generated from the Llama-3.2-1B and Llama-3.2-3B base models, reaching the conclusion that the optimal balance is achieved with an expansion of 140%.\n",
        "______"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iEwxZCVsoIau"
      },
      "source": [
        "# Introduction\n",
        "This notebook cotinues the work done at: [6_3_pruning_structured_llama3.2-1b_OK.ipynb](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_3_pruning_structured_llama3.2-1b_OK.ipynb) the pruning process was done manually, and you can find the implementation code there. In this notebook, we use the [OptiPFair](https://github.com/peremartra/optipfair) library, developed by myself, which simplifies the pruning process for LLMs.\n",
        "\n",
        "En este notebook nos focalizamo en explicar el funcionamiento de la libreria OptiPFair y sus diversas opciones para realizar el pruning de las capas MLP de modelos con estructura GLU: Llama, Gemma, QWen, Mistral y otros.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eQIxAOPZtPBN"
      },
      "source": [
        "#Install libraries & Configure variables."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "5zHApVm41HWq",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "27a21bc9-dc1c-48b6-bb9a-eba06839657f"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\u001b[?25l   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/44.9 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m44.9/44.9 kB\u001b[0m \u001b[31m2.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "!pip install -q transformers\n",
        "!pip install -q optipfair"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "pip show optipfair"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "eqSfFAukLZ_J",
        "outputId": "7dc32557-66d1-433f-9488-302ac748e31c"
      },
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Name: optipfair\n",
            "Version: 0.1.5\n",
            "Summary: A library for structured pruning & Bias visualization of large language models\n",
            "Home-page: https://github.com/peremartra/optipfair\n",
            "Author: Pere Martra\n",
            "Author-email: peremartra@uadla.com\n",
            "License: \n",
            "Location: /usr/local/lib/python3.12/dist-packages\n",
            "Requires: click, torch, tqdm, transformers\n",
            "Required-by: \n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "id": "GJNgRj4M187E"
      },
      "outputs": [],
      "source": [
        "import torch\n",
        "import os\n",
        "\n",
        "from tqdm import tqdm\n",
        "from optipfair import prune_model\n",
        "from transformers import AutoModelForCausalLM, AutoTokenizer"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "tbIyUlXEtbqs",
        "outputId": "f5daaf15-d209-4e95-cf04-18d0214395bf"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Using device: cuda\n"
          ]
        }
      ],
      "source": [
        "# Check if GPU is available\n",
        "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
        "print(f\"Using device: {device}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sM-QwxyKw-YG"
      },
      "source": [
        "#Download model and explore structure"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "q-z_1Zpg2I6u"
      },
      "outputs": [],
      "source": [
        "model_name = 'meta-llama/Llama-3.2-1B-Instruct'\n",
        "model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device)\n",
        "tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
        "#tokenizer.pad_token = tokenizer.eos_token  # Set pad token"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "9UpMD4Hw2MWg"
      },
      "outputs": [],
      "source": [
        "def get_output(prompt, model=model, tokenizer=tokenizer):\n",
        "    # Chat forma for modelInstruct\n",
        "    messages = [\n",
        "        {\"role\": \"user\", \"content\": prompt}\n",
        "    ]\n",
        "\n",
        "    # Aply chat template\n",
        "    inputs = tokenizer.apply_chat_template(\n",
        "        messages,\n",
        "        add_generation_prompt=True,\n",
        "        return_tensors='pt'\n",
        "    ).to(device)\n",
        "\n",
        "    outputs = model.generate(\n",
        "        inputs,\n",
        "        max_length=150,\n",
        "        num_return_sequences=1,\n",
        "        pad_token_id=tokenizer.pad_token_id,\n",
        "        temperature=None,\n",
        "        top_p=None,\n",
        "        do_sample=False,\n",
        "        num_beams=5,\n",
        "        early_stopping=True,\n",
        "        no_repeat_ngram_size=2\n",
        "    )\n",
        "    generated_tokens = outputs[0][len(inputs[0]):]\n",
        "    generated = tokenizer.decode(generated_tokens, skip_special_tokens=True)\n",
        "\n",
        "\n",
        "    return generated.strip()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4muyx_8M5OAu"
      },
      "source": [
        "## studying the model structure\n",
        "As demonstrated in the [previous notebook](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6_2_pruning_structured_llama3.2-1b_KO.ipynb), studying the structure of the model that will undergo pruning is crucial.\n",
        "\n",
        "In this notebook, we’re going to fine-tune the pruning process for the Llama3.2 model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "y5Hs4oQ4B7Z0",
        "outputId": "1f136698-5453-493f-e5fc-e6c15eb9ce51"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "LlamaForCausalLM(\n",
            "  (model): LlamaModel(\n",
            "    (embed_tokens): Embedding(128256, 2048)\n",
            "    (layers): ModuleList(\n",
            "      (0-15): 16 x LlamaDecoderLayer(\n",
            "        (self_attn): LlamaAttention(\n",
            "          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
            "          (k_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
            "          (v_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
            "          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
            "        )\n",
            "        (mlp): LlamaMLP(\n",
            "          (gate_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
            "          (up_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
            "          (down_proj): Linear(in_features=8192, out_features=2048, bias=False)\n",
            "          (act_fn): SiLUActivation()\n",
            "        )\n",
            "        (input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "        (post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "      )\n",
            "    )\n",
            "    (norm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "    (rotary_emb): LlamaRotaryEmbedding()\n",
            "  )\n",
            "  (lm_head): Linear(in_features=2048, out_features=128256, bias=False)\n",
            ")\n"
          ]
        }
      ],
      "source": [
        "print(model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yPMslK3QCAb1"
      },
      "source": [
        "\n",
        "An MLP block typically consists of layers that scale the data to larger dimensions and others that return it to its original size.\n",
        "\n",
        "In the MLP block of the model, we find two projection layers: `gat_proj` and `down_proj`, both scaling from 2048 to 8192. The purpose of having two layers projecting to the same intermediate size might be related to gating mechanisms. A gating mechanism selectively controls information flow in neural networks by using learned weights to \"gate\" or filter inputs.\n",
        "\n",
        "However, to truly understand how these layers function, we’d need to refer to the model's documentation or even the source code. But, this structure usually indicates, at least, I haven't encountered a case where it doesn't, that the layers performing the upsizing work in pairs, and they cannot be treated as independent linear layers.\n",
        "\n",
        "In other words, any operation we apply to one layer must be replicated in the other. Most importantly, when identifying which neurons have more or less importance, we can't evaluate the neurons of a single layer in isolation; we need to treat them as pairs.\n",
        "\n"
      ]
    },
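    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The paired treatment described above can be sketched with toy tensors. This is an illustrative MAW-style computation for intuition, not OptiPFair's internal code:\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "# Toy GLU pair: gate_proj and up_proj share the same intermediate dimension.\n",
        "intermediate, hidden = 8, 4\n",
        "gate_w = torch.randn(intermediate, hidden)  # shape: (out_features, in_features)\n",
        "up_w = torch.randn(intermediate, hidden)\n",
        "\n",
        "# Score each intermediate neuron by the maximum absolute weight\n",
        "# across BOTH layers of the pair, so they are evaluated together.\n",
        "importance = torch.maximum(gate_w.abs().max(dim=1).values,\n",
        "                           up_w.abs().max(dim=1).values)\n",
        "\n",
        "# Keep the top 75% of neurons; the SAME indices must be kept as rows of\n",
        "# gate_proj and up_proj, and as columns of down_proj.\n",
        "keep = importance.topk(int(intermediate * 0.75)).indices.sort().values\n",
        "print(gate_w[keep].shape, up_w[keep].shape)  # rows reduced, columns unchanged\n",
        "```"
      ]
    },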
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "alKH3QH64WFL",
        "outputId": "bcfce958-b606-424e-8a83-d136b31c63e7"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n",
            "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n",
            "The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Generated text: Paris is not only a city in France, but it's also the country's capital. It's a beautiful and iconic city known for its stunning architecture, art museums, fashion, and cuisine.\n"
          ]
        }
      ],
      "source": [
        "# Test the original model\n",
        "prompt = \"What is the capital of France?\"\n",
        "generated = get_output(prompt)\n",
        "print(f\"Generated text: {generated}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CK9NwmBWnkSP"
      },
      "source": [
        "#Pruning the Model.\n",
        "##Support pruning functions.\n",
        "###Compute neuron importance functions.\n",
        "\n",
        "To perform the pruning process, you need to call the `prune_model` function from the library (OptiPFair)[https://github.com/peremartra/optipfair].\n",
        "\n",
        "To use this function, you need to provide the following parameters:\n",
        "\n",
        "* model: The model to be pruned.\n",
        "* pruning_type: The type of pruning to apply. In this case, it will be \"MLP_GLU\", which is currently the only option supported by the library.\n",
        "* neuron_selection_method:\n",
        "  * MAW: Maximum Absolute Weight Values.\n",
        "  * VOW: Variance Of Weigths.\n",
        "  * PON: Product Of Norms.\n",
        "  With LLaMA models, the method that works best without requiring further training is MAW.\n",
        "* pruning_percentage o expansion_rate: you need to provide one of them. In this notebook, we’ll use pruning_percentage.\n",
        "* show_progress: By default, it's set to True. It displays the progress of the pruning process.\n",
        "* return_stats: By default, it's set to True. It returns the percentage of neurons removed and the resulting expansion rate.\n",
        "\n",
        "\n",
        "*I’m leaving the others in the notebook purely as an exercise.*\n",
        "\n",
        "The **MAW** method works better because it directly identifies the most influential neurons based on the magnitude of their connections. These neurons are likely responsible for key decisions, making the model more accurate after pruning. The Variance of Weights method, while useful in some contexts, can retain neurons that may not contribute significantly to the task, leading to less coherent model outputs.\n",
        "\n",
        "However, we shouldn’t fall into the trap of assuming that this neuron selection method will work best across all model structures. It works well with Llama models, and this may be due to several factors:\n",
        "\n",
        "* The relatively large projection from 2048 to 8192.\n",
        "* The use of a GLU structure.\n",
        "* The type of activation function used.\n",
        "\n",
        "So, if we use a model from another family, like Gemma or Mistral, the neuron selection method might need to be entirely different."
      ]
    },
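    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the three selection criteria concrete, here is one plausible per-neuron score for each on a toy weight matrix (a sketch for intuition; OptiPFair's exact implementation may differ):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "w = torch.randn(16, 8)  # stand-in for a gate_proj weight (out_features, in_features)\n",
        "\n",
        "maw = w.abs().max(dim=1).values  # MAW: Maximum Absolute Weight per neuron (row)\n",
        "vow = w.var(dim=1)               # VOW: Variance Of Weights per neuron\n",
        "pon = w.norm(dim=1)              # PON: with a GLU pair this would be the product\n",
        "                                 # of the gate_proj and up_proj row norms\n",
        "\n",
        "# Each criterion induces a different ranking of the 16 neurons.\n",
        "for name, score in ((\"MAW\", maw), (\"VOW\", vow), (\"PON\", pon)):\n",
        "    print(name, score.topk(3).indices.tolist())\n",
        "```"
      ]
    },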
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KtHtSbRmS267"
      },
      "source": [
        "## Obtain & test the pruned model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "NIUnFU5R3n42",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "ad6ee0bb-2f77-4fc6-8716-46f6e2da5479"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Pruning layers: 100%|██████████| 16/16 [00:04<00:00,  3.35it/s]\n"
          ]
        }
      ],
      "source": [
        "# Prune 10% of neurons from MLP layers using MAW method\n",
        "pruned_model, stats = prune_model(\n",
        "    model=model,\n",
        "    pruning_type=\"MLP_GLU\",\n",
        "    neuron_selection_method=\"MAW\",\n",
        "    pruning_percentage=20,\n",
        "    show_progress=True,\n",
        "    return_stats=True\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "tdJUkfWI3qMM",
        "outputId": "c054474d-ba64-422f-ba30-d8fa2aac3342"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Original parameters: 1,235,814,400\n",
            "Pruned parameters: 1,074,792,448\n",
            "Reduction: 161,021,952 parameters (13.03%)\n",
            "Expansion rate: 320.01953125%\n"
          ]
        }
      ],
      "source": [
        "# Print pruning statistics\n",
        "print(f\"Original parameters: {stats['original_parameters']:,}\")\n",
        "print(f\"Pruned parameters: {stats['pruned_parameters']:,}\")\n",
        "print(f\"Reduction: {stats['reduction']:,} parameters ({stats['percentage_reduction']:.2f}%)\")\n",
        "print(f\"Expansion rate: {stats['expansion_rate']:,}%\")\n"
      ]
    },
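    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The 13.03% figure can be checked with back-of-the-envelope arithmetic: pruning 20% of the GLU neurons shrinks only the three MLP matrices in each of the 16 decoder layers, while attention, embeddings, and the LM head keep their size:\n",
        "\n",
        "```python\n",
        "hidden, intermediate, layers = 2048, 8192, 16\n",
        "pruned_intermediate = 6554  # the library's rounding of 8192 * (1 - 0.20)\n",
        "\n",
        "# gate_proj, up_proj and down_proj each lose (8192 - 6554) rows or columns\n",
        "# of length 2048, in every one of the 16 layers.\n",
        "removed = 3 * hidden * (intermediate - pruned_intermediate) * layers\n",
        "print(removed)                  # 161,021,952 parameters\n",
        "print(removed / 1_235_814_400)  # ~0.13 of the full model, matching the stats\n",
        "```"
      ]
    },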
    {
      "cell_type": "code",
      "source": [
        "# prompt: call the generate_response for the pruned_model\n",
        "\n",
        "prompt = \"What is the capital of France?\"\n",
        "generated = get_output(prompt)\n",
        "print(f\"Generated text: {generated}\")"
      ],
      "metadata": {
        "id": "9_Zc5VM86KTQ",
        "outputId": "b82ae806-749f-4378-bc3d-042a7da88015",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 11,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n",
            "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Generated text: Yes, it is Paris.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JGzXMQrVTULv"
      },
      "source": [
        "The result is slightly different from what the original model produced, but it’s still a fairly accurate response.\n",
        "\n",
        "In contrast to the model created in notebook: [6_2_pruning_structured_llama3.2-1b_KO.ipynb](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6_2_pruning_structured_llama3.2-1b_KO.ipynb) where the pruned Llama model lost almost all its utility, the model in this notebook retains a good portion of its knowledge."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dDQrSrf-VCyI"
      },
      "source": [
        "Looking at the model’s new structure, we can see that the `gate_proj` and `up_proj` layers have had their `out_features` reduced to 6554 from 8192. Consequently, the `down_proj` layer has its `in_features` adjusted to match the new size."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "id": "ATAiqZW30NYN",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "0c1a7be9-82d4-4466-b20f-f749b4d25f28"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "LlamaForCausalLM(\n",
            "  (model): LlamaModel(\n",
            "    (embed_tokens): Embedding(128256, 2048)\n",
            "    (layers): ModuleList(\n",
            "      (0-15): 16 x LlamaDecoderLayer(\n",
            "        (self_attn): LlamaAttention(\n",
            "          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
            "          (k_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
            "          (v_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
            "          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
            "        )\n",
            "        (mlp): LlamaMLP(\n",
            "          (gate_proj): Linear(in_features=2048, out_features=6554, bias=False)\n",
            "          (up_proj): Linear(in_features=2048, out_features=6554, bias=False)\n",
            "          (down_proj): Linear(in_features=6554, out_features=2048, bias=False)\n",
            "          (act_fn): SiLUActivation()\n",
            "        )\n",
            "        (input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "        (post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "      )\n",
            "    )\n",
            "    (norm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "    (rotary_emb): LlamaRotaryEmbedding()\n",
            "  )\n",
            "  (lm_head): Linear(in_features=2048, out_features=128256, bias=False)\n",
            ")\n"
          ]
        }
      ],
      "source": [
        "print(model)"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Upload the model to HF\n"
      ],
      "metadata": {
        "id": "h-VPRqbeSuJs"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "new_model_name = 'width20-llama-3.2-1b-Instruct'\n",
        "output_dir = './'+new_model_name\n",
        "if not os.path.exists(output_dir):\n",
        "    os.makedirs(output_dir)\n",
        "\n",
        "pruned_model.save_pretrained(output_dir)\n",
        "tokenizer.save_pretrained(output_dir)\n",
        "print(f\"Pruned model saved to {output_dir}\")"
      ],
      "metadata": {
        "id": "1dbk6x0yStqM",
        "outputId": "7c82cbad-1161-459a-91d5-4e777c79765d",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 18,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Pruned model saved to ./width20-llama-3.2-1b-Instruct\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# Push the model to your Hugging Face repository\n",
        "\n",
        "pruned_model.push_to_hub(new_model_name, private=True)"
      ],
      "metadata": {
        "id": "sdHcgXFgTDib"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "tokenizer.push_to_hub(new_model_name)"
      ],
      "metadata": {
        "id": "lBSo69r0TEAp"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Evaluating models"
      ],
      "metadata": {
        "id": "MpAedlf1TLrK"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install -q lm-eval\n",
        "from lm_eval import evaluator, tasks, models"
      ],
      "metadata": {
        "id": "U_maImbyTPRF",
        "outputId": "3f4f70ed-e0e6-4eaa-8962-d917a0369d47",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 16,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m53.6/53.6 kB\u001b[0m \u001b[31m2.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m51.8/51.8 kB\u001b[0m \u001b[31m4.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.5/7.5 MB\u001b[0m \u001b[31m61.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m491.5/491.5 kB\u001b[0m \u001b[31m42.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m84.1/84.1 kB\u001b[0m \u001b[31m8.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m293.6/293.6 kB\u001b[0m \u001b[31m27.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m104.1/104.1 kB\u001b[0m \u001b[31m10.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m91.1/91.1 kB\u001b[0m \u001b[31m9.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Building wheel for rouge-score (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "  Building wheel for sqlitedict (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "  Building wheel for word2number (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "def evaluate_hf_model(model_name, tasks=['arc_easy'], num_fewshot=0):\n",
        "    \"\"\"\n",
        "    It calls the evaluator to evaluate a model available on Hugging Face.\n",
        "\n",
        "    Args:\n",
        "    - model_name: The model name in hugging Face.\n",
        "    - tasks: Tasks to evaluate.\n",
        "    - num_fewshot: Number of examples of few-shot learning\n",
        "\n",
        "    Returns:\n",
        "    - metrics.\n",
        "    \"\"\"\n",
        "    model_args = f\"pretrained={model_name},device=cuda\"\n",
        "    tasks = tasks\n",
        "\n",
        "    results = evaluator.simple_evaluate(\n",
        "      model=\"hf\",\n",
        "      model_args=model_args,\n",
        "      tasks=tasks,\n",
        "      num_fewshot=0,  # Number of few-shot smaples.\n",
        "      limit=None,  # Use all the samples in the Evaluate Dataset.\n",
        "      bootstrap_iters=10\n",
        "    )\n",
        "\n",
        "    metrics = results.get('results', {})\n",
        "    return metrics"
      ],
      "metadata": {
        "id": "5Xiopk_HTSgj"
      },
      "execution_count": 17,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Select tasks to evaluate.\n",
        "tasks = ['lambada', 'boolq', 'arc_easy']\n",
        "metrics_pruned = evaluate_hf_model(\"width20-llama-3.2-1b-Instruct\", tasks=tasks)"
      ],
      "metadata": {
        "id": "uhphLLbdTVLZ",
        "outputId": "191d267b-5ba9-4012-b2a2-17e187761215",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 26,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "WARNING:lm_eval.evaluator:pretrained=pretrained=width20-llama-3.2-1b-Instruct,device=cuda appears to be an instruct or chat variant but chat template is not applied.\n",
            "        Recommend setting `apply_chat_template` (optionally `fewshot_as_multiturn`).\n",
            "WARNING:lm_eval.api.task:[Task: boolq] metric acc is defined, but aggregation is not. using default aggregation=mean\n",
            "WARNING:lm_eval.api.task:[Task: boolq] metric acc is defined, but higher_is_better is not. using default higher_is_better=True\n",
            "WARNING:lm_eval.evaluator:Overwriting default num_fewshot of arc_easy from None to 0\n",
            "WARNING:lm_eval.evaluator:Overwriting default num_fewshot of boolq from None to 0\n",
            "WARNING:lm_eval.evaluator:Overwriting default num_fewshot of lambada_standard from None to 0\n",
            "WARNING:lm_eval.evaluator:Overwriting default num_fewshot of lambada_openai from None to 0\n",
            "100%|██████████| 2376/2376 [00:02<00:00, 1062.59it/s]\n",
            "100%|██████████| 3270/3270 [00:01<00:00, 1911.22it/s]\n",
            "100%|██████████| 5153/5153 [00:08<00:00, 590.24it/s]\n",
            "100%|██████████| 5153/5153 [00:08<00:00, 593.20it/s]\n",
            "Running loglikelihood requests: 100%|██████████| 26347/26347 [09:47<00:00, 44.84it/s]\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "bootstrapping for stddev: perplexity\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "100%|██████████| 1/1 [00:00<00:00, 116.00it/s]"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "bootstrapping for stddev: perplexity\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "\n",
            "100%|██████████| 1/1 [00:00<00:00, 97.84it/s]\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "metrics_pruned"
      ],
      "metadata": {
        "id": "48YH0CYLTiRA",
        "outputId": "e8855dd5-2652-4328-f77b-93a8a4b53a9c",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 27,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "{'arc_easy': {'alias': 'arc_easy',\n",
              "  'acc,none': 0.5707070707070707,\n",
              "  'acc_stderr,none': 0.01015667807591118,\n",
              "  'acc_norm,none': 0.49663299663299665,\n",
              "  'acc_norm_stderr,none': 0.010259550893799069},\n",
              " 'boolq': {'alias': 'boolq',\n",
              "  'acc,none': 0.6418960244648318,\n",
              "  'acc_stderr,none': 0.008385509472671695},\n",
              " 'lambada_openai': {'alias': 'lambada_openai',\n",
              "  'perplexity,none': 23.300879131683228,\n",
              "  'perplexity_stderr,none': 1.4637687468808533,\n",
              "  'acc,none': 0.4459538133126334,\n",
              "  'acc_stderr,none': 0.006925162981362285},\n",
              " 'lambada_standard': {'alias': 'lambada_standard',\n",
              "  'perplexity,none': 45.57827807258359,\n",
              "  'perplexity_stderr,none': 2.4242749584851424,\n",
              "  'acc,none': 0.37745002910925674,\n",
              "  'acc_stderr,none': 0.006753500136843553}}"
            ]
          },
          "metadata": {},
          "execution_count": 27
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "metrics_base= evaluate_hf_model(\"meta-llama/Llama-3.2-1B-Instruct\", tasks=tasks)"
      ],
      "metadata": {
        "id": "DekJRxvQTdTc",
        "outputId": "75080a46-41a8-426b-ec6a-2a7468926c61",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 28,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "WARNING:lm_eval.evaluator:pretrained=pretrained=meta-llama/Llama-3.2-1B-Instruct,device=cuda appears to be an instruct or chat variant but chat template is not\n",
            "        applied. Recommend setting `apply_chat_template` (optionally `fewshot_as_multiturn`).\n",
            "WARNING:lm_eval.api.task:[Task: boolq] metric acc is defined, but aggregation is not. using default aggregation=mean\n",
            "WARNING:lm_eval.api.task:[Task: boolq] metric acc is defined, but higher_is_better is not. using default higher_is_better=True\n",
            "WARNING:lm_eval.evaluator:Overwriting default num_fewshot of arc_easy from None to 0\n",
            "WARNING:lm_eval.evaluator:Overwriting default num_fewshot of boolq from None to 0\n",
            "WARNING:lm_eval.evaluator:Overwriting default num_fewshot of lambada_standard from None to 0\n",
            "WARNING:lm_eval.evaluator:Overwriting default num_fewshot of lambada_openai from None to 0\n",
            "100%|██████████| 2376/2376 [00:02<00:00, 1099.39it/s]\n",
            "100%|██████████| 3270/3270 [00:01<00:00, 1958.37it/s]\n",
            "100%|██████████| 5153/5153 [00:08<00:00, 593.35it/s]\n",
            "100%|██████████| 5153/5153 [00:09<00:00, 552.35it/s]\n",
            "Running loglikelihood requests: 100%|██████████| 26347/26347 [43:01<00:00, 10.21it/s]\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "bootstrapping for stddev: perplexity\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "100%|██████████| 1/1 [00:00<00:00, 119.07it/s]"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "bootstrapping for stddev: perplexity\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "\n",
            "100%|██████████| 1/1 [00:00<00:00, 120.55it/s]\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "metrics_base"
      ],
      "metadata": {
        "id": "d3H9SOVeTfNk",
        "outputId": "28ee6d2c-baa7-4bea-b047-1740c3339977",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 29,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "{'arc_easy': {'alias': 'arc_easy',\n",
              "  'acc,none': 0.6851851851851852,\n",
              "  'acc_stderr,none': 0.009530150430975505,\n",
              "  'acc_norm,none': 0.6313131313131313,\n",
              "  'acc_norm_stderr,none': 0.009899640855681058},\n",
              " 'boolq': {'alias': 'boolq',\n",
              "  'acc,none': 0.6951070336391437,\n",
              "  'acc_stderr,none': 0.008051783411024705},\n",
              " 'lambada_openai': {'alias': 'lambada_openai',\n",
              "  'perplexity,none': 6.578904811488175,\n",
              "  'perplexity_stderr,none': 0.3159444429362019,\n",
              "  'acc,none': 0.5977100718028333,\n",
              "  'acc_stderr,none': 0.006831670941073277},\n",
              " 'lambada_standard': {'alias': 'lambada_standard',\n",
              "  'perplexity,none': 13.07208941627685,\n",
              "  'perplexity_stderr,none': 0.665570718637886,\n",
              "  'acc,none': 0.47952648942363674,\n",
              "  'acc_stderr,none': 0.006960135424338517}}"
            ]
          },
          "metadata": {},
          "execution_count": 29
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "\n",
        "def plot_model_comparison(metrics_base, metrics_pruned):\n",
        "\n",
        "    tasks_to_plot = ['boolq', 'lambada_openai', 'lambada_standard', 'arc_easy']\n",
        "    display_labels = ['BoolQ', 'Lambada OpenAI', 'Lambada Standard', 'arc_easy']\n",
        "\n",
        "    try:\n",
        "        base_scores = [metrics_base[task]['acc,none'] for task in tasks_to_plot]\n",
        "        pruned_scores = [metrics_pruned[task]['acc,none'] for task in tasks_to_plot]\n",
        "    except KeyError as e:\n",
        "        print(f\"Error: Key not found {e}.\")\n",
        "        print(f\"Be sure all tasks ({tasks_to_plot})\")\n",
        "        print(\"and variable 'acc,none' exists in both diccionaries\")\n",
        "        return\n",
        "\n",
        "    n_groups = len(tasks_to_plot)\n",
        "    index = np.arange(n_groups)\n",
        "    bar_width = 0.35\n",
        "\n",
        "    fig, ax = plt.subplots(figsize=(10, 6))\n",
        "\n",
        "    color_base = '#3366CC'\n",
        "    color_pruned = '#DC3912'\n",
        "\n",
        "    ax.bar(index - bar_width / 2,\n",
        "           base_scores,\n",
        "           bar_width,\n",
        "           label='Base Model',\n",
        "           color=color_base)\n",
        "\n",
        "    ax.bar(index + bar_width / 2,\n",
        "           pruned_scores,\n",
        "           bar_width,\n",
        "           label='Pruned Model',\n",
        "           color=color_pruned)\n",
        "\n",
        "\n",
        "    ax.set_ylim([0, 0.9])\n",
        "    ax.set_yticks(np.arange(0, 0.9, 0.2))\n",
        "    ax.tick_params(axis='both', which='major', labelsize=12)\n",
        "\n",
        "    ax.set_xticks(index)\n",
        "    ax.set_xticklabels(display_labels)\n",
        "\n",
        "    ax.yaxis.grid(True, linestyle='-', linewidth=0.5, color='lightgray')\n",
        "    ax.set_axisbelow(True)\n",
        "\n",
        "    ax.legend(fontsize=12, loc='upper center', bbox_to_anchor=(0.5, 1.1),\n",
        "              ncol=2, frameon=False)\n",
        "\n",
        "    plt.tight_layout()\n",
        "    plt.show()"
      ],
      "metadata": {
        "id": "MlXF_qVCTn9M"
      },
      "execution_count": 24,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "plot_model_comparison(metrics_base, metrics_pruned)"
      ],
      "metadata": {
        "id": "mE4QMhGbT3W1",
        "outputId": "cf6e2dd1-30bb-4f29-8199-1ca3bc2f92e1",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 502
        }
      },
      "execution_count": 30,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "<Figure size 1000x600 with 1 Axes>"
            ],
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAA90AAAJSCAYAAADeX6Y/AAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjAsIGh0dHBzOi8vbWF0cGxvdGxpYi5vcmcvlHJYcgAAAAlwSFlzAAAPYQAAD2EBqD+naQAASaBJREFUeJzt3Xuc1nP+P/7nNNXMdJhKB2o7oFSSZKN2W0lCVEtsOUWKxFrr2EZ2LeUsG61zn7WVaC05rZwKSdE6LDktCh0skUMpqUn1/v3hN9fXmKnmorcp7vfbrRvX6/16Xe/n+5rrNXM9rvcpJ0mSJAAAAIDNrlJFFwAAAAA/VkI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AWAzefLJJyMnJyeefPLJrMeOHz8+cnJyYsGCBZu9Ln48LrzwwsjJyanoMjbIHAAoTegGYIOKPwR/81+DBg2iW7du8fDDD1d0eRs0cODAyMnJicLCwli1alWp5fPmzctsz1VXXVUBFZK2b7938/Pzo2XLlnHqqafGRx99VNHlpc4cANhyVK7oAgB+zPY46T8VXUK8cHOH7/0cI0eOjB122CGSJImPPvooxo8fHz179owHHnggevfuvRmq3PwqV64cX375ZTzwwANx+OGHl1h2++23R35+fqxevbqCqts6vNO5bkWXEM2f+fR7jS9+765evTpmzZoVN954Yzz00EPx2muvRbVq1TZTlVsmcwBgy2BPNwCbdNBBB8UxxxwTxx57bAwdOjRmzpwZVapUiX/84x8VXdoG5eXlRffu3cuscdKkSdGrV68KqIofWvF7d/DgwTF+/Pg444wzYv78+XH//fdvcMzKlSt/wArTYw4AbBmEbgCyVrt27SgoKIjKlUseMHXVVVdF586do27dulFQUBAdOnSIyZMnlxo/bdq02GuvvaJ27dpRo0aNaNWqVZx33nkl+hQVFcUFF1wQLVq0iLy8vGjSpEkMGzYsioqKyl3n0UcfHQ8//HAsW7Ys0/b888/HvHnz4uijjy5zzLvvvhv9+vWLbbbZJqpVqxa/+MUv4sEHHyzV73//+1/06dMnqlevHg0aNIgzzzxzg7U9++yzceCBB0atWrWiWrVq0bVr13j66afLvR1sPvvuu29ERMyfPz8ivj4Mu0aNGvHOO+9Ez549o2bNmtG/f/+IiNh+++1j4MCBpZ5jn332iX322SfzuPg85jvvvDMuueSSaNy4ceTn50f37t3j7bffLjW+vO+HWbNmxZ577hn5+fnRvHnzuPnmm7PeXnMAoOI5vByATfr888/jk08+iSRJYsmSJXHttdfGF198Ecccc0yJfmPGjImDDz44+vfvH2vWrIk77rgj+vXrF1OmTMnsVXv99dejd+/e0a5duxg5cmTk5eXF22+/XeID+Pr16+Pggw+OWbNmxZAhQ2LnnXeOV199Na6++uqYO3du3HfffeWq+7DDDouTTz457rnnnjj++OMj4us9fK1bt46f//znpfp/9NFH0blz5/jyyy/jtNNOi7p168aECRPi4IMPjsmTJ8ehhx4aERGrVq2K7t27x6JFi+K0006LRo0axcSJE+OJJ54o9ZxPPPFEHHTQQdGhQ4e44IILolKlSjFu3LjYd999Y+bMmdGxY8dybQubxzvvvBMREXXr/r9D59euXRs9evSIvfbaK6666qrvfNj55ZdfHpUqVYqhQ4fG559/HldeeWX0798/nn322Uyf8r4fXn311TjggAOifv36ceGFF8batWvjggsuiG233TarmswBgIondAOwSfvtt1+Jx3l5efH3v/899t9//xLtc+fOjYKCgszjU089NX7+85/H6NGjM6F72rRpsWbNmnj44YejXr16Za
5v0qRJ8dhjj8WMGTNir732yrS3bds2Tj755HjmmWeic+fOm6y7Zs2a0bt375g0aVIcf/zxsX79+rjjjjvit7/9bZn9L7/88vjoo49i5syZmfWeeOKJ0a5duzjrrLPikEMOiUqVKsXYsWNj7ty5ceedd0a/fv0y/XbbbbcSz5ckSZx88smZC88VX3X6pJNOil122SX+9Kc/xdSpUze5HXx3xV8YrV69Op5++ukYOXJkFBQUlLgWQVFRUfTr1y8uu+yy77Wu1atXx5w5c6Jq1aoREVGnTp04/fTT47XXXou2bdtm9X7485//HEmSxMyZM6Np06YREfGb3/wmdt1116xqMgcAKp7DywHYpOuvvz6mTZsW06ZNi9tuuy26desWgwcPjnvuuadEv28G7qVLl8bnn38eXbp0iRdffDHTXrt27YiIuP/++2P9+vVlru+uu+6KnXfeOVq3bh2ffPJJ5l/xocHTp08vd+1HH310PPnkk/Hhhx/GE088ER9++OEGD6t96KGHomPHjiWCfo0aNWLIkCGxYMGC+O9//5vp17Bhw+jbt2+mX7Vq1WLIkCElnm/OnDmZw3g//fTTzHasXLkyunfvHk899dQGXwM2j/322y/q168fTZo0iSOPPDJq1KgR9957b/zsZz8r0W9DITQbgwYNygTuiIguXbpExNeHa0eU//2wbt26ePTRR6NPnz6ZwB0RsfPOO0ePHj2yrsscAKhY9nQDsEkdO3aMPfbYI/P4qKOOit133z1OPfXU6N27dyZoTJkyJS6++OKYM2dOiXM7v3lf4SOOOCL+9re/xeDBg+Pcc8+N7t27x2GHHRZ9+/aNSpW+/i543rx58cYbb0T9+vXLrGfJkiXlrr34PN1//vOfMWfOnNhzzz2jRYsWZd4LeOHChdGpU6dS7TvvvHNmedu2bWPhwoXRokWLUvdLbtWqVYnH8+bNi4iI4447boP1ff7551GnTp1ybw/Zuf7666Nly5ZRuXLl2HbbbaNVq1aZ91mxypUrR+PGjb/3ur4ZkCMi83NdunRpRJT//VBUVBSrVq2KnXbaqdTyVq1axUMPPZRVXeYAQMUSugHIWqVKlaJbt24xZsyYmDdvXuyyyy4xc+bMOPjgg2PvvfeOG264IRo2bBhVqlSJcePGxaRJkzJjCwoK4qmnnorp06fHgw8+GI888kj885//jH333TemTp0aubm5sX79+th1111j9OjRZa6/SZMm5a41Ly8vDjvssJgwYUK8++67ceGFF37fzS+34j14o0aNivbt25fZp0aNGj9YPT9F3/7CqCx5eXmlgnhElAqUxdatWxe5ubml2stqi/j6EOuI8r8fsrlYYHmYAwAVS+gG4DtZu3ZtRER88cUXERFx9913R35+fjz66KORl5eX6Tdu3LhSYytVqhTdu3eP7t27x+jRo+PSSy+NP/7xjzF9+vTYb7/9onnz5vHyyy9H9+7dNxh8snH00UfH3//+96hUqVIceeSRG+zXrFmzeOutt0q1v/nmm5nlxf997bXXIkmSEvV9e2zz5s0jIqKwsLDUefFs+erUqVPiqt/FFi5cGDvuuGPWz1fe90P9+vWjoKAgs5f4m8p6f5aHOQBQcZzTDUDWvvrqq5g6dWpUrVo1c9hpbm5u5OTkxLp16zL9FixYUOpK45999lmp5yveA1a8h+/www+P999/P/7v//6vVN9Vq1ZlfR/lbt26xUUXXRTXXXddbLfddhvs17Nnz3juuedi9uzZmbaVK1fG2LFjY/vtt482bdpk+n3wwQclbof25ZdfxtixY0s8X4cOHaJ58+Zx1VVXZb6c+KaPP/44q+3gh9W8efP497//HWvWrMm0TZkyJd57773v9HzlfT/k5uZGjx494r777otFixZllr/xxhvx6KOPfqd1mwMAFceebgA26eGHH87s6VqyZElMmjQp5s2bF+eee24UFhZGRESvXr1i9OjRceCBB8bRRx8dS5Ysieuvvz5atGgRr7
zySua5Ro4cGU899VT06tUrmjVrFkuWLIkbbrghGjdunLl407HHHht33nlnnHzyyTF9+vT41a9+FevWrYs333wz7rzzznj00Uc3ecjwN1WqVCn+9Kc/bbLfueeeG//4xz/ioIMOitNOOy222WabmDBhQsyfPz/uvvvuzCHIJ554Ylx33XUxYMCA+M9//hMNGzaMiRMnlrrVVKVKleJvf/tbHHTQQbHLLrvEoEGD4mc/+1m8//77MX369CgsLIwHHnig3NvBD2vw4MExefLkOPDAA+Pwww+Pd955J2677bbM3ttsZfN+GDFiRDzyyCPRpUuXOOWUU2Lt2rVx7bXXxi677FJiPmWzbnMAoIIkALAB48aNSyKixL/8/Pykffv2yY033pisX7++RP9bbrkl2WmnnZK8vLykdevWybhx45ILLrgg+eafm8cffzw55JBDkkaNGiVVq1ZNGjVqlBx11FHJ3LlzSzzXmjVrkiuuuCLZZZddkry8vKROnTpJhw4dkhEjRiSff/75Rus+7rjjkurVq2+0z/z585OISEaNGlWi/Z133kn69u2b1K5dO8nPz086duyYTJkypdT4hQsXJgcffHBSrVq1pF69esnpp5+ePPLII0lEJNOnTy/R96WXXkoOO+ywpG7dukleXl7SrFmz5PDDD08ef/zxUq/1/PnzN1o35VP8ej7//PMb7bep98pf/vKX5Gc/+1mSl5eX/OpXv0peeOGFpGvXrknXrl0zfaZPn55ERHLXXXeVGFv8Hhs3blyJ9vK8H5IkSWbMmJF06NAhqVq1arLjjjsmN910U6n59F2365v1mQMA6cpJkv//6h4AAADAZuWcbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICWVK7qA72L9+vXxwQcfRM2aNSMnJ6eiywEAAOAnJkmSWLFiRTRq1CgqVdrw/uytMnR/8MEH0aRJk4ouAwAAgJ+49957Lxo3brzB5Vtl6K5Zs2ZEfL1xhYWFFVwNAAAAPzXLly+PJk2aZPLphmyVobv4kPLCwkKhGwAAgAqzqVOeXUgNAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUZB26i4qK4pxzzolGjRpFQUFBdOrUKaZNm1ausY899lh069Yt6tWrF7Vr146OHTvGxIkTsy4aAAAAtgZZh+6BAwfG6NGjo3///jFmzJjIzc2Nnj17xqxZszY67l//+lcccMABsWbNmrjwwgvjkksuiYKCghgwYEBcffXV33kDAAAAYEuVkyRJUt7Ozz33XHTq1ClGjRoVQ4cOjYiI1atXR9u2baNBgwbxzDPPbHDsAQccEK+//nq8++67kZeXFxERa9eujdatW0f16tXj5ZdfLnfRy5cvj1q1asXnn38ehYWF5R4HAAAAm0N5c2lWe7onT54cubm5MWTIkExbfn5+nHDCCTF79ux47733NlpQnTp1MoE7IqJy5cpRr169KCgoyKYMAAAA2CpkFbpfeu
mlaNmyZakU37Fjx4iImDNnzgbH7rPPPvH666/H+eefH2+//Xa88847cdFFF8ULL7wQw4YNy75yAAAA2MJVzqbz4sWLo2HDhqXai9s++OCDDY49//zzY/78+XHJJZfExRdfHBER1apVi7vvvjsOOeSQja63qKgoioqKMo+XL1+eTdkAAABQIbIK3atWrSpxeHix/Pz8zPINycvLi5YtW0bfvn3jsMMOi3Xr1sXYsWPjmGOOiWnTpsUvfvGLDY697LLLYsSIEaXaFy1aFDVr1sxmEwAAAOB7W7FiRbn6ZRW6CwoKSuxxLrZ69erM8g059dRT49///ne8+OKLUanS10e1H3744bHLLrvE6aefHs8+++wGxw4fPjzOOuuszOPly5dHkyZNomnTpi6kBgAAwA+uvEdgZ3VOd8OGDWPx4sWl2ovbGjVqVOa4NWvWxC233BK9evXKBO6IiCpVqsRBBx0UL7zwQqxZs2aD683Ly4vCwsIS/wAAAGBLl1Xobt++fcydO7dUoi/eS92+ffsyx3366aexdu3aWLduXallX331Vaxfv77MZQAAALA1yyp09+3bN3MudrGioqIYN25cdOrUKZo0aRIRX59r/eabb2b6NGjQIGrXrh333ntviT3aX3zxRTzwwAPRunVrtw0DAADgRyerc7o7deoU/fr1i+HDh8eSJUuiRYsWMWHChFiwYEHccsstmX4DBgyIGTNmRJIkERGRm5sbQ4cOjT/96U/xi1/8IgYMGBDr1q2LW265Jf73v//Fbbfdtnm3CgAAALYAWYXuiIhbb701zj///Jg4cWIsXbo02rVrF1OmTIm99957o+P++Mc/xg477BBjxoyJESNGRFFRUbRr1y4mT54cv/nNb77zBgAAAMCWKicp3h29FVm+fHnUqlUrPv/8cxdVAwAA4AdX3lya1TndAAAAQPkJ3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSUrmiC/gp2OOk/1R0CVu1F27uUNElAAAAfCf2dAMAAEBKhG4AAABIidANAAAAKRG6AQAAICUupAYAAJTgQsDfjwsB8032dAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFJSuaILgE15p3Pdii5hq9b8mU8rugQAAPjJsqcbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkJKsQ3dRUVGcc8450ahRoygoKIhOnTrFtGnTyj3+n//8Z/zyl7+M6tWrR+3ataNz587xxBNPZFsGAAAAbPGyDt0DBw6M0aNHR//+/WPMmDGRm5sbPXv2jFmzZm1y7IUXXhhHHXVUNGnSJE
aPHh0XX3xxtGvXLt5///3vVDwAAABsySpn0/m5556LO+64I0aNGhVDhw6NiIgBAwZE27ZtY9iwYfHMM89scOy///3vGDlyZPzlL3+JM8888/tVDQAAAFuBrPZ0T548OXJzc2PIkCGZtvz8/DjhhBNi9uzZ8d57721w7DXXXBPbbbddnH766ZEkSXzxxRffvWoAAADYCmQVul966aVo2bJlFBYWlmjv2LFjRETMmTNng2Mff/zx2HPPPeOvf/1r1K9fP2rWrBkNGzaM6667LvuqAQAAYCuQ1eHlixcvjoYNG5ZqL2774IMPyhy3dOnS+OSTT+Lpp5+OJ554Ii644IJo2rRpjBs3Ln7/+99HlSpV4qSTTtrgeouKiqKoqCjzePny5dmUDQAAABUiq9C9atWqyMvLK9Wen5+fWV6W4kPJP/3007jjjjviiCOOiIiIvn37xq677hoXX3zxRkP3ZZddFiNGjCjVvmjRoqhZs2Y2mwA/OQsXLqzoEgAAflJ8/vppWLFiRbn6ZRW6CwoKSuxxLrZ69erM8g2Ni4ioUqVK9O3bN9NeqVKlOOKII+KCCy6IRYsWRdOmTcscP3z48DjrrLMyj5cvXx5NmjSJpk2bljrUfcv0SUUXwE9Ys2bNKrqELc4eJ/2nokvYqr1wc4eKLgGA1Pn8+n34/PXTUN4jsLMK3Q0bNizz9l6LFy+OiIhGjRqVOW6bbbaJ/Pz8qF27duTm5pZY1qBBg4j4+hD0DYXuvLy8MvewAwAAwJYsqwuptW/fPubOnVsq0T/77LOZ5WWupFKlaN++fXz88cexZs2aEsuKzwOvX79+NqUAAADAFi+r0N23b99Yt25djB07NtNWVFQU48aNi06dOkWTJk0i4utzrd98880SY4844ohYt25dTJgwIdO2evXquP3226NNmzYb3EsOAAAAW6usDi/v1KlT9OvXL4YPHx5LliyJFi1axIQJE2LBggVxyy23ZPoNGDAgZsyYEUmSZNpOOumk+Nvf/ha/+93vYu7cudG0adOYOHFiLFy4MB544IHNt0UAAACwhcgqdEdE3HrrrXH++efHxIkTY+nSpdGuXbuYMmVK7L333hsdV1BQEE888UQMGzYs/v73v8fKlSujffv28eCDD0aPHj2+8wYAAADAlirr0J2fnx+jRo2KUaNGbbDPk08+WWZ7gwYNYvz48dmuEgAAALZKWZ3TDQAAAJSf0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASipXdAEAAAA/Ju90rlvRJWz1mj/zaUWXsNnY0w0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApKRyRRcAAPy07XHSfyq6hK3eCzd3qOgSANgAe7oBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICWVK7oAALYu73SuW9ElbPWaP/NpRZcAAPxA7OkGAACAlAjdAAAAkB
KhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSknXoLioqinPOOScaNWoUBQUF0alTp5g2bVrWK95///0jJycnTj311KzHAgAAwNYg69A9cODAGD16dPTv3z/GjBkTubm50bNnz5g1a1a5n+Oee+6J2bNnZ7tqAAAA2KpkFbqfe+65uOOOO+Kyyy6LUaNGxZAhQ+KJJ56IZs2axbBhw8r1HKtXr46zzz47zjnnnO9UMAAAAGwtsgrdkydPjtzc3BgyZEimLT8/P0444YSYPXt2vPfee5t8jiuvvDLWr18fQ4cOzb5aAAAA2IpkFbpfeumlaNmyZRQWFpZo79ixY0REzJkzZ6PjFy1aFJdffnlcccUVUVBQkF2lAAAAsJWpnE3nxYsXR8OGDUu1F7d98MEHGx1/9tlnx+677x5HHnlkNquNoqKiKCoqyjxevnx5VuMBAACgImQVuletWhV5eXml2vPz8zPLN2T69Olx9913x7PPPptliRGXXXZZjBgxolT7okWLombNmlk/H/yULFy4sKJLAL7FvGRz854Cfmy2ht9rK1asKFe/rEJ3QUFBiT3OxVavXp1ZXpa1a9fGaaedFscee2zsueee2awyIiKGDx8eZ511Vubx8uXLo0mTJtG0adNSh7pvmT6p6AL4CWvWrFlFl7AFMiepWOblt5mT35f3FJufeUnF2hp+r5X3COysQnfDhg3j/fffL9W+ePHiiIho1KhRmeNuvfXWeOutt+Lmm2+OBQsWlFi2YsWKWLBgQTRo0CCqVatW5vi8vLwy97ADAADAliyrC6m1b98+5s6dWyrRFx8y3r59+zLHLVq0KL766qv41a9+FTvssEPmX8TXgXyHHXaIqVOnfofyAQAAYMuV1Z7uvn37xlVXXRVjx47N3PKrqKgoxo0bF506dYomTZpExNch+8svv4zWrVtHRMSRRx5ZZiA/9NBDo2fPnnHiiSdGp06dvuemAAAAwJYlq9DdqVOn6NevXwwfPjyWLFkSLVq0iAkTJsSCBQvilltuyfQbMGBAzJgxI5IkiYiI1q1bZwL4t+2www7Rp0+f774FAAAAsIXKKnRHfH04+Pnnnx8TJ06MpUuXRrt27WLKlCmx9957p1EfAAAAbLWyDt35+fkxatSoGDVq1Ab7PPnkk+V6ruI94QAAAPBjlNWF1AAAAIDyE7oBAAAgJUI3AAAApEToBgAAgJQI3QAAAJCSrK9eDgDAluWdznUruoStWvNnPq3oEoAfMXu6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAA
AAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkJKsQ3dRUVGcc8450ahRoygoKIhOnTrFtGnTNjnunnvuiSOOOCJ23HHHqFatWrRq1SrOPvvsWLZs2XepGwAAALZ4WYfugQMHxujRo6N///4xZsyYyM3NjZ49e8asWbM2Om7IkCHxxhtvxDHHHBN//etf48ADD4zrrrsufvnLX8aqVau+8wYAAADAlqpyNp2fe+65uOOOO2LUqFExdOjQiIgYMGBAtG3bNoYNGxbPPPPMBsdOnjw59tlnnxJtHTp0iOOOOy5uv/32GDx4cPbVAwAAwBYsqz3dkydPjtzc3BgyZEimLT8/P0444YSYPXt2vPfeexsc++3AHRFx6KGHRkTEG2+8kU0ZAAAAsFXIKnS/9NJL0bJlyygsLCzR3rFjx4iImDNnTlYr//DDDyMiol69elmNAwAAgK1BVoeXL168OBo2bFiqvbjtgw8+yGrlV1xxReTm5kbfvn032q+oqCiKiooyj5cvX57VegAAAKAiZBW6V61aFXl5eaXa8/PzM8vLa9KkSXHLLbfEsGHDYqeddtpo38suuyxGjBhRqn3RokVRs2bNcq8TfooWLlxY0SUA32JewpbFnIQtz9YwL1esWFGuflmF7oKCghJ7nIutXr06s7w8Zs6cGSeccEL06NEjLrnkkk32Hz58eJx11lmZx8uXL48mTZpE06ZNSx3qvmX6pKIL4CesWbNmFV3CFsicpGKZl99mTlKxzMmymJdUrK1hXpb3COysQnfDhg3j/fffL9W+ePHiiIho1KjRJp/j5ZdfjoMPPjjatm0bkydPjsqVN11CXl5emXvYAQAAYEuW1YXU2rdvH3Pnzi2V6J999tnM8o1555134sADD4wGDRrEQw89FDVq1MiuWgAAANiKZBW6+/btG+vWrYuxY8dm2oqKimLcuHHRqVOnaNKkSUR8fa71m2++WWLshx9+GAcccEBUqlQpHn300ahfv/5mKB8AAAC2XFkdXt6pU6fo169fDB8+PJYsWRItWrSICRMmxIIFC+KWW27J9BswYEDMmDEjkiTJtB144IHx7rvvxrBhw2LWrFkxa9aszLJtt9029t9//82wOQAAALDlyCp0R0Tceuutcf7558fEiRNj6dKl0a5du5gyZUrsvffeGx338ssvR0TElVdeWWpZ165dhW4AAAB+dLIO3fn5+TFq1KgYNWrUBvs8+eSTpdq+udcbAAAAfgqyOqcbAAAAKD+hGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRug
EAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKhG4AAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEqEbgAAAEiJ0A0AAAApEboBAAAgJUI3AAAApEToBgAAgJQI3QAAAJASoRsAAABSInQDAABASoRuAAAASInQDQAAACkRugEAACAlQjcAAACkROgGAACAlAjdAAAAkBKhGwAAAFIidAMAAEBKsg7dRUVFcc4550SjRo2ioKAgOnXqFNOmTSvX2Pfffz8OP/zwqF27dhQWFsYhhxwS7777btZFAwAAwNYg69A9cODAGD16dPTv3z/GjBkTubm50bNnz5g1a9ZGx33xxRfRrVu3mDFjRpx33nkxYsSIeOmll6Jr167x6aeffucNAAAAgC1V5Ww6P/fcc3HHHXfEqFGjYujQoRERMWDAgGjbtm0MGzYsnnnmmQ2OveGGG2LevHnx3HPPxZ577hkREQcddFC0bds2/vKXv8Sll176PTYDAAAAtjxZ7emePHly5ObmxpAhQzJt+fn5ccIJJ8Ts2bPjvffe2+jYPffcMxO4IyJat24d3bt3jzvvvPM7lA4AAABbtqz2dL/00kvRsmXLKCwsLNHesWPHiIiYM2dONGnSpNS49evXxyuvvBLHH398qWUdO3aMqVOnxooVK6JmzZplrreoqCiKiooyjz///POIiFi+fHk25VeYdWu+qOgStmor1iYVXcJWbWuZJz8kc/L7MSe/P/OyJHPy+zMvvx9zsjTz8vsxJ7+/rWFeFteYJBv/eWcVuhcvXhwNGzYs1V7c9sEHH5Q57rPPPouioqJNjm3VqlWZ4y+77LIYMWJEqfayAj4/PrtXdAFbu1q1KroCfmTMyc3AvGQzMy+/J3OSzcyc3Ay2onm5YsWKqLWRerMK3atWrYq8vLxS7fn5+ZnlGxoXEd9pbETE8OHD46yzzso8Xr9+fXz22WdRt27dyMnJKf8GsNVZvnx5NGnSJN57771SR1gAPzxzErY85iVsWczJn44kSWLFihXRqFGjjfbLKnQXFBSUOMy72OrVqzPLNzQuIr7T2Iivw/q3A3vt2rXLVTM/DoWFhX5pwRbEnIQtj3kJWxZz8qdhY3u4i2V1IbWGDRvG4sWLS7UXt20o4W+zzTaRl5f3ncYCAADA1iqr0N2+ffuYO3duqZPan3322czyMldSqVLsuuuu8cILL5Ra9uyzz8aOO+64wYuoAQAAwNYqq9Ddt2/fWLduXYwdOzbTVlRUFOPGjYtOnTplLmy2aNGiePPNN0uNff7550sE77feeiueeOKJ6Nev3/fZBn7E8vLy4oILLijzegDAD8+chC2PeQlbFnOSb8tJNnV98285/PDD4957740zzzwzWrRoERMmTIjnnnsuHn/88dh7770jImKfffaJGTNmlLh0+ooVK2L33XePFStWxNChQ6NKlSoxevToWLduXcyZMyfq16+/ebcMAAAAKlhWF1KLiLj11lvj/PPPj4kTJ8bSpUujXbt2MWXKlEzg3pCaNWvGk08+GWeeeWZcfPHFsX79+thnn33i6quvFrgBAAD4Ucp6TzcAAABQPlmd0w0AAACUn9DNVm+fffaJffbZp6LLgNRtv/320bt379TX8+STT0ZOTk48+eSTqa8LtnTm3Q/jh/xbfuGFF0ZOTs4Psi6ACKGbzWT8+PGRk5NT4l+DBg2iW7du8fDDD1dITZ9++mn84Q9/iFatWkV+fn5ss8020aNHj3jwwQcrpB62PsXv67Jud0j2Fi1aFCeffHJsv/32kZeXFw0aNIg+ffrE008/XdGlbdCyZcsiPz8/cnJy4o033iizz8CBA6NGjRo/cGU/Xubd5rNgwYIYNGhQNG/ePPLz82O77baLvffeOy644IIS/W644YYYP358xRQJ8BOQ9YXUYGNGjhwZO+ywQyRJEh999FGMHz8+evbsGQ888MAPsq
eg2FtvvRXdu3ePjz/+OAYNGhR77LFHLFu2LG6//fbo3bt3nHPOOXH55Zf/YPXAT93TTz8dPXv2jIiIwYMHR5s2beLDDz+M8ePHR5cuXWLMmDHx+9//voKrLO2uu+6KnJyc2G677eL222+Piy++uKJLgnJ5++23Y88994yCgoI4/vjjY/vtt4/FixfHiy++GFdccUWMGDEi0/eGG26IevXqxcCBAyuuYIAfMaGbzeqggw6KPfbYI/P4hBNOiG233Tb+8Y9//GCh+6uvvoq+ffvG0qVL46mnnopOnTpllp155pnRv3//uOKKK6JDhw7uEQ8/gKVLl0bfvn2joKAgnn766WjevHlm2VlnnRU9evSIM844Izp06BCdO3euwEpLu+2226Jnz57RrFmzmDRpktDNVuPqq6+OL774IubMmRPNmjUrsWzJkiUVVNUPY+3atbF+/fqoWrVqRZfCVmTlypVRvXr1ii6DHymHl5Oq2rVrR0FBQVSu/P++31m5cmWcffbZ0aRJk8jLy4tWrVrFVVddFd++kP7atWvjoosuiubNm0deXl5sv/32cd5550VRUdFG13n33XfHa6+9Fueee26JwB0RkZubGzfffHPUrl271OF18F2sWbMm/vznP0eHDh2iVq1aUb169ejSpUtMnz69RL8FCxZETk5OXHXVVXH99dfHjjvuGNWqVYsDDjgg3nvvvUiSJC666KJo3LhxFBQUxCGHHBKfffZZmeucOnVqtG/fPvLz86NNmzZxzz33lFj+2WefxdChQ2PXXXeNGjVqRGFhYRx00EHx8ssvl3qu//3vf9GnT5+oXr16NGjQIM4888wy59jMmTOjX79+0bRp08jLy4smTZrEmWeeGatWrdrka3TzzTfHhx9+GKNGjSoRuCMiCgoKYsKECZGTkxMjR47MtBcfYvzUU0/FSSedFHXr1o3CwsIYMGBALF26tNQ6Hn744ejSpUtUr149atasGb169YrXX3+9RJ/iw8Dff//96NOnT9SoUSPq168fQ4cOjXXr1pV6zkWLFsXMmTPjyCOPjCOPPDLmz58fzzzzzCa3l/SZd5ued++88040bty4VOCOiGjQoEHm/7fffvt4/fXXY8aMGZnTw4rPrS7vNhWfj37nnXfGJZdcEo0bN478/Pzo3r17vP3226XWP3bs2GjevHkUFBREx44dY+bMmaX6fJef8TXXXJP5zPDf//43IiJmzZoVe+65Z+Tn50fz5s3j5ptv3uRrx9Zr4cKFccopp0SrVq2ioKAg6tatG/369YsFCxaU6Ff8N2bGjBlxyimnRIMGDaJx48aZ5Q8//HB07do1atasGYWFhbHnnnvGpEmTsqpl2bJlccYZZ2Q+77Zo0SKuuOKKWL9+fYl+V111VXTu3Dnq1q0bBQUF0aFDh5g8eXKp55s2bVrstddeUbt27ahRo0a0atUqzjvvvIiI+OKLL6J69epx+umnlxr3v//9L3Jzc+Oyyy7Lqn42swQ2g3HjxiURkTz22GPJxx9/nCxZsiR57bXXkpNOOimpVKlSMnXq1CRJkmT9+vXJvvvum+Tk5CSDBw9OrrvuuuTXv/51EhHJGWecUeI5jzvuuCQikr59+ybXX399MmDAgCQikj59+pTo17Vr16Rr166Zx0cffXQSEcmCBQs2WG/xc7/99tub70XgR6f4ff38889vsM/HH3+cNGzYMDnrrLOSG2+8MbnyyiuTVq1aJVWqVEleeumlTL/58+cnEZG0b98+adOmTTJ69OjkT3/6U1K1atXkF7/4RXLeeeclnTt3Tv76178mp512WpKTk5MMGjSoxLqaNWuWtGzZMqldu3Zy7rnnJqNHj0523XXXEnMsSZLk+eefT5o3b56ce+65yc0335yMHDky+dnPfpbUqlUref/99zP9vvzyy6Rly5ZJfn5+MmzYsOSaa65JOnTokLRr1y6JiGT69OmZvr///e+Tnj17Jpdeemly8803JyeccEKSm5ub9O3bd5
OvY+fOnZP8/Pxk9erVG+zTtWvXpEqVKsmXX35Z4rXfddddky5duiR//etfk9/97ndJpUqVkr333jtZv359Zuytt96a5OTkJAceeGBy7bXXJldccUWy/fbbJ7Vr107mz5+f6Xfccccl+fn5yS677JIcf/zxyY033pj85je/SSIiueGGG0rVdPnllyc1atTI1NS8efPklFNOKdXvuOOOS6pXr77J14HyMe+mZ/p+n3k3ZMiQJDc3N3n88cc32u/ee+9NGjdunLRu3TqZOHFiMnHixMx2lXebpk+fnkREsvvuuycdOnRIrr766uTCCy9MqlWrlnTs2LHE+v72t78lEZF53c8444ykdu3ayY477ljib3m2P+M2bdokO+64Y3L55ZcnV199dbJw4cLklVdeSQoKCpKmTZsml112WXLRRRcl2267bea15sfnrrvuSnbbbbfkz3/+czJ27NjkvPPOS+rUqZM0a9YsWblyZaZf8e+ZNm3aJF27dk2uvfba5PLLL88sy8nJSdq2bZtccsklyfXXX58MHjw4OfbYY8tdx8qVK5N27doldevWTc4777zkpptuSgYMGJDk5OQkp59+eom+jRs3Tk455ZTkuuuuS0aPHp107NgxiYhkypQpmT6vvfZaUrVq1WSPPfZIxowZk9x0003J0KFDk7333jvTp3///sm2226brF27tsTzX3nllUlOTk6ycOHCbF5KNjO/cdgsin95fftfXl5eMn78+Ey/++67L4mI5OKLLy4xvm/fvklOTk4mBM+ZMyeJiGTw4MEl+g0dOjSJiOSJJ57ItH07dLdv3z6pVavWRusdPXp0EhHJv/71r++4xfwUlOfD/9q1a5OioqISbUuXLk223Xbb5Pjjj8+0FX8wrF+/frJs2bJM+/Dhw5OISHbbbbfkq6++yrQfddRRSdWqVUsE1WbNmiURkdx9992Zts8//zxp2LBhsvvuu2faVq9enaxbt65ETfPnz0/y8vKSkSNHZtquueaaJCKSO++8M9O2cuXKpEWLFqU+/BcHz2+67LLLyvWHvHbt2sluu+220T6nnXZaEhHJK6+8kiTJ/3vtO3TokKxZsybT78orr0wiIrn//vuTJEmSFStWJLVr105OPPHEEs/34YcfJrVq1SrRXvxl2zdfgyRJMkHh23bdddekf//+mcfnnXdeUq9evRI/p+LnFbo3H/Nueqb9+8y71157LSkoKMh86XD66acn9913X4ngUWyXXXYp8Xc0220qDt0777xziZ/LmDFjkohIXn311SRJkmTNmjVJgwYNkvbt25foN3bs2CQiStSQ7c+4sLAwWbJkSYn+ffr0SfLz80u8Vv/973+T3NxcoftHqqw5M3v27CQikltvvTXTVvx7Zq+99ioRUpctW5bUrFkz6dSpU7Jq1aoSz/PNL3s35aKLLkqqV6+ezJ07t0T7ueeem+Tm5iaLFi3aYM1r1qxJ2rZtm+y7776ZtquvvjqJiOTjjz/e4DofffTRJCKShx9+uER7u3btypzf/LAcXs5mdf3118e0adNi2rRpcdttt0W3bt1i8ODBmcPwHnroocjNzY3TTjutxLizzz47kiTJXOn8oYceioivz/f8dr+I2OgVyFesWBE1a9bcaJ3Fy1esWJHF1kFpubm5mfMG169fH5999lmsXbs29thjj3jxxRdL9e/Xr1/UqlUr87j4FIhjjjmmxGkYnTp1ijVr1sT7779fYnyjRo3i0EMPzTwuPuT6pZdeig8//DAiIvLy8qJSpa9/va9bty4+/fTTzKFo36zpoYceioYNG0bfvn0zbdWqVYshQ4aUqrugoCDz/ytXroxPPvkkOnfuHEmSxEsvvbTR1yibObl8+fIS7UOGDIkqVapkHv/2t7+NypUrZ35HTJs2LZYtWxZHHXVUfPLJJ5l/ubm50alTp1KHokZEnHzyySUed+nSJd59990Sba+88kq8+uqrcdRRR2Xaitfx6KOPbnRbSJ95t+l5t8suu8
ScOXPimGOOiQULFsSYMWOiT58+se2228b//d//bXRssfJuU7FBgwaVOI+6S5cuERGZ+fXCCy/EkiVL4uSTTy7Rb+DAgSV+PhHZ/4x/85vfRP369TOP161bF48++mj06dMnmjZtmmnfeeedo0ePHuXafrY+35wzX331VXz66afRokWLqF27dpnvmxNPPDFyc3Mzj6dNmxYrVqyIc889N/Lz80v0zeY2c3fddVd06dIl6tSpU+Jv03777Rfr1q2Lp556qsyaly5dGp9//nl06dKlRL21a9eOiIj777+/1OHpxfbbb79o1KhR3H777Zm21157LV555ZU45phjyl076RC62aw6duwY++23X+y3337Rv3//ePDBB6NNmzZx6qmnxpo1a2LhwoXRqFGjUh/Ad95554j4+lyc4v9WqlQpWrRoUaLfdtttF7Vr1870K0vNmjU3GaaLl3/zvDb4riZMmBDt2rWL/Pz8qFu3btSvXz8efPDB+Pzzz0v1/eaHv4jIfNBs0qRJme3fPn+5RYsWpf7wt2zZMiIic87a+vXr4+qrr46ddtop8vLyol69elG/fv145ZVXStS0cOHCMp+vVatWpepetGhRDBw4MLbZZpvMudBdu3aNiChzO78pmzn57d8NO+20U4nHNWrUiIYNG2a2dd68eRERse+++0b9+vVL/Js6dWqpC0bl5+eX+GAeEVGnTp1Sr/Ntt90W1atXjx133DHefvvtePvttyM/Pz+23377Eh9oqDjm3cbnXXGNEydOjE8++SReeeWVuPTSS6Ny5coxZMiQeOyxxzY5vrzbVOzbr3OdOnUi4v+9nsV/u789r6tUqRI77rhjqefL5me8ww47lHj88ccfx6pVq0qtK6Ls15ofh1WrVsWf//znzHnUxe/ZZcuWlet9884770RERNu2bb9XHfPmzYtHHnmk1N+l/fbbLyJKXsxwypQp8Ytf/CJze9v69evHjTfeWKLeI444In71q1/F4MGDY9ttt40jjzwy7rzzzhIBvFKlStG/f/+477774ssvv4yIiNtvvz3y8/NdOHgL4OrlpKpSpUrRrVu3GDNmTObDcTay+VaxWJs2bWLOnDmxaNGiUh8Air3yyisREWX+kYds3HbbbTFw4MDo06dP/OEPf4gGDRpkLlhS/Mf7m775jXp52pNvXWCwPC699NI4//zz4/jjj4+LLroottlmm6hUqVKcccYZG/yGfGPWrVsX+++/f3z22WdxzjnnROvWraN69erx/vvvx8CBAzf5nDvvvHO89NJLUVRUFHl5eWX2eeWVV6JKlSplfkDemOJ1T5w4MbbbbrtSy7+5FzNiw6/zNyVJEv/4xz9i5cqV0aZNm1LLlyxZEl988YV7c1cg827T8+6bcnNzY9ddd41dd901fvnLX0a3bt3i9ttvzwSAzbVNm/P1zPZn/M29hfx0/f73v49x48bFGWecEb/85S+jVq1akZOTE0ceeWSZ79m03jfr16+P/fffP4YNG1bm8uIv7WbOnBkHH3xw7L333nHDDTdEw4YNo0qVKjFu3LgSF24rKCiIp556KqZPnx4PPvhgPPLII/HPf/4z9t1335g6dWpm7g0YMCBGjRoV9913Xxx11FExadKk6N27d6kjSfjhCd2kbu3atRHx9ZUVmzVrFo899lipw03ffPPNiIjMVVabNWsW69evj3nz5mX2gkdEfPTRR7Fs2bIyr8Za7Ne//nVMmjQpbr311vjTn/5Uavny5cvj/vvvj5///OdCN9/b5MmTY8cdd4x77rmnxJdEaV0d/+23344kSUqsa+7cuRHx9VWIi2vq1q1b3HLLLSXGLlu2LOrVq5d53KxZs3jttddKPd9bb71VYtyrr74ac+fOjQkTJsSAAQMy7dOmTStXzb17947Zs2fHXXfdVeYhbgsWLIiZM2fGfvvtV+oD0Lx586Jbt26Zx1988UUsXrw4c8/v4quhN2jQYJMBorxmzJgR//vf/2LkyJElfv9EfL
3HbsiQIXHfffc5XK8CmXffXfFtPRcvXpxp29AX3OXdpvIq/ts9b9682HfffTPtX331VcyfPz922223Euv+Pj/j+vXrR0FBQZlf+H/7tebHY/LkyXHcccfFX/7yl0zb6tWrY9myZeUaX/w35bXXXit1tGU2mjdvHl988cUm/y7dfffdkZ+fH48++miJL6XHjRtXqm+lSpWie/fu0b179xg9enRceuml8cc//jGmT5+eWU/btm1j9913j9tvvz0aN24cixYtimuvvfY7bwebj8PLSdVXX30VU6dOjapVq8bOO+8cPXv2jHXr1sV1111Xot/VV18dOTk5cdBBB0VEZD5QX3PNNSX6jR49OiIievXqtcF1/uY3v4lddtklLr/88njhhRdKLFu/fn389re/jaVLl8Yf//jH77t5kPl2+Zt7cp599tmYPXt2Kuv74IMP4t577808Xr58edx6663Rvn37zJ7e3NzcUnuW7rrrrlLnqfbs2TM++OCDErcm+fLLL2Ps2LEl+pW1jUmSxJgxY8pV80knnRQNGjSIP/zhD6XOnV69enUMGjQokiSJP//5z6XGjh07Nr766qvM4xtvvDHWrl2b+V3Ro0ePKCwsjEsvvbREv2Iff/xxuWr8puJDy//whz9E3759S/w78cQTY6eddnKIeQUz7zZt5syZZc6J4ushfPMQ6+rVq5cZSsq7TeW1xx57RP369eOmm26KNWvWZNrHjx9fav3f92ecm5sbPXr0iPvuuy8WLVqUaX/jjTdcl+FHrKz37LXXXlvmbSHLcsABB0TNmjXjsssui9WrV5dYls0RG4cffnjMnj27zPfasmXLMjukcnNzIycnp0R9CxYsiPvuu6/EmLJuZdi+ffuIiFK3Gzz22GNj6tSpcc0110TdunUzfy+pWPZ0s1k9/PDDmb3WS5YsiUmTJsW8efPi3HPPjcLCwvj1r38d3bp1iz/+8Y+xYMGC2G233WLq1Klx//33xxlnnJH5hnG33XaL4447LsaOHRvLli2Lrl27xnPPPRcTJkyIPn36lNjz9W1VqlSJu+++O/bdd9/Ya6+9YtCgQbHHHnvEsmXLYtKkSfHiiy/GeeedF4cddtgP8pqw9fv73/8ejzzySKn2008/PXr37h333HNPHHroodGrV6+YP39+3HTTTdGmTZv44osvNnstLVu2jBNOOCGef/752HbbbePvf/97fPTRRyW+Fe/du3eMHDkyBg0aFJ07d45XX301br/99lJHdpx44olx3XXXxYABA+I///lPNGzYMCZOnBjVqlUr0a9169bRvHnzGDp0aLz//vtRWFgYd999d5n3yy5L3bp1Y/LkydGrV6/4+c9/HoMHD442bdrEhx9+GOPHj4+33347xowZE507dy41ds2aNdG9e/c4/PDD46233oobbrgh9tprrzj44IMj4usLWt14441x7LHHxs9//vM48sgjo379+rFo0aJ48MEH41e/+lWpL/k2pqioKO6+++7Yf//9S11Ep9jBBx8cY8aMiSVLlrguRIrMu+8376644or4z3/+E4cddli0a9cuIiJefPHFuPXWW2ObbbaJM844I9O3Q4cOceONN8bFF18cLVq0iAYNGsS+++5b7m0qrypVqsTFF18cJ510Uuy7775xxBFHxPz582PcuHGlnnNz/IxHjBgRjzzySHTp0iVOOeWUWLt2bVx77bWxyy67ZE4z48eld+/eMXHixKhVq1a0adMmZs+eHY899ljUrVu3XOMLCwvj6quvjsGDB8eee+4ZRx99dNSpUydefvnl+PLLL2PChAnlep4//OEP8a9//St69+4dAwcOjA4dOsTKlSvj1VdfjcmTJ8eCBQuiXr160atXrxg9enQceOCBcfTRR8eSJUvi+uuvjxYtWpR4j44cOTKeeuqp6NWrVzRr1iyWLFkSN9xwQzRu3Dj22muvEus++uijY9iwYXHvvffGb3/72xIXI6UC/XAXSufHrKxbhuXn5yft27
dPbrzxxhK3WVixYkVy5plnJo0aNUqqVKmS7LTTTsmoUaNK3Yrhq6++SkaMGJHssMMOSZUqVZImTZokw4cPL3Wv32/fMqzYxx9/nJx99tlJixYtkqpVq2bquuWWW1J5Dfjx2dCt8Ir/vffee8n69euTSy+9NGnWrFmSl5eX7L777smUKVOS4447LmnWrFnmuYpvazNq1KgS6yi+1c5dd91V5rq/edukZs2aJb169UoeffTRpF27dkleXl7SunXrUmNXr16dnH322UnDhg2TgoKC5Fe/+lUye/bsMufKwoULk4MPPjipVq1aUq9eveT0009PHnnkkVK3Lvrvf/+b7LfffkmNGjWSevXqJSeeeGLy8ssvJxGRjBs3rlyv5/z585MTTzwxadq0aVKlSpWkXr16ycEHH5zMnDlzg6/9jBkzkiFDhiR16tRJatSokfTv3z/59NNPS/WfPn160qNHj6RWrVpJfn5+0rx582TgwIHJCy+8kOmzoVt7XXDBBZnbB919992b/D3x5JNPJhGRjBkzZqPPy3dj3k3P9Ps+8+7pp59Ofve73yVt27ZNatWqlVSpUiVp2rRpMnDgwOSdd94p0ffDDz9MevXqldSsWbPErbvKu00bej2LX/9v13rDDTckO+ywQ5KXl5fsscceyVNPPVXqOb/vz7jYjBkzkg4dOiRVq1ZNdtxxx+Smm24qMef5cVm6dGkyaNCgpF69ekmNGjWSHj16JG+++WbSrFmz5Ljjjsv029StCf/1r38lnTt3TgoKCpLCwsKkY8eOyT/+8Y+salmxYkUyfPjwzOfQevXqJZ07d06uuuqqErfCvOWWW5Kddtop87tl3Lhxpd6jjz/+eHLIIYckjRo1SqpWrZo0atQoOeqoo0rdkqxYz549k4hInnnmmaxqJj05SfIdrm4BW6FXX301unTpEk2aNIlZs2a5qARswcaPHx+DBg2K559/PnMOKgCwaYceemi8+uqr8fbbb1d0Kfz/nNPNT8auu+4a999/f8ybNy/69OlT4nwyAADY2i1evDgefPDBOPbYYyu6FL7BOd38pHTt2rXUhTEAAGBTVq1aVeb9vr9pm222iapVq/5AFf0/8+fPj6effjr+9re/RZUqVeKkk076wWtgw4RuAACATfjnP/8ZgwYN2mif6dOnxz777PPDFPQNM2bMiEGDBkXTpk1jwoQJmTsrsGVwTjcAAMAmLF68OF5//fWN9unQoUPUqVPnB6qIrYXQDQAAAClxITUAAABIidANAAAAKRG6AQAAICVCNwAAAKRE6AYAAICUCN0AAACQEqEbAAAAUiJ0AwAAQEr+PySRw2D3vG5zAAAAAElFTkSuQmCC\n"
          },
          "metadata": {}
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Conclusion\n",
        "This notebook serves as a demonstration of how simple it can be to create your own optimized models using the [OptiPFair](https://github.com/peremartra/optipfair/tree/main) library.\n",
        "\n",
        "The result is a model that uses less memory and is faster at inference than the original, while still retaining most of its knowledge.\n",
        "\n",
        "Of course, many more tests and a wider range of rankings are needed to fully understand the model’s performance. The evaluation module of the [OptiPFair](https://github.com/peremartra/optipfair/tree/main) library is still under development, so in the future it will be possible to apply more types of pruning and evaluate the resulting models with better benchmarks."
      ],
      "metadata": {
        "id": "2lBSAKQAUqQW"
      }
    },
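    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The core idea the notebook demonstrated can be sketched in a few lines of plain NumPy: score each neuron of the MLP’s intermediate dimension, keep the highest-scoring ones, and slice the gate, up, and down projection matrices accordingly. This is a simplified illustration, using a maximum-absolute-weight criterion in the spirit of the MAW method from the reference below; it is not OptiPFair’s actual implementation, so for real models use the library itself.\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def prune_glu_mlp(gate_proj, up_proj, down_proj, pruning_percentage):\n",
        "    \"\"\"Structured width pruning of a GLU MLP block (simplified sketch).\n",
        "\n",
        "    gate_proj, up_proj: (intermediate, hidden) weight matrices\n",
        "    down_proj:          (hidden, intermediate) weight matrix\n",
        "    Returns the three matrices with the lowest-importance\n",
        "    intermediate neurons removed.\n",
        "    \"\"\"\n",
        "    intermediate = gate_proj.shape[0]\n",
        "    n_keep = intermediate - int(intermediate * pruning_percentage / 100)\n",
        "\n",
        "    # MAW-style importance: max absolute weight per intermediate neuron,\n",
        "    # combining the gate and up projections.\n",
        "    importance = np.abs(gate_proj).max(axis=1) + np.abs(up_proj).max(axis=1)\n",
        "\n",
        "    # Keep the highest-importance neurons, preserving their original order.\n",
        "    keep = np.sort(np.argsort(importance)[-n_keep:])\n",
        "\n",
        "    return gate_proj[keep], up_proj[keep], down_proj[:, keep]\n",
        "\n",
        "# Toy example: hidden size 8, intermediate size 16, prune 25% of neurons.\n",
        "rng = np.random.default_rng(0)\n",
        "g = rng.normal(size=(16, 8))\n",
        "u = rng.normal(size=(16, 8))\n",
        "d = rng.normal(size=(8, 16))\n",
        "g2, u2, d2 = prune_glu_mlp(g, u, d, 25)\n",
        "print(g2.shape, u2.shape, d2.shape)  # (12, 8) (12, 8) (8, 12)\n",
        "```"
      ]
    },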
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lW-biFT6U0zQ"
      },
      "source": [
        "##Authors Note.\n",
        "In addition to creating content like this notebook and offering it under the MIT license, I have also contributed to repositories such as those of Hugging Face and Google Gemini.\n",
        "\n",
        "I am especially proud of my book: <a href=\"https://amzn.to/4eanT1g\"><b>Large Language Models:</b> Apply and Implement Strategies for Large Language Models</a> (Apress).\n",
        "\n",
        "You can find it on both <a href=\"https://amzn.to/4eanT1g\">Amazon</a> and <a href=\"https://link.springer.com/book/10.1007/979-8-8688-0515-8\">Springer</a>, where they often have good deals on the purchase price.\n",
        "\n",
        "If you take a look and end up purchasing it, keep in mind that you can reach out with any questions via the Discussions section of this same repository or on any of my social media channels. I’ll do my best to respond as quickly as possible.\n",
        "\n",
        "## References.\n",
        "* Martra, P. (2024). EXPLORING GLU EXPANSION RATIOS: STRUCTURED PRUNING IN LLAMA-3.2 MODELS. https://doi.org/https://doi.org/10.31219/osf.io/qgxea"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "T4",
      "provenance": [],
      "machine_shape": "hm",
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}