{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_2_pruning_structured_llama3.2-1b_KO.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hP9S5ckLF9lo"
      },
      "source": [
        "\n",
        "<div>\n",
        "    <h1>Large Language Models Projects</h1>\n",
        "    <h3>Apply and Implement Strategies for Large Language Models</h3>\n",
        "    <h2>Pruning Llama 3.2.</h2>\n",
        "    <h3>Example of INCORRECT approach to pruning a Llama Model.</h3>\n",
        "</div>\n",
        "\n",
        "by [Pere Martra](https://www.linkedin.com/in/pere-martra/)\n",
        "_______\n",
        "Models: meta-llama/Llama-3.2-1B\n",
        "\n",
        "Colab Environment: GPU T4.\n",
        "\n",
        "Keys:\n",
        "* Pruning\n",
        "* Structured pruning\n",
        "\n",
        "\n",
        "Related article: --.\n",
        "_______\n",
        "**Disclaimer: The pruning section was created after the first edition of the book was published. It is not included in the book’s original content but is intended to supplement and expand on the topics covered.**\n",
        "\n",
        "This is the unofficial repository for the book:\n",
        "        <a href=\"https://amzn.to/4eanT1g\"> <b>Large Language Models:</b> Apply and Implement Strategies for Large Language Models</a> (Apress).\n",
        "        The book is based on the content of this repository, but the notebooks are being updated, and I am incorporating new examples and chapters.\n",
        "        If you are looking for the official repository for the book, with the original notebooks, you should visit the\n",
        "        <a href=\"https://github.com/Apress/Large-Language-Models-Projects\">Apress repository</a>, where you can find all the notebooks in their original format as they appear in the book.\n",
        "______"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "v4DPE3cPHIZC"
      },
      "source": [
        "# Introduction\n",
        "This notebook continues the work done in [6_1_pruning_structured_l1_diltilgpt2.ipynb](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_1_pruning_structured_l1_diltilgpt2.ipynb), where pruning was applied to a distilGPT2 model.\n",
        "\n",
        "The pruning process was based on selecting neurons from the model's feedforward layers that have the least importance using the L1 norm, assuming these contributed the least to the model's output.\n",
        "\n",
        "In this notebook, the same process is applied to a state-of-the-art model from the Llama family. The results, however, are not as expected, simply because the model's structure is very different, and the method needs to be adapted to these characteristics.\n",
        "\n",
        "**In this notebook, we'll identify the main issues so we can address them in a follow-up notebook.**"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7d6zxPkdR88i"
      },
      "source": [
        "#Install libraries & Configure variables."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "5zHApVm41HWq",
        "outputId": "6174e450-ecff-49e2-8be2-4c66f695921a",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m363.4/363.4 MB\u001b[0m \u001b[31m4.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m13.8/13.8 MB\u001b[0m \u001b[31m77.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m24.6/24.6 MB\u001b[0m \u001b[31m67.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m883.7/883.7 kB\u001b[0m \u001b[31m53.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m664.8/664.8 MB\u001b[0m \u001b[31m1.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m211.5/211.5 MB\u001b[0m \u001b[31m5.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m56.3/56.3 MB\u001b[0m \u001b[31m13.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m127.9/127.9 MB\u001b[0m \u001b[31m7.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m207.5/207.5 MB\u001b[0m \u001b[31m5.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m21.1/21.1 MB\u001b[0m \u001b[31m103.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "#Install necessary libraries.\n",
        "!pip install -q transformers\n",
        "!pip install -q torch\n",
        "!pip install -q sentencepiece  # Required for LLaMA tokenizer"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "GJNgRj4M187E",
        "outputId": "2133766e-4e3f-42e5-8dcc-0a70da56eb84"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Using device: cuda\n"
          ]
        }
      ],
      "source": [
        "#Import libraries\n",
        "import torch\n",
        "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
        "from torch import nn\n",
        "import os\n",
        "\n",
        "# Check if GPU is available\n",
        "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
        "print(f\"Using device: {device}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "n0MedSz-SIYH"
      },
      "source": [
        "#Download model and explore structure"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "q-z_1Zpg2I6u"
      },
      "outputs": [],
      "source": [
        "model_name = 'meta-llama/Llama-3.2-1B'\n",
        "model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device)\n",
        "tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
        "tokenizer.pad_token = tokenizer.eos_token  # Set pad token"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "9UpMD4Hw2MWg"
      },
      "outputs": [],
      "source": [
        "def get_output(prompt, model=model, tokenizer=tokenizer):\n",
        "    inputs = tokenizer(prompt, return_tensors='pt').to(device)\n",
        "    outputs = model.generate(\n",
        "        inputs['input_ids'],\n",
        "        attention_mask=inputs['attention_mask'],\n",
        "        max_length=50,\n",
        "        num_return_sequences=1,\n",
        "        pad_token_id=tokenizer.pad_token_id\n",
        "    )\n",
        "    generated = tokenizer.decode(outputs[0], skip_special_tokens=True)\n",
        "    return generated"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "m-3LoobgkA-x"
      },
      "source": [
        "## Studying the model structure\n",
        "\n",
        "As you already know, studying the model’s structure is crucial for a successful pruning process.\n",
        "\n",
        "In this example, I’ll use the same pruning approach as in the previous example with a distilGPT2 model, which has a different structure. You can see the structure and the example in the notebook: [6_1_pruning_structured_l1_diltilgpt2.ipynb](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_1_pruning_structured_l1_diltilgpt2.ipynb).\n",
        "\n",
        "The process involved removing a percentage of the neurons with the lowest weights from the feedforward layers of the model, located within the MLP module. In the GPT2 model, these layers were called `c_fc` and `c_proj`, while in the Llama model, the layers are `gate_proj`, `up_proj`, and additionally `down_proj`.\n",
        "\n",
        "The names aren't the most important difference: these layers have a very different structure and function compared to the `MLP` module layers in the distilGPT2 model.\n",
        "\n",
        "Understanding these differences will be crucial for defining the pruning process. In this notebook, we will examine how the Llama model is negatively affected by a pruning process that worked correctly with the distilGPT2 model, even though both target the MLP layers and use the same neuron selection method.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "vkcJcUs6jLIn",
        "outputId": "78aaff12-37f8-481a-eb64-99d205e73ca7"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "LlamaForCausalLM(\n",
            "  (model): LlamaModel(\n",
            "    (embed_tokens): Embedding(128256, 2048)\n",
            "    (layers): ModuleList(\n",
            "      (0-15): 16 x LlamaDecoderLayer(\n",
            "        (self_attn): LlamaAttention(\n",
            "          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
            "          (k_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
            "          (v_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
            "          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
            "        )\n",
            "        (mlp): LlamaMLP(\n",
            "          (gate_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
            "          (up_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
            "          (down_proj): Linear(in_features=8192, out_features=2048, bias=False)\n",
            "          (act_fn): SiLU()\n",
            "        )\n",
            "        (input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "        (post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "      )\n",
            "    )\n",
            "    (norm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "    (rotary_emb): LlamaRotaryEmbedding()\n",
            "  )\n",
            "  (lm_head): Linear(in_features=2048, out_features=128256, bias=False)\n",
            ")\n"
          ]
        }
      ],
      "source": [
        "print(model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WBIdtgJUEe1-"
      },
      "source": [
        "Each transformer block (`LlamaDecoderLayer`) contains an MLP module (`LlamaMLP`) with a GLU (Gated Linear Unit) structure.\n",
        "\n",
        "It is a more sophisticated structure than the feedforward block found in many other transformer models.\n",
        "\n",
        "Let's see each layer:\n",
        "* `gate_proj`: Projects the input to a higher dimension (2048 to 8192).\n",
        "\n",
        "* `up_proj`: Another projection to the higher dimension (2048 to 8192).\n",
        "\n",
        "* `down_proj`: Projects back to the original dimension (8192 to 2048).\n",
        "\n",
        "When pruning, you should keep in mind the relationship between these layers: `gate_proj` and `up_proj` must keep the same neuron indices, and `down_proj` must drop the corresponding input columns.\n",
        "\n",
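        "The forward pass of this GLU block can be sketched with a minimal, self-contained toy (the names and dimensions mirror `LlamaMLP` in Llama-3.2-1B, but the layers below are freshly initialized, not the real weights):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "from torch import nn\n",
        "\n",
        "hidden_size, intermediate_size = 2048, 8192\n",
        "gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)\n",
        "up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)\n",
        "down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)\n",
        "act_fn = nn.SiLU()\n",
        "\n",
        "x = torch.randn(1, 4, hidden_size)  # (batch, seq_len, hidden)\n",
        "# The gate branch modulates the up branch element-wise before projecting back down.\n",
        "out = down_proj(act_fn(gate_proj(x)) * up_proj(x))\n",
        "print(out.shape)  # torch.Size([1, 4, 2048])\n",
        "```\n",
        "\n",
        "Because the two 8192-wide outputs are multiplied element-wise, neuron i of `gate_proj` is paired with neuron i of `up_proj`.\n",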
        "Another important consideration is the model's configuration file. Since the pruning process alters the model's structure, the resulting structure must be reflected in the configuration file.\n",
        "\n",
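        "For example (a sketch, assuming `new_size` holds the pruned intermediate width and the MLP layers have already been replaced; the output path is hypothetical):\n",
        "\n",
        "```python\n",
        "model.config.intermediate_size = new_size\n",
        "model.save_pretrained('./pruned-model')  # config.json now matches the new structure\n",
        "```\n",
        "\n",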
        "Otherwise, we might encounter issues where the model doesn't work properly with the Transformers library or produces errors or incorrect results during inference.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "juOU8ij6Hhlv",
        "outputId": "3be377ec-3c3c-4a1b-ec33-2a6ef88d7d24"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "LlamaConfig {\n",
            "  \"_attn_implementation_autoset\": true,\n",
            "  \"architectures\": [\n",
            "    \"LlamaForCausalLM\"\n",
            "  ],\n",
            "  \"attention_bias\": false,\n",
            "  \"attention_dropout\": 0.0,\n",
            "  \"bos_token_id\": 128000,\n",
            "  \"eos_token_id\": 128001,\n",
            "  \"head_dim\": 64,\n",
            "  \"hidden_act\": \"silu\",\n",
            "  \"hidden_size\": 2048,\n",
            "  \"initializer_range\": 0.02,\n",
            "  \"intermediate_size\": 8192,\n",
            "  \"max_position_embeddings\": 131072,\n",
            "  \"mlp_bias\": false,\n",
            "  \"model_type\": \"llama\",\n",
            "  \"num_attention_heads\": 32,\n",
            "  \"num_hidden_layers\": 16,\n",
            "  \"num_key_value_heads\": 8,\n",
            "  \"pretraining_tp\": 1,\n",
            "  \"rms_norm_eps\": 1e-05,\n",
            "  \"rope_scaling\": {\n",
            "    \"factor\": 32.0,\n",
            "    \"high_freq_factor\": 4.0,\n",
            "    \"low_freq_factor\": 1.0,\n",
            "    \"original_max_position_embeddings\": 8192,\n",
            "    \"rope_type\": \"llama3\"\n",
            "  },\n",
            "  \"rope_theta\": 500000.0,\n",
            "  \"tie_word_embeddings\": true,\n",
            "  \"torch_dtype\": \"float16\",\n",
            "  \"transformers_version\": \"4.51.3\",\n",
            "  \"use_cache\": true,\n",
            "  \"vocab_size\": 128256\n",
            "}\n",
            "\n"
          ]
        }
      ],
      "source": [
        "print(model.config)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "QCxjsR4nSP31",
        "outputId": "17c511a9-60c3-4ef8-aa83-58e7edd7cfc5"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Generated text: Paris is the capital of France and the most populous city in the European Union. The city is located in the north of France, on the river Seine. Paris is the most visited city in the world, with over 21 million visitors in\n"
          ]
        }
      ],
      "source": [
        "# Test the original model\n",
        "prompt = \"Paris is the capital of\"\n",
        "generated = get_output(prompt)\n",
        "print(f\"Generated text: {generated}\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "8WR96iwq2XYH"
      },
      "outputs": [],
      "source": [
        "# Support function to check the size reduction.\n",
        "def count_parameters(model):\n",
        "    return sum(p.numel() for p in model.parameters())"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "YiZf0g9pQvbD",
        "outputId": "8667dca3-b7da-4a94-9d4d-6c0081f2682e"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Original model parameters: 1235814400\n"
          ]
        }
      ],
      "source": [
        "original_param_count = count_parameters(model)\n",
        "print(f\"Original model parameters: {original_param_count}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LN5-UakQJOPU"
      },
      "source": [
        "# Pruning Model.\n",
        "\n",
        "## Support pruning functions."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "_mMGhTtD4PRX"
      },
      "outputs": [],
      "source": [
        "# Function to compute importance scores (L1 norm)\n",
        "def compute_importance_scores(layer_weight):\n",
        "    \"\"\"\n",
        "    compute importance scores (L1 norm)\n",
        "\n",
        "    Args:\n",
        "    - layer_weight: Weight matrix from a gate_proj / up_proj layer.\n",
        "\n",
        "    Returns:\n",
        "    - importance_scores: L1 norm Importance scores for each neuron.\n",
        "    \"\"\"\n",
        "    weight = layer_weight.float()\n",
        "    return torch.sum(torch.abs(weight), dim=1)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "NX9Boph94RWA"
      },
      "outputs": [],
      "source": [
        "def prune_neurons(mlp, prune_percent):\n",
        "    \"\"\"\n",
        "    Prune neurons from the gate_proj, up_proj, and down_proj layers of the MLP based on importance scores.\n",
        "\n",
        "    Args:\n",
        "    - mlp: The MLP module (contains gate_proj, up_proj, and down_proj) to prune.\n",
        "    - prune_percent: Percentage of neurons to prune.\n",
        "\n",
        "    Returns:\n",
        "    - new_gate_proj: New pruned gate_proj layer.\n",
        "    - new_up_proj: New pruned up_proj layer.\n",
        "    - new_down_proj: New pruned down_proj layer.\n",
        "    - new_size: Number of neurons after pruning.\n",
        "    - indices_to_keep: Indices of neurons to keep.\n",
        "    \"\"\"\n",
        "    # Get the weights of the gate_proj and up_proj layers\n",
        "    gate_weight = mlp.gate_proj.weight.data.float()  # Shape: [output_features, input_features]\n",
        "    up_weight = mlp.up_proj.weight.data.float()      # Shape: [output_features, input_features]\n",
        "\n",
        "    print(f\"gate_weight.shape: {gate_weight.shape}\")\n",
        "    print(f\"up_weight.shape: {up_weight.shape}\")\n",
        "\n",
        "    # Compute importance scores for each neuron separately and sum them\n",
        "    importance_scores_gate = compute_importance_scores(gate_weight)\n",
        "    importance_scores_up = compute_importance_scores(up_weight)\n",
        "    importance_scores = importance_scores_gate + importance_scores_up\n",
        "\n",
        "    # Check for NaNs or Infs\n",
        "    if torch.isnan(importance_scores).any():\n",
        "        print(\"Warning: importance_scores contains NaNs\")\n",
        "    if torch.isinf(importance_scores).any():\n",
        "        print(\"Warning: importance_scores contains Infs\")\n",
        "\n",
        "    # Determine the number of neurons to prune\n",
        "    original_intermediate_size = gate_weight.size(0)  # This is output_features\n",
        "    num_neurons_to_prune = int(prune_percent * original_intermediate_size)\n",
        "\n",
        "    # Ensure num_neurons_to_prune is valid\n",
        "    num_neurons_to_prune = max(0, min(num_neurons_to_prune, original_intermediate_size - 1))\n",
        "    k = original_intermediate_size - num_neurons_to_prune\n",
        "\n",
        "    print(f\"Original intermediate size: {original_intermediate_size}\")\n",
        "    print(f\"Number of neurons to prune: {num_neurons_to_prune}\")\n",
        "    print(f\"Number of neurons to keep (k): {k}\")\n",
        "\n",
        "    if k <= 0:\n",
        "        raise ValueError(f\"Invalid number of neurons to keep: {k}. Adjust the prune_percent or check the layer sizes.\")\n",
        "\n",
        "    # Ensure importance_scores is on the same device\n",
        "    importance_scores = importance_scores.to(device)\n",
        "\n",
        "    # Get indices of neurons to keep (those with highest importance)\n",
        "    _, indices_to_keep = torch.topk(importance_scores, k)\n",
        "\n",
        "    # Sort indices to maintain order\n",
        "    indices_to_keep, _ = torch.sort(indices_to_keep)\n",
        "\n",
        "    # Create new Linear layers with reduced size\n",
        "    new_gate_proj = nn.Linear(mlp.gate_proj.in_features, len(indices_to_keep), bias=False).to(device)\n",
        "    new_up_proj = nn.Linear(mlp.up_proj.in_features, len(indices_to_keep), bias=False).to(device)\n",
        "    new_down_proj = nn.Linear(len(indices_to_keep), mlp.down_proj.out_features, bias=False).to(device)\n",
        "\n",
        "    return new_gate_proj, new_up_proj, new_down_proj, len(indices_to_keep), indices_to_keep\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "id": "FxJEWg1X3j0m"
      },
      "outputs": [],
      "source": [
        "# Function to copy weights and biases to new pruned layers\n",
        "def copy_weights_and_biases(mlp, new_gate_proj, new_up_proj, new_down_proj, indices_to_keep):\n",
        "    \"\"\"\n",
        "    Copy the weights and biases from the original layers to the new pruned layers.\n",
        "\n",
        "    Args:\n",
        "    - mlp: The original MLP layer.\n",
        "    - new_gate_proj: New pruned gate_proj layer.\n",
        "    - new_up_proj: New pruned up_proj layer.\n",
        "    - new_down_proj: New pruned down_proj layer.\n",
        "    - indices_to_keep: Indices of neurons that are retained.\n",
        "    \"\"\"\n",
        "    # Copy weights for gate_proj and up_proj (input features remain the same)\n",
        "    new_gate_proj.weight.data = mlp.gate_proj.weight.data[indices_to_keep, :]\n",
        "    new_up_proj.weight.data = mlp.up_proj.weight.data[indices_to_keep, :]\n",
        "\n",
        "    # Copy weights for down_proj (output features remain the same)\n",
        "    new_down_proj.weight.data = mlp.down_proj.weight.data[:, indices_to_keep]\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "C3s_MoQdT40P"
      },
      "source": [
        "# Prune Loop\n",
        "The `update_model` function iterates through the blocks of the model's Transformer structure. This structure consists of multiple `LlamaDecoderLayer` blocks, each containing a `LlamaAttention` and a `LlamaMLP` component. The latter holds the MLP layers that are the target of the pruning process.\n",
        "\n",
        "```\n",
        "(layers): ModuleList(\n",
        "      (0-15): 16 x LlamaDecoderLayer(\n",
        "        (self_attn): LlamaAttention(\n",
        "          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
        "          (k_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
        "          (v_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
        "          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
        "        )\n",
        "        (mlp): LlamaMLP(\n",
        "          (gate_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
        "          (up_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
        "          (down_proj): Linear(in_features=8192, out_features=2048, bias=False)\n",
        "          (act_fn): SiLU()\n",
        "        )\n",
        "        (input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
        "        (post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
        "      )\n",
        "  )    \n",
        "```\n",
        "\n",
        "The layers that will undergo the removal of neurons identified as less useful are:\n",
        "```\n",
        "(gate_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
        "(up_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
        "(down_proj): Linear(in_features=8192, out_features=2048, bias=False)\n",
        "```\n",
        "The neurons are removed in the `prune_neurons` function based on the values returned by `compute_importance_scores`.\n",
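        "The selection step can be illustrated with a tiny, self-contained toy (random weights, not the real model):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "torch.manual_seed(0)\n",
        "gate_w = torch.randn(8, 4)  # toy [intermediate, hidden] weight matrix\n",
        "up_w = torch.randn(8, 4)\n",
        "\n",
        "# L1 importance per neuron: sum of absolute weights in each output row,\n",
        "# accumulated across both projections.\n",
        "scores = gate_w.abs().sum(dim=1) + up_w.abs().sum(dim=1)\n",
        "\n",
        "k = 6  # keep 6 of 8 neurons (25% pruning)\n",
        "_, idx = torch.topk(scores, k)\n",
        "idx, _ = torch.sort(idx)  # preserve the original neuron order\n",
        "\n",
        "pruned_gate_w = gate_w[idx, :]  # rows = kept output neurons\n",
        "print(pruned_gate_w.shape)  # torch.Size([6, 4])\n",
        "```\n",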
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "id": "CxblrrCXPzEt"
      },
      "outputs": [],
      "source": [
        "# Function to update the model\n",
        "def update_model(model, prune_percent):\n",
        "    new_intermediate_size = None\n",
        "\n",
        "    for idx, layer in enumerate(model.model.layers):\n",
        "        mlp = layer.mlp\n",
        "\n",
        "        # Prune the neurons and create new layers\n",
        "        new_gate_proj, new_up_proj, new_down_proj, new_size, indices_to_keep = prune_neurons(mlp, prune_percent)\n",
        "\n",
        "        # Copy weights from old layers to new pruned layers\n",
        "        copy_weights_and_biases(mlp, new_gate_proj, new_up_proj, new_down_proj, indices_to_keep)\n",
        "\n",
        "        # Replace old layers with new pruned layers\n",
        "        mlp.gate_proj = new_gate_proj\n",
        "        mlp.up_proj = new_up_proj\n",
        "        mlp.down_proj = new_down_proj\n",
        "\n",
        "        # Update the intermediate size for the first layer\n",
        "        if new_intermediate_size is None:\n",
        "            new_intermediate_size = new_size\n",
        "\n",
        "    # Update the model configuration with the new intermediate size\n",
        "    model.config.intermediate_size = new_intermediate_size\n",
        "\n",
        "    return model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VILv7TS5P4XJ"
      },
      "source": [
        "## Obtain & test the model.  "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "NIUnFU5R3n42",
        "outputId": "5abef345-9e61-4ee0-eb68-7a955a3c2ad9"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n",
            "gate_weight.shape: torch.Size([8192, 2048])\n",
            "up_weight.shape: torch.Size([8192, 2048])\n",
            "Original intermediate size: 8192\n",
            "Number of neurons to prune: 1638\n",
            "Number of neurons to keep (k): 6554\n"
          ]
        }
      ],
      "source": [
        "prune_percent = 0.2  # Prune 20% of neurons\n",
        "model = update_model(model, prune_percent)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sSWBiV9-YpL5"
      },
      "source": [
        "As this simple log shows, the number of intermediate features is reduced from 8192 to 6554 in each of the 16 decoder blocks. Three layers per block are affected: `gate_proj` and `up_proj` lose output neurons, while `down_proj` loses the matching input columns, for a total reduction of 16 * 3 * 2048 * 1638 = 161,021,952 parameters."
      ]
    },
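    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The numbers in the log follow from simple arithmetic; a minimal sketch of the calculation (illustrative only, not the notebook's actual helper):\n",
        "\n",
        "```python\n",
        "intermediate_size = 8192\n",
        "prune_percent = 0.2\n",
        "# Number of intermediate neurons removed, and the k that survive\n",
        "neurons_to_prune = int(intermediate_size * prune_percent)\n",
        "k = intermediate_size - neurons_to_prune\n",
        "print(neurons_to_prune, k)  # 1638 6554\n",
        "```"
      ]
    },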
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "tdJUkfWI3qMM",
        "outputId": "ffba0484-c702-4d5b-be86-655a056d2f0e"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Pruned model parameters: 1074792448\n",
            "Reduction in parameters: 161021952\n",
            "Percentage of weight savings: 13.03%\n"
          ]
        }
      ],
      "source": [
        "# Recalculate the number of parameters\n",
        "pruned_param_count = count_parameters(model)\n",
        "reduction_in_params = original_param_count - pruned_param_count\n",
        "percentage_savings = (reduction_in_params / original_param_count) * 100\n",
        "\n",
        "print(f\"Pruned model parameters: {pruned_param_count}\")\n",
        "print(f\"Reduction in parameters: {reduction_in_params}\")\n",
        "print(f\"Percentage of weight savings: {percentage_savings:.2f}%\")\n"
      ]
    },
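    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note that pruning 20% of the MLP neurons yields only a 13.03% parameter saving, because the attention layers, embeddings, and `lm_head` are untouched. The figure can be reproduced by hand: per decoder block, `gate_proj` and `up_proj` each lose 8192 - 6554 = 1638 rows of width 2048, and `down_proj` loses the matching 1638 input columns. A quick check (illustrative sketch, not the notebook's code):\n",
        "\n",
        "```python\n",
        "layers, hidden, removed = 16, 2048, 8192 - 6554\n",
        "# Three weight matrices per MLP block are affected\n",
        "reduction = layers * 3 * removed * hidden\n",
        "original = 1074792448 + reduction\n",
        "print(reduction, f\"{100 * reduction / original:.2f}%\")  # 161021952 13.03%\n",
        "```"
      ]
    },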
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "wvj-iIsO5M6U",
        "outputId": "0e61d429-2268-4395-e964-0597f418a3ee"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Generated text after pruning: Paris is the capital of of of the the most important the the country. It is the the the the the the the the the the the the\n",
            "Paris is the the the the\n",
            "Paris is the\n",
            "France is\n",
            "France is\n",
            "France is\n"
          ]
        }
      ],
      "source": [
        "# Test the pruned model\n",
        "generated = get_output(prompt, model, tokenizer)\n",
        "print(f\"Generated text after pruning: {generated}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "200MPWCnZGKC"
      },
      "source": [
        "**-- WARNING --**\n",
        "\n",
        "Although it's normal for a model to lose some capabilities due to a pruning process, what has happened to our model is not normal.\n",
        "\n",
        "It's not just a matter of reducing the pruning percentage. The issue here runs deeper. There are a couple of problems in the pruning process that need to be addressed."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "D0ZJObyYi55q"
      },
      "source": [
        "## Identifying the problems.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "_hNaWinNXkhz",
        "outputId": "44fba1c2-3879-4a81-db3a-69d445f90682"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "LlamaForCausalLM(\n",
            "  (model): LlamaModel(\n",
            "    (embed_tokens): Embedding(128256, 2048)\n",
            "    (layers): ModuleList(\n",
            "      (0-15): 16 x LlamaDecoderLayer(\n",
            "        (self_attn): LlamaAttention(\n",
            "          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
            "          (k_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
            "          (v_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
            "          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
            "        )\n",
            "        (mlp): LlamaMLP(\n",
            "          (gate_proj): Linear(in_features=2048, out_features=6554, bias=False)\n",
            "          (up_proj): Linear(in_features=2048, out_features=6554, bias=False)\n",
            "          (down_proj): Linear(in_features=6554, out_features=2048, bias=False)\n",
            "          (act_fn): SiLU()\n",
            "        )\n",
            "        (input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "        (post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "      )\n",
            "    )\n",
            "    (norm): LlamaRMSNorm((2048,), eps=1e-05)\n",
            "    (rotary_emb): LlamaRotaryEmbedding()\n",
            "  )\n",
            "  (lm_head): Linear(in_features=2048, out_features=128256, bias=False)\n",
            ")\n"
          ]
        }
      ],
      "source": [
        "print(model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PfxxChCWXrco"
      },
      "source": [
        "\n",
        "In the model’s structure, at first glance, there doesn’t seem to be any error, but the MLP block structure has not been properly considered.\n",
        "\n",
        "The layers are being treated as if they belonged to the distilGPT2 model, whereas Llama uses a `GLU (Gated Linear Unit)` structure, in which the `gate_proj` and `up_proj` layers work together. Therefore, pruning cannot be done by calculating the importance of neurons separately and removing different neurons from each layer; the pruning process must respect the fact that these layers function as pairs.\n",
        "\n",
        "Thus, the importance of each neuron must be evaluated jointly across both layers, and pruning must remove the same neuron indices from each pair.\n",
        "\n",
        "We now have some key points that need to be addressed in order to develop a pruning solution that suits the Llama model’s structure.\n",
        "\n",
        "* Consider the GLU (Gated Linear Unit) structure of the MLP layers.\n",
        "* Use a neuron selection method that is compatible with the GLU structure.\n",
        "\n",
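        "A minimal sketch of GLU-aware selection (illustrative only; the function name and the toy tensor sizes are my own, not the notebook's actual code): score each gate/up row pair jointly, then apply the same surviving indices to the rows of `gate_proj` and `up_proj` and the columns of `down_proj`.\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "def prune_glu_pair(gate_w, up_w, down_w, prune_percent):\n",
        "    # Joint importance: a neuron survives only if its gate AND up rows matter\n",
        "    importance = gate_w.abs().sum(dim=1) + up_w.abs().sum(dim=1)\n",
        "    k = importance.numel() - int(importance.numel() * prune_percent)\n",
        "    keep = torch.topk(importance, k).indices.sort().values\n",
        "    # The same indices select rows of gate/up and columns of down\n",
        "    return gate_w[keep], up_w[keep], down_w[:, keep]\n",
        "\n",
        "# Toy weights: intermediate size 8, hidden size 4\n",
        "gate, up, down = torch.randn(8, 4), torch.randn(8, 4), torch.randn(4, 8)\n",
        "g, u, d = prune_glu_pair(gate, up, down, 0.25)\n",
        "print(g.shape, u.shape, d.shape)  # shapes: [6, 4] [6, 4] [4, 6]\n",
        "```\n",
        "\n",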
        "**We will explore this in the next notebook.**"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "myfZBi1wktTo"
      },
      "source": [
        "# Upload the model to Hugging Face & Download to test.\n",
        "\n",
        "Even though the model is no longer functional, let's at least check that it can be saved, uploaded, and reloaded correctly with the Transformers library."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "S2Ll_kqe5QzO"
      },
      "outputs": [],
      "source": [
        "output_dir = './pruned_llama_1b'\n",
        "if not os.path.exists(output_dir):\n",
        "    os.makedirs(output_dir)\n",
        "\n",
        "model.save_pretrained(output_dir)\n",
        "tokenizer.save_pretrained(output_dir)\n",
        "print(f\"Pruned model saved to {output_dir}\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3LjjsGZV5ZHJ"
      },
      "outputs": [],
      "source": [
        "# Push the model to your Hugging Face repository\n",
        "model.push_to_hub('pruned-llama-1b')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6xNN-aYa5h9B"
      },
      "outputs": [],
      "source": [
        "tokenizer.push_to_hub('pruned-llama-1b')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "81iZY5ru6B7h"
      },
      "outputs": [],
      "source": [
        "# Download the pruned model\n",
        "pruned_model_name = 'oopere/pruned-llama-1b'\n",
        "pruned_model = AutoModelForCausalLM.from_pretrained(pruned_model_name, torch_dtype=torch.float16).to(device)\n",
        "pruned_tokenizer = AutoTokenizer.from_pretrained(pruned_model_name)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "MqCTEMGq6E6i"
      },
      "outputs": [],
      "source": [
        "# Test the downloaded pruned model\n",
        "generated = get_output(prompt, pruned_model, pruned_tokenizer)\n",
        "print(f\"Generated text from downloaded pruned model: {generated}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y1KC2x0CurG2"
      },
      "source": [
        "#Conclusion.\n",
        "This notebook reminds us that a pruning process must consider the model’s structure and cannot simply reuse the same approach for models with different architectures.\n",
        "\n",
        "What previously worked perfectly with the distilGPT2 model has rendered the Llama 3.2 model completely unusable.\n",
        "\n",
        "## Future work.\n",
        "It's clear that the task for the next notebook will be to develop a pruning process that, while inspired by this one, is able to respect the model's structure and reduce its size without significantly impacting its functionality.\n",
        "\n",
        "##Author's Note.\n",
        "In addition to creating content like this notebook and offering it under the MIT license, I have also contributed to repositories such as those of Hugging Face and Google Gemini.\n",
        "\n",
        "I am especially proud of my book: <a href=\"https://amzn.to/4eanT1g\"><b>Large Language Models:</b> Apply and Implement Strategies for Large Language Models</a> (Apress).\n",
        "\n",
        "You can find it on both <a href=\"https://amzn.to/4eanT1g\">Amazon</a> and <a href=\"https://link.springer.com/book/10.1007/979-8-8688-0515-8\">Springer</a>, where they often have good deals on the purchase price.\n",
        "\n",
        "If you take a look and end up purchasing it, keep in mind that you can reach out with any questions via the Discussions section of this same repository or on any of my social media channels. I’ll do my best to respond as quickly as possible."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tEZXDfYoXG7m"
      },
      "outputs": [],
      "source": []
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "T4",
      "provenance": [],
      "authorship_tag": "ABX9TyNtyMzezZNzznP1h+a1anYg",
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}