{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_6b_Adaptive_Inference_Attention_Pruning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LV5KPbfseL8S"
      },
      "source": [
        "<div>\n",
        "    <h1>Large Language Models Projects</h1>\n",
        "    <h3>Apply and Implement Strategies for Large Language Models</h3>\n",
        "    <h2>Adaptive Attention Bypass</h2>\n",
        "    <h3>Sometimes, Not All Attention Is Needed</h3>\n",
        "</div>\n",
        "\n",
        "by [Pere Martra](https://www.linkedin.com/in/pere-martra/)\n",
        "\n",
        "_______\n",
        "Models: meta-llama/Llama-3.2\n",
        "\n",
        "Colab Environment: L4 GPU for the 3B model, T4 GPU for the 1B model.\n",
        "\n",
        "Keys:\n",
        "* Pruning\n",
        "* Attention\n",
        "* Adaptive Attention Bypass\n",
        "\n",
        "References:\n",
        "* [Resource-Efficient Transformer Pruning for Finetuning of Large Models](https://openaccess.thecvf.com/content/CVPR2024/html/Ilhan_Resource-Efficient_Transformer_Pruning_for_Finetuning_of_Large_Models_CVPR_2024_paper.html)\n",
        "\n",
        "_______\n",
        "**This pruning & optimization section may be further developed in a future book focused on engineering efficient and ethical LLMs**\n",
        "\n",
        "This is the unofficial repository for the book:\n",
        "        <a href=\"https://amzn.to/4eanT1g\"> <b>Large Language Models:</b> Apply and Implement Strategies for Large Language Models</a> (Apress).\n",
        "        The book is based on the content of this repository, but the notebooks are being updated, and I am incorporating new examples and chapters.\n",
        "        If you are looking for the official repository for the book, with the original notebooks, you should visit the\n",
        "        <a href=\"https://github.com/Apress/Large-Language-Models-Projects\">Apress repository</a>, where you can find all the notebooks in their original format as they appear in the book.\n",
        "\n",
        "**disclaimer**: The pruning / knowledge distillation sections were created after the first edition of the book was published. They are not included in the book’s original content but are intended to supplement and expand on the topics covered.\n",
        "______\n",
        "# Introduction\n",
        "This notebook introduces an innovative approach: **Adaptive Attention Bypass (AAB)**.\n",
        "\n",
        "It allows the model to dynamically decide how many attention layers to use based on the complexity of each input prompt. In this way, simple prompts are processed faster and consume fewer resources, while complex prompts maintain maximum quality by using all available layers.\n",
        "\n",
        "Currently, the attention layers are among the most redundant components in modern models, especially in the larger models of each family, because they must support disproportionately large context windows.\n",
        "\n",
        "With AAB, the model will choose the necessary number of layers for each prompt to perform its task. In the case of chatbots, this is especially useful, since at the beginning of the conversation a very low percentage of layers might be used, and as the size of the prompt increases with the entire conversation, the model can incorporate layers until it reaches 100%.\n",
        "\n",
        "This approach is compatible with already trained models (it does not require retraining) and can be combined with classical structured pruning techniques to maximize efficiency in production.\n",
        "\n",
        "Throughout this tutorial, we will see how to configure the model to decide how many layers to activate, how it measures the importance of its layers, and how it skips the execution of those that are not necessary for a specific prompt.\n",
        "\n",
        "# Methodology.\n",
        "\n",
        "The methodology implemented in this notebook follows these key steps:\n",
        "\n",
        "**Calibration of layer importance**: A series of prompts are used to measure the relative importance of each attention layer of the model, assigning a score to each based on its contribution to the result.\n",
        "\n",
        "**Calculation of prompt complexity**: For each input prompt, a lightweight, configurable complexity score is calculated, which combines:\n",
        "\n",
        "* Prompt length (number of tokens, normalized).\n",
        "* Semantic diversity (variance of input embeddings).\n",
        "\n",
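        "A minimal sketch of how such a score could be computed (illustrative only: the helper name, the normalization cap, and the variance scale are assumptions, while the weights mirror COMPLEXITY_WEIGHTS):\n",
        "\n",
        "```python\n",
        "from statistics import pvariance\n",
        "\n",
        "def sketch_complexity(token_ids, flat_embeddings, max_tokens=512, var_scale=10.0):\n",
        "    # Normalized token count, capped at 1.0.\n",
        "    length_score = min(len(token_ids) / max_tokens, 1.0)\n",
        "    # Embedding variance as a rough proxy for semantic diversity.\n",
        "    variance_score = min(pvariance(flat_embeddings) * var_scale, 1.0)\n",
        "    # Weighted combination, as in COMPLEXITY_WEIGHTS.\n",
        "    return 0.75 * length_score + 0.25 * variance_score\n",
        "\n",
        "score = sketch_complexity(list(range(64)), [0.1, -0.2, 0.05, 0.3])  # a value in [0, 1]\n",
        "```\n",
        "\n",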
        "**Adaptive assignment of active layers**: The complexity score and the size of the model determine how many layers should be active, using a parameterized continuous function that avoids abrupt jumps and allows a smooth transition between difficulty levels. The larger the model, the more layers can be bypassed.\n",
        "\n",
        "**Dynamic execution**: During inference, only the most important attention layers, considering the prompt score, are executed. The rest are “bypassed,” meaning their computation is omitted to save time and resources.\n",
        "\n",
        "**Flexible configuration**: The entire system is controlled by a configuration file (adaptive_config.json) that allows the method to be adapted to different model sizes, domains, and efficiency requirements.\n",
        "\n",
        "# Main uses and advantages.\n",
        "1. Optimization of models for specific sectors.\n",
        "2. Acceleration of inference in production.\n",
        "3. Reduction of model consumption.\n",
        "4. Chatbots and conversational assistants.\n",
        "5. Compatible with other techniques such as Quantization or structured Pruning.\n",
        "6. Does not require recovery through fine-tuning or Knowledge Distillation.\n",
        "______"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DwYeKwswnkTG"
      },
      "source": [
        "# Install libraries & Configure variables."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "PblPrYCiYTl8",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "bdbde816-cbe6-4c9a-ac95-168630848ebf"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m363.4/363.4 MB\u001b[0m \u001b[31m2.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m13.8/13.8 MB\u001b[0m \u001b[31m99.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m24.6/24.6 MB\u001b[0m \u001b[31m81.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m883.7/883.7 kB\u001b[0m \u001b[31m47.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m664.8/664.8 MB\u001b[0m \u001b[31m1.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m211.5/211.5 MB\u001b[0m \u001b[31m4.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m56.3/56.3 MB\u001b[0m \u001b[31m38.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m127.9/127.9 MB\u001b[0m \u001b[31m15.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m207.5/207.5 MB\u001b[0m \u001b[31m4.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m21.1/21.1 MB\u001b[0m \u001b[31m103.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m10.4/10.4 MB\u001b[0m \u001b[31m108.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m491.5/491.5 kB\u001b[0m \u001b[31m14.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m193.6/193.6 kB\u001b[0m \u001b[31m19.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
            "gcsfs 2025.3.2 requires fsspec==2025.3.2, but you have fsspec 2025.3.0 which is incompatible.\u001b[0m\u001b[31m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.5/50.5 kB\u001b[0m \u001b[31m2.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m51.8/51.8 kB\u001b[0m \u001b[31m5.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.9/3.9 MB\u001b[0m \u001b[31m73.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m84.0/84.0 kB\u001b[0m \u001b[31m7.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m243.3/243.3 kB\u001b[0m \u001b[31m24.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m104.1/104.1 kB\u001b[0m \u001b[31m11.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m91.1/91.1 kB\u001b[0m \u001b[31m9.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Building wheel for rouge-score (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "  Building wheel for sqlitedict (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "  Building wheel for word2number (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "Collecting hf_xet\n",
            "  Downloading hf_xet-1.1.2-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (879 bytes)\n",
            "Downloading hf_xet-1.1.2-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.2 MB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.2/5.2 MB\u001b[0m \u001b[31m62.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hInstalling collected packages: hf_xet\n",
            "Successfully installed hf_xet-1.1.2\n"
          ]
        }
      ],
      "source": [
        "!pip install -q torch==2.6.0\n",
        "!pip install -q torchvision==0.21.0\n",
        "!pip install -q transformers==4.51.3\n",
        "!pip install -q datasets==3.6.0\n",
        "!pip install -q lm-eval==0.4.8\n",
        "\n",
        "!pip install hf_xet #To speed up downloads from HF."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6qN0mu6IHqpy"
      },
      "outputs": [],
      "source": [
        "import logging\n",
        "import math\n",
        "import os\n",
        "import sys\n",
        "import shutil\n",
        "from copy import deepcopy\n",
        "\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "import json\n",
        "from transformers import AutoModelForCausalLM, AutoTokenizer\n"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "logging.basicConfig(level=logging.INFO)"
      ],
      "metadata": {
        "id": "1k0hnzPg302j"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# AAB Configuration.\n",
        "This section defines the key parameters that control the behavior of the Adaptive Attention Bypass (AAB). These parameters allow the system to be adjusted according to the model size and the difficulty of the prompt, achieving a balance between efficiency and response quality.\n",
        "\n",
        "**GLOBAL_COMPLEXITIES**: A list of predefined complexity scores. These values will be used later, for example, to test how the system responds to different levels of complexity or during calibration.\n",
        "\n",
        "**COMPLEXITY_WEIGHTS**: A dictionary that assigns weights to the different metrics we will use to calculate the complexity of a prompt. In this first version of AAB, \"token count\" and \"embedding variance\" are considered."
      ],
      "metadata": {
        "id": "hYdRIDoV8R3h"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "GLOBAL_COMPLEXITIES = [0.1, 0.3, 0.5, 0.7, 0.9]\n",
        "\n",
        "COMPLEXITY_WEIGHTS = {\n",
        "    \"token_count\": 0.75,\n",
        "    \"embedding_variance\": 0.25\n",
        "}"
      ],
      "metadata": {
        "id": "feRLhLsw8Voz"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**ADAPTIVE_CONFIG**: This is the main dictionary that contains the adaptation logic, in which the number of layers to bypass is decided depending on the model size.\n",
        "It is divided into two fundamental parts:\n",
        "* **model_size_ratios**: Defines, for different model size ranges, how the number of active layers is calculated. For each size and complexity level, a `min_ratio` and a `scaling_factor` are specified, which indicate how to scale the use of additional layers based on the complexity score. The idea is that larger models can afford to omit a higher percentage of layers in simple prompts.\n",
        "* **complexity_levels**: Establishes the thresholds for categorizing a prompt into one of five complexity levels: \"trivial,\" \"simple,\" \"medium,\" \"complex,\" \"very\\_complex,\" based on its calculated complexity score, which ranges from 0.0 to 1.0."
      ],
      "metadata": {
        "id": "HXE8M226he_N"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Only the 1B and 3B configurations have been tested.\n",
        "ADAPTIVE_CONFIG = {\n",
        "    # Model size-based ratios with proportional scaling to 100%\n",
        "    \"model_size_ratios\": {\n",
        "        \"70B+\": {\n",
        "            \"trivial\": {\"min_ratio\": 0.15, \"scaling_factor\": 0.85},\n",
        "            \"simple\": {\"min_ratio\": 0.35, \"scaling_factor\": 0.65},\n",
        "            \"medium\": {\"min_ratio\": 0.55, \"scaling_factor\": 0.45},\n",
        "            \"complex\": {\"min_ratio\": 0.75, \"scaling_factor\": 0.25},\n",
        "            \"very_complex\": {\"min_ratio\": 1.0, \"scaling_factor\": 0.0}\n",
        "        },\n",
        "        \"30B-70B\": {\n",
        "            \"trivial\": {\"min_ratio\": 0.25, \"scaling_factor\": 0.75},\n",
        "            \"simple\": {\"min_ratio\": 0.40, \"scaling_factor\": 0.60},\n",
        "            \"medium\": {\"min_ratio\": 0.60, \"scaling_factor\": 0.40},\n",
        "            \"complex\": {\"min_ratio\": 0.80, \"scaling_factor\": 0.20},\n",
        "            \"very_complex\": {\"min_ratio\": 1.0, \"scaling_factor\": 0.0}\n",
        "        },\n",
        "        \"10B-30B\": {\n",
        "            \"trivial\": {\"min_ratio\": 0.30, \"scaling_factor\": 0.75},\n",
        "            \"simple\": {\"min_ratio\": 0.45, \"scaling_factor\": 0.55},\n",
        "            \"medium\": {\"min_ratio\": 0.65, \"scaling_factor\": 0.35},\n",
        "            \"complex\": {\"min_ratio\": 0.82, \"scaling_factor\": 0.18},\n",
        "            \"very_complex\": {\"min_ratio\": 1.0, \"scaling_factor\": 0.0}\n",
        "        },\n",
        "        \"5B-10B\": {\n",
        "            \"trivial\": {\"min_ratio\": 0.45, \"scaling_factor\": 0.60},\n",
        "            \"simple\": {\"min_ratio\": 0.55, \"scaling_factor\": 0.45},\n",
        "            \"medium\": {\"min_ratio\": 0.75, \"scaling_factor\": 0.25},\n",
        "            \"complex\": {\"min_ratio\": 0.87, \"scaling_factor\": 0.13},\n",
        "            \"very_complex\": {\"min_ratio\": 1.0, \"scaling_factor\": 0.0}\n",
        "        },\n",
        "        \"2B-5B\": {\n",
        "            \"trivial\": {\"min_ratio\": 0.80, \"scaling_factor\": 0.55},\n",
        "            \"simple\": {\"min_ratio\": 0.87, \"scaling_factor\": 0.55},\n",
        "            \"medium\": {\"min_ratio\": 0.90, \"scaling_factor\": 0.30},\n",
        "            \"complex\": {\"min_ratio\": 0.95, \"scaling_factor\": 0.10},\n",
        "            \"very_complex\": {\"min_ratio\": 1.0, \"scaling_factor\": 0.0}\n",
        "        },\n",
        "        \"<2B\": {\n",
        "            \"trivial\": {\"min_ratio\": 0.85, \"scaling_factor\": 0.50},\n",
        "            \"simple\": {\"min_ratio\": 0.90, \"scaling_factor\": 0.35},\n",
        "            \"medium\": {\"min_ratio\": 0.93, \"scaling_factor\": 0.35},\n",
        "            \"complex\": {\"min_ratio\": 0.97, \"scaling_factor\": 0.05},\n",
        "            \"very_complex\": {\"min_ratio\": 1.0, \"scaling_factor\": 0.0}\n",
        "        }\n",
        "    },\n",
        "\n",
        "    # 5-level complexity thresholds and descriptions\n",
        "    \"complexity_levels\": {\n",
        "        \"trivial\": {\n",
        "            \"range\": [0.0, 0.2],\n",
        "        },\n",
        "        \"simple\": {\n",
        "            \"range\": [0.2, 0.4],\n",
        "        },\n",
        "        \"medium\": {\n",
        "            \"range\": [0.4, 0.6],\n",
        "        },\n",
        "        \"complex\": {\n",
        "            \"range\": [0.6, 0.8],\n",
        "        },\n",
        "        \"very_complex\": {\n",
        "            \"range\": [0.8, 1.0],\n",
        "        }\n",
        "    },\n",
        "}\n"
      ],
      "metadata": {
        "id": "lDhqfprweo_R"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Support & calculate functions\n",
        "Once the main configuration variables are defined: GLOBAL_COMPLEXITIES, COMPLEXITY_WEIGHTS, and ADAPTIVE_CONFIG, a set of auxiliary functions is created to interpret and apply this configuration effectively.\n",
        "\n",
        "These functions allow us to interact with the model and use the complexity scores to decide how many attention layers should remain active.\n"
      ],
      "metadata": {
        "id": "tlPEOEaj8b7k"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "**detect_model_size_category**: Inspects the loaded model and, based on the total number of its parameters, classifies it into one of the categories defined in ADAPTIVE_CONFIG.\n",
        "\n",
        "The function code has been kept simple for readability, but note that it must return exactly one of the category names defined in ADAPTIVE_CONFIG. Otherwise, the system will not correctly detect the model's category, and the ranges defined for its size will not be applied."
      ],
      "metadata": {
        "id": "O7BSkzHY8TjG"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def detect_model_size_category(model):\n",
        "    \"\"\"\n",
        "    Automatically detect model size category from model parameters.\n",
        "    \"\"\"\n",
        "    try:\n",
        "        total_params = sum(p.numel() for p in model.parameters())\n",
        "        size_billion = total_params / 1e9\n",
        "        print(f\"🔍 Detected model size: {size_billion:.2f}B parameters\")\n",
        "\n",
        "        # Define categories and their lower bounds (sorted descending)\n",
        "        # These keys must match ADAPTIVE_CONFIG[\"model_size_ratios\"]\n",
        "        size_categories = [\n",
        "            (70, \"70B+\"),\n",
        "            (30, \"30B-70B\"),\n",
        "            (10, \"10B-30B\"),\n",
        "            (5, \"5B-10B\"),\n",
        "            (2, \"2B-5B\"),\n",
        "        ]\n",
        "\n",
        "        for limit, category_name in size_categories:\n",
        "            if size_billion >= limit:\n",
        "                return category_name\n",
        "        return \"<2B\" # Default for smallest models\n",
        "\n",
        "    except RuntimeError as e: # Catch more specific PyTorch/tensor errors\n",
        "        print(f\"Error calculating model parameters: {e}\")\n",
        "    except Exception as e: # Broader fallback\n",
        "        print(f\"Unexpected error detecting model size: {e}\")\n",
        "\n",
        "    # Fallback if an error occurs. Consider if a more specific category or None is better.\n",
        "    # For this notebook, returning a default that exists in ADAPTIVE_CONFIG is safer.\n",
        "    print(\"Defaulting to '<2B' size category due to error.\")\n",
        "    return \"<2B\""
      ],
      "metadata": {
        "id": "z6WYnuh70Eq7"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**count_attention_layers_correctly**: To be able to determine a percentage of active layers, we first need to know precisely how many attention layers the model contains.\n",
        "\n",
        "This function counts those layers using several methods, trying the most reliable first and falling back to less reliable ones."
      ],
      "metadata": {
        "id": "Yl2RM65WBEsz"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def count_attention_layers_correctly(model):\n",
        "    \"\"\"\n",
        "    Correctly count attention layers.\n",
        "    Prioritizes Hugging Face model.config, then specific architectural patterns,\n",
        "    and finally module name inspection.\n",
        "    \"\"\"\n",
        "    # Method 1: Use model.config (most reliable for HF models)\n",
        "    if hasattr(model, 'config'):\n",
        "        config_attrs = ['num_hidden_layers', 'n_layer', 'num_layers', 'n_layers']\n",
        "        for attr in config_attrs:\n",
        "            if hasattr(model.config, attr) and getattr(model.config, attr) is not None:\n",
        "                num_layers = getattr(model.config, attr)\n",
        "                if isinstance(num_layers, int) and num_layers > 0:\n",
        "                    # print(f\"INFO: Found layer count '{num_layers}' via model.config.{attr}\")\n",
        "                    return num_layers\n",
        "\n",
        "    # Method 2: Direct access for known architectures (e.g., Llama, GPT-2)\n",
        "    # Fallback for model.model.layers (Llama-like)\n",
        "    if hasattr(model, 'model') and hasattr(model.model, 'layers') and isinstance(model.model.layers, torch.nn.ModuleList):\n",
        "        # print(f\"INFO: Found layer count '{len(model.model.layers)}' via model.model.layers\")\n",
        "        return len(model.model.layers)\n",
        "    # Fallback for model.transformer.h (GPT-2-like)\n",
        "    if hasattr(model, 'transformer') and hasattr(model.transformer, 'h') and isinstance(model.transformer.h, torch.nn.ModuleList):\n",
        "        # print(f\"INFO: Found layer count '{len(model.transformer.h)}' via model.transformer.h\")\n",
        "        return len(model.transformer.h)\n",
        "\n",
        "    # Method 3: Count main decoder layers by module inspection (more generic fallback)\n",
        "    # This was the original \"Method 1\"\n",
        "    decoder_layer_count = 0\n",
        "    # Potential common prefixes for layer containers in various architectures\n",
        "    # e.g. model.layers.0, model.transformer.h.0, model.encoder.layer.0\n",
        "    # We're looking for modules named like 'prefix.X' where X is an integer.\n",
        "    # Example: 'model.layers.0', 'model.layers.1', ...\n",
        "\n",
        "    # Attempt to identify the common parent module for layers if possible\n",
        "    parent_module_names = []\n",
        "    if hasattr(model, 'model') and hasattr(model.model, 'layers'): # Llama-like\n",
        "        parent_module_names.append(\"model.layers\")\n",
        "    elif hasattr(model, 'transformer') and hasattr(model.transformer, 'h'): # GPT-like\n",
        "        parent_module_names.append(\"transformer.h\")\n",
        "    # Add more known patterns if needed\n",
        "\n",
        "    if parent_module_names:\n",
        "        for parent_name in parent_module_names:\n",
        "            try:\n",
        "                parent_module = model.get_submodule(parent_name)\n",
        "                if isinstance(parent_module, torch.nn.ModuleList):\n",
        "                    # print(f\"INFO: Found layer count '{len(parent_module)}' via known parent '{parent_name}'\")\n",
        "                    return len(parent_module)\n",
        "            except AttributeError:\n",
        "                continue # Parent module not found, try next pattern\n",
        "\n",
        "    # Last resort: original more generic module type and name inspection\n",
        "    # This is less reliable as module names/types can vary widely.\n",
        "    # The original logic for this part can be kept if other methods fail.\n",
        "    # However, it's prone to overcounting or miscounting.\n",
        "    # For this tutorial, if the above direct methods fail, it implies an unknown arch.\n",
        "    # A fixed fallback or raising an error might be better than a potentially incorrect guess.\n",
        "\n",
        "    logging.warning(\n",
        "        \"Could not reliably determine attention layer count. \"\n",
        "        \"Using a conservative fallback of 16. \"\n",
        "        \"Please verify model architecture for accurate AAB.\"\n",
        "    )\n",
        "    return 16  # Conservative fallback"
      ],
      "metadata": {
        "id": "MDZKbB6d0FeB"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**classify_complexity_level**: Receives the numerical complexity score, a value between 0 and 1, which is calculated for each prompt, and assigns it to one of the predefined complexity levels in ADAPTIVE_CONFIG.\n"
      ],
      "metadata": {
        "id": "8fQohPsBBG9S"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def classify_complexity_level(complexity_score):\n",
        "    \"\"\"\n",
        "    Classify complexity score into one of 5 levels.\n",
        "    Assumes complexity_score is between 0.0 and 1.0.\n",
        "    \"\"\"\n",
        "    levels_config = ADAPTIVE_CONFIG[\"complexity_levels\"]\n",
        "\n",
        "    # Ensure \"very_complex\" is checked last if scores can reach 1.0\n",
        "    # or handle 1.0 explicitly.\n",
        "    # Iterate in a defined order if needed, e.g., by sorting keys or using OrderedDict\n",
        "    # For this specific config, order doesn't strictly matter due to non-overlapping ranges.\n",
        "\n",
        "    for level_name, level_config in levels_config.items():\n",
        "        min_val, max_val = level_config[\"range\"]\n",
        "        # Ensure that if complexity_score == 1.0, it's caught by \"very_complex\"\n",
        "        if level_name == \"very_complex\":\n",
        "            if min_val <= complexity_score <= max_val: # Use <= for the top category's max_val\n",
        "                return level_name\n",
        "        elif min_val <= complexity_score < max_val:\n",
        "            return level_name\n",
        "\n",
        "    # Fallback: if score is exactly 1.0 and \"very_complex\" range was [0.8, 1.0)\n",
        "    # this ensures 1.0 is classified correctly.\n",
        "    if complexity_score == 1.0 and \"very_complex\" in levels_config and \\\n",
        "       levels_config[\"very_complex\"][\"range\"][0] <= 1.0 <= levels_config[\"very_complex\"][\"range\"][1]:\n",
        "         return \"very_complex\"\n",
        "\n",
        "    # Default fallback if no range matches (e.g., score outside 0-1, or misconfig)\n",
        "    # For scores within 0-1, this should ideally not be reached if ranges are comprehensive.\n",
        "    logging.warning(f\"Complexity score {complexity_score} did not fit any defined range. Defaulting to trivial.\")\n",
        "    return \"trivial\""
      ],
      "metadata": {
        "id": "KjmmMFJX0Lr0"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**calculate_active_layers**: Integrates the information from the previous functions.\n",
        "\n",
        "It uses the total number of layers in the model, its size category, and the complexity score of the prompt to determine exactly how many attention layers should be activated.\n",
        "\n",
        "It applies the corresponding `min_ratio` and `scaling_factor`, defined in `ADAPTIVE_CONFIG[\"model_size_ratios\"]`, to calculate this number, ensuring that the model adapts the number of active attention layers dynamically and according to the configuration."
      ],
      "metadata": {
        "id": "XAzQAfRzBI5m"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def calculate_active_layers(total_layers, model_size_category, complexity_score):\n",
        "    \"\"\"\n",
        "    Calculate number of active layers based on complexity and model size\n",
        "\n",
        "    Args:\n",
        "        total_layers (int): Total number of attention layers\n",
        "        model_size_category (str): Model size category\n",
        "        complexity_score (float): Complexity score (0.0-1.0)\n",
        "\n",
        "    Returns:\n",
        "        tuple: (active_layers, complexity_level, min_guaranteed, max_possible)\n",
        "    \"\"\"\n",
        "    # Classify complexity level\n",
        "    complexity_level = classify_complexity_level(complexity_score)\n",
        "\n",
        "    # Get configuration for this model size and complexity\n",
        "    config = ADAPTIVE_CONFIG[\"model_size_ratios\"][model_size_category][complexity_level]\n",
        "    min_ratio = config[\"min_ratio\"]\n",
        "    scaling_factor = config[\"scaling_factor\"]\n",
        "\n",
        "    # Calculate layer counts\n",
        "    min_guaranteed = int(total_layers * min_ratio)\n",
        "    remaining_layers = total_layers - min_guaranteed\n",
        "    additional_layers = 0\n",
        "    if scaling_factor > 0:  # only scale up when scaling_factor is positive\n",
        "        additional_layers = int(complexity_score * scaling_factor * remaining_layers)\n",
        "    active_layers = min_guaranteed + additional_layers\n",
        "\n",
        "    # Ensure we don't exceed total layers\n",
        "    active_layers = min(active_layers, total_layers)\n",
        "    max_possible = total_layers  # Always can reach 100%\n",
        "\n",
        "    return active_layers, complexity_level, min_guaranteed, max_possible\n"
      ],
      "metadata": {
        "id": "BCmt5_BH0VsW"
      },
      "execution_count": null,
      "outputs": []
    },
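    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the layer-count arithmetic concrete, the next cell is a minimal, self-contained sketch. The `min_ratio` and `scaling_factor` values are illustrative placeholders, not the actual `ADAPTIVE_CONFIG` entries."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "execution_count": null,
      "outputs": [],
      "source": [
        "# Illustrative sketch of the active-layer arithmetic.\n",
        "# The ratio values are placeholders, not the real ADAPTIVE_CONFIG entries.\n",
        "total_layers = 28            # Llama-3.2-3B\n",
        "min_ratio = 0.75             # assumed minimum fraction of layers kept\n",
        "scaling_factor = 0.5         # assumed complexity-dependent scaling\n",
        "complexity_score = 0.6       # a medium-complexity prompt\n",
        "\n",
        "min_guaranteed = int(total_layers * min_ratio)                                 # 21\n",
        "remaining_layers = total_layers - min_guaranteed                               # 7\n",
        "additional_layers = int(complexity_score * scaling_factor * remaining_layers)  # 2\n",
        "active_layers = min(min_guaranteed + additional_layers, total_layers)\n",
        "print(active_layers)  # 23"
      ]
    },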
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "J8iRr5iapy5q"
      },
      "source": [
        "# Download & Study the Model.\n",
        "We download the model from Hugging Face and study its structure a bit.\n",
        "\n",
        "Although AAB is designed to be agnostic to the model structure, this notebook has only been tested with two models from the Llama family: Llama-3.2-1B and Llama-3.2-3B."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "u9IFTYa9P6Zy",
        "outputId": "815515bb-ab35-44f5-f768-cd88e70f6f87"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Using device: cuda\n"
          ]
        }
      ],
      "source": [
        "# Check if GPU is available\n",
        "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
        "print(f\"Using device: {device}\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sbnDW_tZRTGp"
      },
      "outputs": [],
      "source": [
        "#model_name = 'meta-llama/Llama-3.2-1B'\n",
        "model_name = 'meta-llama/Llama-3.2-3B'\n",
        "model = AutoModelForCausalLM.from_pretrained(model_name).to(device)\n",
        "tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
        "#tokenizer.pad_token = tokenizer.eos_token  # Set pad token"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "w0ifeungrbHw"
      },
      "source": [
        "## Study the structure.\n",
        "* Llama-3.2-1B\n",
        "```\n",
        "LlamaForCausalLM(\n",
        "  (model): LlamaModel(\n",
        "    (embed_tokens): Embedding(128256, 2048)\n",
        "    (layers): ModuleList(\n",
        "      (0-15): 16 x LlamaDecoderLayer(\n",
        "        (self_attn): LlamaSdpaAttention(\n",
        "          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
        "          (k_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
        "          (v_proj): Linear(in_features=2048, out_features=512, bias=False)\n",
        "          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)\n",
        "          (rotary_emb): LlamaRotaryEmbedding()\n",
        "        )\n",
        "        (mlp): LlamaMLP(\n",
        "          (gate_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
        "          (up_proj): Linear(in_features=2048, out_features=8192, bias=False)\n",
        "          (down_proj): Linear(in_features=8192, out_features=2048, bias=False)\n",
        "          (act_fn): SiLU()\n",
        "        )\n",
        "        (input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
        "        (post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)\n",
        "      )\n",
        "    )\n",
        "    (norm): LlamaRMSNorm((2048,), eps=1e-05)\n",
        "    (rotary_emb): LlamaRotaryEmbedding()\n",
        "  )\n",
        "  (lm_head): Linear(in_features=2048, out_features=128256, bias=False)\n",
        ")\n",
        "```\n",
        "\n",
        "\n",
        "The model follows the typical structure of modern Llama models, consisting of blocks made up of an Attention layer and an MLP layer with a GLU structure.\n",
        "\n",
        "> If you want to see an example of how to perform pruning on the MLP layers of the model, check out the notebook [Pruning Llama 3.2](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_3_pruning_structured_llama3.2-1b_OK.ipynb) and read the paper [Exploring GLU expansion ratios: Structured pruning in Llama-3.2 models](https://osf.io/preprints/osf/qgxea).\n",
        "\n",
        "\n",
        "Since the layers form a block, the attention layer cannot be removed without also removing the accompanying MLP layer. For this reason, the decision was made to bypass their execution during inference.\n",
        "\n",
        "The 1B model has 16 layers, as shown in the structure above, while the 3B model has 28 layers.\n"
      ]
    },
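    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Because bypassing is the core idea of AAB, the next cell is a toy sketch of the mechanism (not the notebook's actual implementation): a wrapper module that either runs a residual block or lets the hidden states pass through untouched."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "execution_count": null,
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "class ToyBlock(nn.Module):\n",
        "    \"\"\"Stand-in for a decoder block (attention + MLP) with a residual connection.\"\"\"\n",
        "    def __init__(self, dim):\n",
        "        super().__init__()\n",
        "        self.proj = nn.Linear(dim, dim)\n",
        "\n",
        "    def forward(self, x):\n",
        "        return x + self.proj(x)\n",
        "\n",
        "class Bypassable(nn.Module):\n",
        "    \"\"\"Wraps a block; when active is False, the block is skipped at inference.\"\"\"\n",
        "    def __init__(self, block):\n",
        "        super().__init__()\n",
        "        self.block = block\n",
        "        self.active = True\n",
        "\n",
        "    def forward(self, x):\n",
        "        return self.block(x) if self.active else x\n",
        "\n",
        "layer = Bypassable(ToyBlock(8))\n",
        "x = torch.randn(2, 8)\n",
        "layer.active = False\n",
        "print(torch.equal(layer(x), x))  # True: the bypassed block is a no-op"
      ]
    },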
    {
      "cell_type": "markdown",
      "source": [
        "## Testing AAB configuration.\n",
        "This code block analyzes the loaded model and simulates how many attention layers would be activated for different prompt complexity levels, based on the previously established functions and configuration.\n",
        "\n",
        "* It uses **count_attention_layers_correctly** to obtain the total number of attention layers in the model.\n",
        "\n",
        "* It calls **detect_model_size_category** to determine the model's size category.\n",
        "\n",
        "* It iterates through a list of predefined complexity scores to calculate the active layers by calling **calculate_active_layers**.\n",
        "\n",
        "* It prints the model information and how many layers would be activated for the different complexity scores.\n",
        "\n",
        "The Llama-3.2-3B model would have between 22 and 28 active layers depending on the prompt complexity."
      ],
      "metadata": {
        "id": "ffdi_MYvy2rX"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Test the configuration with clean, simplified output\n",
        "if 'model' in locals():\n",
        "    # Get model information with improved detection\n",
        "    total_attention_layers = count_attention_layers_correctly(model)\n",
        "    model_category = detect_model_size_category(model)\n",
        "\n",
        "    print(f\"\\n Model Analysis:\")\n",
        "    print(f\"   Attention layers: {total_attention_layers}\")\n",
        "    print(f\"   Size category: {model_category}\")\n",
        "    print(f\"   Architecture: {type(model).__name__}\")\n",
        "\n",
        "    # Show layer detection verification\n",
        "    print(f\"\\n Layer Detection Verification:\")\n",
        "    decoder_layers = [name for name, module in model.named_modules()\n",
        "                     if 'DecoderLayer' in type(module).__name__ and '.layers.' in name]\n",
        "    print(f\"   Found DecoderLayers: {len(decoder_layers)}\")\n",
        "\n",
        "    # Test all 5 complexity levels with simplified table\n",
        "    test_complexities = GLOBAL_COMPLEXITIES\n",
        "\n",
        "    print(\"\\n Layer Activation by Complexity Level:\")\n",
        "    print(\"=\" * 50)\n",
        "    print(f\"{'Level':<12} {'Active Layers':<15} {'Usage Ratio':<12}\")\n",
        "    print(\"-\" * 50)\n",
        "\n",
        "    for complexity in test_complexities:\n",
        "        active, level, min_guaranteed, max_possible = calculate_active_layers(\n",
        "            total_attention_layers, model_category, complexity\n",
        "        )\n",
        "        ratio = active / total_attention_layers\n",
        "\n",
        "        print(f\"{level.capitalize():<12} {active:<15} {ratio:<12.1%}\")\n",
        "\n",
        "    print(f\"\\n Summary for {model_category} model:\")\n",
        "    trivial_config = ADAPTIVE_CONFIG['model_size_ratios'][model_category]['trivial']\n",
        "    trivial_min = int(total_attention_layers * trivial_config['min_ratio'])\n",
        "    print(f\"   • Range: {trivial_min}-{total_attention_layers} layers ({trivial_min/total_attention_layers:.1%}-100%)\")\n",
        "    print(f\"   • All complexity levels can reach 100% layer usage\")\n",
        "\n",
        "else:\n",
        "    print(\" Load your model first to test the configuration\")\n",
        "    print(\"\\nTo test, make sure you have:\")\n",
        "    print(\"1. model = ... (your loaded model)\")\n",
        "    print(\"2. tokenizer = ... (optional, your tokenizer)\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "5rUlhopw8s63",
        "outputId": "3550bade-292f-4f75-e027-ce5a5b153db2"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "🔍 Detected model size: 3.21B parameters\n",
            "\n",
            " Model Analysis:\n",
            "   Attention layers: 28\n",
            "   Size category: 2B-5B\n",
            "   Architecture: LlamaForCausalLM\n",
            "\n",
            " Layer Detection Verification:\n",
            "   Found DecoderLayers: 28\n",
            "\n",
            " Layer Activation by Complexity Level:\n",
            "==================================================\n",
            "Level        Active Layers   Usage Ratio \n",
            "--------------------------------------------------\n",
            "Trivial      22              78.6%       \n",
            "Simple       24              85.7%       \n",
            "Medium       25              89.3%       \n",
            "Complex      26              92.9%       \n",
            "Very_complex 28              100.0%      \n",
            "\n",
            " Summary for 2B-5B model:\n",
            "   • Range: 22-28 layers (78.6%-100%)\n",
            "   • All complexity levels can reach 100% layer usage\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Inference function & Test Base Model\n",
        "\n",
        "The `get_output` function generates text and measures the time taken by each stage of the generation process.\n",
        "\n",
        "It provides insights into the performance of the model and can be used to evaluate the efficiency of text generation."
      ],
      "metadata": {
        "id": "vF4cHUb_rICs"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2igvy4z6rGgy"
      },
      "outputs": [],
      "source": [
        "import time\n",
        "\n",
        "def get_output(prompt, model=model, tokenizer=tokenizer, num_runs=1, max_length=50):\n",
        "    print(f\"--- get_output ENTERED. Prompt (first 30 chars): '{prompt[:30]}...' ---\")\n",
        "\n",
        "    total_time = 0\n",
        "    generated_outputs = []\n",
        "\n",
        "    for run in range(num_runs):\n",
        "        # Start timing\n",
        "        start_time = time.time()\n",
        "\n",
        "        # Tokenization time\n",
        "        token_start = time.time()\n",
        "        inputs = tokenizer(prompt, return_tensors='pt').to(device)\n",
        "        token_time = time.time() - token_start\n",
        "\n",
        "        # Generation time\n",
        "        gen_start = time.time()\n",
        "        outputs = model.generate(\n",
        "            inputs['input_ids'],\n",
        "            attention_mask=inputs['attention_mask'],\n",
        "            max_length=max_length,\n",
        "            num_return_sequences=1,\n",
        "            pad_token_id=tokenizer.pad_token_id,\n",
        "            temperature=None,\n",
        "            top_p=None,\n",
        "            do_sample=False,  # Disable sampling\n",
        "            num_beams=5,      # Use beam search\n",
        "            early_stopping=True,  # Stop when end-of-sequence token is generated\n",
        "            no_repeat_ngram_size=2  # Prevent repetition of 2-grams\n",
        "        )\n",
        "        gen_time = time.time() - gen_start\n",
        "\n",
        "        # Decoding time\n",
        "        decode_start = time.time()\n",
        "        generated = tokenizer.decode(outputs[0], skip_special_tokens=True)\n",
        "        decode_time = time.time() - decode_start\n",
        "\n",
        "        # Total time for this run\n",
        "        total_time += time.time() - start_time\n",
        "        generated_outputs.append(generated)\n",
        "\n",
        "        if num_runs > 1:\n",
        "            print(f\"\\nRun {run + 1}:\")\n",
        "        print(f\"Tokenization time: {token_time*1000:.2f} ms\")\n",
        "        print(f\"Generation time: {gen_time*1000:.2f} ms\")\n",
        "        print(f\"Decoding time: {decode_time*1000:.2f} ms\")\n",
        "        print(f\"Total time: {(time.time() - start_time)*1000:.2f} ms\")\n",
        "\n",
        "    if num_runs > 1:\n",
        "        avg_time = total_time / num_runs\n",
        "        print(f\"\\nAverage time over {num_runs} runs: {avg_time*1000:.2f} ms\")\n",
        "\n",
        "    return generated_outputs[0] if num_runs == 1 else generated_outputs"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "44e1c421-5c1f-4e7d-95a2-92f3fc3acde2",
        "id": "lH7cotAxrhO3"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "--- get_output ENTERED. Prompt (first 30 chars): 'Don't worry about ...' ---\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\n",
            "Run 1:\n",
            "Tokenization time: 1.96 ms\n",
            "Generation time: 3981.24 ms\n",
            "Decoding time: 0.30 ms\n",
            "Total time: 3983.60 ms\n",
            "\n",
            "Run 2:\n",
            "Tokenization time: 0.60 ms\n",
            "Generation time: 3023.31 ms\n",
            "Decoding time: 0.20 ms\n",
            "Total time: 3024.21 ms\n",
            "\n",
            "Average time over 2 runs: 3503.81 ms\n",
            "Generated text: [\"Don't worry about 5G, it's not coming to the UK until 2020 at the earliest, says Ofcom\\nThe UK's telecoms regulator has said that it doesn't expect to see the next generation of mobile networks in\", \"Don't worry about 5G, it's not coming to the UK until 2020 at the earliest, says Ofcom\\nThe UK's telecoms regulator has said that it doesn't expect to see the next generation of mobile networks in\"]\n"
          ]
        }
      ],
      "source": [
        "prompt = \"Don't worry about \"\n",
        "generated = get_output(prompt, num_runs=2)\n",
        "print(f\"Generated text: {generated}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "The text generation of the original model, as expected, works perfectly and returns a correct and meaningful sentence."
      ],
      "metadata": {
        "id": "mo4IjOYGry0W"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "model.to(\"cpu\")               # move the weights back to CPU\n",
        "torch.cuda.empty_cache()      # release cached GPU memory blocks"
      ],
      "metadata": {
        "id": "bLN1_gLdt7Rx"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vjgf4WA_vF3B"
      },
      "source": [
        "# Model Calibration\n",
        "\n",
        "As explained at the beginning of the notebook, it is essential to evaluate which layers most significantly modify the model's output in order to decide which ones should be bypassed.\n",
        "\n",
        "The layer evaluation process has been deliberately kept as simple as possible, using a single metric already employed in the notebook [6\\_6\\_pruning\\_attention\\_layers.ipynb](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_6_pruning_attention_layers.ipynb): the cosine distance between the layer's input and output.\n",
        "\n",
        "Unlike the previous notebook, where this distance was measured using only one example prompt, here a set of prompts with different complexities has been used.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "HvhSYyE2Rk1H"
      },
      "outputs": [],
      "source": [
        "# Using multiple prompts for calibration\n",
        "calibration_prompts = [\n",
        "    \"Hi\",\n",
        "    \"2+2=\",\n",
        "    \"Hello.\",\n",
        "    \"What is 2+2?\",\n",
        "    \"What is the capital of France?\",\n",
        "    \"Paris is the capital of \",\n",
        "    \"Tell me a joke.\",\n",
        "    \"Name the capital of Catalonia.\",\n",
        "    \"Who wrote 'To Kill a Mockingbird'?\",\n",
        "    \"Explain the basic principles of machine learning and how neural networks work.\",\n",
        "    \"What are the main causes of climate change and what can individuals do to help?\",\n",
        "    \"Summarize the plot of 'The Matrix' in one sentence.\",\n",
        "    \"List three benefits of regular exercise.\",\n",
        "    \"Compare and contrast the economic policies of Keynesian and Austrian schools of thought, analyzing their effectiveness during different historical periods and explaining which approach would be most suitable for addressing current global economic challenges.\",\n",
        "    \"Design a comprehensive strategy for a small tech startup to compete against established giants like Google and Microsoft in the cloud computing market, considering market positioning, technological differentiation, partnerships, and funding requirements.\",\n",
        "    \"The sky appears blue during the day, during the night you can see \",\n",
        "    \"Describe how a neural network learns from data.\",\n",
        "    \"Write a detailed philosophical essay examining the ethical implications of artificial intelligence consciousness, incorporating perspectives from utilitarian, deontological, and virtue ethics frameworks, while addressing counterarguments and proposing a novel ethical framework for AI development that balances technological progress with human values and societal well-being.\",\n",
        "    \"Develop a multidisciplinary research proposal that integrates quantum computing, biotechnology, and environmental science to address food security challenges in the context of climate change, including methodology, timeline, budget considerations, potential collaborations, risk assessment, and expected societal impact over the next two decades.\",\n",
        "    \"Given current economic trends, predict one challenge global markets may face in the next decade.\",\n",
        "    \"Write a short poem about the experience of learning something new.\",\n",
        "    \"Produce a 450-word technical tutorial that walks through implementing a transformer-based language model from scratch in NumPy, including positional encoding and scaled-dot-product attention.\",\n",
        "    \"As an expert in global macroeconomics, geopolitical risk assessment, and artificial intelligence ethics, write an in-depth policy advisory report for a coalition of G20 nations facing simultaneous systemic challenges, including post-pandemic inflation volatility, supply chain reconfiguration due to AI-driven automation, increasing regional instability in energy markets, and declining trust in democratic institutions. Your report should propose a coordinated strategy that balances fiscal stimulus with monetary restraint, integrates quantum-secure blockchain for supply chain transparency, and includes AI oversight frameworks aligned with both utilitarian and deontological ethical models. Additionally, evaluate how international institutions like the IMF and the World Bank could modernize their governance structures to reflect multipolar power dynamics, and assess the feasibility of adopting an intergovernmental AI alignment charter inspired by the Paris Agreement model. Your recommendations must be actionable, globally inclusive, and anticipate sociopolitical backlash from both populist and nationalist movements.\",\n",
        "    \"\"\"\n",
        "    Draft Integrated Strategic White-Paper for Inter-Agency Review—\n",
        "\n",
        "Executive Overview:\n",
        "This document synthesises cutting-edge research in climate science, planetary boundaries, quantum-enhanced computation, synthetic bio-manufacturing, neuro-symbolic artificial intelligence, behavioural economics, geopolitics, space-based energy infrastructure, and post-growth macro-finance. It is intended for cabinet-level policymakers across the G20, the African Union, and APEC, as well as multilateral lenders, sovereign wealth funds, philanthropic megadonors, and fourth-sector cooperative alliances.\n",
        "\n",
        "Section 1 – Macroeconomic Volatility & Post-Pandemic Debt Overhang\n",
        "1.1 Analyse the persistence of stagflationary pressures under divergent monetary regimes.\n",
        "1.2 Model cascading default scenarios using agent-based stress tests that incorporate climate-induced supply-chain interruptions, semiconductor chokepoints in Taiwan and the Netherlands, and maritime bottlenecks in the Suez and Panama Canals.\n",
        "1.3 Propose a menu of fiscal-monetary coordination instruments—helicopter stabilisation bonds, biodiversity-linked debt swaps, and anti-fragile carbon border adjustments—scaled to emerging-market liquidity traps.\n",
        "\n",
        "Section 2 – Planetary Health & Regenerative Bio-Economy\n",
        "2.1 Summarise findings from IPCC AR7 draft chapters on irreversible cryosphere tipping points.\n",
        "2.2 Evaluate next-generation direct air capture catalysis that leverages metal-organic frameworks seeded by engineered extremophilic microbes.\n",
        "2.3 Draft a governance blueprint for a Global Soil Microbiome Commons, incorporating indigenous data sovereignty protocols, fair-benefit-sharing algorithms, and quantum-secured telemetry for real-time biodiversity crediting.\n",
        "\n",
        "Section 3 – Quantum-Classical Hybrid Infrastructure\n",
        "3.1 Detail a phased roadmap for 1 000-qubit photonic processors coupled to error-mitigated superconducting qubits for combinatorial optimisation in logistics, drug-discovery, and lattice-QCD.\n",
        "3.2 Define open-standard interfaces that allow sovereign cloud providers to interoperate with NATO-grade zero-trust enclaves and NIST-post-quantum cryptographic suites.\n",
        "3.3 Recommend incentives for talent-mobility corridors bridging quantum start-up clusters in Toronto, Delft, Shenzhen, Sydney, and Kigali.\n",
        "\n",
        "Section 4 – Neuro-Symbolic AI & Alignment Governance\n",
        "4.1 Compare scaling-law extrapolations for transformers, mixture-of-experts, retrieval-augmented decoders, and recursive reasoning agents.\n",
        "4.2 Propose a multi-layer safety stack: interpretability probes, causal influence diagrams, counterfactual policy evaluation, and cooperative inverse-reinforcement architectures monitored by open-weight red-team sandboxes.\n",
        "4.3 Outline a treaty-grade AI Alignment Accord modelled after the Paris Agreement, featuring dynamic capability thresholds, compute-cluster registration, differential privacy audits, and a tiered sanctions regime enforced via programmable CBDCs.\n",
        "\n",
        "Section 5 – Security, Geopolitics & Space-Based Energy\n",
        "5.1 Assess escalation risks stemming from fractional-orbital bombardment systems, low-cost hypersonic glide vehicles, and AI-directed drone swarms.\n",
        "5.2 Present techno-economic viability of kilometre-scale solar power satellites in sun-synchronous orbit, with microwave beaming arrays utilising adaptive phased-conjugate mirrors.\n",
        "5.3 Recommend confidence-building measures: reciprocal on-site inspection, open telemetry APIs, catastrophe-bond insurance pools, and an International Orbital Commons Authority.\n",
        "\n",
        "Section 6 – Behavioural & Cultural Dynamics\n",
        "6.1 Integrate behavioural-nudge frameworks, narrative foresight, and social-network epistemic resilience analytics to counter disinformation loops.\n",
        "6.2 Design outcome-oriented citizen deliberation platforms that leverage quadratic voting, verifiable credentials, and language-agnostic dialogue agents with embedded bias-mitigation layers.\n",
        "\n",
        "Section 7 – Financing Mechanisms & Implementation Timeline\n",
        "7.1 Catalogue blended-finance instruments: catalytic first-loss capital, sovereign green sukuk, resilience impact derivatives, and decentralized autonomous project bonds.\n",
        "7.2 Map a ten-year Gantt chart with critical path analysis, specifying TRL-milestones, regulatory sandboxes, and adaptive procurement clauses.\n",
        "\n",
        "Call to Action:\n",
        "Conclude by articulating how cooperative mission-oriented investment, science-diplomacy trust architecture, and inclusive technology governance can converge to safeguard planetary health while enabling equitable prosperity within the safe-and-just operating space for humanity.\n",
        "    \"\"\"\n",
        "]"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "To measure the importance of the different layers, the **measure_layer_importance_simple** function is used.\n",
        "\n",
        "A forward pass is executed for each of the calibration prompts.\n",
        "\n",
        "Through the use of hooks, the input to `q_proj` and the output of `o_proj` are captured, and the cosine similarity between them is computed. The layers with the lowest similarity between input and output are the ones that contribute the most."
      ],
      "metadata": {
        "id": "-zmief1T8pt9"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def measure_layer_importance_simple(model, tokenizer, prompts):\n",
        "    \"\"\"Measure per-layer importance as the average cosine distance between attention input and output across prompts.\"\"\"\n",
        "    model.eval()\n",
        "    device = next(model.parameters()).device\n",
        "    total_layers = len(model.model.layers)\n",
        "\n",
        "    # Accumulate importance scores across all prompts\n",
        "    importance_acc = {idx: 0.0 for idx in range(total_layers)}\n",
        "\n",
        "    print(f\"📊 Processing {len(prompts)} prompts across {total_layers} layers...\")\n",
        "\n",
        "    for prompt_idx, prompt in enumerate(prompts):\n",
        "        print(f\"   Processing prompt {prompt_idx + 1}/{len(prompts)}\")\n",
        "\n",
        "        # Tokenize input (following original notebook pattern)\n",
        "        inputs = tokenizer(prompt, return_tensors=\"pt\").to(device)\n",
        "\n",
        "        # Storage for this prompt's layer inputs/outputs\n",
        "        layer_inputs = {}\n",
        "        layer_outputs = {}\n",
        "\n",
        "        # Create hooks (EXACTLY like the original function)\n",
        "        def q_proj_input_hook(layer_idx):\n",
        "            def _hook(module, module_input):\n",
        "                # Handle tuple input (following original pattern)\n",
        "                inp = module_input[0] if isinstance(module_input, tuple) else module_input\n",
        "                layer_inputs[layer_idx] = inp.detach().clone()\n",
        "            return _hook\n",
        "\n",
        "        def o_proj_output_hook(layer_idx):\n",
        "            def _hook(module, module_input, module_output):\n",
        "                # Handle tuple output (following original pattern)\n",
        "                out = module_output[0] if isinstance(module_output, tuple) else module_output\n",
        "                layer_outputs[layer_idx] = out.detach().clone()\n",
        "            return _hook\n",
        "\n",
        "        # Register hooks for ALL layers (not just unpruned ones)\n",
        "        handles = []\n",
        "        for idx in range(total_layers):\n",
        "            layer = model.model.layers[idx]\n",
        "            handles.append(layer.self_attn.q_proj.register_forward_pre_hook(q_proj_input_hook(idx)))\n",
        "            handles.append(layer.self_attn.o_proj.register_forward_hook(o_proj_output_hook(idx)))\n",
        "\n",
        "        # Forward pass (following original pattern)\n",
        "        with torch.no_grad():\n",
        "            _ = model(**inputs)\n",
        "\n",
        "        # Remove hooks (following original pattern)\n",
        "        for h in handles:\n",
        "            h.remove()\n",
        "\n",
        "        # Calculate importance for each layer (EXACTLY like original)\n",
        "        for idx in range(total_layers):\n",
        "            if idx in layer_inputs and idx in layer_outputs:\n",
        "                inp = layer_inputs[idx]\n",
        "                out = layer_outputs[idx]\n",
        "\n",
        "                # Flatten tensors (following original pattern)\n",
        "                inp_flat = inp.view(inp.size(0), -1)\n",
        "                out_flat = out.view(out.size(0), -1)\n",
        "\n",
        "                # Calculate similarity and importance (following original pattern)\n",
        "                similarity = F.cosine_similarity(inp_flat, out_flat, dim=1).mean().item()\n",
        "                importance_score = 1 - similarity\n",
        "                importance_acc[idx] += importance_score\n",
        "\n",
        "    # Average across all prompts\n",
        "    avg_importance = {idx: importance_acc[idx] / len(prompts) for idx in range(total_layers)}\n",
        "\n",
        "    print(\"✅ Layer importance measurement complete!\")\n",
        "    return avg_importance\n",
        "\n"
      ],
      "metadata": {
        "id": "6yrcseaPSBpb"
      },
      "execution_count": null,
      "outputs": []
    },
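    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The metric can be sanity-checked in isolation. In this toy sketch, random tensors stand in for real activations: a layer that leaves its input unchanged scores an importance near zero, while one that transforms it scores clearly higher."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "execution_count": null,
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "def importance(inp, out):\n",
        "    \"\"\"1 - mean cosine similarity between flattened input and output.\"\"\"\n",
        "    inp_flat = inp.view(inp.size(0), -1)\n",
        "    out_flat = out.view(out.size(0), -1)\n",
        "    sim = F.cosine_similarity(inp_flat, out_flat, dim=1).mean().item()\n",
        "    return 1 - sim\n",
        "\n",
        "torch.manual_seed(0)\n",
        "inp = torch.randn(1, 10, 256)  # stand-in for the q_proj input\n",
        "print(importance(inp, inp.clone()))                  # ~0.0: layer changes nothing\n",
        "print(importance(inp, inp + torch.randn_like(inp)))  # clearly > 0: layer transforms input"
      ]
    },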
    {
      "cell_type": "markdown",
      "source": [
        "The **create_adaptive_config_simple** function is the core of the calibration phase of our AAB system.\n",
        "\n",
        "Its mission is to take the model, a set of example prompts, and the global configuration that has been defined, to generate and save a detailed configuration file: \"adaptive_config.json\".\n",
        "\n",
        "This file will be the \"roadmap\" that the model will use to decide how many attention layers to activate."
      ],
      "metadata": {
        "id": "Uo_2OGKfA_Ud"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def create_adaptive_config_simple(model, tokenizer, prompts):\n",
        "    \"\"\"Create OPTIMIZED adaptive config - ultra-simple format for efficient inference\"\"\"\n",
        "    print(\"Creating optimized adaptive config...\")\n",
        "\n",
        "    # Step 1: Analyze model\n",
        "    model_size_category = detect_model_size_category(model)\n",
        "    total_layers = count_attention_layers_correctly(model)\n",
        "\n",
        "    # Step 2: Measure importance\n",
        "    print(\"Measuring layer importance...\")\n",
        "    importance_scores = measure_layer_importance_simple(model, tokenizer, prompts)\n",
        "\n",
        "    # Step 3: Create layers_by_importance (sorted list)\n",
        "    print(\"Creating layers_by_importance list...\")\n",
        "    sorted_layers = sorted(importance_scores.items(), key=lambda x: x[1], reverse=True)\n",
        "    layers_by_importance = [layer_idx for layer_idx, _ in sorted_layers]\n",
        "\n",
        "    # Step 4: Calculate complexity thresholds using existing notebook functions\n",
        "    print(\"Calculating complexity thresholds...\")\n",
        "    complexity_scores = GLOBAL_COMPLEXITIES\n",
        "    complexity_thresholds = {}\n",
        "\n",
        "    print(\"Using notebook functions to get exact layer counts:\")\n",
        "    for score in complexity_scores:\n",
        "        active_layers_count, _, _, _ = calculate_active_layers(\n",
        "            total_layers, model_size_category, score\n",
        "        )\n",
        "        complexity_thresholds[score] = active_layers_count\n",
        "        level_name = classify_complexity_level(score)\n",
        "        print(f\"   Score {score:3.1f} ({level_name:12}) → {active_layers_count:2d}/{total_layers} layers\")\n",
        "\n",
        "    # Step 5: Build OPTIMIZED config\n",
        "    print(\"⚙️ Building optimized configuration...\")\n",
        "    config = {\n",
        "        \"model_info\": {\n",
        "            \"name\": getattr(model.config, '_name_or_path', 'unknown'),\n",
        "            \"total_parameters\": f\"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B\",\n",
        "            \"size_category\": model_size_category,\n",
        "            \"total_layers\": total_layers,\n",
        "            \"architecture\": type(model).__name__\n",
        "        },\n",
        "        \"layers_by_importance\": layers_by_importance,\n",
        "        \"complexity_thresholds\": complexity_thresholds,\n",
        "        \"complexity_weights\": COMPLEXITY_WEIGHTS\n",
        "    }\n",
        "\n",
        "    # Step 6: Save optimized config\n",
        "    with open(\"adaptive_config.json\", \"w\") as f:\n",
        "        json.dump(config, f, indent=2)\n",
        "\n",
        "    print(\"✅ OPTIMIZED adaptive_config.json created!\")\n",
        "\n",
        "    # Show optimized results\n",
        "    print(f\"Model: {total_layers} layers, {model_size_category}\")\n",
        "    print(f\"Layers by importance: {layers_by_importance[:5]}... (showing first 5)\")\n",
        "    print(\"Complexity thresholds:\")\n",
        "    for threshold, count in complexity_thresholds.items():\n",
        "        percentage = (count / total_layers) * 100\n",
        "        level = classify_complexity_level(threshold)\n",
        "        print(f\"   {threshold:3.1f} ({level:12}): {count:2d} layers ({percentage:4.1f}%)\")\n",
        "\n",
        "    print(\"\\nULTRA-EFFICIENT RUNTIME FORMAT:\")\n",
        "\n",
        "    return config\n"
      ],
      "metadata": {
        "id": "bYRcO6Fg8ubm"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The `adaptive_config` variable will contain the configuration file that marks the importance of the layers."
      ],
      "metadata": {
        "id": "nyxIwAlbBg32"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create the OPTIMIZED adaptive config using existing calibration_prompts\n",
        "adaptive_config = create_adaptive_config_simple(model, tokenizer, calibration_prompts)\n",
        "\n",
        "print(f\"\\DONE! Optimized adaptive_config.json ready for AAB!\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "XRuB-fZ6VK5H",
        "outputId": "59728eb8-0dea-4b6c-991b-43c972fd0cf7"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Creating optimized adaptive config...\n",
            "🔍 Detected model size: 3.21B parameters\n",
            "Measuring layer importance...\n",
            "📊 Processing 22 prompts across 28 layers...\n",
            "   Processing prompt 1/22\n",
            "   Processing prompt 2/22\n",
            "   Processing prompt 3/22\n",
            "   Processing prompt 4/22\n",
            "   Processing prompt 5/22\n",
            "   Processing prompt 6/22\n",
            "   Processing prompt 7/22\n",
            "   Processing prompt 8/22\n",
            "   Processing prompt 9/22\n",
            "   Processing prompt 10/22\n",
            "   Processing prompt 11/22\n",
            "   Processing prompt 12/22\n",
            "   Processing prompt 13/22\n",
            "   Processing prompt 14/22\n",
            "   Processing prompt 15/22\n",
            "   Processing prompt 16/22\n",
            "   Processing prompt 17/22\n",
            "   Processing prompt 18/22\n",
            "   Processing prompt 19/22\n",
            "   Processing prompt 20/22\n",
            "   Processing prompt 21/22\n",
            "   Processing prompt 22/22\n",
            "✅ Layer importance measurement complete!\n",
            "Creating layers_by_importance list...\n",
            "Calculating complexity thresholds...\n",
            "Using notebook functions to get exact layer counts:\n",
            "   Score 0.1 (trivial     ) → 22/28 layers\n",
            "   Score 0.3 (simple      ) → 24/28 layers\n",
            "   Score 0.5 (medium      ) → 25/28 layers\n",
            "   Score 0.7 (complex     ) → 26/28 layers\n",
            "   Score 0.9 (very_complex) → 28/28 layers\n",
            "⚙️ Building optimized configuration...\n",
            "✅ OPTIMIZED adaptive_config.json created!\n",
            "Model: 28 layers, 2B-5B\n",
            "Layers by importance: [8, 9, 12, 10, 7]... (showing first 5)\n",
            "Complexity thresholds:\n",
            "   0.1 (trivial     ): 22 layers (78.6%)\n",
            "   0.3 (simple      ): 24 layers (85.7%)\n",
            "   0.5 (medium      ): 25 layers (89.3%)\n",
            "   0.7 (complex     ): 26 layers (92.9%)\n",
            "   0.9 (very_complex): 28 layers (100.0%)\n",
            "\n",
            "ULTRA-EFFICIENT RUNTIME FORMAT:\n",
            "\\DONE! Optimized adaptive_config.json ready for AAB!\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "adaptive_config"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ExBmu0i5AMvx",
        "outputId": "b2968a5a-73a2-4a98-f4dc-315b286fc545"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "{'model_info': {'name': 'meta-llama/Llama-3.2-3B',\n",
              "  'total_parameters': '3.21B',\n",
              "  'size_category': '2B-5B',\n",
              "  'total_layers': 28,\n",
              "  'architecture': 'LlamaForCausalLM'},\n",
              " 'layers_by_importance': [8,\n",
              "  9,\n",
              "  12,\n",
              "  10,\n",
              "  7,\n",
              "  0,\n",
              "  6,\n",
              "  27,\n",
              "  13,\n",
              "  5,\n",
              "  11,\n",
              "  14,\n",
              "  18,\n",
              "  4,\n",
              "  3,\n",
              "  15,\n",
              "  2,\n",
              "  1,\n",
              "  17,\n",
              "  21,\n",
              "  25,\n",
              "  24,\n",
              "  16,\n",
              "  22,\n",
              "  20,\n",
              "  26,\n",
              "  23,\n",
              "  19],\n",
              " 'complexity_thresholds': {0.1: 22, 0.3: 24, 0.5: 25, 0.7: 26, 0.9: 28},\n",
              " 'complexity_weights': {'token_count': 0.75, 'embedding_variance': 0.25}}"
            ]
          },
          "metadata": {},
          "execution_count": 20
        }
      ]
    },
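    {
      "cell_type": "markdown",
      "source": [
        "The thresholds and the importance ranking above are everything the runtime side of AAB needs. As a minimal sketch (the helper name `select_active_layers` is hypothetical; the values are copied from the config printed above), mapping a complexity score to a set of active layers amounts to picking the largest threshold that does not exceed the score and keeping that many of the most important layers:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Minimal sketch of the runtime lookup, using the values printed above.\n",
        "cfg = {\n",
        "    'layers_by_importance': [8, 9, 12, 10, 7, 0, 6, 27, 13, 5, 11, 14,\n",
        "                             18, 4, 3, 15, 2, 1, 17, 21, 25, 24, 16,\n",
        "                             22, 20, 26, 23, 19],\n",
        "    'complexity_thresholds': {0.1: 22, 0.3: 24, 0.5: 25, 0.7: 26, 0.9: 28},\n",
        "}\n",
        "\n",
        "def select_active_layers(cfg, score):\n",
        "    # Pick the largest threshold that does not exceed the score\n",
        "    # (fall back to the smallest one for trivial prompts).\n",
        "    thresholds = sorted(cfg['complexity_thresholds'])\n",
        "    chosen = thresholds[0]\n",
        "    for t in thresholds:\n",
        "        if score >= t:\n",
        "            chosen = t\n",
        "    n_active = cfg['complexity_thresholds'][chosen]\n",
        "    # Keep the n_active most important layers; the rest are bypassed.\n",
        "    return sorted(cfg['layers_by_importance'][:n_active])\n",
        "\n",
        "print(len(select_active_layers(cfg, 0.55)))  # 25 active layers for a medium prompt"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },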
    {
      "cell_type": "markdown",
      "source": [
        "# Test prompt complexity\n",
        "This function is one of the most important in the entire notebook and one of the most critical. It is not only used in the calibration process, where the importance of the layers is decided, but the same code must also be used at inference time to classify the prompt depending on its complexity.\n",
        "\n",
        "It calculates a prompt complexity score, between 0 and 1. The calculation considers two variables: the length of the prompt and the variance of the prompt's embeddings.\n",
        "\n",
        "The calculation has been kept simple because it must be executed upon receiving each prompt and should not add computation time to the model."
      ],
      "metadata": {
        "id": "X7OMW6WzMCyU"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def analyze_prompt_complexity(prompts, config, model, tokenizer, verbose: bool = False):\n",
        "    \"\"\"\n",
        "    Compute a complexity score in [0, 1] for each prompt.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    prompts : list[str]\n",
        "        The text prompts to score.\n",
        "    config : dict\n",
        "        adaptive_config.json already loaded as dict.\n",
        "    model : transformers.PreTrainedModel\n",
        "        The HF model (on CPU or GPU).\n",
        "    tokenizer : transformers.PreTrainedTokenizer\n",
        "        Matching tokenizer.\n",
        "    verbose : bool\n",
        "        If True, print a per-prompt breakdown.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    list[tuple[str, float]]\n",
        "        (prompt, complexity_score) for each input string.\n",
        "    \"\"\"\n",
        "\n",
        "    # Get model size and device\n",
        "\n",
        "    device = next(model.parameters()).device\n",
        "    total_params = sum(p.numel() for p in model.parameters())\n",
        "    size_billion = total_params / 1e9\n",
        "    MIN_TOKENS = 4\n",
        "    # Unified size adjustment factor\n",
        "    # Small models (< 2B) get boost, large models (> 10B) get dampening\n",
        "    size_factor = 1.0 + (2.0 - size_billion) * 0.1\n",
        "    size_factor = max(0.5, min(2.0, size_factor))  # Clamp between 0.5 and 2.0\n",
        "\n",
        "\n",
        "    # Length reference scaled by model size\n",
        "    # Smaller models reach max complexity with shorter prompts\n",
        "    base_length = 2000\n",
        "    length_reference = base_length / size_factor\n",
        "    variance_saturation = length_reference / 15\n",
        "\n",
        "    # Get weights from config\n",
        "    weights = config.get(\"complexity_weights\", {\n",
        "        \"token_count\": 0.65,\n",
        "        \"embedding_variance\": 0.35\n",
        "    })\n",
        "\n",
        "    results = []\n",
        "\n",
        "    for prompt in prompts:\n",
        "        # Tokenize\n",
        "        ids = tokenizer(prompt, return_tensors=\"pt\")[\"input_ids\"][0].to(device)\n",
        "        n_tokens = ids.size(0)\n",
        "\n",
        "        # 1. TOKEN SCORE - Simple logarithmic scaling\n",
        "        # Maps token count to [0, 1] with smooth growth\n",
        "        token_score = math.log1p(n_tokens) / math.log1p(length_reference)\n",
        "        token_score = min(token_score * size_factor, 1.0)\n",
        "        if n_tokens < MIN_TOKENS:\n",
        "          dampening = (n_tokens / MIN_TOKENS) ** 2  # Quadratic dampening\n",
        "          token_score = token_score * dampening\n",
        "\n",
        "        # 2. EMBEDDING VARIANCE - Semantic diversity\n",
        "        with torch.no_grad():\n",
        "            emb = model.get_input_embeddings()(ids.unsqueeze(0)).squeeze(0).float()\n",
        "            n = emb.size(0)\n",
        "\n",
        "            if n < 3:\n",
        "                # Too few tokens for meaningful variance\n",
        "                emb_variance = 0.0\n",
        "            else:\n",
        "                # Normalize embeddings\n",
        "                norm_emb = torch.nn.functional.normalize(emb, p=2, dim=1)\n",
        "\n",
        "                # Compute pairwise cosine similarities\n",
        "                sim_matrix = torch.matmul(norm_emb, norm_emb.t())\n",
        "\n",
        "                # Get off-diagonal elements (exclude self-similarity)\n",
        "                mask = ~torch.eye(n, dtype=bool, device=device)\n",
        "                off_diag_sim = sim_matrix[mask]\n",
        "\n",
        "                # Variance = 1 - mean similarity\n",
        "                # Higher variance = more diverse embeddings\n",
        "                emb_variance = 1.0 - off_diag_sim.mean().item()\n",
        "\n",
        "                # Scale by length (longer prompts naturally have more variance)\n",
        "                length_scale = min(n_tokens / variance_saturation, 1.0)\n",
        "                emb_variance = emb_variance * length_scale\n",
        "\n",
        "        # 3. FINAL SCORE - Weighted combination\n",
        "        complexity_score = (\n",
        "            weights[\"token_count\"] * token_score +\n",
        "            weights[\"embedding_variance\"] * emb_variance\n",
        "        )\n",
        "        complexity_score = max(0.0, min(complexity_score, 1.0))\n",
        "\n",
        "        if verbose:\n",
        "            prompt_preview = (prompt[:57] + \"…\") if len(prompt) > 60 else prompt\n",
        "            print(f\"{prompt_preview:<60} | \"\n",
        "                  f\"score={complexity_score:.3f} | \"\n",
        "                  f\"tokens={n_tokens} \"\n",
        "                  f\"[tok={token_score:.3f} var={emb_variance:.3f}]\")\n",
        "\n",
        "        results.append((prompt, round(complexity_score, 4)))\n",
        "\n",
        "    return results"
      ],
      "metadata": {
        "id": "P_C8-flogsDJ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "As can be seen in the list of results, the function is able to correctly evaluate the complexity of the prompts.\n",
        "\n",
        "The results obtained are different when executed with the 3B model compared to the 1B model, which demonstrates that it takes into account the model that will have to process the prompt to decide its complexity."
      ],
      "metadata": {
        "id": "FKQdqkO8B0Bv"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "analyze_prompt_complexity(calibration_prompts, adaptive_config, model,  tokenizer, verbose=True)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "f8_F7kJqOVbK",
        "outputId": "d4dc197c-7838-47c7-a7be-65d78a4e61ea"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Hi                                                           | score=0.023 | tokens=2 [tok=0.031 var=0.000]\n",
            "2+2=                                                         | score=0.159 | tokens=5 [tok=0.204 var=0.026]\n",
            "Hello.                                                       | score=0.072 | tokens=3 [tok=0.089 var=0.021]\n",
            "What is 2+2?                                                 | score=0.199 | tokens=8 [tok=0.250 var=0.046]\n",
            "What is the capital of France?                               | score=0.200 | tokens=8 [tok=0.250 var=0.050]\n",
            "Paris is the capital of                                      | score=0.188 | tokens=7 [tok=0.236 var=0.043]\n",
            "Tell me a joke.                                              | score=0.176 | tokens=6 [tok=0.221 var=0.040]\n",
            "Name the capital of Catalonia.                               | score=0.189 | tokens=7 [tok=0.236 var=0.046]\n",
            "Who wrote 'To Kill a Mockingbird'?                           | score=0.230 | tokens=11 [tok=0.282 var=0.072]\n",
            "Explain the basic principles of machine learning and how …   | score=0.260 | tokens=15 [tok=0.315 var=0.096]\n",
            "What are the main causes of climate change and what can i…   | score=0.272 | tokens=17 [tok=0.329 var=0.104]\n",
            "Summarize the plot of 'The Matrix' in one sentence.          | score=0.260 | tokens=15 [tok=0.315 var=0.095]\n",
            "List three benefits of regular exercise.                     | score=0.200 | tokens=8 [tok=0.250 var=0.052]\n",
            "Compare and contrast the economic policies of Keynesian a…   | score=0.372 | tokens=38 [tok=0.416 var=0.240]\n",
            "Design a comprehensive strategy for a small tech startup …   | score=0.372 | tokens=38 [tok=0.416 var=0.238]\n",
            "The sky appears blue during the day, during the night you…   | score=0.266 | tokens=16 [tok=0.322 var=0.097]\n",
            "Describe how a neural network learns from data.              | score=0.221 | tokens=10 [tok=0.273 var=0.065]\n",
            "Write a detailed philosophical essay examining the ethica…   | score=0.431 | tokens=55 [tok=0.458 var=0.350]\n",
            "Develop a multidisciplinary research proposal that integr…   | score=0.478 | tokens=72 [tok=0.488 var=0.451]\n",
            "Write a short poem about the experience of learning somet…   | score=0.245 | tokens=13 [tok=0.300 var=0.082]\n",
            "Produce a 450-word technical tutorial that walks through …   | score=0.699 | tokens=211 [tok=0.609 var=0.968]\n",
            "\n",
            "    Draft Integrated Strategic White-Paper for Inter-Age…   | score=0.825 | tokens=904 [tok=0.774 var=0.978]\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[('Hi', 0.0234),\n",
              " ('2+2=', 0.1594),\n",
              " ('Hello.', 0.0716),\n",
              " ('What is 2+2?', 0.1987),\n",
              " ('What is the capital of France?', 0.1999),\n",
              " ('Paris is the capital of ', 0.188),\n",
              " ('Tell me a joke.', 0.1758),\n",
              " ('Name the capital of Catalonia.', 0.1888),\n",
              " (\"Who wrote 'To Kill a Mockingbird'?\", 0.2298),\n",
              " ('Explain the basic principles of machine learning and how neural networks work.',\n",
              "  0.2603),\n",
              " ('What are the main causes of climate change and what can individuals do to help?',\n",
              "  0.2723),\n",
              " (\"Summarize the plot of 'The Matrix' in one sentence.\", 0.2601),\n",
              " ('List three benefits of regular exercise.', 0.2003),\n",
              " ('Compare and contrast the economic policies of Keynesian and Austrian schools of thought, analyzing their effectiveness during different historical periods and explaining which approach would be most suitable for addressing current global economic challenges.',\n",
              "  0.3723),\n",
              " ('Design a comprehensive strategy for a small tech startup to compete against established giants like Google and Microsoft in the cloud computing market, considering market positioning, technological differentiation, partnerships, and funding requirements.',\n",
              "  0.3717),\n",
              " ('The sky appears blue during the day, during the night you can see ',\n",
              "  0.2658),\n",
              " ('Describe how a neural network learns from data.', 0.2206),\n",
              " ('Write a detailed philosophical essay examining the ethical implications of artificial intelligence consciousness, incorporating perspectives from utilitarian, deontological, and virtue ethics frameworks, while addressing counterarguments and proposing a novel ethical framework for AI development that balances technological progress with human values and societal well-being.',\n",
              "  0.4307),\n",
              " ('Develop a multidisciplinary research proposal that integrates quantum computing, biotechnology, and environmental science to address food security challenges in the context of climate change, including methodology, timeline, budget considerations, potential collaborations, risk assessment, and expected societal impact over the next two decades.Given current economic trends, predict one challenge global markets may face in the next decade.',\n",
              "  0.4785),\n",
              " ('Write a short poem about the experience of learning something new.',\n",
              "  0.2454),\n",
              " ('Produce a 450-word technical tutorial that walks through implementing a transformer-based language model from scratch in NumPy, including positional encoding and scaled-dot-product attention.As an expert in global macroeconomics, geopolitical risk assessment, and artificial intelligence ethics, write an in-depth policy advisory report for a coalition of G20 nations facing simultaneous systemic challenges, including post-pandemic inflation volatility, supply chain reconfiguration due to AI-driven automation, increasing regional instability in energy markets, and declining trust in democratic institutions. Your report should propose a coordinated strategy that balances fiscal stimulus with monetary restraint, integrates quantum-secure blockchain for supply chain transparency, and includes AI oversight frameworks aligned with both utilitarian and deontological ethical models. Additionally, evaluate how international institutions like the IMF and the World Bank could modernize their governance structures to reflect multipolar power dynamics, and assess the feasibility of adopting an intergovernmental AI alignment charter inspired by the Paris Agreement model. Your recommendations must be actionable, globally inclusive, and anticipate sociopolitical backlash from both populist and nationalist movements.',\n",
              "  0.6988),\n",
              " ('\\n    Draft Integrated Strategic White-Paper for Inter-Agency Review—\\n\\nExecutive Overview:\\nThis document synthesises cutting-edge research in climate science, planetary boundaries, quantum-enhanced computation, synthetic bio-manufacturing, neuro-symbolic artificial intelligence, behavioural economics, geopolitics, space-based energy infrastructure, and post-growth macro-finance. It is intended for cabinet-level policymakers across the G20, the African Union, and APEC, as well as multilateral lenders, sovereign wealth funds, philanthropic megadonors, and fourth-sector cooperative alliances.\\n\\nSection 1 – Macroeconomic Volatility & Post-Pandemic Debt Overhang\\n1.1\\u2003Analyse the persistence of stagflationary pressures under divergent monetary regimes.\\n1.2\\u2003Model cascading default scenarios using agent-based stress tests that incorporate climate-induced supply-chain interruptions, semiconductor chokepoints in Taiwan and the Netherlands, and maritime bottlenecks in the Suez and Panama Canals.\\n1.3\\u2003Propose a menu of fiscal-monetary coordination instruments—helicopter stabilisation bonds, biodiversity-linked debt swaps, and anti-fragile carbon border adjustments—scaled to emerging-market liquidity traps.\\n\\nSection 2 – Planetary Health & Regenerative Bio-Economy\\n2.1\\u2003Summarise findings from IPCC AR7 draft chapters on irreversible cryosphere tipping points.\\n2.2\\u2003Evaluate next-generation direct air capture catalysis that leverages metal-organic frameworks seeded by engineered extremophilic microbes.\\n2.3\\u2003Draft a governance blueprint for a Global Soil Microbiome Commons, incorporating indigenous data sovereignty protocols, fair-benefit-sharing algorithms, and quantum-secured telemetry for real-time biodiversity crediting.\\n\\nSection 3 – Quantum-Classical Hybrid Infrastructure\\n3.1\\u2003Detail a phased roadmap for 1 000-qubit photonic processors coupled to error-mitigated superconducting qubits for 
combinatorial optimisation in logistics, drug-discovery, and lattice-QCD.\\n3.2\\u2003Define open-standard interfaces that allow sovereign cloud providers to interoperate with NATO-grade zero-trust enclaves and NIST-post-quantum cryptographic suites.\\n3.3\\u2003Recommend incentives for talent-mobility corridors bridging quantum start-up clusters in Toronto, Delft, Shenzhen, Sydney, and Kigali.\\n\\nSection 4 – Neuro-Symbolic AI & Alignment Governance\\n4.1\\u2003Compare scaling-law extrapolations for transformers, mixture-of-experts, retrieval-augmented decoders, and recursive reasoning agents.\\n4.2\\u2003Propose a multi-layer safety stack: interpretability probes, causal influence diagrams, counterfactual policy evaluation, and cooperative inverse-reinforcement architectures monitored by open-weight red-team sandboxes.\\n4.3\\u2003Outline a treaty-grade AI Alignment Accord modelled after the Paris Agreement, featuring dynamic capability thresholds, compute-cluster registration, differential privacy audits, and a tiered sanctions regime enforced via programmable CBDCs.\\n\\nSection 5 – Security, Geopolitics & Space-Based Energy\\n5.1\\u2003Assess escalation risks stemming from fractional-orbital bombardment systems, low-cost hypersonic glide vehicles, and AI-directed drone swarms.\\n5.2\\u2003Present techno-economic viability of kilometre-scale solar power satellites in sun-synchronous orbit, with microwave beaming arrays utilising adaptive phased-conjugate mirrors.\\n5.3\\u2003Recommend confidence-building measures: reciprocal on-site inspection, open telemetry APIs, catastrophe-bond insurance pools, and an International Orbital Commons Authority.\\n\\nSection 6 – Behavioural & Cultural Dynamics\\n6.1\\u2003Integrate behavioural-nudge frameworks, narrative foresight, and social-network epistemic resilience analytics to counter disinformation loops.\\n6.2\\u2003Design outcome-oriented citizen deliberation platforms that leverage quadratic voting, verifiable 
credentials, and language-agnostic dialogue agents with embedded bias-mitigation layers.\\n\\nSection 7 – Financing Mechanisms & Implementation Timeline\\n7.1\\u2003Catalogue blended-finance instruments: catalytic first-loss capital, sovereign green sukuk, resilience impact derivatives, and decentralized autonomous project bonds.\\n7.2\\u2003Map a ten-year Gantt chart with critical path analysis, specifying TRL-milestones, regulatory sandboxes, and adaptive procurement clauses.\\n\\nCall to Action:\\nConclude by articulating how cooperative mission-oriented investment, science-diplomacy trust architecture, and inclusive technology governance can converge to safeguard planetary health while enabling equitable prosperity within the safe-and-just operating space for humanity.\\n    ',\n",
              "  0.8249)]"
            ]
          },
          "metadata": {},
          "execution_count": 22
        }
      ]
    },
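    {
      "cell_type": "markdown",
      "source": [
        "To make the model-size dependence concrete, the token-score component of `analyze_prompt_complexity` can be isolated (same formula as in the function above) and evaluated for two model sizes. The same 50-token prompt scores higher on a ~1B model than on a ~3B one, because the smaller model uses a shorter length reference and a size factor above 1:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import math\n",
        "\n",
        "# Token-score component of analyze_prompt_complexity, isolated for illustration.\n",
        "def token_score(n_tokens, size_billion, base_length=2000):\n",
        "    # Models under 2B get a boost; larger models get dampening, clamped to [0.5, 2.0]\n",
        "    size_factor = max(0.5, min(2.0, 1.0 + (2.0 - size_billion) * 0.1))\n",
        "    length_reference = base_length / size_factor\n",
        "    score = math.log1p(n_tokens) / math.log1p(length_reference)\n",
        "    return min(score * size_factor, 1.0)\n",
        "\n",
        "for size in (1.24, 3.21):\n",
        "    print(f'{size:.2f}B model -> token_score(50 tokens) = {token_score(50, size):.3f}')"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },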
    {
      "cell_type": "markdown",
      "source": [
        "# AAB Implementation\n",
        "In this section, the classes and functions that will modify the model's behavior are defined to allow it to dynamically skip attention layers based on the prompt complexity calculated in real-time."
      ],
      "metadata": {
        "id": "fu6QaAE0iK9r"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from typing import Dict, List, Tuple, Optional, Union\n",
        "import logging"
      ],
      "metadata": {
        "id": "x1Ep1jh64LP6"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The **LayerActivationMask** class is like an external \"control panel\" that decides and remembers which attention layers of the model should work and which can be skipped for a given prompt.\n",
        "\n",
        "Its design seeks to keep this activation logic separate from the model's internal code, resulting in a cleaner and more modular system.\n",
        "\n",
        "Some of the functions are only for obtaining more information during execution in the notebook, but they are not necessary for the final code."
      ],
      "metadata": {
        "id": "DqjsLCOtH6Yh"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "class LayerActivationMask:\n",
        "    \"\"\"\n",
        "    External mask system to control which attention layers are active at inference time.\n",
        "    Keeps a clean separation from model internals and allows dynamic updates per prompt.\n",
        "    \"\"\"\n",
        "    def __init__(self, total_layers: int):\n",
        "        self.total_layers = total_layers\n",
        "        # Boolean mask: True means this layer is active for the current inference\n",
        "        self.active_mask = [True] * total_layers\n",
        "        # The latest prompt complexity score (float between 0 and 1)\n",
        "        self.current_complexity = None\n",
        "        # How many layers are currently active\n",
        "        self.current_active_count = total_layers\n",
        "\n",
        "        # --- Debug and tracking variables ---\n",
        "        # Detailed log of which layers were executed or bypassed for each inference\n",
        "        self.execution_log = []\n",
        "        # Unique ID for each inference pass (useful for debugging multiple calls)\n",
        "        self.current_inference_id = 0\n",
        "        # Sequence length tracking for special triggers (e.g., layer 0 activation)\n",
        "        self.last_sequence_length = 0\n",
        "\n",
        "    def update_for_prompt(self, active_layer_indices: List[int], complexity_score: float):\n",
        "        \"\"\"\n",
        "        Update the active mask for the current prompt.\n",
        "        Should be called before inference, after computing prompt complexity.\n",
        "        \"\"\"\n",
        "        self.active_mask = [i in active_layer_indices for i in range(self.total_layers)]\n",
        "        self.current_complexity = complexity_score\n",
        "        self.current_active_count = len(active_layer_indices)\n",
        "        # Reset the execution log for this new inference\n",
        "        self.execution_log = []\n",
        "        self.current_inference_id += 1\n",
        "\n",
        "    def is_layer_active(self, layer_idx: int) -> bool:\n",
        "        \"\"\"\n",
        "        Returns True if the given layer should be active for this inference.\n",
        "        \"\"\"\n",
        "        return self.active_mask[layer_idx]\n",
        "\n",
        "    def get_stats(self) -> Dict:\n",
        "        \"\"\"\n",
        "        Returns a summary of the current mask status.\n",
        "        Includes complexity score, number of active layers, and ratio.\n",
        "        \"\"\"\n",
        "        return {\n",
        "            'complexity_score': self.current_complexity,\n",
        "            'active_layers': self.current_active_count,\n",
        "            'total_layers': self.total_layers,\n",
        "            'usage_ratio': self.current_active_count / self.total_layers if self.total_layers else 0.0,\n",
        "            'initialized': self.current_complexity is not None\n",
        "        }\n",
        "\n",
        "    def log_layer_execution(self, layer_idx: int, executed: bool):\n",
        "        \"\"\"\n",
        "        (DEBUG) Log whether a layer was actually executed or bypassed in this inference pass.\n",
        "        \"\"\"\n",
        "        self.execution_log.append({\n",
        "            'inference_id': self.current_inference_id,\n",
        "            'layer_idx': layer_idx,\n",
        "            'executed': executed,\n",
        "            'expected_active': self.active_mask[layer_idx]\n",
        "        })\n",
        "\n",
        "    def get_execution_stats(self) -> Dict:\n",
        "        \"\"\"\n",
        "        (DEBUG) Return detailed statistics about which layers were executed or bypassed,\n",
        "        and whether execution matched the expected mask.\n",
        "        \"\"\"\n",
        "        if not self.execution_log:\n",
        "            return {\n",
        "                'inference_id': self.current_inference_id,\n",
        "                'layers_executed': [],\n",
        "                'layers_bypassed': [],\n",
        "                'total_calls': 0,\n",
        "                'execution_matches_mask': True\n",
        "            }\n",
        "\n",
        "        executed = [log['layer_idx'] for log in self.execution_log if log['executed']]\n",
        "        bypassed = [log['layer_idx'] for log in self.execution_log if not log['executed']]\n",
        "\n",
        "        # Check if execution matches what the mask specified\n",
        "        execution_matches = True\n",
        "        for log in self.execution_log:\n",
        "            if log['executed'] != log['expected_active']:\n",
        "                execution_matches = False\n",
        "                break\n",
        "\n",
        "        return {\n",
        "            'inference_id': self.current_inference_id,\n",
        "            'layers_executed': sorted(executed),\n",
        "            'layers_bypassed': sorted(bypassed),\n",
        "            'total_calls': len(self.execution_log),\n",
        "            'execution_matches_mask': execution_matches,\n",
        "            'expected_active': [i for i, active in enumerate(self.active_mask) if active],\n",
        "            'expected_bypassed': [i for i, active in enumerate(self.active_mask) if not active]\n",
        "        }\n"
      ],
      "metadata": {
        "id": "qNjXffvpmkEg"
      },
      "execution_count": null,
      "outputs": []
    },
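    {
      "cell_type": "markdown",
      "source": [
        "Before wiring the mask into the model, it can be exercised in isolation. The quick sanity check below uses only the `LayerActivationMask` class defined above, with made-up layer indices and a made-up complexity score:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Quick sanity check of LayerActivationMask in isolation (toy values).\n",
        "_mask = LayerActivationMask(6)\n",
        "_mask.update_for_prompt(active_layer_indices=[0, 2, 5], complexity_score=0.42)\n",
        "\n",
        "print(_mask.is_layer_active(0), _mask.is_layer_active(1))  # True False\n",
        "print(_mask.get_stats())"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },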
    {
      "cell_type": "markdown",
      "source": [
        "Although this notebook has only been tested with Llama models, AAB is designed to be easily adaptable to other model families.\n",
        "\n",
        "The function below identifies the main model families, making it easier to adapt the code in the other helper functions when necessary."
      ],
      "metadata": {
        "id": "7_2yB8b4Jju7"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def detect_model_architecture(model) -> str:\n",
        "    \"\"\"\n",
        "    Automatically detect model architecture for compatibility\n",
        "    \"\"\"\n",
        "    model_class = model.__class__.__name__.lower()\n",
        "    model_name = getattr(model.config, '_name_or_path', '').lower()\n",
        "\n",
        "    if 'llama' in model_class or 'llama' in model_name:\n",
        "        return 'llama'\n",
        "    elif 'mistral' in model_class or 'mistral' in model_name:\n",
        "        return 'mistral'\n",
        "    elif 'gpt2' in model_class or 'gpt2' in model_name:\n",
        "        return 'gpt2'\n",
        "    else:\n",
        "        # Default to generic transformer approach\n",
        "        return 'generic'\n",
        "\n"
      ],
      "metadata": {
        "id": "nAIBQr5qmqnZ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "def get_attention_layers(model, architecture: str) -> List:\n",
        "    \"\"\"\n",
        "    Get attention layers based on architecture\n",
        "    \"\"\"\n",
        "    if architecture in ['llama', 'mistral']:\n",
        "        return model.model.layers\n",
        "    elif architecture == 'gpt2':\n",
        "        return model.transformer.h\n",
        "    else:\n",
        "        # Generic approach - try common patterns\n",
        "        if hasattr(model, 'model') and hasattr(model.model, 'layers'):\n",
        "            return model.model.layers\n",
        "        elif hasattr(model, 'transformer') and hasattr(model.transformer, 'h'):\n",
        "            return model.transformer.h\n",
        "        else:\n",
        "            raise ValueError(f\"Cannot find attention layers for architecture: {architecture}\")\n",
        "\n"
      ],
      "metadata": {
        "id": "ZqaBxaCgmuAa"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "def get_attention_module(layer, architecture: str):\n",
        "    \"\"\"\n",
        "    Get the attention module from a layer based on architecture\n",
        "    \"\"\"\n",
        "    if architecture in ['llama', 'mistral']:\n",
        "        return layer.self_attn\n",
        "    elif architecture == 'gpt2':\n",
        "        return layer.attn\n",
        "    else:\n",
        "        # Generic approach\n",
        "        if hasattr(layer, 'self_attn'):\n",
        "            return layer.self_attn\n",
        "        elif hasattr(layer, 'attn'):\n",
        "            return layer.attn\n",
        "        else:\n",
        "            raise ValueError(f\"Cannot find attention module for architecture: {architecture}\")\n",
        "\n"
      ],
      "metadata": {
        "id": "a0JMDO14mzeu"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "This function calculates the complexity score for a single prompt, using the same logic as the calibration function **analyze_prompt_complexity**, but optimized for real-time use during inference.\n"
      ],
      "metadata": {
        "id": "tYHc8n3nrjCN"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def compute_prompt_complexity_runtime(prompt: str, model, tokenizer, config: Dict) -> float:\n",
        "    \"\"\"\n",
        "    Computes a complexity score in [0, 1] for a single prompt, using the same\n",
        "    core logic as analyze_prompt_complexity. Optimized for runtime inference.\n",
        "    \"\"\"\n",
        "    device = next(model.parameters()).device\n",
        "\n",
        "    # --- Efficiently get model parameters and derive calculation constants ---\n",
        "    # Get size_billion from the pre-calculated config\n",
        "    param_str = config[\"model_info\"][\"total_parameters\"]  # e.g., \"3.21B\"\n",
        "    size_billion = float(param_str.rstrip(\"B\"))     # e.g., 3.21\n",
        "\n",
        "    MIN_TOKENS = 4\n",
        "\n",
        "    # Unified size adjustment factor (logic from analyze_prompt_complexity)\n",
        "    # Models below 2B parameters get a boost; larger models get dampening\n",
        "    size_factor = 1.0 + (2.0 - size_billion) * 0.1\n",
        "    size_factor = max(0.5, min(2.0, size_factor))  # Clamp between 0.5 and 2.0\n",
        "\n",
        "    # Length reference scaled by model size (logic from analyze_prompt_complexity)\n",
        "    # Smaller models reach max complexity with shorter prompts\n",
        "    base_length = 2000  # Using base_length from analyze_prompt_complexity\n",
        "    length_reference = base_length / size_factor\n",
        "    variance_saturation = length_reference / 15\n",
        "\n",
        "    # Get weights from config (logic from analyze_prompt_complexity)\n",
        "    weights = config.get(\"complexity_weights\", {\n",
        "        \"token_count\": 0.65,  # Default fallback\n",
        "        \"embedding_variance\": 0.35  # Default fallback\n",
        "    })\n",
        "\n",
        "    # --- Process the single prompt ---\n",
        "    # Tokenize\n",
        "    ids = tokenizer(prompt, return_tensors=\"pt\")[\"input_ids\"][0].to(device)\n",
        "    n_tokens = ids.size(0)\n",
        "\n",
        "    # 1. TOKEN SCORE (logic from analyze_prompt_complexity)\n",
        "    token_score = math.log1p(n_tokens) / math.log1p(length_reference)\n",
        "    token_score = min(token_score * size_factor, 1.0)\n",
        "    if n_tokens < MIN_TOKENS:\n",
        "        dampening = (n_tokens / MIN_TOKENS) ** 2  # Quadratic dampening\n",
        "        token_score = token_score * dampening\n",
        "\n",
        "    # 2. EMBEDDING VARIANCE (logic from analyze_prompt_complexity)\n",
        "    emb_variance = 0.0 # Default value\n",
        "    with torch.no_grad():\n",
        "        emb = model.get_input_embeddings()(ids.unsqueeze(0)).squeeze(0).float()\n",
        "        n_emb_tokens = emb.size(0) # Use n_emb_tokens for clarity in this block\n",
        "\n",
        "        if n_emb_tokens < 3:\n",
        "            # Too few tokens for meaningful variance\n",
        "            emb_variance = 0.0\n",
        "        else:\n",
        "            # Normalize embeddings\n",
        "            norm_emb = torch.nn.functional.normalize(emb, p=2, dim=1)\n",
        "\n",
        "            # Compute pairwise cosine similarities\n",
        "            sim_matrix = torch.matmul(norm_emb, norm_emb.t())\n",
        "\n",
        "            # Get off-diagonal elements (exclude self-similarity)\n",
        "            # Create mask on the correct device\n",
        "            mask = ~torch.eye(n_emb_tokens, dtype=bool, device=sim_matrix.device)\n",
        "            off_diag_sim = sim_matrix[mask]\n",
        "\n",
        "            if off_diag_sim.numel() > 0: # Ensure there are elements to mean\n",
        "                emb_variance = 1.0 - off_diag_sim.mean().item()\n",
        "            else: # Should not happen if n_emb_tokens >= 3 and mask is correct\n",
        "                emb_variance = 0.0\n",
        "\n",
        "            # Scale by length (longer prompts naturally have more variance)\n",
        "            length_scale = min(n_tokens / variance_saturation, 1.0) # Use n_tokens from original prompt\n",
        "            emb_variance = emb_variance * length_scale\n",
        "\n",
        "    # 3. FINAL SCORE (logic from analyze_prompt_complexity)\n",
        "    complexity_score = (\n",
        "        weights[\"token_count\"] * token_score +\n",
        "        weights[\"embedding_variance\"] * emb_variance\n",
        "    )\n",
        "    complexity_score = max(0.0, min(complexity_score, 1.0)) # Clamp for safety\n",
        "\n",
        "    return complexity_score"
      ],
      "metadata": {
        "id": "3RX88D8ROtwI"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Once the **complexity\\_score** for a prompt has been calculated using **compute\\_prompt\\_complexity\\_runtime**, the next step is to decide exactly how many attention layers should be activated and, most importantly, which ones.\n",
        "\n",
        "The **get\\_active\\_layers\\_for\\_prompt** function uses the complexity thresholds and the list of layers ordered by importance, stored in config, to decide which layers to execute."
      ],
      "metadata": {
        "id": "1uYh2G38tAFj"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def get_active_layers_for_prompt(complexity_score: float, config: Dict) -> List[int]:\n",
        "    \"\"\"\n",
        "    Select which layers to activate for a prompt, using the pre-computed\n",
        "    complexity thresholds and layer ranking from the calibration config.\n",
        "    \"\"\"\n",
        "    layers_by_importance = config[\"layers_by_importance\"]\n",
        "    complexity_thresholds = config[\"complexity_thresholds\"]\n",
        "\n",
        "    # Convert string keys to float and sort thresholds in ascending order\n",
        "    thresholds = [(float(k), v) for k, v in complexity_thresholds.items()]\n",
        "    thresholds.sort()\n",
        "\n",
        "    # Find the appropriate number of layers to activate\n",
        "    num_layers_to_activate = thresholds[-1][1]  # Default to the maximum\n",
        "\n",
        "    for threshold, num_layers in thresholds:\n",
        "        if complexity_score <= threshold:\n",
        "            num_layers_to_activate = num_layers\n",
        "            break\n",
        "\n",
        "    # Return the N most important layers according to the calibration ranking\n",
        "    return layers_by_importance[:num_layers_to_activate]\n",
        "\n"
      ],
      "metadata": {
        "id": "Mari1I8em8U4"
      },
      "execution_count": null,
      "outputs": []
    },
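    {
      "cell_type": "markdown",
      "source": [
        "To see the threshold logic in isolation, the cell below calls `get_active_layers_for_prompt` with a toy config. The threshold values and importance ranking are illustrative only, not taken from a real calibration:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Illustrative-only config: thresholds and importance ranking are made up.\n",
        "_toy_config = {\n",
        "    \"layers_by_importance\": [5, 2, 7, 0, 3, 1, 6, 4],\n",
        "    \"complexity_thresholds\": {\"0.3\": 4, \"0.6\": 6, \"1.0\": 8},\n",
        "}\n",
        "\n",
        "print(get_active_layers_for_prompt(0.2, _toy_config))  # the 4 most important layers\n",
        "print(get_active_layers_for_prompt(0.9, _toy_config))  # all 8 layers\n"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },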
    {
      "cell_type": "markdown",
      "source": [
        "The methods contained in **add\\_manual\\_complexity\\_methods** equip the model with a set of tools for manually testing and debugging the AAB system.\n",
        "\n",
        "They operate independently of the fully automatic system triggered during a normal forward pass, allowing isolated tests of how the model reacts to specific prompts under the current configuration.\n",
        "\n",
        "I used them during development to fine-tune both the functions and the configuration, and they are kept for their informative value. They are used in the final part of the notebook.\n"
      ],
      "metadata": {
        "id": "R6Oi9q8-N9ge"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def add_manual_complexity_methods(model, tokenizer, config: Dict):\n",
        "    \"\"\"\n",
        "    Add manual methods for complexity calculation and debugging.\n",
        "    These work independently of the automatic system.\n",
        "    \"\"\"\n",
        "    def manual_complexity_calculation(prompt: str) -> float:\n",
        "        \"\"\"Calculate exact prompt complexity manually\"\"\"\n",
        "        return compute_prompt_complexity_runtime(prompt, model, tokenizer, config)\n",
        "\n",
        "    def manual_mask_update(complexity_score: float):\n",
        "        \"\"\"Manually update the adaptive mask\"\"\"\n",
        "        active_layers = get_active_layers_for_prompt(complexity_score, config)\n",
        "        model._adaptive_mask.update_for_prompt(active_layers, complexity_score)\n",
        "        return model._adaptive_mask.get_stats()\n",
        "\n",
        "    def get_debug_info():\n",
        "        \"\"\"Get comprehensive debug information\"\"\"\n",
        "        stats = model._adaptive_mask.get_stats()\n",
        "        execution_stats = model._adaptive_mask.get_execution_stats()\n",
        "\n",
        "        return {\n",
        "            'mask_stats': stats,\n",
        "            'execution_stats': execution_stats,\n",
        "            'config_thresholds': config['complexity_thresholds'],\n",
        "            'layers_by_importance': config['layers_by_importance'][:10]  # First 10\n",
        "        }\n",
        "\n",
        "    def test_prompt_processing(prompt: str, verbose: bool = True):\n",
        "        \"\"\"Test end-to-end prompt processing\"\"\"\n",
        "        if verbose:\n",
        "            print(f\"🧪 Testing prompt: '{prompt[:50]}{'...' if len(prompt) > 50 else ''}'\")\n",
        "\n",
        "        # Step 1: Calculate complexity\n",
        "        complexity = manual_complexity_calculation(prompt)\n",
        "        if verbose:\n",
        "            print(f\"   Complexity: {complexity:.4f}\")\n",
        "\n",
        "        # Step 2: Update mask\n",
        "        stats = manual_mask_update(complexity)\n",
        "        if verbose:\n",
        "            print(f\"   Active layers: {stats['active_layers']}/{stats['total_layers']} \"\n",
        "                  f\"({stats['usage_ratio']:.1%})\")\n",
        "\n",
        "        # Step 3: Simulate inference (tokenize)\n",
        "        inputs = tokenizer(prompt, return_tensors='pt').to(next(model.parameters()).device)\n",
        "\n",
        "        # Step 4: Test forward pass\n",
        "        with torch.no_grad():\n",
        "            result = model.forward(input_ids=inputs['input_ids'])\n",
        "\n",
        "        # Step 5: Get execution stats\n",
        "        exec_stats = model._adaptive_mask.get_execution_stats()\n",
        "        if verbose:\n",
        "            print(f\"   Executed layers: {exec_stats['layers_executed']}\")\n",
        "            print(f\"   Bypassed layers: {exec_stats['layers_bypassed']}\")\n",
        "            print(f\"   Execution matches mask: {exec_stats['execution_matches_mask']}\")\n",
        "\n",
        "        return {\n",
        "            'complexity': complexity,\n",
        "            'mask_stats': stats,\n",
        "            'execution_stats': exec_stats\n",
        "        }\n",
        "\n",
        "    # Add methods to model\n",
        "    model.manual_complexity = manual_complexity_calculation\n",
        "    model.manual_mask_update = manual_mask_update\n",
        "    model.get_debug_info = get_debug_info\n",
        "    model.test_prompt = test_prompt_processing\n",
        "\n",
        "    return model"
      ],
      "metadata": {
        "id": "EcgYMn2G4zCM"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Below is one of the most important functions in the notebook: it dynamically replaces the model's forward method so that, on the first generation pass of every new prompt, the system:\n",
        "\n",
        "* Automatically calculates the complexity of the prompt.\n",
        "\n",
        "* Determines how many attention layers should be active (adaptive bypass) based on that complexity.\n",
        "\n",
        "* Updates the adaptive mask before running the actual inference.\n",
        "\n",
        "In this way, the model adapts the set of executed layers to each prompt without manual intervention, transparently integrating AAB into the inference cycle.\n",
        "\n",
        "One of the main challenges was detecting when the first pass over the prompt occurs, so that the complexity calculation is skipped in the subsequent forward passes that generate new tokens."
      ],
      "metadata": {
        "id": "Ajhtk6o3Ppx1"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import traceback # For printing stack traces in exceptions\n",
        "\n",
        "def add_automatic_complexity_computation(model, tokenizer):\n",
        "    \"\"\"\n",
        "    Add automatic complexity computation to model's forward method.\n",
        "    This will automatically update the adaptive mask when new prompts are processed.\n",
        "    \"\"\"\n",
        "    if not hasattr(model, '_adaptive_mask') or not hasattr(model, '_adaptive_config'):\n",
        "        # Ensure the model has been prepared by create_adaptive_model first\n",
        "        print(\"ERROR: Model is not set up for AAB. Please call create_adaptive_model() first.\") # User-friendly error\n",
        "        raise ValueError(\"Model must be created with create_adaptive_model() first. Missing AAB attributes.\")\n",
        "\n",
        "    # For tutorial/debug purposes, show the state of model.forward before and after modification\n",
        "    print(f\"Modifying model.forward. Original: {model.forward}\")\n",
        "\n",
        "    # Store the original forward method if it hasn't been stored already\n",
        "    if not hasattr(model, '_original_forward'):\n",
        "        model._original_forward = model.forward\n",
        "        print(f\"   Original model.forward stored as _original_forward: {model._original_forward}\")\n",
        "\n",
        "\n",
        "    # Define the new forward method that will replace the original one\n",
        "    def adaptive_model_forward(self, input_ids=None, **kwargs): # 'self' here is the model instance\n",
        "        # Attempt to get input_ids, whether passed directly or in kwargs\n",
        "        current_call_input_ids = input_ids\n",
        "        if current_call_input_ids is None and 'input_ids' in kwargs:\n",
        "            current_call_input_ids = kwargs['input_ids']\n",
        "\n",
        "        # Essential AAB attributes must be present on the model\n",
        "        if not hasattr(self, '_adaptive_config') or not hasattr(self, '_adaptive_mask'):\n",
        "            print(\"ERROR: AAB attributes (_adaptive_config or _adaptive_mask) missing during forward pass!\") # User-friendly error\n",
        "            # If critical AAB attributes are missing but we have the original forward, try to use it.\n",
        "            if hasattr(self, '_original_forward'):\n",
        "                return self._original_forward(input_ids=input_ids, **kwargs)\n",
        "            # If _original_forward is also missing, it's a critical setup error.\n",
        "            raise RuntimeError(\"Critical AAB setup error: _original_forward and AAB attributes missing.\")\n",
        "\n",
        "        # --- Determine if this is the first effective pass for a new prompt ---\n",
        "        # This is crucial because complexity should only be calculated once per prompt.\n",
        "        # Generation involves multiple forward passes: one for the prompt, then one for each new token.\n",
        "        past_key_values = kwargs.get('past_key_values')\n",
        "        is_effectively_first_pass = False\n",
        "\n",
        "        if past_key_values is None:\n",
        "            # No past_key_values typically means it's the first pass with the initial prompt.\n",
        "            is_effectively_first_pass = True\n",
        "\n",
        "        # Check for a Hugging Face DynamicCache object (common in newer generate() implementations)\n",
        "        elif hasattr(past_key_values, 'seen_tokens'):\n",
        "            # The 'seen_tokens' attribute indicates how many tokens are already in the KV cache.\n",
        "            current_cache_seq_len = past_key_values.seen_tokens\n",
        "            if current_cache_seq_len == 0:\n",
        "                # If seen_tokens is 0, the cache is empty, indicating a new generation sequence.\n",
        "                is_effectively_first_pass = True\n",
        "\n",
        "        # Check for traditional tuple-based KV Caches (older style or specific models)\n",
        "        elif (isinstance(past_key_values, tuple) and\n",
        "              len(past_key_values) > 0 and\n",
        "              isinstance(past_key_values[0], tuple) and len(past_key_values[0]) > 0 and\n",
        "              hasattr(past_key_values[0][0], 'shape') and\n",
        "              # Check if the sequence length dimension of the key/value tensors in the cache is 0.\n",
        "              # This typically corresponds to the second to last dimension (e.g., [batch_size, num_heads, sequence_length, head_dim]).\n",
        "              # For Llama-like models, KV cache shape is often [bsz, num_heads, seq_len, head_dim].\n",
        "              # We check the seq_len part of the first layer's key cache.\n",
        "              past_key_values[0][0].shape[-2] == 0):\n",
        "            # This handles standard tuple-based KV caches when they are empty.\n",
        "            is_effectively_first_pass = True\n",
        "\n",
        "        # Check if input_ids are valid for decoding a prompt\n",
        "        can_get_prompt_for_complexity = (current_call_input_ids is not None and\n",
        "                                         current_call_input_ids.ndim == 2 and # Expected [batch_size, sequence_length]\n",
        "                                         current_call_input_ids.shape[0] > 0 and\n",
        "                                         current_call_input_ids.shape[1] > 0)\n",
        "\n",
        "        # --- Main AAB Logic: Calculate complexity and update mask on the first pass ---\n",
        "        if is_effectively_first_pass and can_get_prompt_for_complexity:\n",
        "            try:\n",
        "                # Decode the prompt text from the first item in the batch\n",
        "                prompt_text = tokenizer.decode(current_call_input_ids[0], skip_special_tokens=True)\n",
        "\n",
        "                # Calculate complexity using the runtime function\n",
        "                complexity_score = compute_prompt_complexity_runtime(\n",
        "                    prompt_text, self, tokenizer, self._adaptive_config\n",
        "                )\n",
        "                # Determine which layers to activate based on the score and config\n",
        "                active_layers = get_active_layers_for_prompt(complexity_score, self._adaptive_config)\n",
        "                # Update the shared activation mask\n",
        "                self._adaptive_mask.update_for_prompt(active_layers, complexity_score)\n",
        "\n",
        "                stats = self._adaptive_mask.get_stats()\n",
        "                # This is an informative print for the tutorial user to see AAB in action\n",
        "                print(f\"AAB Activated: Complexity {complexity_score:.3f} -> \"\n",
        "                      f\"{stats['active_layers']}/{stats['total_layers']} layers active \"\n",
        "                      f\"({stats['usage_ratio']:.1%})\")\n",
        "            except Exception as e:\n",
        "                print(f\"ERROR during AAB complexity calculation/mask update: {e}\") # User-friendly error\n",
        "                traceback.print_exc() # Print full traceback for debugging\n",
        "        # Otherwise (not a first pass, or input_ids unsuitable for complexity\n",
        "        # calculation) no AAB logic runs; the mask keeps the values set by the\n",
        "        # last \"first pass\".\n",
        "\n",
        "        if not hasattr(self, '_original_forward'):\n",
        "            # This should not happen if the setup logic at the beginning of\n",
        "            # add_automatic_complexity_computation ran correctly.\n",
        "            print(\"CRITICAL ERROR: _original_forward method is missing on model instance!\") # User-friendly error\n",
        "            raise RuntimeError(\"Cannot call missing _original_forward. Critical AAB setup error.\")\n",
        "\n",
        "        # Always call the original forward method to perform the actual model computation\n",
        "        return self._original_forward(input_ids=input_ids, **kwargs)\n",
        "\n",
        "    # --- Replace the model's original forward method with our adaptive_model_forward ---\n",
        "    # The use of .__get__(model, type(model)) ensures that 'adaptive_model_forward'\n",
        "    # is correctly bound as a method to the 'model' instance, so that 'self'\n",
        "    # inside 'adaptive_model_forward' refers to the model object.\n",
        "    model.forward = adaptive_model_forward.__get__(model, type(model))\n",
        "    print(f\"   New model.forward set to: {model.forward}\")\n",
        "    print(\"Automatic complexity computation hooked into model.forward.\")\n",
        "    return model"
      ],
      "metadata": {
        "id": "EDlLTffXQtxR"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The **create\\_adaptive\\_attention\\_forward** function acts as a \"factory\" that creates a new, customized forward method for each individual attention layer of the new model.\n",
        "\n",
        "This new method, called **adaptive\\_forward**, is specialized for each layer. It uses a closure to \"remember\" two crucial pieces of data specific to the layer it is attached to: its `layer_idx` (to know if it should be activated according to the mask) and its `original_forward` (the original attention behavior of that layer, which it will call if active).\n",
        "\n",
        "Thus, the `adaptive_forward` generated for each layer:\n",
        "\n",
        "* Consults the `LayerActivationMask` using its unique `layer_idx`.\n",
        "* If the layer should be active, it executes the `original_forward` it had saved.\n",
        "* If it is inactive, it skips the costly attention calculations, passing the `hidden_states` unmodified and returning `(hidden_states, None)` to maintain output compatibility.\n",
        "\n",
        "This mechanism allows each layer to decide whether to process or skip attention, based on the `LayerActivationMask` that has already been updated by the main logic of the AAB system."
      ],
      "metadata": {
        "id": "g4_Hjh3AVke3"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def create_adaptive_attention_forward(original_forward, layer_idx: int, mask: LayerActivationMask,\n",
        "                                    architecture: str, model, tokenizer, config: Dict):\n",
        "    \"\"\"\n",
        "    Create a new forward method that respects the activation mask.\n",
        "    \"\"\"\n",
        "    def adaptive_forward(self, hidden_states, *args, **kwargs):\n",
        "        # Check if this layer should be active\n",
        "        is_active = mask.is_layer_active(layer_idx)\n",
        "        mask.log_layer_execution(layer_idx, is_active)\n",
        "\n",
        "        if is_active:\n",
        "            # Execute normal attention\n",
        "            result = original_forward(hidden_states, *args, **kwargs)\n",
        "            return result\n",
        "        else:\n",
        "            # Bypass attention: skip the computation and pass the hidden\n",
        "            # states through unchanged.\n",
        "            # Return a 2-tuple (hidden_states, attn_weights=None) to match\n",
        "            # the output signature the decoder layer expects.\n",
        "            return (hidden_states, None)\n",
        "\n",
        "    return adaptive_forward"
      ],
      "metadata": {
        "id": "EoWp630vnAnc"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "This function transforms the original model into a version capable of deciding which attention layers to execute depending on the complexity of the received prompt.\n",
        "\n",
        "It is responsible for detecting the architecture, identifying all attention layers, and replacing their forward method with a version that consults the adaptive mask.\n",
        "\n",
        "Thus, only the necessary layers are executed for each prompt, according to the calculated complexity.\n",
        "\n",
        "In addition, the function attaches to the model all the objects and configuration that the AAB system needs at inference time."
      ],
      "metadata": {
        "id": "MzC7HoHfdUcz"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def create_adaptive_model(model, config: Dict, verbose: bool = False):\n",
        "    \"\"\"\n",
        "    Create an adaptive model that dynamically adjusts active layers based\n",
        "    on prompt complexity. Modifies individual attention layers to respect\n",
        "    the _adaptive_mask.\n",
        "    \"\"\"\n",
        "    # Detect architecture\n",
        "    architecture = detect_model_architecture(model)\n",
        "    if verbose:\n",
        "        print(f\"Detected architecture: {architecture}\")\n",
        "\n",
        "    # Get attention layers\n",
        "    try:\n",
        "        attention_layers = get_attention_layers(model, architecture)\n",
        "        total_layers = len(attention_layers)\n",
        "        if verbose:\n",
        "            print(f\"Found {total_layers} attention layers\")\n",
        "    except Exception as e:\n",
        "        raise ValueError(f\"Failed to get attention layers: {e}\")\n",
        "\n",
        "    # Create activation mask\n",
        "    mask = LayerActivationMask(total_layers)\n",
        "\n",
        "    # Store references in model for access during inference\n",
        "    model._adaptive_mask = mask\n",
        "    model._adaptive_config = config\n",
        "    model._adaptive_architecture = architecture\n",
        "\n",
        "    # Modify each attention layer, replacing its forward with the adaptive version\n",
        "    modified_layers = 0\n",
        "    for layer_idx, layer in enumerate(attention_layers):\n",
        "        try:\n",
        "            attention_module = get_attention_module(layer, architecture)\n",
        "\n",
        "            # Store original forward if not already stored\n",
        "            if not hasattr(attention_module, '_original_forward'):\n",
        "                attention_module._original_forward = attention_module.forward\n",
        "\n",
        "            # Create the adaptive forward method. Note that `tokenizer`\n",
        "            # here comes from the notebook's global scope, since\n",
        "            # create_adaptive_model does not receive it as a parameter.\n",
        "            adaptive_forward = create_adaptive_attention_forward(\n",
        "                attention_module._original_forward,\n",
        "                layer_idx,\n",
        "                mask,\n",
        "                architecture,\n",
        "                model,\n",
        "                tokenizer,\n",
        "                config\n",
        "            )\n",
        "\n",
        "            # Replace forward method\n",
        "            attention_module.forward = adaptive_forward.__get__(attention_module, type(attention_module))\n",
        "            modified_layers += 1\n",
        "\n",
        "        except Exception as e:\n",
        "            logger.warning(f\"Failed to modify layer {layer_idx}: {e}\")\n",
        "\n",
        "    if verbose:\n",
        "        print(f\"Successfully modified {modified_layers}/{total_layers} attention layers\")\n",
        "        print(f\"Complexity thresholds: {config['complexity_thresholds']}\")\n",
        "\n",
        "    return model"
      ],
      "metadata": {
        "id": "sIK5Co5J5Bs_"
      },
      "execution_count": null,
      "outputs": []
    },
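    {
      "cell_type": "markdown",
      "source": [
        "A note on the patching mechanism above: `adaptive_forward.__get__(attention_module, type(attention_module))` turns a plain function into a method bound to that specific module instance, so `self` is supplied automatically. A minimal, self-contained sketch of the same trick (the `Layer` class and function names here are illustrative, not part of the AAB code):\n",
        "\n",
        "```python\n",
        "class Layer:\n",
        "    def forward(self, x):\n",
        "        return x + 1\n",
        "\n",
        "layer = Layer()\n",
        "# Keep a reference to the original, as AAB does with _original_forward\n",
        "original_forward = layer.forward\n",
        "\n",
        "def bypass_forward(self, x):\n",
        "    # Skip the original computation and return the input unchanged\n",
        "    return x\n",
        "\n",
        "# Bind the replacement to this instance, exactly like the AAB patch\n",
        "layer.forward = bypass_forward.__get__(layer, Layer)\n",
        "\n",
        "print(layer.forward(10))     # 10 (bypassed)\n",
        "print(original_forward(10))  # 11 (original still callable)\n",
        "```\n",
        "\n",
        "Other instances of the class remain untouched, which is why AAB can patch each attention layer independently."
      ],
      "metadata": {}
    },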
    {
      "cell_type": "markdown",
      "source": [
        "The adaptive model is created by calling the **setup\\_adaptive\\_model\\_complete** function.\n",
        "\n",
        "This is the main orchestrator that fully prepares the model and activates the Adaptive Attention Bypass (AAB) inference system.\n",
        "\n",
        "It brings together all the previously defined components and applies them to the model in the correct sequence.\n",
        "\n",
        "This function also adds the manual inspection methods via **add\\_manual\\_complexity\\_methods**. These are entirely optional and included in the notebook for educational purposes; if you don't need them, simply comment out the line that calls the function, without affecting the operation of the new model.\n"
      ],
      "metadata": {
        "id": "Og_4qizSd8bO"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def setup_adaptive_model_complete(model, tokenizer, config: Dict, verbose: bool = False):\n",
        "    \"\"\"\n",
        "    Complete setup of adaptive model with automatic complexity computation.\n",
        "    \"\"\"\n",
        "    if verbose:\n",
        "        print(\"Setting up Adaptive Attention Bypass (AAB) system...\")\n",
        "        print(\"=\" * 60)\n",
        "\n",
        "    # Step 1: Create adaptive model structure\n",
        "    adaptive_model = create_adaptive_model(model, config, verbose=verbose)\n",
        "\n",
        "\n",
        "    # Step 2: Add automatic complexity computation to hook into model.forward\n",
        "    adaptive_model = add_automatic_complexity_computation(adaptive_model, tokenizer)\n",
        "\n",
        "    # Step 2b: Add manual complexity methods, for testing.\n",
        "    adaptive_model = add_manual_complexity_methods(adaptive_model, tokenizer, config)\n",
        "\n",
        "    if verbose:\n",
        "        print(\"=\" * 60)\n",
        "        print(f\"Usage will vary from {min(config['complexity_thresholds'].values())}\"\n",
        "              f\" to {max(config['complexity_thresholds'].values())} layers based on prompt complexity\")\n",
        "\n",
        "    return adaptive_model"
      ],
      "metadata": {
        "id": "3EPWU0uzsvIg"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Creating the New Adaptive Model"
      ],
      "metadata": {
        "id": "wmVSjfcle2xN"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "**adaptive\\_model** will contain an adaptive model capable of bypassing different attention layers depending on the complexity of the prompt.\n",
        "\n",
        "As can be seen in the trace, this model can vary its execution from 22 to 28 layers.\n",
        "\n",
        "Although it may not seem like it, this is a very large modification, since smaller models such as the 1B or 3B ones do not have as much redundancy in their attention layers as large models do."
      ],
      "metadata": {
        "id": "BxyvkYfHCOnB"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "print(\"🔄 Creating new adaptive model with Layer 0 trigger...\")\n",
        "adaptive_model = setup_adaptive_model_complete(model, tokenizer, adaptive_config, verbose=True)\n"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "2f_PfgB0ninB",
        "outputId": "08d19817-9472-42d5-bd16-cbbca02d1f85"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "🔄 Creating new adaptive model with Layer 0 trigger...\n",
            "Setting up Adaptive Attention Bypass (AAB) system...\n",
            "============================================================\n",
            "Detected architecture: llama\n",
            "Found 28 attention layers\n",
            "Successfully modified 28/28 attention layers\n",
            "Complexity thresholds: {0.1: 22, 0.3: 24, 0.5: 25, 0.7: 26, 0.9: 28}\n",
            "Modifying model.forward. Original: <bound method LlamaForCausalLM.forward of LlamaForCausalLM(\n",
            "  (model): LlamaModel(\n",
            "    (embed_tokens): Embedding(128256, 3072)\n",
            "    (layers): ModuleList(\n",
            "      (0-27): 28 x LlamaDecoderLayer(\n",
            "        (self_attn): LlamaAttention(\n",
            "          (q_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
            "          (k_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
            "          (v_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
            "          (o_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
            "        )\n",
            "        (mlp): LlamaMLP(\n",
            "          (gate_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
            "          (up_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
            "          (down_proj): Linear(in_features=8192, out_features=3072, bias=False)\n",
            "          (act_fn): SiLU()\n",
            "        )\n",
            "        (input_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "        (post_attention_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "      )\n",
            "    )\n",
            "    (norm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "    (rotary_emb): LlamaRotaryEmbedding()\n",
            "  )\n",
            "  (lm_head): Linear(in_features=3072, out_features=128256, bias=False)\n",
            ")>\n",
            "   Original model.forward stored as _original_forward: <bound method LlamaForCausalLM.forward of LlamaForCausalLM(\n",
            "  (model): LlamaModel(\n",
            "    (embed_tokens): Embedding(128256, 3072)\n",
            "    (layers): ModuleList(\n",
            "      (0-27): 28 x LlamaDecoderLayer(\n",
            "        (self_attn): LlamaAttention(\n",
            "          (q_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
            "          (k_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
            "          (v_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
            "          (o_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
            "        )\n",
            "        (mlp): LlamaMLP(\n",
            "          (gate_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
            "          (up_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
            "          (down_proj): Linear(in_features=8192, out_features=3072, bias=False)\n",
            "          (act_fn): SiLU()\n",
            "        )\n",
            "        (input_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "        (post_attention_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "      )\n",
            "    )\n",
            "    (norm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "    (rotary_emb): LlamaRotaryEmbedding()\n",
            "  )\n",
            "  (lm_head): Linear(in_features=3072, out_features=128256, bias=False)\n",
            ")>\n",
            "   New model.forward set to: <bound method add_automatic_complexity_computation.<locals>.adaptive_model_forward of LlamaForCausalLM(\n",
            "  (model): LlamaModel(\n",
            "    (embed_tokens): Embedding(128256, 3072)\n",
            "    (layers): ModuleList(\n",
            "      (0-27): 28 x LlamaDecoderLayer(\n",
            "        (self_attn): LlamaAttention(\n",
            "          (q_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
            "          (k_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
            "          (v_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
            "          (o_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
            "        )\n",
            "        (mlp): LlamaMLP(\n",
            "          (gate_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
            "          (up_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
            "          (down_proj): Linear(in_features=8192, out_features=3072, bias=False)\n",
            "          (act_fn): SiLU()\n",
            "        )\n",
            "        (input_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "        (post_attention_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "      )\n",
            "    )\n",
            "    (norm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "    (rotary_emb): LlamaRotaryEmbedding()\n",
            "  )\n",
            "  (lm_head): Linear(in_features=3072, out_features=128256, bias=False)\n",
            ")>\n",
            "Automatic complexity computation hooked into model.forward.\n",
            "============================================================\n",
            "Usage will vary from 22 to 28 layers based on prompt complexity\n"
          ]
        }
      ]
    },
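    {
      "cell_type": "markdown",
      "source": [
        "The thresholds printed above, {0.1: 22, 0.3: 24, 0.5: 25, 0.7: 26, 0.9: 28}, map a complexity score to a number of active layers. The actual selection logic lives in the `LayerActivationMask` code defined earlier; the following is only an illustrative sketch (the function name `layers_for_complexity` is hypothetical) of how such a table can be resolved, and it reproduces the layer counts seen in the traces:\n",
        "\n",
        "```python\n",
        "complexity_thresholds = {0.1: 22, 0.3: 24, 0.5: 25, 0.7: 26, 0.9: 28}\n",
        "\n",
        "def layers_for_complexity(score, thresholds):\n",
        "    # Use the layer count of the smallest threshold >= score;\n",
        "    # scores above the highest threshold get every layer.\n",
        "    for threshold in sorted(thresholds):\n",
        "        if score <= threshold:\n",
        "            return thresholds[threshold]\n",
        "    return max(thresholds.values())\n",
        "\n",
        "print(layers_for_complexity(0.023, complexity_thresholds))  # 22 layers\n",
        "print(layers_for_complexity(0.176, complexity_thresholds))  # 24 layers\n",
        "print(layers_for_complexity(0.306, complexity_thresholds))  # 25 layers\n",
        "```"
      ],
      "metadata": {}
    },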
    {
      "cell_type": "markdown",
      "source": [
        "Verify that the forward function has been correctly replaced.\n",
        "\n",
        "The new forward function is not directly responsible for skipping layers; instead, it computes the prompt's complexity score and updates the activation mask that each attention layer consults to decide whether to run."
      ],
      "metadata": {
        "id": "L_JQVFOxfBDQ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "print(f\"ID of adaptive_model after setup: {id(adaptive_model)}\")\n",
        "print(f\"adaptive_model.forward after setup: {adaptive_model.forward}\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "KY-B02hC_sw8",
        "outputId": "2d7e8c1c-616e-4e8d-f094-40343dab8fb3"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "ID of adaptive_model after setup: 132501248076624\n",
            "adaptive_model.forward after setup: <bound method add_automatic_complexity_computation.<locals>.adaptive_model_forward of LlamaForCausalLM(\n",
            "  (model): LlamaModel(\n",
            "    (embed_tokens): Embedding(128256, 3072)\n",
            "    (layers): ModuleList(\n",
            "      (0-27): 28 x LlamaDecoderLayer(\n",
            "        (self_attn): LlamaAttention(\n",
            "          (q_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
            "          (k_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
            "          (v_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
            "          (o_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
            "        )\n",
            "        (mlp): LlamaMLP(\n",
            "          (gate_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
            "          (up_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
            "          (down_proj): Linear(in_features=8192, out_features=3072, bias=False)\n",
            "          (act_fn): SiLU()\n",
            "        )\n",
            "        (input_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "        (post_attention_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "      )\n",
            "    )\n",
            "    (norm): LlamaRMSNorm((3072,), eps=1e-05)\n",
            "    (rotary_emb): LlamaRotaryEmbedding()\n",
            "  )\n",
            "  (lm_head): Linear(in_features=3072, out_features=128256, bias=False)\n",
            ")>\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# Move the adaptive model to the selected device\n",
        "adaptive_model.to(device)"
      ],
      "metadata": {
        "id": "urM9Q1pB5TXk",
        "outputId": "90b174ce-d20f-4220-a842-34a8bf098faf",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "LlamaForCausalLM(\n",
              "  (model): LlamaModel(\n",
              "    (embed_tokens): Embedding(128256, 3072)\n",
              "    (layers): ModuleList(\n",
              "      (0-27): 28 x LlamaDecoderLayer(\n",
              "        (self_attn): LlamaAttention(\n",
              "          (q_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
              "          (k_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
              "          (v_proj): Linear(in_features=3072, out_features=1024, bias=False)\n",
              "          (o_proj): Linear(in_features=3072, out_features=3072, bias=False)\n",
              "        )\n",
              "        (mlp): LlamaMLP(\n",
              "          (gate_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
              "          (up_proj): Linear(in_features=3072, out_features=8192, bias=False)\n",
              "          (down_proj): Linear(in_features=8192, out_features=3072, bias=False)\n",
              "          (act_fn): SiLU()\n",
              "        )\n",
              "        (input_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
              "        (post_attention_layernorm): LlamaRMSNorm((3072,), eps=1e-05)\n",
              "      )\n",
              "    )\n",
              "    (norm): LlamaRMSNorm((3072,), eps=1e-05)\n",
              "    (rotary_emb): LlamaRotaryEmbedding()\n",
              "  )\n",
              "  (lm_head): Linear(in_features=3072, out_features=128256, bias=False)\n",
              ")"
            ]
          },
          "metadata": {},
          "execution_count": 37
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Inference Test"
      ],
      "metadata": {
        "id": "stZd9hB4geh7"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "It's time to test how the adaptive system works with a couple of prompts.\n",
        "\n",
        "The first prompt is quite simple: it receives a complexity score of 0.176, and the model decides to execute only 24 of its 28 available layers.\n",
        "\n",
        "The effect of this layer reduction is visible both in the generated text, which differs from the original model's, and in the time needed to produce the response.\n",
        "\n",
        "Original response:\n",
        "* Average time over 2 runs: 3501.18 ms\n",
        "* Generated text: \"Don't worry about 5G, it's not coming to the UK until 2020 at the earliest, says Ofcom\\nThe UK's telecoms regulator has said that it doesn't expect to see the next generation of mobile networks in\"\n",
        "\n",
        "Adaptive model response:\n",
        "* Average time over 2 runs: 2945.02 ms\n",
        "* Generated text: \"Don't worry about 3rd party software for this particular task because it can easily be done by using simple tools available in Windows itself. Here we will discuss how to transfer files between two different computers without using any external software or any other similar\"\n",
        "\n",
        "Keep in mind that this notebook is an introduction to the concept, and this small experiment is far from a rigorous performance benchmark. Even so, it lets us observe a real improvement in speed while the model maintains its coherent text-generation capabilities."
      ],
      "metadata": {
        "id": "QyYDqZD0C7iq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "prompt = \"Don't worry about \"\n",
        "generated = get_output(prompt, adaptive_model, num_runs=2)\n",
        "print(f\"Generated text: {generated}\")"
      ],
      "metadata": {
        "id": "OLL0gVF-5zIT",
        "outputId": "a3117c6c-4f1e-4e4d-d880-f10356e579b1",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n",
            "The `seen_tokens` attribute is deprecated and will be removed in v4.41. Use the `cache_position` model input instead.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "--- get_output ENTERED. Prompt (first 30 chars): 'Don't worry about ...' ---\n",
            "AAB Activated: Complexity 0.176 -> 24/28 layers active (85.7%)\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\n",
            "Run 1:\n",
            "Tokenization time: 0.90 ms\n",
            "Generation time: 2956.27 ms\n",
            "Decoding time: 0.21 ms\n",
            "Total time: 2957.48 ms\n",
            "AAB Activated: Complexity 0.176 -> 24/28 layers active (85.7%)\n",
            "\n",
            "Run 2:\n",
            "Tokenization time: 0.53 ms\n",
            "Generation time: 2921.70 ms\n",
            "Decoding time: 0.44 ms\n",
            "Total time: 2922.79 ms\n",
            "\n",
            "Average time over 2 runs: 2940.03 ms\n",
            "Generated text: [\"Don't worry about 3rd party software for this particular task because it can easily be done by using simple tools available in Windows itself. Here we will discuss how to transfer files between two different computers without using any external software or any other similar\", \"Don't worry about 3rd party software for this particular task because it can easily be done by using simple tools available in Windows itself. Here we will discuss how to transfer files between two different computers without using any external software or any other similar\"]\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "This second prompt is more complex than the first, obtaining a score of 0.306. For this prompt, the model decides to use one more layer, raising the number of active layers to 25.\n"
      ],
      "metadata": {
        "id": "AW9q6daA59cy"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "prompt = \"The sky appears blue during the day, during the night you can see wow it is totally different, and \"\n",
        "# The layer 0 trigger should work automatically during generate()\n",
        "generated = get_output(prompt, adaptive_model, num_runs=1)\n",
        "print(f\"Generated text: {generated}\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "yeRwOAGoG-LM",
        "outputId": "d3327dc4-4c0e-4886-848c-3359eaddf8aa"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "--- get_output ENTERED. Prompt (first 30 chars): 'The sky appears blue during th...' ---\n",
            "AAB Activated: Complexity 0.306 -> 25/28 layers active (89.3%)\n",
            "Tokenization time: 1.02 ms\n",
            "Generation time: 1866.48 ms\n",
            "Decoding time: 0.25 ms\n",
            "Total time: 1867.84 ms\n",
            "Generated text: The sky appears blue during the day, during the night you can see wow it is totally different, and  there are many other things that are different between day and night, such as the temperature, the number of people on the streets,\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Optional Manual Testing"
      ],
      "metadata": {
        "id": "ulydyCWk1cvS"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "simple_result = adaptive_model.test_prompt(\"Hi\", verbose=True)"
      ],
      "metadata": {
        "id": "H9gnriZb5iPS",
        "outputId": "d1e80f55-0aa3-496e-a28c-7994eeef0923",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "🧪 Testing prompt: 'Hi'\n",
            "   Complexity: 0.0234\n",
            "   Active layers: 22/28 (78.6%)\n",
            "AAB Activated: Complexity 0.023 -> 22/28 layers active (78.6%)\n",
            "   Executed layers: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 21, 24, 25, 27]\n",
            "   Bypassed layers: [16, 19, 20, 22, 23, 26]\n",
            "   Execution matches mask: True\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "simple_result = adaptive_model.test_prompt(\"Don't worry about  \", verbose=True)\n"
      ],
      "metadata": {
        "id": "2ZdVC6RVBjhj",
        "outputId": "dc6a1c60-65be-45dd-9ef5-fcab7004aac9",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "🧪 Testing prompt: 'Don't worry about  '\n",
            "   Complexity: 0.1757\n",
            "   Active layers: 24/28 (85.7%)\n",
            "AAB Activated: Complexity 0.176 -> 24/28 layers active (85.7%)\n",
            "   Executed layers: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 21, 22, 24, 25, 27]\n",
            "   Bypassed layers: [19, 20, 23, 26]\n",
            "   Execution matches mask: True\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "complex_result = adaptive_model.test_prompt(\n",
        "    \"Analyze the geopolitical implications of quantum computing on global cybersecurity frameworks\",\n",
        "    verbose=True\n",
        ")"
      ],
      "metadata": {
        "id": "8qeCXL085rfL",
        "outputId": "d929e01c-cb73-4b2d-ab0f-e819f5b5dd7f",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "🧪 Testing prompt: 'Analyze the geopolitical implications of quantum c...'\n",
            "   Complexity: 0.2461\n",
            "   Active layers: 24/28 (85.7%)\n",
            "AAB Activated: Complexity 0.246 -> 24/28 layers active (85.7%)\n",
            "   Executed layers: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 21, 22, 24, 25, 27]\n",
            "   Bypassed layers: [19, 20, 23, 26]\n",
            "   Execution matches mask: True\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "debug_info = adaptive_model.get_debug_info()\n",
        "print(f\"   Mask stats: {debug_info['mask_stats']}\")\n",
        "print(f\"   Total execution calls: {debug_info['execution_stats']['total_calls']}\")\n",
        "print(f\"   Most important layers: {debug_info['layers_by_importance']}\")"
      ],
      "metadata": {
        "id": "2FjZSoLc6b4u",
        "outputId": "c432e294-9e22-4a85-d8cc-1c11391ce7d3",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "   Mask stats: {'complexity_score': 0.2461123389241667, 'active_layers': 24, 'total_layers': 28, 'usage_ratio': 0.8571428571428571, 'initialized': True}\n",
            "   Total execution calls: 28\n",
            "   Most important layers: [8, 9, 12, 10, 7, 0, 6, 27, 13, 5]\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "test_prompts = [\n",
        "    (\"Simple\", \"2+2=\"),\n",
        "    (\"Medium\", \"Explain machine learning basics\"),\n",
        "    (\"Complex\", \"Write a comprehensive analysis of the economic implications of artificial intelligence\")\n",
        "]\n",
        "\n",
        "for level, test_prompt in test_prompts:\n",
        "    print(f\"\\n{level}: '{test_prompt[:50]}{'...' if len(test_prompt) > 50 else ''}'\")\n",
        "    result = adaptive_model.test_prompt(test_prompt, verbose=False)\n",
        "    print(f\"   Complexity: {result['complexity']:.3f}\")\n",
        "    print(f\"   Layers: {result['mask_stats']['active_layers']}/{result['mask_stats']['total_layers']} \"\n",
        "          f\"({result['mask_stats']['usage_ratio']:.1%})\")\n",
        "    print(f\"   Executed: {len(result['execution_stats']['layers_executed'])}, \"\n",
        "          f\"Bypassed: {len(result['execution_stats']['layers_bypassed'])}\")"
      ],
      "metadata": {
        "id": "xzENonZe6mRX",
        "outputId": "205c01a2-f344-4a28-9b64-e19d494c8e3d",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\n",
            "Simple: '2+2='\n",
            "AAB Activated: Complexity 0.159 -> 24/28 layers active (85.7%)\n",
            "   Complexity: 0.159\n",
            "   Layers: 24/28 (85.7%)\n",
            "   Executed: 24, Bypassed: 4\n",
            "\n",
            "Medium: 'Explain machine learning basics'\n",
            "AAB Activated: Complexity 0.175 -> 24/28 layers active (85.7%)\n",
            "   Complexity: 0.175\n",
            "   Layers: 24/28 (85.7%)\n",
            "   Executed: 24, Bypassed: 4\n",
            "\n",
            "Complex: 'Write a comprehensive analysis of the economic imp...'\n",
            "AAB Activated: Complexity 0.238 -> 24/28 layers active (85.7%)\n",
            "   Complexity: 0.238\n",
            "   Layers: 24/28 (85.7%)\n",
            "   Executed: 24, Bypassed: 4\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "test_complexity = compute_prompt_complexity_runtime(\"Don't worry about \", adaptive_model, tokenizer, adaptive_config)\n",
        "print(f\"Manual complexity test: {test_complexity}\")"
      ],
      "metadata": {
        "id": "lfM6E-PKp33c",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "532b3c35-d4cc-4f52-c371-74dda2cc1361"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Manual complexity test: 0.17585985783965807\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "inputs = tokenizer(\"Paris is the capital of\", return_tensors='pt').to(device)\n",
        "try:\n",
        "    result = adaptive_model.forward(input_ids=inputs['input_ids'])\n",
        "    print(\"Manual forward call worked\")\n",
        "except Exception as e:\n",
        "    print(f\"Manual forward failed: {e}\")"
      ],
      "metadata": {
        "id": "EjsXiQCMtuEr",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "b0b93479-819c-46a9-8079-aa3d73676bbd"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "AAB Activated: Complexity 0.175 -> 24/28 layers active (85.7%)\n",
            "Manual forward call worked\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Conclusion.\n",
        "\n",
        "If you've made it this far, congratulations! This notebook has been by far the most complex, and the longest, to assemble in the entire Pruning and Optimization section of the Large Language Model Course.\n",
        "\n",
        "After **several days of sweating it out, this notebook introduces, in a simplified way, the first implementation of my approach to adaptive models, which I've called Adaptive Attention Bypass (AAB)**.\n",
        "\n",
        "Although the general idea of adaptive models already exists, my goal was to bring it down to earth: create something simplified that any \"LLM Engineer\" could understand, modify, and, above all, use to create new adaptive models without breaking a sweat!\n",
        "\n",
        "With this first foray into AAB, several noteworthy things have been achieved:\n",
        "* Setting up an AAB system that adapts Llama-3.2 models.\n",
        "* Devising a calibration method to decide which attention layers truly add value and which can be bypassed from time to time. For this, we have used cosine similarity and a few prompts for calibration.\n",
        "* Writing a function that estimates the complexity of each incoming prompt by looking at its word count and the variance of its embeddings.\n",
        "* Modifying the model, changing both its main `forward` and that of each attention layer, so that the adaptation is transparent to anyone using the model.\n",
        "* And the best part: seeing in the tests how the system decides which layers to use and which not to, depending on the prompt! We have already seen that for simple prompts, the model runs a bit faster and still gives a coherent response.\n",
        "\n",
        "I have tried to make the code and explanation of **this version of AAB** as digestible as possible so that everything fits in a notebook and the general idea can be understood in a few hours of study.\n",
        "\n",
        "**AAB** is, so to speak, in its infancy, and this notebook is its introduction to the world. As in any self-respecting research project, a huge part is experimenting, so this is the task that will take the most time in the near future.\n",
        "\n",
        "Honestly, I think the method shows promise. It is very lightweight to implement, both in calibration and in inference. Furthermore, **it does not require any retraining** and can be adjusted with the `adaptive_config.json`.\n",
        "\n",
        "## Future Work (What's coming for AAB!)\n",
        "\n",
        "This notebook contains only a first deep dive into AAB, but there are already several planned next steps:\n",
        "\n",
        "1.  **Taking AAB to New Territories (More and Larger Models)**:\n",
        "    * I'm very curious to see how **this implementation of AAB** performs with other decoder-only models like Mistral (for which there is already a hint in the `detect_model_architecture` code) and perhaps other more exotic ones.\n",
        "    * Testing it with the \"monsters\" of more than 30B or 70B parameters! **This is where I believe attention bypass can make a huge difference in performance**, as demonstrated in works like [\"What Matters in Transformers? Not All Attention Is Needed\"](https://arxiv.org/abs/2406.15786).\n",
        "\n",
        "2.  **Refining Layer Selection: Complexity Level-Specific Activation Masks**:\n",
        "    * A promising line of research is, instead of a single global `layers_by_importance` list, to explore the creation of **multiple `layers_by_importance` lists, each optimized for a specific prompt complexity range or level**.\n",
        "\n",
        "3.  **Exploring Advanced Metrics**:\n",
        "    * Investigating alternative or more sophisticated metrics than cosine similarity for calibrating layer importance in **the AAB system**.\n",
        "\n",
        "4.  **Synergy with Other Optimization Techniques**:\n",
        "    * Studying further how **AAB can be combined** with other techniques such as quantization or different types of structured pruning, looking for potential combined benefits.\n",
        "\n",
        "5.  **Exhaustive and Rigorous Benchmarking**:\n",
        "    * It is essential to establish a robust benchmarking framework to quantitatively measure the savings in latency and resources, as well as the impact on the quality of the model's responses in various tasks and standard NLP benchmarks.\n",
        "\n",
        "In short, **this AAB approach** is a developing project with considerable growth potential. I hope this presentation has sparked your interest and curiosity."
      ],
      "metadata": {
        "id": "qs1IGGbmEwu9"
      }
    },
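    {
      "cell_type": "markdown",
      "source": [
        "As a rough, hypothetical sketch of the complexity heuristic described above (the helper name `sketch_prompt_complexity`, the 50-word saturation point, and the equal weighting are illustrative assumptions, not the exact `compute_prompt_complexity_runtime` used in this notebook):\n"
      ],
      "metadata": {
        "id": "aab_sketch_md"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import torch\n",
        "\n",
        "def sketch_prompt_complexity(prompt, model, tokenizer):\n",
        "    \"\"\"Hypothetical sketch: blend word count and embedding variance into a score.\"\"\"\n",
        "    # Length signal: longer prompts are assumed to be more complex.\n",
        "    length_score = min(len(prompt.split()) / 50.0, 1.0)  # saturates at 50 words\n",
        "\n",
        "    # Spread signal: variance of the prompt's input embeddings.\n",
        "    ids = tokenizer(prompt, return_tensors='pt').input_ids\n",
        "    with torch.no_grad():\n",
        "        embeddings = model.get_input_embeddings()(ids)  # (1, seq_len, hidden)\n",
        "    variance_score = min(embeddings.float().var().item(), 1.0)\n",
        "\n",
        "    # Equal-weight blend, kept in [0, 1].\n",
        "    return 0.5 * length_score + 0.5 * variance_score"
      ],
      "metadata": {
        "id": "aab_sketch_code"
      },
      "execution_count": null,
      "outputs": []
    },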
    {
      "cell_type": "markdown",
      "source": [
        "# Author's Note.\n",
        "\n",
        "In addition to creating content like this notebook and offering it under the MIT license, I have also contributed to repositories such as those of Hugging Face and Google Gemini.\n",
        "\n",
        "I am especially proud of my book: [Large Language Models: Apply and Implement Strategies for Large Language Models](https://amzn.to/3DSepLb) (Apress).\n",
        "\n",
        "You can find it on both [Amazon](https://amzn.to/3DSepLb) and [Springer](https://link.springer.com/book/10.1007/979-8-8688-0515-8), where they often have good deals on the purchase price.\n",
        "\n",
        "If you take a look and end up purchasing it, keep in mind that you can reach out with any questions via the Discussions section of this same repository or on any of my social media channels. I’ll do my best to respond as quickly as possible."
      ],
      "metadata": {
        "id": "YzSndTpuRHkC"
      }
    },
    {
      "cell_type": "code",
      "source": [],
      "metadata": {
        "id": "xwmjkFKV80x0"
      },
      "execution_count": null,
      "outputs": []
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "L4",
      "provenance": [],
      "authorship_tag": "ABX9TyNqgHaHM1ZxIYNsf6CK8Wf9",
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}