{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "T4"
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# AutoThink Example with OptiLLM and Qwen 2.5 0.5B Instruct\n",
        "This notebook is a companion to chapter 14 of the \"Domain-Specific Small Language Models\" [book](https://www.manning.com/books/domain-specific-small-language-models) by Guglielmo Iozzia, [Manning Publications](https://www.manning.com/), 2025.  \n",
        "The code in this notebook is an example of usage of the [AutoThink](https://dx.doi.org/10.2139/ssrn.5253327) technique in [OptiLLM](https://github.com/codelion/optillm/) with the [Qwen 2.5 0.5B instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) model. Hardware acceleration (GPU) is recommended.   \n",
        "More details about the code can be found in the related book's chapter."
      ],
      "metadata": {
        "id": "GC6XKN7ZNrCh"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Install OptiLLM. A session restart is needed at the end of the installation process."
      ],
      "metadata": {
        "id": "kR3i0a59PoVA"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": true,
        "id": "3LEu4w3TTQ56"
      },
      "outputs": [],
      "source": [
        "!pip install optillm"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Define a custom function to download the model checkpoints and the associated tokenizer from the Hugging Face Hub."
      ],
      "metadata": {
        "id": "bY8WiTfNF_t8"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
        "import torch\n",
        "\n",
        "def download_model_from_hf(model_name):\n",
        "    model = AutoModelForCausalLM.from_pretrained(\n",
        "        model_name,\n",
        "        torch_dtype=\"auto\",\n",
        "        device_map=\"auto\"\n",
        "    )\n",
        "    tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
        "\n",
        "    return model, tokenizer\n"
      ],
      "metadata": {
        "id": "fJVXcZPGFdSh"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Download the Qwen 2.5 0.5B Instruct model and its companion tokenizer from the Hugging Face Hub."
      ],
      "metadata": {
        "id": "ti8jlnwlP1pD"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "model_name = \"Qwen/Qwen2.5-0.5B-Instruct\"\n",
        "model, tokenizer = download_model_from_hf(model_name)"
      ],
      "metadata": {
        "id": "qD7PIDCcFvWQ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Provide a prompt (a mathematical task)."
      ],
      "metadata": {
        "id": "tfEFpnRAXdwa"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "messages = [\n",
        "    {\"role\": \"user\", \"content\": \"In a dance class of 20 students, 20% enrolled in contemporary dance, 25% of the remaining enrolled in jazz dance, and the rest enrolled in hip-hop dance. What percentage of the entire students enrolled in hip-hop dance?\"}\n",
        "]"
      ],
      "metadata": {
        "id": "tPDwbsTsXdMW"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Test the model response using different OptiLLM built-in decoding techniques (ThinkDeeper, AutoThink, CoT Decoding, and Entropy Decoding)."
      ],
      "metadata": {
        "id": "56mMCPjnX4bu"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from optillm.thinkdeeper import thinkdeeper_decode\n",
        "\n",
        "result = thinkdeeper_decode(model, tokenizer, messages, {\"do_sample\": True, \"temperature\": 0.1, \"max_new_tokens\": 1024})\n",
        "print(f\"ThinkDeeper Decoding:\\n {result}\")"
      ],
      "metadata": {
        "id": "Bt90nDksS63w"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from optillm.autothink import autothink_decode\n",
        "\n",
        "result = autothink_decode(model, tokenizer, messages, {\"do_sample\": True, \"temperature\": 0.1, \"max_new_tokens\": 1024})\n",
        "print(f\"AutoThink Decoding:\\n {result}\")"
      ],
      "metadata": {
        "id": "7LrcvoKbT_Qw"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from optillm.cot_decoding import cot_decode\n",
        "\n",
        "# Generate the response using CoT decoding\n",
        "result, confidence = cot_decode(model, tokenizer, messages, aggregate_paths=True, temperature=0.1, max_new_tokens=1024)\n",
        "print(f\"CoT Decoding:\\n {result}\")\n",
        "# print(f\"Confidence: {confidence}\")"
      ],
      "metadata": {
        "id": "Iy5a40bQQhHl"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from optillm.entropy_decoding import entropy_decode\n",
        "\n",
        "# Generate the response using Entropy decoding\n",
        "result = entropy_decode(model, tokenizer, messages, temperature=0.1, max_new_tokens=1024)\n",
        "print(f\"\\nEntropy Decoding:\\n {result}\")"
      ],
      "metadata": {
        "id": "lx0jNTB9QmhN"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Do greedy decoding with the same model on the same prompt to compare results."
      ],
      "metadata": {
        "id": "F0348I9HYb2h"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def get_device():\n",
        "  if torch.cuda.is_available():\n",
        "    return torch.device(\"cuda\")\n",
        "  else:\n",
        "    return torch.device(\"cpu\")\n",
        "\n",
        "device = get_device()\n",
        "model = model.to(device)\n",
        "\n",
        "# Prepare input with proper attention mask\n",
        "input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors=\"pt\")\n",
        "attention_mask = torch.ones_like(input_ids)  # Create attention mask\n",
        "input_ids = input_ids.to(device)\n",
        "input_length = input_ids.shape[1]\n",
        "attention_mask = attention_mask.to(device)\n",
        "\n",
        "# Get pad and eos token ids\n",
        "pad_token_id = tokenizer.pad_token_id\n",
        "if pad_token_id is None:\n",
        "    pad_token_id = tokenizer.eos_token_id\n",
        "\n",
        "# Configure generation parameters properly for greedy decoding\n",
        "output_ids = model.generate(\n",
        "    input_ids,\n",
        "    attention_mask=attention_mask,\n",
        "    max_new_tokens=1024,\n",
        "    do_sample=False,     # Greedy decoding\n",
        "    num_beams=1,        # Single beam for greedy\n",
        "    pad_token_id=pad_token_id,\n",
        "    temperature=1.0,    # Remove or set to 1.0 for greedy\n",
        "    top_p=1.0,         # Remove or set to 1.0 for greedy\n",
        "    use_cache=True,    # Enable KV caching for faster generation\n",
        ")\n",
        "\n",
        "output_ids = output_ids.cpu()\n",
        "# Decode only the newly generated tokens\n",
        "response = tokenizer.decode(output_ids[0][input_length:], skip_special_tokens=True)\n",
        "print(f\"Greedy Decoding:\\n {response}\")"
      ],
      "metadata": {
        "id": "0VWGo7SO-APA"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Vanilla Model Inference on GSM8k Samples\n",
        "The next few code cells show how to run inference on some GSM8k dataset samples using the vanilla Qwen 2.5 0.5B Instruct model (without the OptiLLM proxy). You can skip this section if you are interested only in evaluating OptiLLM."
      ],
      "metadata": {
        "id": "yxg6UQztVf5r"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "The code below repeats the model and tokenizer download from the Hugging Face Hub, in case you start executing the notebook from this section. Note that it relies on the download_model_from_hf function defined earlier."
      ],
      "metadata": {
        "id": "qQ6fDi0zedRi"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "model_name = \"Qwen/Qwen2.5-0.5B-Instruct\"\n",
        "model, tokenizer = download_model_from_hf(model_name)"
      ],
      "metadata": {
        "id": "mmRmmvTnVlzj"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Provide some samples from the GSM8k dataset, select one, and tokenize it."
      ],
      "metadata": {
        "id": "dJ2cl9r3elbT"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "prompts = [\n",
        "    'There are 4,000 jelly beans in a jar. If three fourths of the jelly beans are red, and one quarter of the red jelly beans are coconut flavored, how many jelly beans are coconut flavored?',\n",
        "    'There have been 15 \"Where\\'s Waldo?\" books published. Each book has 30 puzzles to find Waldo. The average person takes 3 minutes to find Waldo in a puzzle. How long would it take to find every Waldo?',\n",
        "    'Bart makes a mixtape. The first side has 6 songs. The second side has 4 songs. Each song is 4 minutes. How long is the total tape?'\n",
        "]\n",
        "prompt = prompts[2]\n",
        "# Build the prompt via concatenation to avoid the indentation a\n",
        "# triple-quoted string would embed into the prompt text\n",
        "question = (\n",
        "    f\"Solve this math problem step by step. After solving, provide the final \"\n",
        "    f\"numerical answer after '### ' (three hash symbols and a space).\\n\\n\"\n",
        "    f\"Question: {prompt}\\n\\n\"\n",
        "    f\"Show your work, then give the final answer after '### '.\"\n",
        ")\n",
        "\n",
        "messages = [\n",
        "    {\"role\": \"system\", \"content\": \"You are a helpful AI assistant focused on providing precise answers in the requested format.\"},\n",
        "    {\"role\": \"user\", \"content\": question}\n",
        "]\n",
        "text = tokenizer.apply_chat_template(\n",
        "    messages,\n",
        "    tokenize=False,\n",
        "    add_generation_prompt=True\n",
        ")\n",
        "model_inputs = tokenizer([text], return_tensors=\"pt\").to(model.device)"
      ],
      "metadata": {
        "id": "KNw5w0gcVupv"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Generate the model's response for the selected GSM8k prompt."
      ],
      "metadata": {
        "id": "MkPuNLY8e7bX"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "generated_ids = model.generate(\n",
        "    **model_inputs,\n",
        "    do_sample=True,\n",
        "    temperature=0.1,\n",
        "    max_new_tokens=1024\n",
        ")\n",
        "generated_ids = [\n",
        "    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)\n",
        "]\n",
        "\n",
        "response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]"
      ],
      "metadata": {
        "id": "HWVlEjorXROs"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "response"
      ],
      "metadata": {
        "id": "samjeZO_VpKc"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Benchmark (AutoThink in OptiLLM on GSM8k)"
      ],
      "metadata": {
        "id": "ct3mUDRhw96_"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install -U datasets"
      ],
      "metadata": {
        "id": "WYYedaIxKoCS"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Set the logging level to INFO to reduce the number of output messages during the benchmark."
      ],
      "metadata": {
        "id": "PYhrEGpOhD8F"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import logging\n",
        "\n",
        "logging.basicConfig(\n",
        "    level=logging.INFO,\n",
        "    format='%(asctime)s - %(levelname)s - %(message)s'\n",
        ")\n",
        "logger = logging.getLogger(__name__)"
      ],
      "metadata": {
        "id": "yRkwQEz5CmW1"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Define a custom function to load the benchmarking dataset, [OptiLLMBench](https://huggingface.co/datasets/codelion/optillmbench). It contains 500 selected challenging problems across multiple datasets (competition_math, HumanEval, GSM8K, MMLU, BBH)."
      ],
      "metadata": {
        "id": "ABTy8XbLhQoL"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import datasets\n",
        "from datasets import load_dataset\n",
        "\n",
        "def load_optillm_bench() -> datasets.Dataset:\n",
        "    \"\"\"Load the OptiLLM Bench dataset.\"\"\"\n",
        "    try:\n",
        "        dataset = load_dataset(\"codelion/optillmbench\")\n",
        "        gsm8k_dataset = dataset[\"test\"].filter(lambda example: example[\"category\"] == \"gsm8k\")\n",
        "        return gsm8k_dataset\n",
        "    except Exception as e:\n",
        "        logger.error(f\"Error loading dataset: {e}\")\n",
        "        raise"
      ],
      "metadata": {
        "id": "AVA8GjwGxchV"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Load the dataset."
      ],
      "metadata": {
        "id": "HLanuygIiCjs"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "dataset = load_optillm_bench()"
      ],
      "metadata": {
        "id": "aiy_ywc2xFy7"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Define a custom function to build the appropriate prompt for each of the categories included in the OptiLLMBench dataset."
      ],
      "metadata": {
        "id": "Jz0gvcOfiE6J"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def get_prompt_for_category(question: str, category: str) -> str:\n",
        "    \"\"\"\n",
        "    Generate appropriate prompt based on category.\n",
        "    \"\"\"\n",
        "    if category == \"gsm8k\":\n",
        "        return (\n",
        "            f\"Solve this math problem step by step. After solving, provide the final \"\n",
        "            f\"numerical answer after '### ' (three hash symbols and a space).\\n\\n\"\n",
        "            f\"Question: {question}\\n\\n\"\n",
        "            f\"Show your work, then give the final answer after '### '.\"\n",
        "        )\n",
        "    elif category == \"mmlu_math\":\n",
        "        return (\n",
        "            f\"Solve this math problem. Provide only the answer with no explanation.\\n\\n\"\n",
        "            f\"Question: {question}\"\n",
        "        )\n",
        "    elif category == \"boolq\":\n",
        "        return (\n",
        "            f\"Answer this yes/no question with only 'yes' or 'no'.\\n\\n\"\n",
        "            f\"Question: {question}\"\n",
        "        )\n",
        "    elif category == \"aqua_rat\":\n",
        "        return (\n",
        "            f\"Choose the correct answer. Provide only the letter choice with no explanation.\\n\\n\"\n",
        "            f\"Question: {question}\"\n",
        "        )\n",
        "    else:\n",
        "        return f\"Question: {question}\""
      ],
      "metadata": {
        "id": "GQwDhTwjzDWh"
      },
      "execution_count": null,
      "outputs": []
    },
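    {
      "cell_type": "markdown",
      "source": [
        "As a quick illustrative check (not part of the book's code; the sample question below is made up), print the prompt built for the gsm8k category."
      ],
      "metadata": {
        "id": "PrChkDemoMd01"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Illustrative sanity check: inspect the prompt built for the gsm8k category\n",
        "sample_question = \"A farmer has 12 apples and gives away 5. How many are left?\"\n",
        "print(get_prompt_for_category(sample_question, \"gsm8k\"))"
      ],
      "metadata": {
        "id": "PrChkDemoCd01"
      },
      "execution_count": null,
      "outputs": []
    },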
    {
      "cell_type": "markdown",
      "source": [
        "Define a custom function to remove the thinking blocks from the model responses."
      ],
      "metadata": {
        "id": "xStZX_ogiUZY"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def remove_thinking_blocks(text: str) -> str:\n",
        "    \"\"\"\n",
        "    Remove <think>...</think> blocks from the response.\n",
        "    If there's a </think> tag, only keep the content after it.\n",
        "    \"\"\"\n",
        "    if not text:\n",
        "        return text\n",
        "\n",
        "    # Check if there's a thinking block\n",
        "    if '</think>' in text:\n",
        "        # Get everything after the last </think> tag\n",
        "        parts = text.split('</think>')\n",
        "        return parts[-1].strip()\n",
        "\n",
        "    return text"
      ],
      "metadata": {
        "id": "clebsJR6zMey"
      },
      "execution_count": null,
      "outputs": []
    },
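    {
      "cell_type": "markdown",
      "source": [
        "A quick illustrative check of remove_thinking_blocks on a hand-written sample string (not from the book's code)."
      ],
      "metadata": {
        "id": "RmThnkDemoMd01"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Illustrative sanity check: only the text after the last </think> tag is kept\n",
        "sample = \"<think>Let me reason about this...</think>The answer is 16.\"\n",
        "print(remove_thinking_blocks(sample))  # prints: The answer is 16."
      ],
      "metadata": {
        "id": "RmThnkDemoCd01"
      },
      "execution_count": null,
      "outputs": []
    },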
    {
      "cell_type": "markdown",
      "source": [
        "Define a custom function to extract the numerical answer (the value after '### ') from responses to GSM8K questions."
      ],
      "metadata": {
        "id": "TEPoC4S8ia3a"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import re\n",
        "from typing import Optional\n",
        "\n",
        "def extract_gsm8k_answer(text: str) -> Optional[float]:\n",
        "    \"\"\"Extract numerical answer after ### from GSM8K responses.\"\"\"\n",
        "    match = re.search(r'###\\s*(-?\\d*\\.?\\d+)', text)\n",
        "    if match:\n",
        "        try:\n",
        "            return float(match.group(1))\n",
        "        except ValueError:\n",
        "            return None\n",
        "    return None"
      ],
      "metadata": {
        "id": "86QAftzHAXJY"
      },
      "execution_count": null,
      "outputs": []
    },
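    {
      "cell_type": "markdown",
      "source": [
        "A quick illustrative check of extract_gsm8k_answer on hand-written sample responses (not from the book's code)."
      ],
      "metadata": {
        "id": "ExtAnsDemoMd01"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Illustrative sanity check: the number after '### ' is extracted as a float\n",
        "print(extract_gsm8k_answer(\"6 + 10 = 16\\n### 16\"))   # prints: 16.0\n",
        "print(extract_gsm8k_answer(\"no final answer here\"))  # prints: None"
      ],
      "metadata": {
        "id": "ExtAnsDemoCd01"
      },
      "execution_count": null,
      "outputs": []
    },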
    {
      "cell_type": "markdown",
      "source": [
        "Define a function to extract the correct answer from a multiple-choice question."
      ],
      "metadata": {
        "id": "-sy6Bp-0ilmq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import re\n",
        "\n",
        "def extract_choice_index_from_question(question: str, answer: str) -> int:\n",
        "    \"\"\"\n",
        "    Extract the index of the correct answer from a multiple-choice question.\n",
        "\n",
        "    Args:\n",
        "        question: The question text containing choices\n",
        "        answer: The correct answer (just the text, no index)\n",
        "\n",
        "    Returns:\n",
        "        int: The index of the correct answer, or -1 if not found\n",
        "    \"\"\"\n",
        "    # Look for a pattern like \"N. answer\" in the question\n",
        "    answer_clean = answer.strip().lower()\n",
        "\n",
        "    # Debug logging for critical examples\n",
        "    logger.debug(f\"Looking for answer: '{answer_clean}' in question\")\n",
        "\n",
        "    # Check for \"Choices:\" marker in the question\n",
        "    if \"choices:\" in question.lower():\n",
        "        # Split the question by lines after \"Choices:\"\n",
        "        choices_section = question.lower().split(\"choices:\")[1].strip()\n",
        "\n",
        "        # Log the choices section\n",
        "        logger.debug(f\"Choices section: '{choices_section}'\")\n",
        "\n",
        "        # If it's all on one line, use a more comprehensive regex\n",
        "        if '\\n' not in choices_section:\n",
        "            # This pattern matches \"N. text\" where N is a digit and text is any text up to the next number or end\n",
        "            all_choices = re.findall(r'(\\d+)\\s*\\.\\s*([^0-9.]+?)(?=\\s*\\d+\\s*\\.|$)', choices_section)\n",
        "\n",
        "            logger.debug(f\"Single line choices found: {all_choices}\")\n",
        "\n",
        "            for idx, choice_text in all_choices:\n",
        "                choice_text_clean = choice_text.strip()\n",
        "                if choice_text_clean.lower() == answer_clean:\n",
        "                    logger.debug(f\"Found match at index {idx}: '{choice_text_clean}'\")\n",
        "                    return int(idx)\n",
        "\n",
        "        # Try splitting by newlines\n",
        "        choices = choices_section.split(\"\\n\")\n",
        "\n",
        "        for i, choice in enumerate(choices):\n",
        "            choice = choice.strip()\n",
        "            if not choice:\n",
        "                continue\n",
        "\n",
        "            logger.debug(f\"Checking choice {i}: '{choice}'\")\n",
        "\n",
        "            # Try to extract the index and choice text\n",
        "            match = re.match(r'\\s*(\\d+)\\s*\\.\\s*(.*)', choice)\n",
        "            if match:\n",
        "                idx = int(match.group(1))\n",
        "                choice_text = match.group(2).strip()\n",
        "\n",
        "                logger.debug(f\"Parsed choice: index={idx}, text='{choice_text}'\")\n",
        "\n",
        "                if choice_text.lower() == answer_clean:\n",
        "                    logger.debug(f\"Found exact match at index {idx}\")\n",
        "                    return idx\n",
        "\n",
        "        # Fallback: just look for any occurrence of the number followed by the answer\n",
        "        pattern = r'(\\d+)\\s*\\.\\s*' + re.escape(answer_clean)\n",
        "        match = re.search(pattern, choices_section)\n",
        "        if match:\n",
        "            logger.debug(f\"Fallback match found at index {match.group(1)}\")\n",
        "            return int(match.group(1))\n",
        "\n",
        "    logger.debug(\"No match found for answer in choices\")\n",
        "    return -1"
      ],
      "metadata": {
        "id": "ko8lWN6YAhM0"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Define a function to check if a response from the model is purely numerical."
      ],
      "metadata": {
        "id": "8ugWDewaiucR"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from typing import Tuple\n",
        "\n",
        "def is_numeric_only_response(response: str) -> Tuple[bool, int]:\n",
        "    \"\"\"\n",
        "    Check if the response is just a numeric value, possibly with whitespace and newlines.\n",
        "\n",
        "    Args:\n",
        "        response: The response text to check\n",
        "\n",
        "    Returns:\n",
        "        Tuple of (is_numeric, value)\n",
        "    \"\"\"\n",
        "    # Strip all whitespace, including newlines\n",
        "    clean_response = re.sub(r'\\s', '', response)\n",
        "\n",
        "    # Check if it's just a number\n",
        "    if clean_response.isdigit():\n",
        "        return True, int(clean_response)\n",
        "\n",
        "    return False, -1"
      ],
      "metadata": {
        "id": "7r7JP7DAAnze"
      },
      "execution_count": null,
      "outputs": []
    },
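    {
      "cell_type": "markdown",
      "source": [
        "A quick illustrative check of is_numeric_only_response (not from the book's code)."
      ],
      "metadata": {
        "id": "NumRespDemoMd01"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Illustrative sanity check: whitespace is stripped before the digit test\n",
        "print(is_numeric_only_response(\"  3 \\n\"))    # prints: (True, 3)\n",
        "print(is_numeric_only_response(\"3 apples\"))  # prints: (False, -1)"
      ],
      "metadata": {
        "id": "NumRespDemoCd01"
      },
      "execution_count": null,
      "outputs": []
    },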
    {
      "cell_type": "markdown",
      "source": [
        "Define a function that uses the custom functions implemented above to evaluate the responses from the model."
      ],
      "metadata": {
        "id": "AY2OT9b8tjU0"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def evaluate_response(response: str, ground_truth: str, category: str, question: str = None) -> bool:\n",
        "    \"\"\"\n",
        "    Evaluate if the response matches the ground truth based on category.\n",
        "\n",
        "    Args:\n",
        "        response: Model's response\n",
        "        ground_truth: Correct answer\n",
        "        category: Problem category (gsm8k, mmlu_math, boolq, aqua_rat)\n",
        "        question: Original question text, needed for MMLU evaluation\n",
        "\n",
        "    Returns:\n",
        "        bool: Whether the response is correct\n",
        "    \"\"\"\n",
        "    if not response or not ground_truth:\n",
        "        return False\n",
        "\n",
        "    # First, remove any thinking blocks\n",
        "    response = remove_thinking_blocks(response)\n",
        "\n",
        "    if category == \"gsm8k\":\n",
        "        # Extract numerical answers after ### and compare\n",
        "        response_num = extract_gsm8k_answer(response)\n",
        "        ground_truth_num = extract_gsm8k_answer(ground_truth)\n",
        "\n",
        "        if response_num is None or ground_truth_num is None:\n",
        "            return False\n",
        "\n",
        "        # Compare with small tolerance for floating point\n",
        "        return abs(response_num - ground_truth_num) < 1e-6\n",
        "    elif category == \"mmlu_math\":\n",
        "        # Special handling for MMLU-math multiple choice questions\n",
        "        response_clean = response.strip().lower()\n",
        "        ground_truth_clean = ground_truth.strip().lower()\n",
        "\n",
        "        # Case 1: Exact match of answer text\n",
        "        if response_clean == ground_truth_clean:\n",
        "            logger.debug(\"Exact text match\")\n",
        "            return True\n",
        "\n",
        "        # For other cases, we need to find what index corresponds to the ground truth\n",
        "        if question:\n",
        "            correct_index = extract_choice_index_from_question(question, ground_truth)\n",
        "\n",
        "            if correct_index >= 0:\n",
        "                # Case 2: Check if response is just the digit (most common LLM response for indices)\n",
        "                is_numeric, value = is_numeric_only_response(response)\n",
        "                if is_numeric and value == correct_index:\n",
        "                    logger.debug(f\"Numeric match: response '{response}' -> {value} matches index {correct_index}\")\n",
        "                    return True\n",
        "\n",
        "                # Case 3: Check if response is \"index. answer\"\n",
        "                if re.search(fr\"{correct_index}\\s*\\.\\s*{re.escape(ground_truth_clean)}\", response_clean):\n",
        "                    logger.debug(\"Pattern match for 'index. answer'\")\n",
        "                    return True\n",
        "\n",
        "                # Case 4: Check if response contains both the index and the answer text\n",
        "                if str(correct_index) in response_clean and ground_truth_clean in response_clean:\n",
        "                    logger.debug(\"Contains both index and answer\")\n",
        "                    return True\n",
        "\n",
        "        return False\n",
        "    else:\n",
        "        # Clean up both strings for comparison\n",
        "        response_clean = response.strip().lower()\n",
        "        ground_truth_clean = ground_truth.strip().lower()\n",
        "        return response_clean == ground_truth_clean"
      ],
      "metadata": {
        "id": "yiNUkZdNANqU"
      },
      "execution_count": null,
      "outputs": []
    },
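    {
      "cell_type": "markdown",
      "source": [
        "A quick illustrative check of evaluate_response on a hand-written gsm8k-style example (not from the book's code)."
      ],
      "metadata": {
        "id": "EvalRespDemoMd01"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Illustrative sanity check: both strings yield 16.0 after '### ', so they match\n",
        "demo_response = \"6 + 10 = 16\\n### 16\"\n",
        "demo_ground_truth = \"### 16\"\n",
        "print(evaluate_response(demo_response, demo_ground_truth, \"gsm8k\"))  # prints: True"
      ],
      "metadata": {
        "id": "EvalRespDemoCd01"
      },
      "execution_count": null,
      "outputs": []
    },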
    {
      "cell_type": "markdown",
      "source": [
        "Define a custom function that iterates through the samples in the dataset, runs the model with the AutoThink technique on each of them, evaluates the responses, and returns detailed results along with evaluation metrics."
      ],
      "metadata": {
        "id": "RuvQZEaBt0qJ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import time\n",
        "from typing import Dict, List, Any\n",
        "from optillm.autothink import autothink_decode\n",
        "from tqdm import tqdm\n",
        "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
        "\n",
        "def evaluate_model(\n",
        "    model: AutoModelForCausalLM,\n",
        "    tokenizer: AutoTokenizer,\n",
        "    dataset: datasets.Dataset,\n",
        "    max_samples: int = None\n",
        ") -> Tuple[Dict[str, float], List[Dict[str, Any]]]:\n",
        "    \"\"\"\n",
        "    Evaluate a model on the dataset using a specific approach.\n",
        "    Returns metrics and detailed results.\n",
        "    \"\"\"\n",
        "    metrics = {\n",
        "        \"total_correct\": 0,\n",
        "        \"total_time\": 0,\n",
        "        \"samples\": 0,\n",
        "    }\n",
        "\n",
        "    # Initialize category-specific metrics\n",
        "    category_metrics = {}\n",
        "\n",
        "    # Detailed results for each example\n",
        "    detailed_results = []\n",
        "\n",
        "    # Prepare the dataset\n",
        "    examples = dataset if max_samples is None else dataset.select(range(max_samples))\n",
        "\n",
        "    for example in tqdm(examples, desc=f\"Evaluating\"):\n",
        "        try:\n",
        "            # Get appropriate prompt for the category\n",
        "            prompt = get_prompt_for_category(example['question'], example['category'])\n",
        "\n",
        "            # Record start time\n",
        "            start_time = time.time()\n",
        "\n",
        "            # Do inference\n",
        "            messages=[\n",
        "                    {\"role\": \"system\", \"content\": \"You are a helpful AI assistant focused on providing precise answers in the requested format.\"},\n",
        "                    {\"role\": \"user\", \"content\": prompt}\n",
        "                ]\n",
        "            response = autothink_decode(model, tokenizer, messages, {\"do_sample\": True, \"temperature\": 0.1, \"max_new_tokens\": 1024})\n",
        "            #print(response)\n",
        "\n",
        "            # Calculate time taken\n",
        "            time_taken = time.time() - start_time\n",
        "\n",
        "            # Get the response text\n",
        "            response_text = response\n",
        "\n",
        "            # Also store the raw response for reference\n",
        "            raw_response = response_text\n",
        "\n",
        "            # Process the response to remove thinking blocks\n",
        "            processed_response = remove_thinking_blocks(response_text)\n",
        "\n",
        "            # Evaluate the processed response\n",
        "            is_correct = evaluate_response(\n",
        "                processed_response,\n",
        "                example['answer'],\n",
        "                example['category'],\n",
        "                example['question']  # Pass the question for MMLU evaluation\n",
        "            )\n",
        "\n",
        "            # Update metrics\n",
        "            metrics[\"total_correct\"] += int(is_correct)\n",
        "            metrics[\"total_time\"] += time_taken\n",
        "            metrics[\"samples\"] += 1\n",
        "\n",
        "            # Update category metrics\n",
        "            if example['category'] not in category_metrics:\n",
        "                category_metrics[example['category']] = {\n",
        "                    \"correct\": 0,\n",
        "                    \"total\": 0,\n",
        "                    \"time\": 0\n",
        "                }\n",
        "            category_metrics[example['category']][\"correct\"] += int(is_correct)\n",
        "            category_metrics[example['category']][\"total\"] += 1\n",
        "            category_metrics[example['category']][\"time\"] += time_taken\n",
        "\n",
        "            # Check if thinking blocks were removed\n",
        "            has_thinking = '</think>' in raw_response\n",
        "\n",
        "            # Record detailed result\n",
        "            detailed_results.append({\n",
        "                \"id\": example['id'],\n",
        "                \"category\": example['category'],\n",
        "                \"correct\": is_correct,\n",
        "                \"time_taken\": time_taken,\n",
        "                \"raw_response\": raw_response,\n",
        "                \"processed_response\": processed_response if has_thinking else None,\n",
        "                \"has_thinking\": has_thinking,\n",
        "                \"ground_truth\": example['answer']\n",
        "            })\n",
        "\n",
        "        except Exception as e:\n",
        "            logger.error(f\"Error processing example {example['id']}: {e}\")\n",
        "            continue\n",
        "\n",
        "    # Calculate final metrics\n",
        "    final_metrics = {\n",
        "        \"accuracy\": metrics[\"total_correct\"] / metrics[\"samples\"] if metrics[\"samples\"] > 0 else 0,\n",
        "        \"average_time\": metrics[\"total_time\"] / metrics[\"samples\"] if metrics[\"samples\"] > 0 else 0,\n",
        "        \"total_time\": metrics[\"total_time\"],\n",
        "        \"total_samples\": metrics[\"samples\"],\n",
        "    }\n",
        "\n",
        "    # Add category-specific metrics\n",
        "    for category, cat_metrics in category_metrics.items():\n",
        "        final_metrics[f\"{category}_accuracy\"] = cat_metrics[\"correct\"] / cat_metrics[\"total\"]\n",
        "        final_metrics[f\"{category}_average_time\"] = cat_metrics[\"time\"] / cat_metrics[\"total\"]\n",
        "\n",
        "    return final_metrics, detailed_results"
      ],
      "metadata": {
        "id": "QVP-1sL1x0Hh"
      },
      "execution_count": null,
      "outputs": []
    },
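    {
      "cell_type": "markdown",
      "source": [
        "For illustration, stripping a `<think>...</think>` block from a model response can be done with a simple regular expression. The function below is a minimal, self-contained stand-in for the `remove_thinking_blocks` helper used above, not its actual implementation."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import re\n",
        "\n",
        "def strip_think(text: str) -> str:\n",
        "    # Remove any <think>...</think> blocks and trim surrounding whitespace.\n",
        "    return re.sub(r\"<think>.*?</think>\", \"\", text, flags=re.DOTALL).strip()\n",
        "\n",
        "strip_think(\"<think>chain of thought here</think>The answer is 42.\")"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },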
    {
      "cell_type": "markdown",
      "source": [
        "Define a custom function to save the evaluation results to files."
      ],
      "metadata": {
        "id": "JrCih-t0u_2h"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import json\n",
        "import os\n",
        "\n",
        "import pandas as pd\n",
        "from datetime import datetime\n",
        "\n",
        "def save_results(metrics: Dict[str, float], detailed_results: List[Dict[str, Any]],\n",
        "                model: str, output_dir: str):\n",
        "    \"\"\"Save evaluation results to files.\"\"\"\n",
        "    timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
        "\n",
        "    # Create model-specific directory\n",
        "    model_dir = os.path.join(output_dir, model.replace('/', '_'))\n",
        "    os.makedirs(model_dir, exist_ok=True)\n",
        "\n",
        "    base_filename = os.path.join(model_dir, timestamp)\n",
        "\n",
        "    # Save metrics\n",
        "    with open(f\"{base_filename}_metrics.json\", \"w\") as f:\n",
        "        json.dump(metrics, f, indent=2)\n",
        "\n",
        "    # Save detailed results\n",
        "    with open(f\"{base_filename}_detailed.json\", \"w\") as f:\n",
        "        json.dump(detailed_results, f, indent=2)\n",
        "\n",
        "    # Create a summary DataFrame for easier analysis\n",
        "    df = pd.DataFrame([\n",
        "        {k: v for k, v in result.items() if k != 'raw_response' and k != 'processed_response'}\n",
        "        for result in detailed_results\n",
        "    ])\n",
        "    df.to_csv(f\"{base_filename}_summary.csv\", index=False)\n",
        "\n",
        "    logger.info(f\"Results saved to {base_filename}_*\")"
      ],
      "metadata": {
        "id": "QiD4ZDQCBMyc"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Define a custom function to generate a report from the evaluation metrics."
      ],
      "metadata": {
        "id": "VAa221InvJq3"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import pandas as pd\n",
        "from datetime import datetime\n",
        "\n",
        "def generate_report(all_metrics: Dict[str, float], output_dir: str):\n",
        "    \"\"\"Generate a Markdown report summarizing the evaluation metrics.\"\"\"\n",
        "    report = []\n",
        "\n",
        "    # Header\n",
        "    report.append(\"# OptiLLM Bench Evaluation Report\")\n",
        "    report.append(f\"Generated on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\\n\")\n",
        "\n",
        "    # Overall Results Table\n",
        "    report.append(\"## Overall Results\")\n",
        "    headers = [\"Accuracy\", \"Avg Time (s)\", \"Total Time (s)\"]\n",
        "    rows = []\n",
        "\n",
        "    rows.append([\n",
        "        f\"{all_metrics['accuracy']*100:.2f}%\",\n",
        "        f\"{all_metrics['average_time']:.2f}\",\n",
        "        f\"{all_metrics['total_time']:.2f}\"\n",
        "    ])\n",
        "\n",
        "    # Convert to DataFrame for nice formatting\n",
        "    df = pd.DataFrame(rows, columns=headers)\n",
        "    report.append(df.to_markdown())\n",
        "\n",
        "    # Category-wise Results\n",
        "    report.append(\"\\n## Results by Category\")\n",
        "    categories = [\"gsm8k\", \"mmlu_math\", \"boolq\", \"aqua_rat\"]\n",
        "\n",
        "    for category in categories:\n",
        "        report.append(f\"\\n### {category.upper()}\")\n",
        "        headers = [\"Accuracy\", \"Avg Time (s)\"]\n",
        "        rows = []\n",
        "        if f\"{category}_accuracy\" in all_metrics:\n",
        "            rows.append([\n",
        "                f\"{all_metrics[f'{category}_accuracy']*100:.2f}%\",\n",
        "                f\"{all_metrics[f'{category}_average_time']:.2f}\"\n",
        "            ])\n",
        "\n",
        "        df = pd.DataFrame(rows, columns=headers)\n",
        "        report.append(df.to_markdown())\n",
        "\n",
        "    # Save report\n",
        "    report_path = f\"{output_dir}/evaluation_report.md\"\n",
        "    with open(report_path, \"w\") as f:\n",
        "        f.write(\"\\n\\n\".join(report))\n",
        "\n",
        "    logger.info(f\"Report saved to {report_path}\")"
      ],
      "metadata": {
        "id": "pjexI93UBecg"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "model_name = \"Qwen/Qwen2.5-0.5B-Instruct\"\n",
        "model, tokenizer = download_model_from_hf(model_name)"
      ],
      "metadata": {
        "id": "Cek9UD7sGv9f"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Run the model evaluation on the downloaded dataset."
      ],
      "metadata": {
        "id": "4whP1k9avWhr"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "\n",
        "output_dir = \"results\"\n",
        "os.makedirs(output_dir, exist_ok=True)\n",
        "try:\n",
        "    metrics, detailed_results = evaluate_model(\n",
        "        model,\n",
        "        tokenizer,\n",
        "        dataset,\n",
        "        28\n",
        "    )\n",
        "\n",
        "    save_results(metrics, detailed_results, model_name,\n",
        "                 output_dir)\n",
        "\n",
        "    logger.info(\"Completed evaluation.\")\n",
        "    logger.info(f\"Accuracy: {metrics['accuracy']*100:.2f}%\")\n",
        "    logger.info(f\"Average time per sample: {metrics['average_time']:.2f}s\")\n",
        "\n",
        "except Exception as e:\n",
        "    logger.error(f\"Error evaluating: {e}\")"
      ],
      "metadata": {
        "id": "YPQpKuFPBIPn"
      },
      "execution_count": null,
      "outputs": []
    },
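    {
      "cell_type": "markdown",
      "source": [
        "As a sanity check, the metrics file just written by `save_results` can be reloaded from disk. This is a sketch; it assumes at least one evaluation run completed and saved its results."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import glob\n",
        "import json\n",
        "import os\n",
        "\n",
        "# Locate the most recent metrics file written by save_results.\n",
        "model_dir = os.path.join(output_dir, model_name.replace('/', '_'))\n",
        "metrics_files = sorted(glob.glob(os.path.join(model_dir, '*_metrics.json')))\n",
        "if metrics_files:\n",
        "    with open(metrics_files[-1]) as f:\n",
        "        reloaded_metrics = json.load(f)\n",
        "    print(reloaded_metrics)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },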
    {
      "cell_type": "markdown",
      "source": [
        "Display the evaluation metrics."
      ],
      "metadata": {
        "id": "OOIW-2zpvdr6"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "metrics"
      ],
      "metadata": {
        "id": "pAjKOpfdTNv0"
      },
      "execution_count": null,
      "outputs": []
    },
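    {
      "cell_type": "markdown",
      "source": [
        "Optionally, the per-category accuracies can be visualized with a quick bar chart. This is a sketch; it assumes the `metrics` dictionary returned by `evaluate_model` is in scope and that matplotlib is available (as it is on Colab)."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import matplotlib.pyplot as plt\n",
        "\n",
        "categories = [\"gsm8k\", \"mmlu_math\", \"boolq\", \"aqua_rat\"]\n",
        "# Missing categories default to 0% accuracy.\n",
        "accuracies = [metrics.get(f\"{c}_accuracy\", 0) * 100 for c in categories]\n",
        "\n",
        "plt.bar(categories, accuracies)\n",
        "plt.ylabel(\"Accuracy (%)\")\n",
        "plt.title(\"Accuracy by category\")\n",
        "plt.show()"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },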
    {
      "cell_type": "markdown",
      "source": [
        "Generate the final report."
      ],
      "metadata": {
        "id": "U-hrMwd7vgxM"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "generate_report(metrics, os.path.join(output_dir, model_name.replace('/', '_')))"
      ],
      "metadata": {
        "id": "fuCEJpWWBYls"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}