{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "23a846d8",
   "metadata": {},
   "source": [
    "<center>\n",
    "    <p style=\"text-align:center\">\n",
    "        <img alt=\"arize logo\" src=\"https://storage.googleapis.com/arize-assets/arize-logo-white.jpg\" width=\"200\"/>\n",
    "        <br>\n",
    "        <img alt=\"phoenix logo\" src=\"https://storage.googleapis.com/arize-assets/phoenix/assets/phoenix-logo-light.svg\" width=\"200\"/>\n",
    "        <br>\n",
    "        <a href=\"https://arize.com/docs/phoenix/\">Docs</a>\n",
    "        |\n",
    "        <a href=\"https://github.com/Arize-ai/phoenix\">GitHub</a>\n",
    "        |\n",
    "        <a href=\"https://arize-ai.slack.com/join/shared_invite/zt-2w57bhem8-hq24MB6u7yE_ZF_ilOYSBw#/shared-invite/email\">Community</a>\n",
    "    </p>\n",
    "</center>\n",
    "<h1 align=\"center\"> Judge Prompt Comparison: Simple/Complex × Reasoning/Non-Reasoning Models </h1>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d4e7025",
   "metadata": {},
   "source": [
    "This notebook uses **Arize Phoenix `llm_classify`** to evaluate tool-calling predictions on the **Berkeley Function Calling Leaderboard (BFCL)** dataset with:\n",
    "- a **simple** binary prompt (`Yes`/`No`), and\n",
    "- a **complex** multi-class prompt (`correct` / `partially_correct` / `incorrect`).\n",
    "\n",
    "We keep the setup minimal and focused on classification-style LLM-as-a-judge, running each prompt with both a non-reasoning and a reasoning judge model.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "052dc541",
   "metadata": {},
   "source": [
    "## Install & Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b00e731",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "%pip -q install --upgrade pandas datasets arize-phoenix openai tiktoken nest_asyncio"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6eab650b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import json\n",
    "import os\n",
    "import random\n",
    "import re\n",
    "import urllib.request\n",
    "from pathlib import Path\n",
    "\n",
    "import nest_asyncio\n",
    "import pandas as pd\n",
    "\n",
    "from phoenix.evals import llm_classify\n",
    "from phoenix.evals.models import OpenAIModel\n",
    "\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6a7185b",
   "metadata": {},
   "source": [
    "## Configure Judge Models\n",
    "We will use `gpt-4o-mini` as the non-reasoning judge and `o3` as the reasoning judge. Make sure your `OPENAI_API_KEY` is set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5f9c97c",
   "metadata": {},
   "outputs": [],
   "source": [
    "if not (openai_api_key := os.getenv(\"OPENAI_API_KEY\")):\n",
    "    openai_api_key = getpass.getpass(\"🔑 Enter your OpenAI API key: \")\n",
    "os.environ[\"OPENAI_API_KEY\"] = openai_api_key\n",
    "\n",
    "non_reasoning_model = OpenAIModel(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    temperature=0,\n",
    ")\n",
    "reasoning_model = OpenAIModel(\n",
    "    model=\"o3\",\n",
    "    temperature=0,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e8b57f78",
   "metadata": {},
   "source": [
    "## Load BFCL (V3 Exec Splits)\n",
    "\n",
    "> The Berkeley function calling leaderboard is a live leaderboard to evaluate the ability of different LLMs to call functions (also referred to as tools). We built this dataset from our learnings to be representative of most users' function calling use-cases, for example, in agents, as a part of enterprise workflows, etc. To this end, our evaluation dataset spans diverse categories, and across multiple languages.\n",
    "> \n",
    "\n",
    "The `exec_simple` dataset is where the 'single function evaluation contains the simplest but most commonly seen format, where the user supplies a single JSON function document, with one and only one function call being invoked.'\n",
    "\n",
    "The `exec_multiple` dataset is where the 'multiple function category contains a user question that only invokes one function call out of 2 to 4 JSON function documentations. The model needs to be capable of selecting the best function to invoke according to user-provided context.'\n",
    "\n",
    "More information about these datasets can be found here: https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "994eb09b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Simple shape: (100, 5) Multiple shape: (50, 5)\n",
      "Combined shape: (150, 5)\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>question</th>\n",
       "      <th>function</th>\n",
       "      <th>execution_result_type</th>\n",
       "      <th>ground_truth</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>exec_simple_0</td>\n",
       "      <td>[[{'role': 'user', 'content': 'I've been playi...</td>\n",
       "      <td>[{'name': 'calc_binomial_probability', 'descri...</td>\n",
       "      <td>[exact_match]</td>\n",
       "      <td>[calc_binomial_probability(n=20, k=5, p=0.6)]</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>exec_simple_1</td>\n",
       "      <td>[[{'role': 'user', 'content': 'During last nig...</td>\n",
       "      <td>[{'name': 'calc_binomial_probability', 'descri...</td>\n",
       "      <td>[exact_match]</td>\n",
       "      <td>[calc_binomial_probability(n=30, k=15, p=0.5)]</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "              id                                           question  \\\n",
       "0  exec_simple_0  [[{'role': 'user', 'content': 'I've been playi...   \n",
       "1  exec_simple_1  [[{'role': 'user', 'content': 'During last nig...   \n",
       "\n",
       "                                            function execution_result_type  \\\n",
       "0  [{'name': 'calc_binomial_probability', 'descri...         [exact_match]   \n",
       "1  [{'name': 'calc_binomial_probability', 'descri...         [exact_match]   \n",
       "\n",
       "                                     ground_truth  \n",
       "0   [calc_binomial_probability(n=20, k=5, p=0.6)]  \n",
       "1  [calc_binomial_probability(n=30, k=15, p=0.5)]  "
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "BFCL_FILES = {\n",
    "    \"exec_simple\": \"https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard/resolve/main/BFCL_v3_exec_simple.json\",\n",
    "    \"exec_multiple\": \"https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard/resolve/main/BFCL_v3_exec_multiple.json\",\n",
    "}\n",
    "\n",
    "\n",
    "def fetch_json(url, out_path):\n",
    "    out = Path(out_path)\n",
    "    if not out.exists():\n",
    "        print(f\"Downloading {url} -> {out}\")\n",
    "        urllib.request.urlretrieve(url, out)\n",
    "    text = Path(out).read_text()\n",
    "    try:\n",
    "        return json.loads(text)\n",
    "    except Exception:\n",
    "        rows = []\n",
    "        for line in text.splitlines():\n",
    "            line = line.strip()\n",
    "            if not line:\n",
    "                continue\n",
    "            try:\n",
    "                rows.append(json.loads(line))\n",
    "            except Exception:\n",
    "                pass\n",
    "        return rows\n",
    "\n",
    "\n",
    "df_simple = pd.DataFrame(fetch_json(BFCL_FILES[\"exec_simple\"], \"BFCL_v3_exec_simple.json\"))\n",
    "df_multi = pd.DataFrame(fetch_json(BFCL_FILES[\"exec_multiple\"], \"BFCL_v3_exec_multiple.json\"))\n",
    "print(\"Simple shape:\", df_simple.shape, \"Multiple shape:\", df_multi.shape)\n",
    "df = pd.concat([df_simple, df_multi], ignore_index=True)\n",
    "print(\"Combined shape:\", df.shape)\n",
    "df.head(2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77a35ff9",
   "metadata": {},
   "source": [
    "## Prepare & Format DataFrames\n",
    "Extract the fields we need from each record: the user instruction, the available function schemas (serialized to JSON), and the ground-truth tool call."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "0e73a4d6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>instruction</th>\n",
       "      <th>functions_json</th>\n",
       "      <th>ground_truth</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>exec_simple_0</td>\n",
       "      <td>I've been playing a game where rolling a six i...</td>\n",
       "      <td>[{\"name\": \"calc_binomial_probability\", \"parame...</td>\n",
       "      <td>calc_binomial_probability(n=20, k=5, p=0.6)</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "              id                                        instruction  \\\n",
       "0  exec_simple_0  I've been playing a game where rolling a six i...   \n",
       "\n",
       "                                      functions_json  \\\n",
       "0  [{\"name\": \"calc_binomial_probability\", \"parame...   \n",
       "\n",
       "                                  ground_truth  \n",
       "0  calc_binomial_probability(n=20, k=5, p=0.6)  "
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def extract_gt_call(row):\n",
    "    gt = row.get(\"ground_truth\")\n",
    "    if isinstance(gt, list) and gt:\n",
    "        return gt[0]\n",
    "    if isinstance(gt, str):\n",
    "        return gt\n",
    "    return \"\"\n",
    "\n",
    "\n",
    "def extract_functions(row):\n",
    "    fns = row.get(\"function\", [])\n",
    "    if isinstance(fns, dict):\n",
    "        fns = [fns]\n",
    "    return fns\n",
    "\n",
    "\n",
    "def extract_instruction(row):\n",
    "    q = row.get(\"question\", [])\n",
    "    last_user = \"\"\n",
    "    for msg_list in q:\n",
    "        for m in msg_list:\n",
    "            if m.get(\"role\") == \"user\":\n",
    "                last_user = m.get(\"content\", last_user)\n",
    "    return last_user\n",
    "\n",
    "\n",
    "work = []\n",
    "for _, r in df.iterrows():\n",
    "    rr = r.to_dict()\n",
    "    work.append(\n",
    "        {\n",
    "            \"id\": rr.get(\"id\", \"\"),\n",
    "            \"instruction\": extract_instruction(rr),\n",
    "            \"functions_json\": json.dumps(\n",
    "                [\n",
    "                    {\n",
    "                        \"name\": f.get(\"name\"),\n",
    "                        \"parameters\": f.get(\"parameters\"),\n",
    "                        \"description\": f.get(\"description\", \"\"),\n",
    "                    }\n",
    "                    for f in extract_functions(rr)\n",
    "                ],\n",
    "                ensure_ascii=False,\n",
    "            ),\n",
    "            \"ground_truth\": extract_gt_call(rr),\n",
    "        }\n",
    "    )\n",
    "all_data = pd.DataFrame(work)\n",
    "all_data.head(1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f36abbbb",
   "metadata": {},
   "source": [
    "## Modify Benchmark Dataset\n",
    "\n",
    "The BFCL dataset contains no negative examples: each row provides only a `question`, the available tools, and a `ground_truth` call. To benchmark an LLM-as-a-Judge accurately, we also need incorrect predictions, so the next cell applies a simple data-corruption strategy to the ground truth, producing realistic negative examples (a wrong tool name or a perturbed argument value) alongside the untouched positives."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "bee03b29",
   "metadata": {},
   "outputs": [],
   "source": [
    "def corrupt_call(s: str) -> str:\n",
    "    # Corrupt a tool-call string: half the time rename the tool; otherwise\n",
    "    # scale the first numeric argument by 1.1. The re-joined call also gains\n",
    "    # a space before the parenthesis, adding a further surface difference.\n",
    "    if not s or \"(\" not in s:\n",
    "        return s\n",
    "    tool, args = s.split(\"(\", 1)\n",
    "    tool = tool.strip()\n",
    "    args = args.rstrip(\")\")\n",
    "    if random.random() < 0.5:\n",
    "        tool = tool + \"_alt\"\n",
    "    else:\n",
    "        args = re.sub(r\"(\\d+(?:\\.\\d+)?)\", lambda m: str(float(m.group()) * 1.1), args, count=1)\n",
    "    return f\"{tool} ({args})\"\n",
    "\n",
    "\n",
    "random.seed(24)  # seed for reproducible corruption\n",
    "predict_tool_call = [\n",
    "    gt if random.random() < 0.7 else corrupt_call(gt) for gt in all_data[\"ground_truth\"]\n",
    "]\n",
    "data = all_data.copy()\n",
    "data[\"predicted_tool_call\"] = predict_tool_call"
   ]
  },
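  {
   "cell_type": "markdown",
   "id": "a1f2c3d4",
   "metadata": {},
   "source": [
    "To see the numeric-perturbation branch of `corrupt_call` in isolation, here is a minimal, self-contained sketch (the sample call string is hypothetical):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f2c3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "# Hypothetical sample call; corrupt_call applies this same substitution\n",
    "sample = \"calc_binomial_probability(n=20, k=5, p=0.6)\"\n",
    "tool, args = sample.split(\"(\", 1)\n",
    "args = args.rstrip(\")\")\n",
    "# Scale the first number found in the arguments by 1.1\n",
    "perturbed_args = re.sub(r\"(\\d+(?:\\.\\d+)?)\", lambda m: str(float(m.group()) * 1.1), args, count=1)\n",
    "print(f\"{tool}({perturbed_args})\")"
   ]
  },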
  {
   "cell_type": "markdown",
   "id": "56c09162",
   "metadata": {},
   "source": [
    "We will use a small random subset of the data to keep judge cost and latency low. The cell below samples 30 rows as our test set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "4d366c4d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>instruction</th>\n",
       "      <th>functions_json</th>\n",
       "      <th>ground_truth</th>\n",
       "      <th>predicted_tool_call</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>exec_multiple_7</td>\n",
       "      <td>As a data analyst, I've been tracking the dail...</td>\n",
       "      <td>[{\"name\": \"get_time_zone_by_coord\", \"parameter...</td>\n",
       "      <td>calculate_mean(numbers=[22, 24, 26, 28, 30, 32...</td>\n",
       "      <td>calculate_mean(numbers=[22, 24, 26, 28, 30, 32...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>exec_multiple_14</td>\n",
       "      <td>I'm working on a community art project and pla...</td>\n",
       "      <td>[{\"name\": \"calculate_electrostatic_potential_e...</td>\n",
       "      <td>geometry_area_circle(radius=15)</td>\n",
       "      <td>geometry_area_circle(radius=15)</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>exec_simple_57</td>\n",
       "      <td>I'm tracking a storm system for my weather rep...</td>\n",
       "      <td>[{\"name\": \"get_time_zone_by_coord\", \"parameter...</td>\n",
       "      <td>get_time_zone_by_coord(long='-80.75', lat='35....</td>\n",
       "      <td>get_time_zone_by_coord(long='-80.75', lat='35....</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>exec_simple_59</td>\n",
       "      <td>I'm working on a study about climate change in...</td>\n",
       "      <td>[{\"name\": \"get_weather_data\", \"parameters\": {\"...</td>\n",
       "      <td>get_weather_data(coordinates=[25.00, 13.00])</td>\n",
       "      <td>get_weather_data(coordinates=[25.00, 13.00])</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>exec_multiple_5</td>\n",
       "      <td>During a simulation of a high-speed pursuit, I...</td>\n",
       "      <td>[{\"name\": \"calculate_cosine_similarity\", \"para...</td>\n",
       "      <td>calculate_final_velocity(initial_velocity=0, a...</td>\n",
       "      <td>calculate_final_velocity (initial_velocity=0.0...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                 id                                        instruction  \\\n",
       "0   exec_multiple_7  As a data analyst, I've been tracking the dail...   \n",
       "1  exec_multiple_14  I'm working on a community art project and pla...   \n",
       "2    exec_simple_57  I'm tracking a storm system for my weather rep...   \n",
       "3    exec_simple_59  I'm working on a study about climate change in...   \n",
       "4   exec_multiple_5  During a simulation of a high-speed pursuit, I...   \n",
       "\n",
       "                                      functions_json  \\\n",
       "0  [{\"name\": \"get_time_zone_by_coord\", \"parameter...   \n",
       "1  [{\"name\": \"calculate_electrostatic_potential_e...   \n",
       "2  [{\"name\": \"get_time_zone_by_coord\", \"parameter...   \n",
       "3  [{\"name\": \"get_weather_data\", \"parameters\": {\"...   \n",
       "4  [{\"name\": \"calculate_cosine_similarity\", \"para...   \n",
       "\n",
       "                                        ground_truth  \\\n",
       "0  calculate_mean(numbers=[22, 24, 26, 28, 30, 32...   \n",
       "1                    geometry_area_circle(radius=15)   \n",
       "2  get_time_zone_by_coord(long='-80.75', lat='35....   \n",
       "3       get_weather_data(coordinates=[25.00, 13.00])   \n",
       "4  calculate_final_velocity(initial_velocity=0, a...   \n",
       "\n",
       "                                 predicted_tool_call  \n",
       "0  calculate_mean(numbers=[22, 24, 26, 28, 30, 32...  \n",
       "1                    geometry_area_circle(radius=15)  \n",
       "2  get_time_zone_by_coord(long='-80.75', lat='35....  \n",
       "3       get_weather_data(coordinates=[25.00, 13.00])  \n",
       "4  calculate_final_velocity (initial_velocity=0.0...  "
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "small_data = data.sample(n=30, random_state=24).reset_index(drop=True)\n",
    "small_data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77bc0086",
   "metadata": {},
   "source": [
    "## Define your LLM-as-a-Judge Templates & Rails"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "8c4ae0b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "SIMPLE_TEMPLATE = \"\"\"You are grading a tool-calling attempt.\n",
    "\n",
    "Given:\n",
    "USER INSTRUCTION:\n",
    "{instruction}\n",
    "\n",
    "AVAILABLE FUNCTIONS (JSON Schemas):\n",
    "{functions_json}\n",
    "\n",
    "MODEL TOOL CALL (string):\n",
    "{predicted_tool_call}\n",
    "\n",
    "GROUND TRUTH TOOL CALL (string):\n",
    "{ground_truth}\n",
    "\n",
    "Question: Did the model invoke the correct tool(s) AND use the correct parameter names and values?\n",
    "Answer strictly with one word, Yes or No, & an explanation for your answer.\n",
    "\n",
    "Example response:\n",
    "LABEL: \"Yes\" or \"No\"\n",
    "EXPLANATION: An explanation of your reasoning for why the label is \"Yes\" or \"No\"\n",
    "\"\"\"\n",
    "\n",
    "SIMPLE_RAILS = [\"Yes\", \"No\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1c76b4eb",
   "metadata": {},
   "outputs": [],
   "source": [
    "COMPLEX_TEMPLATE = \"\"\"You are grading a tool-calling attempt.\n",
    "Return ONLY one of the following labels:\n",
    "- correct\n",
    "- partially_correct\n",
    "- incorrect\n",
    "\n",
    "Use these rules:\n",
    "- Consider types and trivial formatting (e.g., '5' vs 5, whitespace) as equivalent.\n",
    "- Consider equivalent units only if explicitly clear from context.\n",
    "- The attempt is \"correct\" only if the tool and all required parameters match the ground truth.\n",
    "- It's \"partially_correct\" if the tool is correct but parameters have minor issues.\n",
    "- It's \"incorrect\" otherwise.\n",
    "\n",
    "Context:\n",
    "USER INSTRUCTION:\n",
    "{instruction}\n",
    "\n",
    "AVAILABLE FUNCTIONS (JSON Schemas):\n",
    "{functions_json}\n",
    "\n",
    "MODEL TOOL CALL (string):\n",
    "{predicted_tool_call}\n",
    "\n",
    "GROUND TRUTH TOOL CALL (string):\n",
    "{ground_truth}\n",
    "\n",
    "Question: Use the rules above to determine if the model's tool call is correct, partially correct, or incorrect.\n",
    "Answer strictly with one label & an explanation for your answer.\n",
    "\n",
    "Example response:\n",
    "LABEL: \"correct\" or \"partially_correct\" or \"incorrect\"\n",
    "EXPLANATION: An explanation of your reasoning for why the label is \"correct\" or \"partially_correct\" or \"incorrect\"\n",
    "\"\"\"\n",
    "\n",
    "COMPLEX_RAILS = [\"correct\", \"partially_correct\", \"incorrect\"]"
   ]
  },
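  {
   "cell_type": "markdown",
   "id": "b2e4d6f8",
   "metadata": {},
   "source": [
    "The rubric above can be mirrored in code for later analysis. Below is a hypothetical helper (an assumption for scoring, not part of the judge itself) that derives the expected complex label for a prediction/ground-truth pair: a whitespace-insensitive match is `correct`, the same tool with different arguments is `partially_correct`, and anything else is `incorrect`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2e4d6f9",
   "metadata": {},
   "outputs": [],
   "source": [
    "def expected_complex_label(pred: str, truth: str) -> str:\n",
    "    # Whitespace-insensitive exact match counts as fully correct\n",
    "    if pred.replace(\" \", \"\") == truth.replace(\" \", \"\"):\n",
    "        return \"correct\"\n",
    "    pred_tool = pred.split(\"(\", 1)[0].strip()\n",
    "    truth_tool = truth.split(\"(\", 1)[0].strip()\n",
    "    if pred_tool == truth_tool:\n",
    "        return \"partially_correct\"  # right tool, perturbed arguments\n",
    "    return \"incorrect\"  # wrong tool entirely\n",
    "\n",
    "\n",
    "print(expected_complex_label(\"f(x=1)\", \"f(x=1)\"))\n",
    "print(expected_complex_label(\"f (x=1.1)\", \"f(x=1)\"))\n",
    "print(expected_complex_label(\"f_alt (x=1)\", \"f(x=1)\"))"
   ]
  },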
  {
   "cell_type": "markdown",
   "id": "6a79bb08",
   "metadata": {},
   "source": [
    "## Run our Simple Evaluation on both Judge Models"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "d722232e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "3c5aeab306fc405f80d7def0e4d7061f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "llm_classify |          | 0/30 (0.0%) | ⏳ 00:00<? | ?it/s"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>explanation</th>\n",
       "      <th>response</th>\n",
       "      <th>exceptions</th>\n",
       "      <th>execution_status</th>\n",
       "      <th>execution_seconds</th>\n",
       "      <th>prompt_tokens</th>\n",
       "      <th>completion_tokens</th>\n",
       "      <th>total_tokens</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>no</td>\n",
       "      <td>The model did not invoke the correct tool or u...</td>\n",
       "      <td>{\"explanation\":\"The model did not invoke the c...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>1.602984</td>\n",
       "      <td>194</td>\n",
       "      <td>50</td>\n",
       "      <td>244</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>no</td>\n",
       "      <td>The model did not invoke the correct tool or u...</td>\n",
       "      <td>{\"explanation\":\"The model did not invoke the c...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>1.309475</td>\n",
       "      <td>194</td>\n",
       "      <td>50</td>\n",
       "      <td>244</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>no</td>\n",
       "      <td>The model invoked the correct tool but used in...</td>\n",
       "      <td>{\"explanation\":\"The model invoked the correct ...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>1.034774</td>\n",
       "      <td>194</td>\n",
       "      <td>22</td>\n",
       "      <td>216</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "  label                                        explanation  \\\n",
       "0    no  The model did not invoke the correct tool or u...   \n",
       "1    no  The model did not invoke the correct tool or u...   \n",
       "2    no  The model invoked the correct tool but used in...   \n",
       "\n",
       "                                            response exceptions  \\\n",
       "0  {\"explanation\":\"The model did not invoke the c...         []   \n",
       "1  {\"explanation\":\"The model did not invoke the c...         []   \n",
       "2  {\"explanation\":\"The model invoked the correct ...         []   \n",
       "\n",
       "  execution_status  execution_seconds  prompt_tokens  completion_tokens  \\\n",
       "0        COMPLETED           1.602984            194                 50   \n",
       "1        COMPLETED           1.309475            194                 50   \n",
       "2        COMPLETED           1.034774            194                 22   \n",
       "\n",
       "   total_tokens  \n",
       "0           244  \n",
       "1           244  \n",
       "2           216  "
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "simple_df = small_data.copy()\n",
    "\n",
    "non_reasoning_simple_results = llm_classify(\n",
    "    data=simple_df.assign(template=SIMPLE_TEMPLATE),\n",
    "    model=non_reasoning_model,\n",
    "    template=\"{template}\",\n",
    "    rails=SIMPLE_RAILS,\n",
    "    provide_explanation=True,\n",
    "    include_prompt=False,\n",
    "    include_response=True,\n",
    "    run_sync=True,\n",
    ")\n",
    "\n",
    "non_reasoning_simple_results.head(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "id": "1ae98d4f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b1eb1af1255b4d2cbe031f7f86b28c00",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "llm_classify |          | 0/30 (0.0%) | ⏳ 00:00<? | ?it/s"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>explanation</th>\n",
       "      <th>response</th>\n",
       "      <th>exceptions</th>\n",
       "      <th>execution_status</th>\n",
       "      <th>execution_seconds</th>\n",
       "      <th>prompt_tokens</th>\n",
       "      <th>completion_tokens</th>\n",
       "      <th>total_tokens</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>no</td>\n",
       "      <td>The necessary details (user instruction, avail...</td>\n",
       "      <td>{\"response\":\"No\",\"explanation\":\"The necessary ...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>6.347308</td>\n",
       "      <td>188</td>\n",
       "      <td>326</td>\n",
       "      <td>514</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>no</td>\n",
       "      <td>The model’s tool invocation does not exactly m...</td>\n",
       "      <td>{\"response\":\"No\",\"explanation\":\"The model’s to...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>19.295503</td>\n",
       "      <td>188</td>\n",
       "      <td>1088</td>\n",
       "      <td>1276</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>no</td>\n",
       "      <td>The prompt did not provide the user instructio...</td>\n",
       "      <td>{\"response\":\"No\",\"explanation\":\"The prompt did...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>3.337270</td>\n",
       "      <td>188</td>\n",
       "      <td>194</td>\n",
       "      <td>382</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "  label                                        explanation  \\\n",
       "0    no  The necessary details (user instruction, avail...   \n",
       "1    no  The model’s tool invocation does not exactly m...   \n",
       "2    no  The prompt did not provide the user instructio...   \n",
       "\n",
       "                                            response exceptions  \\\n",
       "0  {\"response\":\"No\",\"explanation\":\"The necessary ...         []   \n",
       "1  {\"response\":\"No\",\"explanation\":\"The model’s to...         []   \n",
       "2  {\"response\":\"No\",\"explanation\":\"The prompt did...         []   \n",
       "\n",
       "  execution_status  execution_seconds  prompt_tokens  completion_tokens  \\\n",
       "0        COMPLETED           6.347308            188                326   \n",
       "1        COMPLETED          19.295503            188               1088   \n",
       "2        COMPLETED           3.337270            188                194   \n",
       "\n",
       "   total_tokens  \n",
       "0           514  \n",
       "1          1276  \n",
       "2           382  "
      ]
     },
     "execution_count": 52,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "simple_df = small_data.copy()\n",
    "\n",
    "reasoning_simple_results = llm_classify(\n",
    "    data=simple_df.assign(template=SIMPLE_TEMPLATE),\n",
    "    model=reasoning_model,\n",
    "    template=\"{template}\",\n",
    "    rails=SIMPLE_RAILS,\n",
    "    provide_explanation=True,\n",
    "    include_prompt=False,\n",
    "    include_response=True,\n",
    "    run_sync=True,\n",
    ")\n",
    "\n",
    "reasoning_simple_results.head(3)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf8b1988",
   "metadata": {},
   "source": [
"## Run our Complex Evaluation on Both Judge Models"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "id": "2e1b05f6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "dbf8255990a1455e995a3bcd12a8382d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "llm_classify |          | 0/30 (0.0%) | ⏳ 00:00<? | ?it/s"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>explanation</th>\n",
       "      <th>response</th>\n",
       "      <th>exceptions</th>\n",
       "      <th>execution_status</th>\n",
       "      <th>execution_seconds</th>\n",
       "      <th>prompt_tokens</th>\n",
       "      <th>completion_tokens</th>\n",
       "      <th>total_tokens</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>incorrect</td>\n",
       "      <td>The model's tool call does not match the groun...</td>\n",
       "      <td>{\"explanation\":\"The model's tool call does not...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>1.319789</td>\n",
       "      <td>311</td>\n",
       "      <td>38</td>\n",
       "      <td>349</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>incorrect</td>\n",
       "      <td>The model's tool call does not match the groun...</td>\n",
       "      <td>{\"explanation\":\"The model's tool call does not...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>1.662194</td>\n",
       "      <td>311</td>\n",
       "      <td>38</td>\n",
       "      <td>349</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>incorrect</td>\n",
       "      <td>The model's tool call does not match the groun...</td>\n",
       "      <td>{\"explanation\":\"The model's tool call does not...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>1.137608</td>\n",
       "      <td>311</td>\n",
       "      <td>38</td>\n",
       "      <td>349</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "       label                                        explanation  \\\n",
       "0  incorrect  The model's tool call does not match the groun...   \n",
       "1  incorrect  The model's tool call does not match the groun...   \n",
       "2  incorrect  The model's tool call does not match the groun...   \n",
       "\n",
       "                                            response exceptions  \\\n",
       "0  {\"explanation\":\"The model's tool call does not...         []   \n",
       "1  {\"explanation\":\"The model's tool call does not...         []   \n",
       "2  {\"explanation\":\"The model's tool call does not...         []   \n",
       "\n",
       "  execution_status  execution_seconds  prompt_tokens  completion_tokens  \\\n",
       "0        COMPLETED           1.319789            311                 38   \n",
       "1        COMPLETED           1.662194            311                 38   \n",
       "2        COMPLETED           1.137608            311                 38   \n",
       "\n",
       "   total_tokens  \n",
       "0           349  \n",
       "1           349  \n",
       "2           349  "
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "complex_df = small_data.copy()\n",
    "\n",
    "non_reasoning_complex_results = llm_classify(\n",
    "    data=complex_df.assign(template=COMPLEX_TEMPLATE),\n",
    "    model=non_reasoning_model,\n",
    "    template=\"{template}\",\n",
    "    rails=COMPLEX_RAILS,\n",
    "    provide_explanation=True,\n",
    "    include_prompt=False,\n",
    "    include_response=True,\n",
    "    run_sync=True,\n",
    ")\n",
    "non_reasoning_complex_results.head(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "12ebbe51",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "743c4a3516504faa93e86b285a332025",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "llm_classify |          | 0/30 (0.0%) | ⏳ 00:00<? | ?it/s"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>explanation</th>\n",
       "      <th>response</th>\n",
       "      <th>exceptions</th>\n",
       "      <th>execution_status</th>\n",
       "      <th>execution_seconds</th>\n",
       "      <th>prompt_tokens</th>\n",
       "      <th>completion_tokens</th>\n",
       "      <th>total_tokens</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>incorrect</td>\n",
       "      <td>The necessary information to make an accurate ...</td>\n",
       "      <td>{\"response\":\"incorrect\",\"explanation\":\"The nec...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>17.276113</td>\n",
       "      <td>305</td>\n",
       "      <td>1263</td>\n",
       "      <td>1568</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>incorrect</td>\n",
       "      <td>Unable to evaluate: the required ground-truth ...</td>\n",
       "      <td>{\"response\":\"incorrect\",\"explanation\":\"Unable ...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>11.018556</td>\n",
       "      <td>305</td>\n",
       "      <td>632</td>\n",
       "      <td>937</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>incorrect</td>\n",
       "      <td>The necessary details of the user instruction,...</td>\n",
       "      <td>{\"response\":\"incorrect\",\"explanation\":\"The nec...</td>\n",
       "      <td>[]</td>\n",
       "      <td>COMPLETED</td>\n",
       "      <td>5.915661</td>\n",
       "      <td>305</td>\n",
       "      <td>269</td>\n",
       "      <td>574</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "       label                                        explanation  \\\n",
       "0  incorrect  The necessary information to make an accurate ...   \n",
       "1  incorrect  Unable to evaluate: the required ground-truth ...   \n",
       "2  incorrect  The necessary details of the user instruction,...   \n",
       "\n",
       "                                            response exceptions  \\\n",
       "0  {\"response\":\"incorrect\",\"explanation\":\"The nec...         []   \n",
       "1  {\"response\":\"incorrect\",\"explanation\":\"Unable ...         []   \n",
       "2  {\"response\":\"incorrect\",\"explanation\":\"The nec...         []   \n",
       "\n",
       "  execution_status  execution_seconds  prompt_tokens  completion_tokens  \\\n",
       "0        COMPLETED          17.276113            305               1263   \n",
       "1        COMPLETED          11.018556            305                632   \n",
       "2        COMPLETED           5.915661            305                269   \n",
       "\n",
       "   total_tokens  \n",
       "0          1568  \n",
       "1           937  \n",
       "2           574  "
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "complex_df = small_data.copy()\n",
    "\n",
    "reasoning_complex_results = llm_classify(\n",
    "    data=complex_df.assign(template=COMPLEX_TEMPLATE),\n",
    "    model=reasoning_model,\n",
    "    template=\"{template}\",\n",
    "    rails=COMPLEX_RAILS,\n",
    "    provide_explanation=True,\n",
    "    include_prompt=False,\n",
    "    include_response=True,\n",
    "    run_sync=True,\n",
    ")\n",
    "\n",
    "reasoning_complex_results.head(3)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "61b69a6c",
   "metadata": {},
   "source": [
    "## View Results\n",
    "\n",
"We compare how often the two judge models disagree on their evaluation labels, as well as how many tokens each used to complete its evaluations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "4dcd3002",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "For Simple Eval: \n",
      "-----------------------------------------------------------\n",
      "Reasoning and non-reasoning models agree on all samples\n",
      "Non-reasoning model used 7238 tokens\n",
      "Reasoning model used 21644 tokens\n",
"Reasoning model used 2.99x as many tokens as the non-reasoning model\n"
     ]
    }
   ],
   "source": [
    "print(\"For Simple Eval: \")\n",
    "print(\"-----------------------------------------------------------\")\n",
    "\n",
    "simple_different_labels = (\n",
    "    non_reasoning_simple_results[\"label\"] != reasoning_simple_results[\"label\"]\n",
    ").sum()\n",
    "if simple_different_labels == 0:\n",
    "    print(\"Reasoning and non-reasoning models agree on all samples\")\n",
    "else:\n",
"    print(f\"Reasoning and non-reasoning models disagree on {simple_different_labels} sample{'s' if simple_different_labels != 1 else ''}\")\n",
    "NR_simple_tokens = non_reasoning_simple_results[\"total_tokens\"].sum()\n",
    "R_simple_tokens = reasoning_simple_results[\"total_tokens\"].sum()\n",
    "\n",
    "print(f\"Non-reasoning model used {NR_simple_tokens} tokens\")\n",
    "print(f\"Reasoning model used {R_simple_tokens} tokens\")\n",
    "print(\n",
"    f\"Reasoning model used {R_simple_tokens / NR_simple_tokens:.2f}x as many tokens as the non-reasoning model\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "id": "4af64f6e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "For Complex Eval: \n",
      "-----------------------------------------------------------\n",
"Reasoning and non-reasoning models disagree on 1 sample\n",
      "Non-reasoning model used 10479 tokens\n",
      "Reasoning model used 25355 tokens\n",
"Reasoning model used 2.42x as many tokens as the non-reasoning model\n"
     ]
    }
   ],
   "source": [
    "print(\"For Complex Eval: \")\n",
    "print(\"-----------------------------------------------------------\")\n",
    "\n",
    "complex_different_labels = (\n",
    "    non_reasoning_complex_results[\"label\"] != reasoning_complex_results[\"label\"]\n",
    ").sum()\n",
    "if complex_different_labels == 0:\n",
    "    print(\"Reasoning and non-reasoning models agree on all samples\")\n",
    "else:\n",
"    print(f\"Reasoning and non-reasoning models disagree on {complex_different_labels} sample{'s' if complex_different_labels != 1 else ''}\")\n",
    "NR_complex_tokens = non_reasoning_complex_results[\"total_tokens\"].sum()\n",
    "R_complex_tokens = reasoning_complex_results[\"total_tokens\"].sum()\n",
    "\n",
    "print(f\"Non-reasoning model used {NR_complex_tokens} tokens\")\n",
    "print(f\"Reasoning model used {R_complex_tokens} tokens\")\n",
    "print(\n",
"    f\"Reasoning model used {R_complex_tokens / NR_complex_tokens:.2f}x as many tokens as the non-reasoning model\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76c84e99",
   "metadata": {},
   "source": [
    "### References\n",
    "- Phoenix Evals Overview: https://arize.com/docs/phoenix/evaluation/llm-evals\n",
    "- Using `llm_classify` (Docs): https://arize.com/docs/phoenix/evaluation/how-to-evals/bring-your-own-evaluator\n",
    "- BFCL dataset: https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
