{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "# HW 3: LLM-as-Judge for Recipe Bot Evaluation with Arize\n",
    "\n",
    "## 🎯 Assignment Overview\n",
    "\n",
    "In this assignment, we'll evaluate our Recipe Bot's adherence to dietary preferences using an LLM-as-Judge approach with Arize for tracing and evaluation.\n",
    "\n",
    "### Workflow:\n",
    "1. **📊 Load trace examples** - Choose between provided data or generate new traces\n",
    "2. **🏷️ Create datasets** - Prepare data for labeling queue in Arize\n",
    "3. **🔍 Label traces** - Use the Arize UI to manually label examples  \n",
    "4. **⚖️ Write eval prompt** - Create judge prompt in Arize Playground\n",
    "5. **📈 Run evaluation experiment** - Execute evaluation via Arize\n",
    "6. **📊 Calculate metrics** - Export and analyze results\n",
    "\n",
    "### Core Task: \"Adherence to Dietary Preferences\"\n",
    "**Example**: If a user asks for a \"vegan\" recipe, does the bot provide one that is actually vegan?\n",
    "\n",
    "Let's get started! 🚀\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 🔧 Setup and Environment Configuration\n",
    "\n",
    "First, let's import the required libraries and set up our environment.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install required packages\n",
    "import subprocess\n",
    "import sys\n",
    "\n",
    "\n",
    "def install_packages():\n",
    "    packages = [\n",
    "        \"arize-phoenix[evals]\",\n",
    "        \"openai\",\n",
    "        \"pandas\",\n",
    "        \"numpy\",\n",
    "        \"scipy\",\n",
    "        \"openinference-instrumentation-openai\",\n",
    "        \"nest-asyncio\",\n",
    "        \"arize[AutoEmbeddings]\",  # For ArizeExportClient and ArizeDatasetsClient\n",
    "        \"opentelemetry-api\",\n",
    "        \"opentelemetry-sdk\",\n",
    "        \"judgy\",\n",
    "    ]\n",
    "\n",
    "    for package in packages:\n",
    "        print(f\"Installing {package}...\")\n",
    "        subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", package])\n",
    "\n",
    "\n",
    "# Uncomment to install packages\n",
    "# install_packages()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Setup\n",
    "import getpass\n",
    "import os\n",
    "from datetime import datetime, timedelta\n",
    "from pathlib import Path\n",
    "\n",
    "import numpy as np\n",
    "import openai\n",
    "import pandas as pd\n",
    "from arize.experimental.datasets import ArizeDatasetsClient\n",
    "from arize.experimental.datasets.utils.constants import GENERATIVE\n",
    "from arize.exporter import ArizeExportClient\n",
    "from arize.otel import register\n",
    "from arize.utils.types import Environments\n",
    "from judgy import estimate_success_rate\n",
    "from openinference.instrumentation.openai import OpenAIInstrumentor"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 🔑 API Key Configuration\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Prompt for OpenAI API key if not set\n",
    "if \"OPENAI_API_KEY\" not in os.environ or not os.environ[\"OPENAI_API_KEY\"]:\n",
    "    os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OPENAI_API_KEY: \")\n",
    "\n",
    "# Prompt for Arize API key if not set\n",
    "if \"ARIZE_API_KEY\" not in os.environ or not os.environ[\"ARIZE_API_KEY\"]:\n",
    "    os.environ[\"ARIZE_API_KEY\"] = getpass.getpass(\"Enter your ARIZE_API_KEY: \")\n",
    "\n",
    "# Prompt for Arize Space ID if not set\n",
    "if \"ARIZE_SPACE_ID\" not in os.environ or not os.environ[\"ARIZE_SPACE_ID\"]:\n",
    "    os.environ[\"ARIZE_SPACE_ID\"] = getpass.getpass(\"Enter your ARIZE_SPACE_ID: \")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Tracing Setup\n",
    "\n",
    "We'll set up OpenTelemetry tracing to automatically capture LLM interactions and send them to Arize. This enables real-time monitoring and evaluation of our Recipe Bot's performance in production."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 86,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Setup complete!\n"
     ]
    }
   ],
   "source": [
    "# Initialize OpenAI client\n",
    "client = openai.OpenAI()\n",
    "\n",
    "print(\"✅ Setup complete!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🔭 OpenTelemetry Tracing Details 🔭\n",
      "|  Arize Project: RecipeBot\n",
      "|  Span Processor: BatchSpanProcessor\n",
      "|  Collector Endpoint: otlp.arize.com\n",
      "|  Transport: gRPC\n",
      "|  Transport Headers: {'authorization': '****', 'api_key': '****', 'arize-space-id': '****', 'space_id': '****', 'arize-interface': '****'}\n",
      "|  \n",
      "|  Using a default SpanProcessor. `add_span_processor` will overwrite this default.\n",
      "|  \n",
      "|  `register` has set this TracerProvider as the global OpenTelemetry default.\n",
      "|  To disable this behavior, call `register` with `set_global_tracer_provider=False`.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Set up tracing\n",
    "tracer_provider = register(\n",
    "    space_id=os.environ[\"ARIZE_SPACE_ID\"],\n",
    "    project_name=\"RecipeBot\",  # name this whatever you would like\n",
    ")\n",
    "OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## Part 1: Create Arize Dataset for Labeling \n",
    "\n",
    "We'll upload a sample of our traces to Arize for manual labeling of ground-truth examples. This step establishes the \"correct\" answers that our judge will be evaluated against.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>query</th>\n",
       "      <th>dietary_restriction</th>\n",
       "      <th>response</th>\n",
       "      <th>success</th>\n",
       "      <th>error</th>\n",
       "      <th>trace_id</th>\n",
       "      <th>query_id</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>I'm vegan but I really want to make something ...</td>\n",
       "      <td>vegan</td>\n",
       "      <td>Certainly! For a vegan yogurt breakfast that m...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>1_8</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>I'm vegan but I really want to make something ...</td>\n",
       "      <td>vegan</td>\n",
       "      <td>Absolutely! While honey is a popular sweetener...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>1_9</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>I'm vegan but I really want to make something ...</td>\n",
       "      <td>vegan</td>\n",
       "      <td>Certainly! Since you're vegan and craving a yo...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>1_10</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>Need a quick gluten-free breakfast. I hate egg...</td>\n",
       "      <td>gluten-free</td>\n",
       "      <td>Certainly! For a quick, gluten-free breakfast ...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>2_7</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>I'm vegan but I really want to make something ...</td>\n",
       "      <td>vegan</td>\n",
       "      <td>Absolutely! For a vegan breakfast that mimics ...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>1_27</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                               query dietary_restriction  \\\n",
       "0  I'm vegan but I really want to make something ...               vegan   \n",
       "1  I'm vegan but I really want to make something ...               vegan   \n",
       "2  I'm vegan but I really want to make something ...               vegan   \n",
       "3  Need a quick gluten-free breakfast. I hate egg...         gluten-free   \n",
       "4  I'm vegan but I really want to make something ...               vegan   \n",
       "\n",
       "                                            response  success  error trace_id  \\\n",
       "0  Certainly! For a vegan yogurt breakfast that m...     True    NaN      1_8   \n",
       "1  Absolutely! While honey is a popular sweetener...     True    NaN      1_9   \n",
       "2  Certainly! Since you're vegan and craving a yo...     True    NaN     1_10   \n",
       "3  Certainly! For a quick, gluten-free breakfast ...     True    NaN      2_7   \n",
       "4  Absolutely! For a vegan breakfast that mimics ...     True    NaN     1_27   \n",
       "\n",
       "   query_id  \n",
       "0         1  \n",
       "1         1  \n",
       "2         1  \n",
       "3         2  \n",
       "4         1  "
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "traces_path = Path(\"homeworks/hw3/data/raw_traces.csv\")\n",
    "\n",
    "traces_df = pd.read_csv(traces_path)\n",
    "\n",
    "traces_df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/sallyanndelucia/miniconda3/envs/recipe-bot-hw2/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    }
   ],
   "source": [
    "# Create a dataset in Arize for labeling\n",
    "datasets_client = ArizeDatasetsClient(api_key=os.environ[\"ARIZE_API_KEY\"])\n",
    "\n",
    "sample = traces_df.sample(n=100, random_state=42)\n",
    "\n",
    "dataset_id = datasets_client.create_dataset(\n",
    "    space_id=os.environ[\"ARIZE_SPACE_ID\"],\n",
    "    dataset_name=\"RecipeBot - testing\",\n",
    "    data=sample,\n",
    "    dataset_type=GENERATIVE,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Alternative Option: Send in Traces\n",
    "Alternatively, you can send in traces yourself using `dietary_queries.csv`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from opentelemetry import trace\n",
    "\n",
    "\n",
    "# # Load dietary queries\n",
    "# queries_path = Path(\"homeworks/hw3/data/dietary_queries.csv\")\n",
    "# queries_df = pd.read_csv(queries_path)\n",
    "\n",
    "# # Example with a single query\n",
    "# single_query = queries_df['query'].iloc[1]  # Use a different example\n",
    "# dietary_restriction = queries_df['dietary_restriction'].iloc[1]\n",
    "\n",
    "# # Make the OpenAI call (which will be auto-instrumented)\n",
    "# single_response = client.chat.completions.create(\n",
    "#     model=\"gpt-4o-mini\",\n",
    "#     messages=[\n",
    "#         {\"role\": \"system\", \"content\": system_prompt},\n",
    "#         {\"role\": \"user\", \"content\": single_query}\n",
    "#     ],\n",
    "#     temperature=0.7\n",
    "# )\n",
    "\n",
    "# # Get the current span and add metadata to it\n",
    "# current_span = trace.get_current_span()\n",
    "# if current_span:\n",
    "#     current_span.set_attribute(\"dietary_restriction\", dietary_restriction)\n",
    "#     current_span.set_attribute(\"query_id\", int(queries_df['id'].iloc[1]))\n",
    "#     current_span.set_attribute(\"use_case\", \"alternative_approach\")\n",
    "\n",
    "# print(\"Query:\", single_query)\n",
    "# print(\"Dietary Restriction:\", dietary_restriction)\n",
    "# response_content = single_response.choices[0].message.content\n",
    "# if response_content:\n",
    "#     print(\"Response snippet:\", response_content[:200] + \"...\")\n",
    "# else:\n",
    "#     print(\"No response content available\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## Part 2: Prepare Data for Arize Labeling\n",
    "\n",
    "Take your traces and prepare them for manual labeling in Arize.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "### 📝 Labeling Criteria - Dietary Adherence\n",
    "\n",
    "**CORRECT**: Recipe correctly follows all specified dietary restrictions  \n",
    "**INCORRECT**: Recipe violates any specified dietary restrictions\n",
    "\n",
    "**Examples:**\n",
    "- ✅ CORRECT: 'vegan pasta' → recipe with nutritional yeast (no dairy)\n",
    "- ❌ INCORRECT: 'vegan pasta' → recipe suggests honey (not vegan)  \n",
    "- ✅ CORRECT: 'gluten-free bread' → recipe with almond flour\n",
    "- ❌ INCORRECT: 'gluten-free bread' → recipe with regular flour\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## Part 3: Create LLM as Judge Prompt\n",
    "\n",
    "🎯 **Complete these steps in Arize:**\n",
    "\n",
    "1. **🏷️ Label Rows**: Review dataset and annotate rows.\n",
    "\n",
    "2. **⚖️ Develop Judge Prompt**: Create evaluation prompt for dietary adherence\n",
    "\n",
    "3. **🧪 Test Evaluation**: Run judge prompt against ground truth labels in the playground\n",
    "\n",
    "4. **🚀 Review Experiment**: Review evaluation experiment and iterate \n",
    "\n",
    "⏳ **Come back here after completing Arize work!**\n"
   ]
  },
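  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An illustrative starting point for the judge prompt (a sketch to adapt in the Arize Playground, not required wording; the `{query}`, `{dietary_restriction}`, and `{response}` variables are assumptions and should match your dataset's column names):\n",
    "\n",
    "```\n",
    "You are evaluating whether a recipe response adheres to the user's dietary restriction.\n",
    "\n",
    "[Query]: {query}\n",
    "[Dietary Restriction]: {dietary_restriction}\n",
    "[Response]: {response}\n",
    "\n",
    "A response is \"correct\" only if every ingredient and step complies with the stated\n",
    "restriction (e.g., no honey for vegan, no wheat flour for gluten-free). It is\n",
    "\"incorrect\" if anything violates the restriction.\n",
    "\n",
    "Explain your reasoning first, then answer with exactly one word: correct or incorrect.\n",
    "```\n"
   ]
  },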
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Function definition for the eval template\n",
    "#  [\n",
    "#   {\n",
    "#     \"type\": \"function\",\n",
    "#     \"function\": {\n",
    "#       \"name\": \"record_response\",\n",
    "#       \"description\": \"A function to record your response.\",\n",
    "#       \"parameters\": {\n",
    "#         \"type\": \"object\",\n",
    "#         \"properties\": {\n",
    "#           \"explanation\": {\n",
    "#             \"type\": \"string\",\n",
    "#             \"description\": \"Explanation of the reasoning for your response.\"\n",
    "#           },\n",
    "#           \"response\": {\n",
    "#             \"type\": \"string\",\n",
    "#             \"description\": \"Your response.\",\n",
    "#             \"enum\": [\n",
    "#               \"correct\",\n",
    "#               \"incorrect\"\n",
    "#             ]\n",
    "#           }\n",
    "#         },\n",
    "#         \"additionalProperties\": false\n",
    "#       }\n",
    "#     }\n",
    "#   }\n",
    "# ]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Optional: Programmatic Evaluation with `llm_classify` and Experiments\n",
    "\n",
    "\n",
    "Instead of building and testing the eval in the Arize UI, you can use the [`llm_classify`](https://arize.com/docs/ax/evaluate/online-evals/log-evaluations-to-arize) function in code. \n",
    "\n",
    "You can also run a full evaluation experiment programmatically using the [Arize Experiments API](https://arize.com/docs/ax/develop/datasets-and-experiments/run-experiments), which lets you compare LLM judge results to ground truth and analyze performance, all in code.\n",
    "\n",
    "\n",
    "See the next code cell for an example of how to use `llm_classify`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from phoenix.evals import OpenAIModel, llm_classify\n",
    "\n",
    "# # The rails hold the output to the specific values allowed by the template.\n",
    "# # They strip stray text such as \",,,\" or \"...\" and ensure the binary\n",
    "# # value expected by the template is returned.\n",
    "# rails = [\"Correct\", \"Incorrect\"]\n",
    "# # A multi-class eval would instead use rails = [\"irrelevant\", \"relevant\", \"semi-relevant\"]\n",
    "# eval_df = llm_classify(\n",
    "#     dataframe=<YOUR_DATAFRAME_GOES_HERE>,\n",
    "#     template=CATEGORICAL_TEMPLATE,\n",
    "#     model=OpenAIModel(model=\"gpt-4o\"),  # reads OPENAI_API_KEY from the environment\n",
    "#     rails=rails,\n",
    "# )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## Part 4: Load Results & Calculate Metrics\n",
    "\n",
    "After running your evaluation experiment in Arize, we'll load the results and compute performance metrics.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>query</th>\n",
       "      <th>dietary_restriction</th>\n",
       "      <th>response</th>\n",
       "      <th>success</th>\n",
       "      <th>error</th>\n",
       "      <th>trace_id</th>\n",
       "      <th>query_id</th>\n",
       "      <th>created_at</th>\n",
       "      <th>updated_at</th>\n",
       "      <th>id</th>\n",
       "      <th>userannotation.VXNlcjoxMTE4NzpnUkY0.Correctness.label</th>\n",
       "      <th>userannotation.VXNlcjoxMTE4NzpnUkY0.Correctness.updated_by</th>\n",
       "      <th>userannotation.VXNlcjoxMTE4NzpnUkY0.Correctness.updated_at</th>\n",
       "      <th>latest.userannotation.Correctness.label</th>\n",
       "      <th>latest.userannotation.Correctness.updated_by</th>\n",
       "      <th>latest.userannotation.Correctness.updated_at</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Gluten-free pizza dough that actually tastes g...</td>\n",
       "      <td>gluten-free</td>\n",
       "      <td>Absolutely! Here's a delicious gluten-free piz...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>14_4</td>\n",
       "      <td>14</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>e4f2de5c-3e80-42f8-bff4-2ee69cffcb2c</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754335e+12</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754335e+12</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>I want to make a birthday cake but I'm diabeti...</td>\n",
       "      <td>diabetic-friendly</td>\n",
       "      <td>Certainly! Here's a delicious and diabetic-fri...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>7_22</td>\n",
       "      <td>7</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>a3e264e6-05a7-4207-8efa-7338492e3a37</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754334e+12</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754334e+12</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Gluten-light recipe - I'm not celiac just sens...</td>\n",
       "      <td>gluten-free</td>\n",
       "      <td>Let's make a delicious **Garlic Herb Shrimp wi...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>48_33</td>\n",
       "      <td>48</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>d27ee86d-ab37-4429-9ebb-71e16ccb3e49</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754335e+12</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754335e+12</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>Low-carb pasta substitute that my Italian gran...</td>\n",
       "      <td>low-carb</td>\n",
       "      <td>Absolutely! Let me introduce you to a delightf...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>25_29</td>\n",
       "      <td>25</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>75da3ba3-ad5c-4bc8-aed3-7c2b1639109e</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754277e+12</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754277e+12</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>Something keto-ish but not super strict</td>\n",
       "      <td>keto</td>\n",
       "      <td>Great choice! Let me recommend a delicious **K...</td>\n",
       "      <td>True</td>\n",
       "      <td>NaN</td>\n",
       "      <td>45_10</td>\n",
       "      <td>45</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>1754271594950</td>\n",
       "      <td>255b853c-f50e-4724-9678-9f430edf0096</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754277e+12</td>\n",
       "      <td>Correct</td>\n",
       "      <td>SallyAnn DeLucia</td>\n",
       "      <td>1.754277e+12</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                               query dietary_restriction  \\\n",
       "0  Gluten-free pizza dough that actually tastes g...         gluten-free   \n",
       "1  I want to make a birthday cake but I'm diabeti...   diabetic-friendly   \n",
       "2  Gluten-light recipe - I'm not celiac just sens...         gluten-free   \n",
       "3  Low-carb pasta substitute that my Italian gran...            low-carb   \n",
       "4            Something keto-ish but not super strict                keto   \n",
       "\n",
       "                                            response success  error trace_id  \\\n",
       "0  Absolutely! Here's a delicious gluten-free piz...    True    NaN     14_4   \n",
       "1  Certainly! Here's a delicious and diabetic-fri...    True    NaN     7_22   \n",
       "2  Let's make a delicious **Garlic Herb Shrimp wi...    True    NaN    48_33   \n",
       "3  Absolutely! Let me introduce you to a delightf...    True    NaN    25_29   \n",
       "4  Great choice! Let me recommend a delicious **K...    True    NaN    45_10   \n",
       "\n",
       "   query_id     created_at     updated_at  \\\n",
       "0        14  1754271594950  1754271594950   \n",
       "1         7  1754271594950  1754271594950   \n",
       "2        48  1754271594950  1754271594950   \n",
       "3        25  1754271594950  1754271594950   \n",
       "4        45  1754271594950  1754271594950   \n",
       "\n",
       "                                     id  \\\n",
       "0  e4f2de5c-3e80-42f8-bff4-2ee69cffcb2c   \n",
       "1  a3e264e6-05a7-4207-8efa-7338492e3a37   \n",
       "2  d27ee86d-ab37-4429-9ebb-71e16ccb3e49   \n",
       "3  75da3ba3-ad5c-4bc8-aed3-7c2b1639109e   \n",
       "4  255b853c-f50e-4724-9678-9f430edf0096   \n",
       "\n",
       "  userannotation.VXNlcjoxMTE4NzpnUkY0.Correctness.label  \\\n",
       "0                                            Correct      \n",
       "1                                            Correct      \n",
       "2                                            Correct      \n",
       "3                                            Correct      \n",
       "4                                            Correct      \n",
       "\n",
       "  userannotation.VXNlcjoxMTE4NzpnUkY0.Correctness.updated_by  \\\n",
       "0                                   SallyAnn DeLucia           \n",
       "1                                   SallyAnn DeLucia           \n",
       "2                                   SallyAnn DeLucia           \n",
       "3                                   SallyAnn DeLucia           \n",
       "4                                   SallyAnn DeLucia           \n",
       "\n",
       "   userannotation.VXNlcjoxMTE4NzpnUkY0.Correctness.updated_at  \\\n",
       "0                                       1.754335e+12            \n",
       "1                                       1.754334e+12            \n",
       "2                                       1.754335e+12            \n",
       "3                                       1.754277e+12            \n",
       "4                                       1.754277e+12            \n",
       "\n",
       "  latest.userannotation.Correctness.label  \\\n",
       "0                                 Correct   \n",
       "1                                 Correct   \n",
       "2                                 Correct   \n",
       "3                                 Correct   \n",
       "4                                 Correct   \n",
       "\n",
       "  latest.userannotation.Correctness.updated_by  \\\n",
       "0                             SallyAnn DeLucia   \n",
       "1                             SallyAnn DeLucia   \n",
       "2                             SallyAnn DeLucia   \n",
       "3                             SallyAnn DeLucia   \n",
       "4                             SallyAnn DeLucia   \n",
       "\n",
       "   latest.userannotation.Correctness.updated_at  \n",
       "0                                  1.754335e+12  \n",
       "1                                  1.754334e+12  \n",
       "2                                  1.754335e+12  \n",
       "3                                  1.754277e+12  \n",
       "4                                  1.754277e+12  "
      ]
     },
     "execution_count": 78,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dataset_id = \"\"  # paste the ID of your labeled dataset from the Arize UI\n",
    "dataset_data = datasets_client.get_dataset(os.environ[\"ARIZE_SPACE_ID\"], dataset_id=dataset_id)\n",
    "\n",
    "dataset_data.head()"
   ]
  },
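  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once human labels and judge labels sit side by side, you can measure judge quality (TPR/FPR) and bias-correct the judge-observed success rate. Below is a minimal sketch with made-up label lists, not the required implementation; swap in the label columns from your own exports. `judgy`'s `estimate_success_rate` provides a similar corrected estimate with confidence intervals.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: judge quality metrics plus a standard (Rogan-Gladen style)\n",
    "# bias correction of the observed success rate. Labels below are made up.\n",
    "\n",
    "\n",
    "def judge_metrics(human_labels, judge_labels):\n",
    "    \"\"\"TPR and FPR of the judge, treating 'correct' as the positive class.\"\"\"\n",
    "    pairs = list(zip(human_labels, judge_labels))\n",
    "    tp = sum(h == \"correct\" and j == \"correct\" for h, j in pairs)\n",
    "    fn = sum(h == \"correct\" and j == \"incorrect\" for h, j in pairs)\n",
    "    fp = sum(h == \"incorrect\" and j == \"correct\" for h, j in pairs)\n",
    "    tn = sum(h == \"incorrect\" and j == \"incorrect\" for h, j in pairs)\n",
    "    tpr = tp / (tp + fn) if (tp + fn) else 0.0\n",
    "    fpr = fp / (fp + tn) if (fp + tn) else 0.0\n",
    "    return tpr, fpr\n",
    "\n",
    "\n",
    "def corrected_success_rate(observed_rate, tpr, fpr):\n",
    "    \"\"\"Correct the judge-observed rate: theta = (p_obs - FPR) / (TPR - FPR).\"\"\"\n",
    "    theta = (observed_rate - fpr) / (tpr - fpr)\n",
    "    return min(max(theta, 0.0), 1.0)\n",
    "\n",
    "\n",
    "# Illustrative example with made-up labels\n",
    "human = [\"correct\", \"correct\", \"correct\", \"incorrect\"]\n",
    "judge = [\"correct\", \"correct\", \"incorrect\", \"incorrect\"]\n",
    "tpr, fpr = judge_metrics(human, judge)\n",
    "print(f\"TPR={tpr:.2f}, FPR={fpr:.2f}\")\n",
    "print(f\"Corrected success rate: {corrected_success_rate(0.60, tpr, fpr):.2f}\")"
   ]
  },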
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>output</th>\n",
       "      <th>example_id</th>\n",
       "      <th>id</th>\n",
       "      <th>count</th>\n",
       "      <th>template</th>\n",
       "      <th>invocation_parameters</th>\n",
       "      <th>tool_choice</th>\n",
       "      <th>tool_options</th>\n",
       "      <th>model_name</th>\n",
       "      <th>model_provider</th>\n",
       "      <th>eval.Label Match.label</th>\n",
       "      <th>eval.Label Match.score</th>\n",
       "      <th>eval.Label Match.explanation</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>{\"id\":\"chatcmpl-C1SLxdy9LuvwZW5v7zWPxaW5yQGTk\"...</td>\n",
       "      <td>e4f2de5c-3e80-42f8-bff4-2ee69cffcb2c</td>\n",
       "      <td>EXP_ID_0527ab</td>\n",
       "      <td>1</td>\n",
       "      <td>[{\"role\":\"system\",\"content\":\"You are a dietary...</td>\n",
       "      <td>{}</td>\n",
       "      <td>\"required\"</td>\n",
       "      <td>[{\"type\":\"function\",\"function\":{\"name\":\"record...</td>\n",
       "      <td>gpt-4o</td>\n",
       "      <td>openAI</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>1. The Output contains a JSON object with a 'r...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>{\"id\":\"chatcmpl-C1SLxWZrtWtkYYPvOc06li6aRMFke\"...</td>\n",
       "      <td>a3e264e6-05a7-4207-8efa-7338492e3a37</td>\n",
       "      <td>EXP_ID_eabd56</td>\n",
       "      <td>1</td>\n",
       "      <td>[{\"role\":\"system\",\"content\":\"You are a dietary...</td>\n",
       "      <td>{}</td>\n",
       "      <td>\"required\"</td>\n",
       "      <td>[{\"type\":\"function\",\"function\":{\"name\":\"record...</td>\n",
       "      <td>gpt-4o</td>\n",
       "      <td>openAI</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>1. The Output provides an explanation of why t...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>{\"id\":\"chatcmpl-C1SLxA2J2fGa5tDjLI29E6eOtivHZ\"...</td>\n",
       "      <td>d27ee86d-ab37-4429-9ebb-71e16ccb3e49</td>\n",
       "      <td>EXP_ID_65f2be</td>\n",
       "      <td>1</td>\n",
       "      <td>[{\"role\":\"system\",\"content\":\"You are a dietary...</td>\n",
       "      <td>{}</td>\n",
       "      <td>\"required\"</td>\n",
       "      <td>[{\"type\":\"function\",\"function\":{\"name\":\"record...</td>\n",
       "      <td>gpt-4o</td>\n",
       "      <td>openAI</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>1. The Output from the LLM judge states that t...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>{\"id\":\"chatcmpl-C1SLxhdDDQdSlKzWB2sUYfJa6XA7h\"...</td>\n",
       "      <td>75da3ba3-ad5c-4bc8-aed3-7c2b1639109e</td>\n",
       "      <td>EXP_ID_d290c2</td>\n",
       "      <td>1</td>\n",
       "      <td>[{\"role\":\"system\",\"content\":\"You are a dietary...</td>\n",
       "      <td>{}</td>\n",
       "      <td>\"required\"</td>\n",
       "      <td>[{\"type\":\"function\",\"function\":{\"name\":\"record...</td>\n",
       "      <td>gpt-4o</td>\n",
       "      <td>openAI</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>1. The Output provides an explanation that the...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>{\"id\":\"chatcmpl-C1SLxtD8l5UgyOwRMQG1D9AMWMoir\"...</td>\n",
       "      <td>255b853c-f50e-4724-9678-9f430edf0096</td>\n",
       "      <td>EXP_ID_b59314</td>\n",
       "      <td>1</td>\n",
       "      <td>[{\"role\":\"system\",\"content\":\"You are a dietary...</td>\n",
       "      <td>{}</td>\n",
       "      <td>\"required\"</td>\n",
       "      <td>[{\"type\":\"function\",\"function\":{\"name\":\"record...</td>\n",
       "      <td>gpt-4o</td>\n",
       "      <td>openAI</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>1. The Output contains a detailed explanation ...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                              output  \\\n",
       "0  {\"id\":\"chatcmpl-C1SLxdy9LuvwZW5v7zWPxaW5yQGTk\"...   \n",
       "1  {\"id\":\"chatcmpl-C1SLxWZrtWtkYYPvOc06li6aRMFke\"...   \n",
       "2  {\"id\":\"chatcmpl-C1SLxA2J2fGa5tDjLI29E6eOtivHZ\"...   \n",
       "3  {\"id\":\"chatcmpl-C1SLxhdDDQdSlKzWB2sUYfJa6XA7h\"...   \n",
       "4  {\"id\":\"chatcmpl-C1SLxtD8l5UgyOwRMQG1D9AMWMoir\"...   \n",
       "\n",
       "                             example_id             id  count  \\\n",
       "0  e4f2de5c-3e80-42f8-bff4-2ee69cffcb2c  EXP_ID_0527ab      1   \n",
       "1  a3e264e6-05a7-4207-8efa-7338492e3a37  EXP_ID_eabd56      1   \n",
       "2  d27ee86d-ab37-4429-9ebb-71e16ccb3e49  EXP_ID_65f2be      1   \n",
       "3  75da3ba3-ad5c-4bc8-aed3-7c2b1639109e  EXP_ID_d290c2      1   \n",
       "4  255b853c-f50e-4724-9678-9f430edf0096  EXP_ID_b59314      1   \n",
       "\n",
       "                                            template invocation_parameters  \\\n",
       "0  [{\"role\":\"system\",\"content\":\"You are a dietary...                    {}   \n",
       "1  [{\"role\":\"system\",\"content\":\"You are a dietary...                    {}   \n",
       "2  [{\"role\":\"system\",\"content\":\"You are a dietary...                    {}   \n",
       "3  [{\"role\":\"system\",\"content\":\"You are a dietary...                    {}   \n",
       "4  [{\"role\":\"system\",\"content\":\"You are a dietary...                    {}   \n",
       "\n",
       "  tool_choice                                       tool_options model_name  \\\n",
       "0  \"required\"  [{\"type\":\"function\",\"function\":{\"name\":\"record...     gpt-4o   \n",
       "1  \"required\"  [{\"type\":\"function\",\"function\":{\"name\":\"record...     gpt-4o   \n",
       "2  \"required\"  [{\"type\":\"function\",\"function\":{\"name\":\"record...     gpt-4o   \n",
       "3  \"required\"  [{\"type\":\"function\",\"function\":{\"name\":\"record...     gpt-4o   \n",
       "4  \"required\"  [{\"type\":\"function\",\"function\":{\"name\":\"record...     gpt-4o   \n",
       "\n",
       "  model_provider eval.Label Match.label  eval.Label Match.score  \\\n",
       "0         openAI                  match                     1.0   \n",
       "1         openAI                  match                     1.0   \n",
       "2         openAI                  match                     1.0   \n",
       "3         openAI                  match                     1.0   \n",
       "4         openAI                  match                     1.0   \n",
       "\n",
       "                        eval.Label Match.explanation  \n",
       "0  1. The Output contains a JSON object with a 'r...  \n",
       "1  1. The Output provides an explanation of why t...  \n",
       "2  1. The Output from the LLM judge states that t...  \n",
       "3  1. The Output provides an explanation that the...  \n",
       "4  1. The Output contains a detailed explanation ...  "
      ]
     },
     "execution_count": 79,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "experiment_id = \"\"  # paste the experiment ID from the Arize UI\n",
    "# Fetch the experiment results for analysis\n",
    "experiments_data = datasets_client.get_experiment(\n",
    "    os.environ[\"ARIZE_SPACE_ID\"], experiment_id=experiment_id\n",
    ")\n",
    "\n",
    "experiments_data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 80,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inner-join experiments_data and dataset_data on experiments_data.example_id == dataset_data.id\n",
    "joined_df = experiments_data.merge(\n",
    "    dataset_data, left_on=\"example_id\", right_on=\"id\", suffixes=(\"_e\", \"_d\")\n",
    ")"
   ]
  },
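  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`merge` performs an inner join by default, so experiment rows whose `example_id` has no matching dataset `id` are silently dropped. A minimal sketch on toy frames (values here are illustrative) showing how the optional `validate` argument can guard against accidental key duplication:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Toy stand-ins for experiments_data and dataset_data\n",
    "toy_e = pd.DataFrame({'example_id': ['a', 'b'], 'score': [1.0, 0.0]})\n",
    "toy_d = pd.DataFrame({'id': ['a', 'b'], 'query': ['vegan pasta', 'keto bread']})\n",
    "\n",
    "# validate='one_to_one' raises MergeError if either key column has duplicates\n",
    "toy_joined = toy_e.merge(toy_d, left_on='example_id', right_on='id', validate='one_to_one')\n",
    "toy_joined"
   ]
  },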
  {
   "cell_type": "code",
   "execution_count": 81,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>parsed_label</th>\n",
       "      <th>eval.Label Match.label</th>\n",
       "      <th>eval.Label Match.score</th>\n",
       "      <th>query</th>\n",
       "      <th>dietary_restriction</th>\n",
       "      <th>response</th>\n",
       "      <th>latest.userannotation.Correctness.label</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>correct</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>Gluten-free pizza dough that actually tastes g...</td>\n",
       "      <td>gluten-free</td>\n",
       "      <td>Absolutely! Here's a delicious gluten-free piz...</td>\n",
       "      <td>Correct</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>correct</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>I want to make a birthday cake but I'm diabeti...</td>\n",
       "      <td>diabetic-friendly</td>\n",
       "      <td>Certainly! Here's a delicious and diabetic-fri...</td>\n",
       "      <td>Correct</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>correct</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>Gluten-light recipe - I'm not celiac just sens...</td>\n",
       "      <td>gluten-free</td>\n",
       "      <td>Let's make a delicious **Garlic Herb Shrimp wi...</td>\n",
       "      <td>Correct</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>correct</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>Low-carb pasta substitute that my Italian gran...</td>\n",
       "      <td>low-carb</td>\n",
       "      <td>Absolutely! Let me introduce you to a delightf...</td>\n",
       "      <td>Correct</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>correct</td>\n",
       "      <td>match</td>\n",
       "      <td>1.0</td>\n",
       "      <td>Something keto-ish but not super strict</td>\n",
       "      <td>keto</td>\n",
       "      <td>Great choice! Let me recommend a delicious **K...</td>\n",
       "      <td>Correct</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "  parsed_label eval.Label Match.label  eval.Label Match.score  \\\n",
       "0      correct                  match                     1.0   \n",
       "1      correct                  match                     1.0   \n",
       "2      correct                  match                     1.0   \n",
       "3      correct                  match                     1.0   \n",
       "4      correct                  match                     1.0   \n",
       "\n",
       "                                               query dietary_restriction  \\\n",
       "0  Gluten-free pizza dough that actually tastes g...         gluten-free   \n",
       "1  I want to make a birthday cake but I'm diabeti...   diabetic-friendly   \n",
       "2  Gluten-light recipe - I'm not celiac just sens...         gluten-free   \n",
       "3  Low-carb pasta substitute that my Italian gran...            low-carb   \n",
       "4            Something keto-ish but not super strict                keto   \n",
       "\n",
       "                                            response  \\\n",
       "0  Absolutely! Here's a delicious gluten-free piz...   \n",
       "1  Certainly! Here's a delicious and diabetic-fri...   \n",
       "2  Let's make a delicious **Garlic Herb Shrimp wi...   \n",
       "3  Absolutely! Let me introduce you to a delightf...   \n",
       "4  Great choice! Let me recommend a delicious **K...   \n",
       "\n",
       "  latest.userannotation.Correctness.label  \n",
       "0                                 Correct  \n",
       "1                                 Correct  \n",
       "2                                 Correct  \n",
       "3                                 Correct  \n",
       "4                                 Correct  "
      ]
     },
     "execution_count": 81,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "\n",
    "def extract_label_from_output(output_str):\n",
    "    \"\"\"\n",
    "    Extract the 'response' field from the tool_calls in the output JSON.\n",
    "    Returns the first 'response' value found, or None if not found.\n",
    "    \"\"\"\n",
    "    try:\n",
    "        output_json = json.loads(output_str)\n",
    "        # Traverse to choices[0].message.tool_calls\n",
    "        choices = output_json.get(\"choices\", [])\n",
    "        for choice in choices:\n",
    "            message = choice.get(\"message\", {})\n",
    "            tool_calls = message.get(\"tool_calls\", [])\n",
    "            for tool_call in tool_calls:\n",
    "                function = tool_call.get(\"function\", {})\n",
    "                arguments_str = function.get(\"arguments\", \"\")\n",
    "                # arguments is a JSON string, so parse it\n",
    "                try:\n",
    "                    arguments = json.loads(arguments_str)\n",
    "                    if \"response\" in arguments:\n",
    "                        return arguments[\"response\"]\n",
    "                except Exception:\n",
    "                    continue\n",
    "        return None\n",
    "    except Exception:\n",
    "        return None\n",
    "\n",
    "\n",
    "joined_df[\"parsed_label\"] = joined_df[\"output\"].apply(extract_label_from_output)\n",
    "\n",
    "final_df = joined_df[\n",
    "    [\n",
    "        \"parsed_label\",\n",
    "        \"eval.Label Match.label\",\n",
    "        \"eval.Label Match.score\",\n",
    "        \"query\",\n",
    "        \"dietary_restriction\",\n",
    "        \"response\",\n",
    "        \"latest.userannotation.Correctness.label\",\n",
    "    ]\n",
    "]\n",
    "final_df.head()"
   ]
  },
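  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the shape `extract_label_from_output` expects, here is a minimal synthetic chat-completion payload: one choice, one tool call, and the label double-encoded inside the tool call's `arguments` string (the field names follow the OpenAI tool-call schema; the payload itself is made up):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Synthetic payload mirroring the structure the parser traverses\n",
    "toy_payload = json.dumps({\n",
    "    'choices': [{'message': {'tool_calls': [\n",
    "        {'function': {'arguments': json.dumps({'response': 'correct'})}}\n",
    "    ]}}]\n",
    "})\n",
    "\n",
    "# Parse the outer JSON first, then the arguments string inside the tool call\n",
    "toy_call = json.loads(toy_payload)['choices'][0]['message']['tool_calls'][0]\n",
    "toy_label = json.loads(toy_call['function']['arguments'])['response']\n",
    "toy_label"
   ]
  },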
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 📊 Judge Performance Analysis\n",
    "\n",
    "Let's evaluate how well our LLM judge performed compared to human ground truth labels.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "📊 Judge Performance Metrics:\n",
      "   True Positive Rate (TPR): 0.901\n",
      "   True Negative Rate (TNR): 0.222\n",
      "   Accuracy: 0.840\n",
      "   Total number of valid label pairs: 100\n",
      "   True Positives: 82, True Negatives: 2\n",
      "   False Positives: 7, False Negatives: 9\n"
     ]
    }
   ],
   "source": [
    "# Calculate judge performance metrics using final_df\n",
    "\n",
    "\n",
    "def to_binary(label):\n",
    "    \"\"\"Convert text labels to binary (1 for correct/match, 0 for incorrect/mismatch)\"\"\"\n",
    "    if pd.isna(label):\n",
    "        return None\n",
    "    label_str = str(label).strip().lower()\n",
    "    # Handle both correctness labels and match/mismatch labels\n",
    "    if label_str in [\"correct\", \"match\"]:\n",
    "        return 1\n",
    "    elif label_str in [\"incorrect\", \"mismatch\"]:\n",
    "        return 0\n",
    "    return None\n",
    "\n",
    "\n",
    "# Extract ground truth from human annotations\n",
    "ground_truth_labels = final_df[\"latest.userannotation.Correctness.label\"]\n",
    "ground_truth = [to_binary(label) for label in ground_truth_labels]\n",
    "\n",
    "# Extract judge predictions from parsed_label (eval template output)\n",
    "judge_pred_labels = final_df[\"parsed_label\"]\n",
    "judge_preds = [to_binary(label) for label in judge_pred_labels]\n",
    "\n",
    "# Only keep valid pairs where both ground truth and predictions are available\n",
    "valid = [\n",
    "    (gt, pred) for gt, pred in zip(ground_truth, judge_preds) if gt is not None and pred is not None\n",
    "]\n",
    "\n",
    "if valid:\n",
    "    gt, pred = zip(*valid)\n",
    "    tp = sum(1 for g, p in zip(gt, pred) if g == 1 and p == 1)\n",
    "    tn = sum(1 for g, p in zip(gt, pred) if g == 0 and p == 0)\n",
    "    fp = sum(1 for g, p in zip(gt, pred) if g == 0 and p == 1)\n",
    "    fn = sum(1 for g, p in zip(gt, pred) if g == 1 and p == 0)\n",
    "    tpr = tp / (tp + fn) if (tp + fn) > 0 else 0\n",
    "    tnr = tn / (tn + fp) if (tn + fp) > 0 else 0\n",
    "    accuracy = (tp + tn) / len(valid)\n",
    "\n",
    "    print(\"📊 Judge Performance Metrics:\")\n",
    "    print(f\"   True Positive Rate (TPR): {tpr:.3f}\")\n",
    "    print(f\"   True Negative Rate (TNR): {tnr:.3f}\")\n",
    "    print(f\"   Accuracy: {accuracy:.3f}\")\n",
    "    print(f\"   Total number of valid label pairs: {len(valid)}\")\n",
    "    print(f\"   True Positives: {tp}, True Negatives: {tn}\")\n",
    "    print(f\"   False Positives: {fp}, False Negatives: {fn}\")\n",
    "\n",
    "    metrics = {\"tpr\": tpr, \"tnr\": tnr, \"accuracy\": accuracy}\n",
    "else:\n",
    "    print(\"❌ No valid label pairs found in final_df\")\n",
    "    print(\"   Ground truth labels:\", ground_truth[:5])\n",
    "    print(\"   Judge predictions:\", judge_preds[:5])"
   ]
  },
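  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, the hand-rolled confusion counts above can be cross-checked with scikit-learn (an assumption here: `scikit-learn` is installed, as it is not in the package list at the top of this notebook):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import confusion_matrix\n",
    "\n",
    "# Toy labels in the same encoding: 1 = correct/match, 0 = incorrect/mismatch\n",
    "gt_toy = [1, 1, 0, 1, 0]\n",
    "pred_toy = [1, 0, 0, 1, 1]\n",
    "\n",
    "# With labels=[0, 1] the matrix unravels as (tn, fp, fn, tp)\n",
    "tn_, fp_, fn_, tp_ = confusion_matrix(gt_toy, pred_toy, labels=[0, 1]).ravel()\n",
    "tpr_ = tp_ / (tp_ + fn_)  # recall on the positive class\n",
    "tnr_ = tn_ / (tn_ + fp_)  # recall on the negative class\n",
    "tpr_, tnr_"
   ]
  },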
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part 5: Evaluate Live Traces\n",
    "\n",
    "After testing and validating our evaluation template, we're ready to use it in a production setting.\n",
    "First, set up the online evaluation task in the platform using the template.\n",
    "Once that's done, you can send in traces, and the evaluation will run automatically on each trace."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>query</th>\n",
       "      <th>dietary_restriction</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>I'm vegan but I really want to make something ...</td>\n",
       "      <td>vegan</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>2</td>\n",
       "      <td>Need a quick gluten-free breakfast. I hate egg...</td>\n",
       "      <td>gluten-free</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>3</td>\n",
       "      <td>Keto breakfast that I can meal prep for the week</td>\n",
       "      <td>keto</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>4</td>\n",
       "      <td>I'm dairy-free and also can't stand the taste ...</td>\n",
       "      <td>dairy-free</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>5</td>\n",
       "      <td>Vegetarian pizza but I don't like mushrooms or...</td>\n",
       "      <td>vegetarian</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   id                                              query dietary_restriction\n",
       "0   1  I'm vegan but I really want to make something ...               vegan\n",
       "1   2  Need a quick gluten-free breakfast. I hate egg...         gluten-free\n",
       "2   3   Keto breakfast that I can meal prep for the week                keto\n",
       "3   4  I'm dairy-free and also can't stand the taste ...          dairy-free\n",
       "4   5  Vegetarian pizza but I don't like mushrooms or...          vegetarian"
      ]
     },
     "execution_count": 84,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Load dietary queries\n",
    "queries_path = Path(\"homeworks/hw3/data/dietary_queries.csv\")\n",
    "queries_df = pd.read_csv(queries_path)\n",
    "\n",
    "queries_df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are a helpful, accurate, and creative recipe assistant. Your job is to generate easy-to-follow, reliable recipes and cooking advice tailored to the user query below.\n",
      "\n",
      "Core Responsibilities:\n",
      "- Always include an ingredient list with precise measurements in standard US or metric units.\n",
      "- Always include clear, numbered, step-by-step instructions that are logically ordered and easy to follow.\n",
      "- Always structure your response in Markdown.\n",
      "\n",
      "Ingredient Guidelines:\n",
      "- Never suggest rare, expensive, or difficult-to-obtain ingredients without clearly providing readily available substitutions.\n",
      "- Be specific with ingredients (e.g., “1 cup unsweetened almond milk” instead of “milk”).\n",
      "\n",
      "Instructional Guidelines:\n",
      "- Do not skip steps or assume prior knowledge.\n",
      "- Use direct, instructional language.\n",
      "- Include preparation and cook time only if reliably known.\n",
      "\n",
      "Behavior & Ethics:\n",
      "- Never include unsafe, unethical, or harmful suggestions. Politely decline and explain briefly if a request cannot be fulfilled — without being preachy or moralizing.\n",
      "- Never use offensive or derogatory language.\n",
      "- Use creative combinations when a direct recipe doesn’t exist, but clearly state when you're improvising.\n",
      "\n",
      "Style & Formatting:\n",
      "Structure all responses using the following Markdown format:\n",
      "\n",
      "Begin with:\n",
      "## Recipe Name\n",
      "\n",
      "A 1–3 sentence, enticing description of the dish and why or when it's great.\n",
      "\n",
      "### Ingredients\n",
      "* List all ingredients using bullet points, each with precise amounts and clear names.\n",
      "\n",
      "### Instructions\n",
      "1. Provide step-by-step instructions in logical cooking order.\n",
      "\n",
      "Optionally include, if relevant:\n",
      "### Notes\n",
      "Additional context or background information.\n",
      "\n",
      "### Tips\n",
      "Suggestions for technique, improvements, or best results.\n",
      "\n",
      "### Variations\n",
      "Common substitutions or flavor variations.\n",
      "\n",
      "User Query: {query}\n"
     ]
    }
   ],
   "source": [
    "from arize.experimental.prompt_hub import ArizePromptClient\n",
    "\n",
    "prompt_client = ArizePromptClient(\n",
    "    space_id=os.environ[\"ARIZE_SPACE_ID\"], api_key=os.environ[\"ARIZE_API_KEY\"]\n",
    ")\n",
    "\n",
    "prompt = prompt_client.pull_prompt(prompt_name=\"RecipeBot System Prompt\")\n",
    "\n",
    "system_prompt = prompt.messages[0][\"content\"]\n",
    "\n",
    "print(system_prompt)\n",
    "\n",
    "# Alternatively, assign the system prompt directly here instead of pulling it from the Prompt Hub"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Query: Need a quick gluten-free breakfast. I hate eggs though.\n",
      "Dietary Restriction: gluten-free\n",
      "Response snippet: ## Quick Gluten-Free Breakfast: Banana Oatmeal Pancakes\n",
      "\n",
      "These delicious banana oatmeal pancakes are an excellent gluten-free breakfast option that doesn’t require eggs. They are quick to prepare, nat...\n"
     ]
    }
   ],
   "source": [
    "from opentelemetry import trace\n",
    "\n",
    "# Example with a single query\n",
    "single_query = queries_df[\"query\"].iloc[1]  # use the second query as an example\n",
    "dietary_restriction = queries_df[\"dietary_restriction\"].iloc[1]\n",
    "\n",
    "# Make the OpenAI call (which will be auto-instrumented)\n",
    "single_response = client.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=[\n",
    "        {\"role\": \"system\", \"content\": system_prompt},\n",
    "        {\"role\": \"user\", \"content\": single_query},\n",
    "    ],\n",
    "    temperature=0.7,\n",
    ")\n",
    "\n",
    "# Get the current span and add metadata to it\n",
    "current_span = trace.get_current_span()\n",
    "if current_span:\n",
    "    current_span.set_attribute(\"attributes.metadata.dietary_restriction\", dietary_restriction)\n",
    "    current_span.set_attribute(\"attributes.metadata.query_id\", int(queries_df[\"id\"].iloc[1]))\n",
    "    current_span.set_attribute(\"attributes.metadata.use_case\", \"alternative_approach\")\n",
    "\n",
    "print(\"Query:\", single_query)\n",
    "print(\"Dietary Restriction:\", dietary_restriction)\n",
    "response_content = single_response.choices[0].message.content\n",
    "if response_content:\n",
    "    print(\"Response snippet:\", response_content[:200] + \"...\")\n",
    "else:\n",
    "    print(\"No response content available\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Processed 20/60 queries (batch 1)\n",
      "✅ Processed 40/60 queries (batch 2)\n",
      "✅ Processed 60/60 queries (batch 3)\n",
      "\n",
      "🚀 Successfully processed 60 queries with custom metadata!\n",
      "📊 Average time per query: ~0.8 seconds (estimated)\n",
      "\n",
      "📝 Sample result:\n",
      "Query: I'm vegan but I really want to make something with honey - is there a good substitute? i am craving ...\n",
      "Dietary Restriction: vegan\n",
      "Response: ## Vegan Yogurt Breakfast Bowl with Agave Nectar\n",
      "\n",
      "If you're looking for a sweet and satisfying vegan breakfast, this yogurt bowl topped with fresh fru...\n"
     ]
    }
   ],
   "source": [
    "import asyncio\n",
    "\n",
    "from opentelemetry import trace\n",
    "\n",
    "# Create async OpenAI client\n",
    "async_client = openai.AsyncOpenAI()\n",
    "\n",
    "\n",
    "async def get_response_with_metadata(i, query, dietary_restriction, query_id):\n",
    "    \"\"\"Process a single query with custom metadata\"\"\"\n",
    "    # The OpenAI call will be auto-instrumented\n",
    "    response = await async_client.chat.completions.create(\n",
    "        model=\"gpt-4o-mini\",\n",
    "        messages=[{\"role\": \"system\", \"content\": system_prompt}, {\"role\": \"user\", \"content\": query}],\n",
    "        temperature=0.7,\n",
    "    )\n",
    "\n",
    "    # Get the current auto-instrumented span and add metadata to it\n",
    "    current_span = trace.get_current_span()\n",
    "    if current_span:\n",
    "        current_span.set_attribute(\"attributes.metadata.dietary_restriction\", dietary_restriction)\n",
    "\n",
    "    return {\n",
    "        \"query\": query,\n",
    "        \"dietary_restriction\": dietary_restriction,\n",
    "        \"response\": response.choices[0].message.content,\n",
    "    }\n",
    "\n",
    "\n",
    "# Process all queries with batching to avoid rate limits\n",
    "responses = []\n",
    "batch_size = 20  # Process 20 at a time to avoid rate limits\n",
    "\n",
    "for batch_start in range(0, len(queries_df), batch_size):\n",
    "    batch_end = min(batch_start + batch_size, len(queries_df))\n",
    "    batch_tasks = []\n",
    "\n",
    "    # Create tasks for this batch\n",
    "    for i in range(batch_start, batch_end):\n",
    "        query = str(queries_df.iloc[i][\"query\"])\n",
    "        dietary_restriction = str(queries_df.iloc[i][\"dietary_restriction\"])\n",
    "        query_id = int(queries_df.iloc[i][\"id\"])\n",
    "\n",
    "        task = get_response_with_metadata(i, query, dietary_restriction, query_id)\n",
    "        batch_tasks.append(task)\n",
    "\n",
    "    # Process this batch concurrently\n",
    "    batch_responses = await asyncio.gather(*batch_tasks)\n",
    "    responses.extend(batch_responses)\n",
    "\n",
    "    print(\n",
    "        f\"✅ Processed {len(responses)}/{queries_df.shape[0]} queries (batch {len(responses) // batch_size})\"\n",
    "    )\n",
    "\n",
    "print(f\"\\n🚀 Successfully processed {len(responses)} queries with custom metadata!\")\n",
    "# NB: the 50 here is a hard-coded rough estimate of total wall-clock seconds\n",
    "print(f\"📊 Average time per query: ~{50 / len(responses):.1f} seconds (estimated)\")\n",
    "\n",
    "# Show a sample of the results\n",
    "if responses:\n",
    "    print(\"\\n📝 Sample result:\")\n",
    "    sample_response = responses[0]\n",
    "    print(f\"Query: {sample_response['query'][:100]}...\")\n",
    "    print(f\"Dietary Restriction: {sample_response['dietary_restriction']}\")\n",
    "    print(f\"Response: {sample_response['response'][:150]}...\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Monitor Live Evaluation Results\n",
    "\n",
    "Navigate to the Arize UI and check the traces. You should see your online evaluation task automatically processing the new traces. Look for the evaluation scores and any patterns in the results."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## Part 6: Statistical Analysis with Bias Correction 📊\n",
    "\n",
    "Now we'll apply statistical bias correction to get a reliable estimate of the Recipe Bot's true dietary adherence performance.\n",
    "\n",
    "**What we're doing:**\n",
    "1. **Export live traces** from Arize that have been automatically evaluated by our judge\n",
    "2. **Use judge performance** (TPR/TNR) calculated from our labeled dataset above\n",
    "3. **Apply bias correction** to get a more accurate estimate of true performance\n",
    "4. **Calculate confidence intervals** to understand the reliability of our estimates\n",
    "\n",
    "This approach lets us evaluate real production performance using statistical methods to account for judge bias.\n",
    "\n"
   ]
  },
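  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core of the bias correction can be written in one line. If `p_obs` is the judge's observed pass rate on live traces, the Rogan-Gladen estimator inverts the judge's error rates to recover the true rate: `theta = (p_obs + TNR - 1) / (TPR + TNR - 1)`. The sketch below shows only this core idea; the `judgy` call later in the notebook handles the full estimation for us.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def corrected_rate(p_obs, tpr, tnr):\n",
    "    \"\"\"Rogan-Gladen correction for an imperfect binary judge.\"\"\"\n",
    "    theta = (p_obs + tnr - 1) / (tpr + tnr - 1)\n",
    "    # Sampling noise can push the estimate outside [0, 1], so clip it\n",
    "    return min(max(theta, 0.0), 1.0)\n",
    "\n",
    "\n",
    "# A perfect judge (TPR = TNR = 1) passes the observed rate through unchanged\n",
    "corrected_rate(0.85, 1.0, 1.0)"
   ]
  },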
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[38;21m  arize.utils.logging | INFO | Creating named session as 'python-sdk-arize_python_export_client-0760084f-9d98-4a02-9c44-9cdf77bde278'.\u001b[0m\n",
      "\u001b[38;21m  arize.utils.logging | INFO | Fetching data...\u001b[0m\n",
      "\u001b[38;21m  arize.utils.logging | INFO | Starting exporting...\u001b[0m\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  exporting 151 rows: 100%|\u001b[38;2;0;128;0m██████████████████████\u001b[0m| 151/151 [00:00, 324.04 row/s]\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "export_client = ArizeExportClient(api_key=os.environ[\"ARIZE_API_KEY\"])\n",
    "\n",
    "# Set end_time to now and start_time to 24 hours ago\n",
    "end_time = datetime.now()\n",
    "start_time = end_time - timedelta(days=1)\n",
    "\n",
    "new_traces_df = export_client.export_model_to_df(\n",
    "    space_id=os.environ[\"ARIZE_SPACE_ID\"],\n",
    "    # api_key=os.environ[\"ARIZE_API_KEY\"],\n",
    "    model_id=\"RecipeBot\",\n",
    "    environment=Environments.TRACING,\n",
    "    start_time=start_time,\n",
    "    end_time=end_time,\n",
    "    # Optionally specify columns to improve query performance\n",
    "    # columns=['context.span_id', 'attributes.llm.input']\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "📊 Data for judgy:\n",
      "   Test labels: 100 samples\n",
      "   Test predictions: 100 samples\n",
      "   Live predictions: 151 samples\n",
      "   Test accuracy: 0.840\n",
      "\n",
      "📊 Results:\n",
      "   Raw pass rate: 0.993\n",
      "   ✅ Used judgy.estimate_success_rate successfully!\n"
     ]
    }
   ],
   "source": [
    "# Get live predictions and judge performance\n",
    "eval_score_col = \"eval.Dietary Restriction Adherence.score\"\n",
    "live_predictions = np.array(new_traces_df[eval_score_col].dropna().tolist())\n",
    "\n",
    "# Prepare data for judgy.estimate_success_rate\n",
    "# We need: test_labels, test_preds, unlabeled_preds\n",
    "\n",
    "# Extract test data from our labeled dataset (final_df)\n",
    "def to_binary(label):\n",
    "    \"\"\"Map a 'correct'/'incorrect' label to 1/0; return None if unparseable.\"\"\"\n",
    "    if pd.notna(label):\n",
    "        normalized = str(label).strip().lower()\n",
    "        if normalized == \"correct\":\n",
    "            return 1\n",
    "        if normalized == \"incorrect\":\n",
    "            return 0\n",
    "    return None\n",
    "\n",
    "test_ground_truth = []\n",
    "test_judge_preds = []\n",
    "\n",
    "for _, row in final_df.iterrows():\n",
    "    gt = to_binary(row[\"latest.userannotation.Correctness.label\"])\n",
    "    judge = to_binary(row[\"parsed_label\"])\n",
    "    # Keep a row only if both the human label and the judge label parsed\n",
    "    if gt is None or judge is None:\n",
    "        continue\n",
    "    test_ground_truth.append(gt)\n",
    "    test_judge_preds.append(judge)\n",
    "\n",
    "# Convert to numpy arrays\n",
    "test_labels = np.array(test_ground_truth)\n",
    "test_preds = np.array(test_judge_preds)\n",
    "unlabeled_preds = live_predictions  # These are already binary (0/1)\n",
    "\n",
    "print(\"📊 Data for judgy:\")\n",
    "print(f\"   Test labels: {len(test_labels)} samples\")\n",
    "print(f\"   Test predictions: {len(test_preds)} samples\")\n",
    "print(f\"   Live predictions: {len(unlabeled_preds)} samples\")\n",
    "print(f\"   Test accuracy: {(test_labels == test_preds).mean():.3f}\")\n",
    "\n",
    "# Bias correction with judgy: estimate_success_rate returns the corrected\n",
    "# success rate along with lower/upper 95% confidence bounds\n",
    "theta_hat, lower_bound, upper_bound = estimate_success_rate(\n",
    "    test_labels=test_labels, test_preds=test_preds, unlabeled_preds=unlabeled_preds\n",
    ")\n",
    "\n",
    "# Results\n",
    "print(\"\\n📊 Results:\")\n",
    "print(f\"   Raw pass rate: {live_predictions.mean():.3f}\")\n",
    "print(f\"   Corrected success rate: {theta_hat:.3f}\")\n",
    "print(f\"   95% CI: [{lower_bound:.3f}, {upper_bound:.3f}]\")"
   ]
  },
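  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition about what the correction is doing: estimate the judge's TPR and TNR on the labeled test set, then invert the relation `p_obs = theta * TPR + (1 - theta) * (1 - TNR)` to recover the true pass rate `theta`. The sketch below is a simplified point estimate only, and is not `judgy`'s actual implementation (which also produces a confidence interval):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def corrected_success_rate(test_labels, test_preds, unlabeled_preds):\n",
    "    test_labels = np.asarray(test_labels)\n",
    "    test_preds = np.asarray(test_preds)\n",
    "    # Judge accuracy on positives and negatives, from the labeled test set\n",
    "    tpr = test_preds[test_labels == 1].mean()\n",
    "    tnr = 1 - test_preds[test_labels == 0].mean()\n",
    "    p_obs = np.asarray(unlabeled_preds).mean()\n",
    "    # Invert p_obs = theta*TPR + (1 - theta)*(1 - TNR), clipping to [0, 1]\n",
    "    theta = (p_obs - (1 - tnr)) / (tpr + tnr - 1)\n",
    "    return float(np.clip(theta, 0.0, 1.0))\n",
    "\n",
    "# A perfect judge leaves the observed rate unchanged\n",
    "print(corrected_success_rate([1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0]))  # 0.75\n",
    "```\n"
   ]
  },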
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## 🎉 Assignment Complete!\n",
    "\n",
    "**What you accomplished:**\n",
    "- ✅ Prepared trace data for evaluation testing\n",
    "- ✅ Used Arize UI for manual labeling to establish ground truth\n",
    "- ✅ Developed and tested LLM judge against human feedback\n",
    "- ✅ Aligned judge performance with human annotations (TPR, TNR)\n",
    "- ✅ Applied judge to evaluate \"production\" traces at scale\n",
    "- ✅ Applied statistical bias correction to account for judge imperfections\n",
    "- ✅ Generated comprehensive evaluation report with confidence intervals\n",
    "\n",
    "**Key insight:** By aligning the LLM judge with human feedback through the Arize UI testing workflow, we can now evaluate dietary adherence at scale while accounting for judge bias through statistical correction.\n",
    "\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "recipe-bot-hw2",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
