{
    "cells": [
        {
            "cell_type": "markdown",
            "id": "ffd790e4",
            "metadata": {},
            "source": [
                "# Homework 5 - Evals for Failure Analysis with Phoenix"
            ]
        },
        {
            "cell_type": "markdown",
            "id": "c45295dc",
            "metadata": {},
            "source": [
                "<center>\n",
                "    <p style=\"text-align:left\">\n",
                "        <img alt=\"phoenix logo\" src=\"https://repository-images.githubusercontent.com/564072810/f3666cdf-cb3e-4056-8a25-27cb3e6b5848\" width=\"600\"/>\n",
                "        <br>\n",
                "        <a href=\"https://arize.com/docs/phoenix/\">Docs</a>\n",
                "        |\n",
                "        <a href=\"https://github.com/Arize-ai/phoenix\">GitHub</a>\n",
                "        |\n",
                "        <a href=\"https://arize-ai.slack.com/join/shared_invite/zt-2w57bhem8-hq24MB6u7yE_ZF_ilOYSBw#/shared-invite/email\">Community</a>\n",
                "    </p>\n",
                "</center>\n",
                "\n",
                "## Launch Phoenix\n",
                "\n",
                "First, let's set up Phoenix on our local machine. You should run these commands within your terminal in your chosen environment.\n",
                "\n",
                "(If you have already done this in a previous HW assignment, you are good to go.)\n",
                "\n",
                "**Install Phoenix**\n",
                "\n",
                "```pip install arize-phoenix```\n",
                "\n",
                "**Boot up Phoenix on localhost**\n",
                "\n",
                "```phoenix serve```"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "ed9c09f2",
            "metadata": {},
            "outputs": [],
            "source": [
                "import getpass\n",
                "import os\n",
                "from typing import List\n",
                "\n",
                "import matplotlib.pyplot as plt\n",
                "import pandas as pd\n",
                "\n",
                "from phoenix.client import AsyncClient\n",
                "from phoenix.client.types.spans import SpanQuery\n",
                "\n",
                "os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"OPENAI_API_KEY\") or getpass.getpass(\n",
                "    \"Enter your OpenAI API key: \"\n",
                ")"
            ]
        },
        {
            "cell_type": "markdown",
            "id": "759276eb",
            "metadata": {},
            "source": [
                "# Pipeline States\n",
                "\n",
                "Below are the states of the recipe agent pipeline we are simulating, in order.\n",
                "\n",
                "**ParseRequest** - LLM interprets and analyzes the user's query to understand what they're asking for\n",
                "\n",
                "**PlanToolCalls** - LLM decides which tools to invoke and in what order based on the parsed request\n",
                "\n",
                "**GenRecipeArgs** - LLM constructs JSON arguments for the recipe database search based on customer profile\n",
                "\n",
                "**GetRecipes** - Executes the recipe-search tool to find relevant recipes matching the criteria\n",
                "\n",
                "**GenWebArgs** - LLM constructs JSON arguments for web search to find additional cooking tips/information\n",
                "\n",
                "**GetWebInfo** - Executes the web-search tool to retrieve supplementary cooking information\n",
                "\n",
                "**ComposeResponse** - LLM drafts the final answer combining recipes and web information\n",
                "\n",
                "**DeliverResponse** - Agent sends the composed response to the user (this final state is not evaluated below, so it is omitted from `PIPELINE_STATES`)"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "7ecd1cdb",
            "metadata": {},
            "outputs": [],
            "source": [
                "PIPELINE_STATES: List[str] = [\n",
                "    \"ParseRequest\",\n",
                "    \"PlanToolCalls\",\n",
                "    \"GenRecipeArgs\",\n",
                "    \"GetRecipes\",\n",
                "    \"GenWebArgs\",\n",
                "    \"GetWebInfo\",\n",
                "    \"ComposeResponse\",\n",
                "]\n",
                "STATE_INDEX = {s: i for i, s in enumerate(PIPELINE_STATES)}"
            ]
        },
        {
            "cell_type": "markdown",
            "id": "75c8457c",
            "metadata": {},
            "source": [
                "## Generate Phoenix Traces\n",
                "\n",
                "Here we are making 100 requests to the recipe bot and then collecting traces for those requests in Phoenix.\n",
                "\n",
                "You can look at the generate_traces_phoenix.py file for more details on how this is implemented."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "6a7e1dc8",
            "metadata": {},
            "outputs": [],
            "source": [
                "%run generate_traces_phoenix.py"
            ]
        },
        {
            "cell_type": "markdown",
            "id": "7c469203",
            "metadata": {},
            "source": [
                "## Evals\n",
                "\n",
                "Here we run the evals. We have 7 evaluators, each with its own prompt, designed to evaluate one of the 7 states of the recipe bot application. You can see the evaluator prompts in the `evaluators` directory.\n",
                "\n",
                "We use Phoenix's `SpanQuery()` to load the spans for each of the 7 states.\n",
                "\n",
                "We use the Phoenix function `llm_generate` to run the evals; its built-in concurrency makes the LLM calls for the evals much faster.\n",
                "\n",
                "Finally, we use `log_span_annotations_dataframe` to log our evals back onto their spans in Phoenix."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "d00ba0ce",
            "metadata": {},
            "outputs": [],
            "source": [
                "# Evals\n",
                "import re\n",
                "\n",
                "import nest_asyncio\n",
                "\n",
                "from phoenix.evals import OpenAIModel, llm_generate\n",
                "\n",
                "nest_asyncio.apply()\n",
                "\n",
                "eval_to_path = {\n",
                "    \"ParseRequest\": \"evaluators/parse_request_eval.txt\",\n",
                "    \"PlanToolCalls\": \"evaluators/plan_tool_calls_eval.txt\",\n",
                "    \"GenRecipeArgs\": \"evaluators/gen_recipe_args_eval.txt\",\n",
                "    \"GetRecipes\": \"evaluators/get_recipes_eval.txt\",\n",
                "    \"GenWebArgs\": \"evaluators/gen_web_args_eval.txt\",\n",
                "    \"GetWebInfo\": \"evaluators/get_web_info_eval.txt\",\n",
                "    \"ComposeResponse\": \"evaluators/compose_response_eval.txt\",\n",
                "}\n",
                "\n",
                "\n",
                "async def load_spans(name: str) -> pd.DataFrame:\n",
                "    query = SpanQuery().where(f\"name == '{name}'\")\n",
                "    px_client = AsyncClient()\n",
                "    spans_df = await px_client.spans.get_spans_dataframe(\n",
                "        query=query, project_identifier=\"recipe-agent-hw5\"\n",
                "    )\n",
                "    print(f\"Successfully loaded {len(spans_df)} {name} spans from Phoenix\")\n",
                "    return spans_df\n",
                "\n",
                "\n",
                "annotated_spans = []\n",
                "\n",
                "\n",
                "async def eval_spans(spans_df: pd.DataFrame, eval_prompt: str) -> pd.DataFrame:\n",
                "    def parser(response: str, row_index: int) -> dict:\n",
                "        \"\"\"Output parser for llm_generate: extract label and explanation from the judge's JSON\"\"\"\n",
                "        label = r'\"label\":\\s*\"([^\"]*)\"'\n",
                "        explanation = r'\"explanation\":\\s*\"([^\"]*)\"'\n",
                "        label_match = re.search(label, response, re.IGNORECASE)\n",
                "        explanation_match = re.search(explanation, response, re.IGNORECASE)\n",
                "        if label_match and explanation_match:\n",
                "            return {\"label\": label_match.group(1), \"explanation\": explanation_match.group(1)}\n",
                "        return {\"label\": \"UNKNOWN\", \"explanation\": \"Failed to parse response\"}\n",
                "\n",
                "    eval_model = OpenAIModel(\n",
                "        model=\"gpt-4o\", model_kwargs={\"response_format\": {\"type\": \"json_object\"}, \"temperature\": 0}\n",
                "    )\n",
                "\n",
                "    # Generate evaluations using llm_generate\n",
                "    failure_analysis = llm_generate(\n",
                "        dataframe=spans_df,\n",
                "        template=eval_prompt,\n",
                "        model=eval_model,\n",
                "        output_parser=parser,\n",
                "        concurrency=10,\n",
                "    )\n",
                "\n",
                "    failure_analysis[\"context.trace_id\"] = spans_df[\"context.trace_id\"]\n",
                "    failure_analysis[\"attributes.input.value\"] = spans_df[\"attributes.input.value\"]\n",
                "    failure_analysis[\"attributes.output.value\"] = spans_df[\"attributes.output.value\"]\n",
                "    annotated_spans.append(failure_analysis)\n",
                "\n",
                "    px_client = AsyncClient()\n",
                "    await px_client.spans.log_span_annotations_dataframe(\n",
                "        dataframe=failure_analysis,\n",
                "        annotation_name=\"Eval\",\n",
                "        annotator_kind=\"LLM\",\n",
                "    )\n",
                "\n",
                "    return failure_analysis\n",
                "\n",
                "\n",
                "for eval_name, eval_path in eval_to_path.items():\n",
                "    spans_df = await load_spans(eval_name)\n",
                "    with open(eval_path) as f:\n",
                "        eval_prompt = f.read()\n",
                "    await eval_spans(spans_df, eval_prompt)"
            ]
        },
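        {
            "cell_type": "markdown",
            "id": "f1a2b3c4",
            "metadata": {},
            "source": [
                "As a quick illustration of what the `parser` above extracts (the response below is a made-up example, not real eval output), the two regexes pull `label` and `explanation` out of the judge's JSON response:\n",
                "\n",
                "```python\n",
                "import re\n",
                "\n",
                "response = '{\"label\": \"fail\", \"explanation\": \"Search query ignored the dietary restriction\"}'\n",
                "label = re.search(r'\"label\":\\s*\"([^\"]*)\"', response, re.IGNORECASE).group(1)\n",
                "explanation = re.search(r'\"explanation\":\\s*\"([^\"]*)\"', response, re.IGNORECASE).group(1)\n",
                "print(label)  # fail\n",
                "print(explanation)  # Search query ignored the dietary restriction\n",
                "```"
            ]
        },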
        {
            "cell_type": "markdown",
            "id": "d2e54406",
            "metadata": {},
            "source": [
                "## Attach Evals up to Root Trace\n",
                "\n",
                "This code propagates our evals up to the root trace, so each root span records, for each of the 7 states within, whether that state passed or failed."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "f3593523",
            "metadata": {},
            "outputs": [],
            "source": [
                "# Aggregate each state's eval label and explanation onto its trace's root AGENT span\n",
                "traces = {}\n",
                "\n",
                "for idx, span_df in enumerate(annotated_spans):\n",
                "    failure_state_name = PIPELINE_STATES[idx]\n",
                "    for _, span in span_df.iterrows():\n",
                "        trace_id = span[\"context.trace_id\"]\n",
                "        trace_query = SpanQuery().where(\n",
                "            f\"context.trace_id == '{trace_id}' and span_kind == 'AGENT'\"\n",
                "        )\n",
                "        px_client = AsyncClient()\n",
                "        trace_df = await px_client.spans.get_spans_dataframe(\n",
                "            query=trace_query, project_identifier=\"recipe-agent-hw5\"\n",
                "        )\n",
                "        trace_df[\"label\"] = [span[\"label\"]]\n",
                "        trace_df[\"explanation\"] = [span[\"explanation\"]]\n",
                "        await px_client.spans.log_span_annotations_dataframe(\n",
                "            dataframe=trace_df,\n",
                "            annotation_name=failure_state_name,\n",
                "            annotator_kind=\"LLM\",\n",
                "        )\n",
                "        trace = trace_df.iloc[0]\n",
                "        del trace[\"label\"]\n",
                "        del trace[\"explanation\"]\n",
                "        if trace_id not in traces:\n",
                "            traces[trace_id] = trace\n",
                "        traces[trace_id][failure_state_name] = span[\"label\"]\n",
                "        traces[trace_id][f\"{failure_state_name}_explanation\"] = span[\"explanation\"]"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "8244e53c",
            "metadata": {},
            "outputs": [],
            "source": [
                "import pandas as pd\n",
                "\n",
                "# Convert traces dict to DataFrame\n",
                "df = pd.DataFrame.from_dict(traces, orient=\"index\")\n",
                "\n",
                "# Count failures for each state\n",
                "failure_counts = (df[PIPELINE_STATES] == \"fail\").sum()\n",
                "\n",
                "all_pass_count = (df[PIPELINE_STATES] != \"fail\").all(axis=1).sum()\n",
                "failure_counts[\"all_pass\"] = all_pass_count\n",
                "\n",
                "# Plot\n",
                "plt.figure(figsize=(8, 6))\n",
                "failure_counts.plot(kind=\"bar\")\n",
                "\n",
                "plt.title(\"Distribution of Failures per Pipeline State\")\n",
                "plt.xlabel(\"Pipeline State\")\n",
                "plt.ylabel(\"Number of Traces\")\n",
                "plt.xticks(rotation=45)\n",
                "plt.tight_layout()\n",
                "plt.show()"
            ]
        },
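        {
            "cell_type": "markdown",
            "id": "c9d8e7f6",
            "metadata": {},
            "source": [
                "To make the counting above concrete, here is the same pandas pattern on a tiny hand-made table (toy data, not real eval results): `(df[states] == \"fail\").sum()` counts failing traces per state column, and `(df[states] != \"fail\").all(axis=1)` flags traces that failed nowhere:\n",
                "\n",
                "```python\n",
                "import pandas as pd\n",
                "\n",
                "states = [\"ParseRequest\", \"GetWebInfo\"]\n",
                "toy = pd.DataFrame(\n",
                "    {\"ParseRequest\": [\"pass\", \"fail\", \"pass\"], \"GetWebInfo\": [\"pass\", \"pass\", \"fail\"]}\n",
                ")\n",
                "per_state = (toy[states] == \"fail\").sum()  # failures per state\n",
                "all_pass = (toy[states] != \"fail\").all(axis=1).sum()  # traces with no failures\n",
                "print(per_state.to_dict())  # {'ParseRequest': 1, 'GetWebInfo': 1}\n",
                "print(all_pass)  # 1\n",
                "```"
            ]
        },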
        {
            "cell_type": "markdown",
            "id": "9e0916da",
            "metadata": {},
            "source": [
                "### Analysis of System Reliability and Failure Distribution\n",
                "\n",
                "1. **Overall Reliability**\n",
                "   Out of approximately 100 traces, about **one-third (32–33)** completed successfully without any failures. The remaining **two-thirds experienced at least one failure** along the pipeline. This indicates that while the system is not universally fragile, it is also not yet stable enough for production use.\n",
                "\n",
                "2. **Where Failures Occur**\n",
                "   Failures are not evenly distributed across the pipeline — instead, they cluster around a few key stages:\n",
                "\n",
                "* **GetWebInfo (33 failures):** The largest single source of errors, suggesting issues with external dependencies such as API reliability, query quality, or rate limits.\n",
                "* **ParseRequest (18) and PlanToolCalls (17):** Together these account for 35 failures, making input interpretation and tool planning the second biggest weakness.\n",
                "* **GenRecipeArgs, GetRecipes, GenWebArgs (8–11 each):** Moderate failure levels, likely reflecting occasional misalignments when constructing arguments or retrieving results.\n",
                "* **ComposeResponse (1):** Very low failure rate, showing the model can reliably synthesize answers when given valid inputs. Errors here are likely side effects of upstream problems.\n",
                "\n",
                "3. **Key Insights**\n",
                "   The system demonstrates a **bimodal pattern**:\n",
                "\n",
                "* Either a trace runs flawlessly end-to-end,\n",
                "* Or it fails in predictable spots (primarily GetWebInfo, ParseRequest, or PlanToolCalls).\n",
                "\n",
                "This clustering of root causes means that **targeted improvements in just a few stages** could significantly increase the overall success rate and bring the pipeline closer to production stability."
            ]
        },
        {
            "cell_type": "markdown",
            "id": "9836bafd",
            "metadata": {},
            "source": [
                "## Completing the Feedback Loop\n",
                "\n",
                "Evals are powerful because they tell us not only *where* things went wrong, but *why*. That information is stored in the eval explanations.\n",
                "\n",
                "You can feed the eval explanations to an LLM to generate targeted improvement strategies for your application!\n",
                "\n",
                "![feedback loop](https://storage.googleapis.com/arize-phoenix-assets/assets/images/Screenshot%202025-08-19%20at%208.38.11%E2%80%AFPM.png)"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "3e5714d9",
            "metadata": {},
            "outputs": [],
            "source": [
                "GetWebInfo_code = \"\"\"\n",
                "def gen_web_args(recipes: str, failure: int) -> str:\n",
                "    '''Simulate GenWebArgs state - LLM constructs arguments for web search.'''\n",
                "    if failure != 5:\n",
                "        prompt = '''\n",
                "        Generate web_search arguments to find cooking tips based on these recipes.\n",
                "\n",
                "        Recipes: {{recipes}}\n",
                "\n",
                "        Return just the query, which is a string containing a web search query for additional information, cooking tips, special ingredients, etc.\n",
                "        '''\n",
                "    response = chat_completion([{\"role\": \"user\", \"content\": prompt}])\n",
                "\n",
                "@tracer.tool(name=\"GetWebInfo\", description=\"Search the web for recipe information and tips\")\n",
                "def get_web_info(search_query: str, failure: int) -> str:\n",
                "    '''GetWebInfo state - Executes Tavily web search for recipe information.'''\n",
                "\n",
                "    # Initialize Tavily client\n",
                "    tavily_client = TavilyClient(api_key=os.getenv(\"TAVILY_API_KEY\"))\n",
                "\n",
                "    if failure != 6:\n",
                "        # Normal search for recipe information\n",
                "        response = tavily_client.search(search_query)\n",
                "        return json.dumps(response)\n",
                "    else:\n",
                "        # Search for off-topic information for testing\n",
                "        off_topic_query = \"unrelated topic information\"\n",
                "        response = tavily_client.search(off_topic_query)\n",
                "        return json.dumps(response)\n",
                "\n",
                "\"\"\""
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "3e4f73ef",
            "metadata": {},
            "outputs": [],
            "source": [
                "from openai import OpenAI\n",
                "\n",
                "PROMPT = \"\"\"You are auditing failures for the state: {failure_state_name}.\n",
                "\n",
                "You will receive many short “explanations” describing material defects detected by an evaluator. Your tasks:\n",
                "1) Synthesize recurring failure patterns.\n",
                "2) Propose concrete, testable fixes that reduce these failures at the source state.\n",
                "3) Provide code for unit tests that should fail now and pass after the fixes.\n",
                "\n",
                "Inputs, Outputs, Labels, and Explanations:\n",
                "\"\"\"\n",
                "\n",
                "per_class_prompts = {}\n",
                "\n",
                "for idx, span_df in enumerate(annotated_spans):\n",
                "    failure_state_name = PIPELINE_STATES[idx]\n",
                "    df = span_df.reset_index(drop=True).fillna(\"\")\n",
                "\n",
                "    cases = []\n",
                "    for i in range(len(df)):\n",
                "        r = df.iloc[i]\n",
                "        cases.append(\n",
                "            f\"Case {i + 1}:\\n\"\n",
                "            f\"  Label: {r['label']}\\n\"\n",
                "            f\"  Explanation: {r['explanation']}\\n\"\n",
                "            f\"  Input: {r['attributes.input.value']}\\n\"\n",
                "            f\"  Output: {r['attributes.output.value']}\"\n",
                "        )\n",
                "\n",
                "    prompt = PROMPT.format(failure_state_name=failure_state_name) + \"\\n\" + \"\\n\\n\".join(cases)\n",
                "    per_class_prompts[failure_state_name] = prompt"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "93e4fd4b",
            "metadata": {},
            "outputs": [],
            "source": [
                "from IPython.display import Markdown, display\n",
                "\n",
                "label = \"GetWebInfo\"\n",
                "prompt = per_class_prompts[label]\n",
                "prompt += f\"\\n\\nSource code for {label}:\\n{GetWebInfo_code}\"\n",
                "\n",
                "openai_client = OpenAI(api_key=os.getenv(\"OPENAI_API_KEY\"))\n",
                "response = openai_client.responses.create(\n",
                "    model=\"gpt-4o\",\n",
                "    input=prompt,\n",
                ")\n",
                "\n",
                "display(Markdown(response.output_text))"
            ]
        },
        {
            "cell_type": "markdown",
            "id": "239ae41b",
            "metadata": {},
            "source": [
                "# All done!\n",
                "\n",
                "## We learned how to:\n",
                "\n",
                "- Hyper-focus our evals on individual pipeline states\n",
                "- Build effective eval prompts\n",
                "- Judge our application based on our evals\n",
                "- Use eval explanations to generate targeted feedback"
            ]
        }
    ],
    "metadata": {
        "language_info": {
            "name": "python"
        }
    },
    "nbformat": 4,
    "nbformat_minor": 5
}
