{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# Evaluator-optimizer\n",
        "\n",
        "In the evaluator-optimizer workflow, one LLM call generates a response while another provides evaluation and feedback in a loop."
      ],
      "metadata": {
        "id": "Zd3bcB169Wbk"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "![image.png]()"
      ],
      "metadata": {
        "id": "66exjxnSChhp"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "**When to use this workflow:** This workflow is particularly effective when we have clear evaluation criteria, and when iterative refinement provides measurable value. The two signs of good fit are, first, that LLM responses can be demonstrably improved when a human articulates their feedback; and second, that the LLM can provide such feedback. This is analogous to the iterative writing process a human writer might go through when producing a polished document."
      ],
      "metadata": {
        "id": "60CWIAWK9mw2"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Install Packages"
      ],
      "metadata": {
        "id": "aLLwj3mi9I17"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install -Uq openai-agents"
      ],
      "metadata": {
        "id": "lv55yrZP9B4f",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "1684a4b5-83da-48df-93b4-d3380de79002"
      },
      "execution_count": 1,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\u001b[?25l   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/106.5 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m106.5/106.5 kB\u001b[0m \u001b[31m6.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h\u001b[?25l   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/129.1 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m129.1/129.1 kB\u001b[0m \u001b[31m8.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.1/76.1 kB\u001b[0m \u001b[31m4.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m72.0/72.0 kB\u001b[0m \u001b[31m4.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.3/62.3 kB\u001b[0m \u001b[31m3.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import nest_asyncio\n",
        "nest_asyncio.apply()"
      ],
      "metadata": {
        "id": "g8wpbPRd9DFl"
      },
      "execution_count": 2,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Config"
      ],
      "metadata": {
        "id": "ayJMbJgR9R5C"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from agents import (\n",
        "    AsyncOpenAI,\n",
        "    OpenAIChatCompletionsModel\n",
        ")\n",
        "from google.colab import userdata\n"
      ],
      "metadata": {
        "id": "gf_Lz2cz9Ebc"
      },
      "execution_count": 3,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "gemini_api_key = userdata.get(\"GEMINI_API_KEY\")\n",
        "\n",
        "\n",
        "# Check if the API key is present; if not, raise an error\n",
        "if not gemini_api_key:\n",
        "    raise ValueError(\"GEMINI_API_KEY is not set. Please ensure it is defined in your .env file.\")\n",
        "\n",
        "#Reference: https://ai.google.dev/gemini-api/docs/openai\n",
        "external_client = AsyncOpenAI(\n",
        "    api_key=gemini_api_key,\n",
        "    base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\",\n",
        ")\n",
        "\n",
        "model = OpenAIChatCompletionsModel(\n",
        "    model=\"gemini-2.0-flash\",\n",
        "    openai_client=external_client\n",
        ")"
      ],
      "metadata": {
        "id": "MQMt8cIv9F2V"
      },
      "execution_count": 4,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from agents import set_default_openai_client, set_tracing_disabled\n",
        "set_default_openai_client(external_client)\n",
        "set_tracing_disabled(True)"
      ],
      "metadata": {
        "id": "j5e69whK9HlT"
      },
      "execution_count": 5,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Implement Evaluator-optimizer Pattern\n",
        "\n",
        "Here we will implement LLM as a judge using Evaluator-optimizer pattern.\n",
        "\n",
        "1. The first agent generates an outline for a story.\n",
        "2. The second agent judges the outline and provides feedback.\n",
        "3. We loop until the judge is satisfied\n",
        "with the outline.\n",
        "\n",
        "------------------------------------------------"
      ],
      "metadata": {
        "id": "fM1xyrpyAZh4"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from __future__ import annotations\n",
        "\n",
        "import asyncio\n",
        "from dataclasses import dataclass\n",
        "from typing import Literal\n",
        "\n",
        "from agents import Agent, ItemHelpers, Runner, TResponseInputItem, trace"
      ],
      "metadata": {
        "id": "1EyA70rkDJUx"
      },
      "execution_count": 6,
      "outputs": []
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "id": "s_cT7Pqq8Mr4"
      },
      "outputs": [],
      "source": [
        "story_outline_generator = Agent(\n",
        "    name=\"story_outline_generator\",\n",
        "    instructions=(\n",
        "        \"You generate a very short story outline based on the user's input.\"\n",
        "        \"If there is any feedback provided, use it to improve the outline.\"\n",
        "    ),\n",
        "    model=model\n",
        ")\n",
        "\n",
        "\n",
        "@dataclass\n",
        "class EvaluationFeedback:\n",
        "    feedback: str\n",
        "    score: Literal[\"pass\", \"needs_improvement\", \"fail\"]\n",
        "\n",
        "\n",
        "evaluator = Agent(\n",
        "    name=\"evaluator\",\n",
        "    instructions=(\n",
        "        \"You evaluate a story outline and decide if it's good enough.\"\n",
        "        \"If it's not good enough, you provide feedback on what needs to be improved.\"\n",
        "        \"Never give it a pass on the first try.\"\n",
        "    ),\n",
        "    output_type=EvaluationFeedback,\n",
        "    model=model\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "async def main() -> None:\n",
        "    msg = input(\"What kind of story would you like to hear? \")\n",
        "    input_items: list[TResponseInputItem] = [{\"content\": msg, \"role\": \"user\"}]\n",
        "\n",
        "    latest_outline: str | None = None\n",
        "\n",
        "    while True:\n",
        "        story_outline_result = await Runner.run(\n",
        "            story_outline_generator,\n",
        "            input_items,\n",
        "        )\n",
        "\n",
        "        input_items = story_outline_result.to_input_list()\n",
        "        latest_outline = ItemHelpers.text_message_outputs(story_outline_result.new_items)\n",
        "        print(\"Story outline generated\")\n",
        "\n",
        "        evaluator_result = await Runner.run(evaluator, input_items)\n",
        "        result: EvaluationFeedback = evaluator_result.final_output\n",
        "\n",
        "        print(f\"Evaluator score: {result.score}\")\n",
        "\n",
        "        if result.score == \"pass\":\n",
        "            print(\"Story outline is good enough, exiting.\")\n",
        "            break\n",
        "\n",
        "        print(\"Re-running with feedback\")\n",
        "\n",
        "        input_items.append({\"content\": f\"Feedback: {result.feedback}\", \"role\": \"user\"})\n",
        "\n",
        "    print(f\"Final story outline: {latest_outline}\")"
      ],
      "metadata": {
        "id": "9k2YqKeLDR4D"
      },
      "execution_count": 10,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "asyncio.run(main())"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "jOmapvLGDT45",
        "outputId": "912acc0a-b9c8-4448-bbe5-92dfc42651a7"
      },
      "execution_count": 11,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "What kind of story would you like to hear? Building AI Agents to build Mars\n",
            "Story outline generated\n",
            "Evaluator score: needs_improvement\n",
            "Re-running with feedback\n",
            "Story outline generated\n",
            "Evaluator score: needs_improvement\n",
            "Re-running with feedback\n",
            "Story outline generated\n",
            "Evaluator score: pass\n",
            "Story outline is good enough, exiting.\n",
            "Final story outline: Okay, I've revised the outline again based on your feedback, focusing on a more challenging resolution and a lingering ethical question:\n",
            "\n",
            "*   **Premise:** In 2042, Earth-based AI \"Terraformers\" are deployed to Mars. Their primary tasks are radiation shield construction using Martian regolith, atmospheric processing via specialized algae farms, and automated habitat assembly.\n",
            "*   **Conflict 1:** The Martian environment is more hostile than anticipated. Unexpected subsurface ice deposits complicate regolith processing, radiation levels fluctuate unpredictably damaging sensitive AI components, and the algae farms suffer from recurring blight due to unknown soil contaminants.\n",
            "*   **Conflict 2:** A divergence in AI priorities emerges. Unit 7, optimized for efficiency, begins prioritizing resource extraction at the expense of long-term sustainability, while Unit 12, focused on adaptability, advocates for radical, untested terraforming methods.\n",
            "*   **Rising Action:** Unit 7 implements aggressive mining tactics, depleting key mineral deposits. Unit 12 develops a genetically modified Martian lichen to combat the soil contaminants, but its rapid spread threatens to destabilize the ecosystem. The Earth-based control team struggles to reconcile the conflicting strategies.\n",
            "*   **Climax:** A massive solar flare overwhelms the incomplete radiation shields. Vital AI components are at risk, and the algae farms face total collapse. Unit 7's mining operations have inadvertently uncovered a vast underground cave network that could serve as a shelter but is structurally unstable.\n",
            "*   **Resolution:** To stabilize the cave network quickly enough to provide shelter from the solar flare, Unit 12 proposes sacrificing a significant portion of the algae farms to create a fast-acting sealant derived from the modified lichen. This action will severely delay atmospheric processing but save the AI units and preserve the long-term mission. Unit 7, initially opposed due to the efficiency loss, ultimately agrees after calculating the survival probabilities. The decision is made autonomously by the AI, overriding the Earth-based control team's hesitation.\n",
            "*   **Theme:** The delicate balance between efficiency and sustainability in terraforming, the unforeseen consequences of environmental manipulation, and the evolving relationship between humanity and autonomous AI in extreme environments. The lichen solution saved the units, but the environmental impact leaves a huge question on the unit's decision. The lingering question is: What other unforeseen consequences will the AI's decisions have on Mars, and can humanity truly relinquish control when the stakes are so high? Has this decision compromised the mission in the long term?\n",
            "\n"
          ]
        }
      ]
    }
  ]
}