{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PdKwzEluDBN7"
      },
      "source": [
        "# Install openai-agents SDK"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3QdkOviEB2ay"
      },
      "outputs": [],
      "source": [
        "!pip install -Uq openai-agents"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7yD91lz4DIAx"
      },
      "source": [
        "# Make your Notebook capable of running asynchronous functions.\n",
        "Both Jupyter notebooks and Python’s asyncio library utilize event loops, but they serve different purposes and can sometimes interfere with each other.\n",
        "\n",
        "The nest_asyncio library allows the existing event loop to accept nested event loops, enabling asyncio code to run within environments that already have an event loop, such as Jupyter notebooks.\n",
        "\n",
        "In summary, both Jupyter notebooks and Python’s asyncio library utilize event loops to manage asynchronous operations. When working within Jupyter notebooks, it’s essential to be aware of the existing event loop to effectively run asyncio code without conflicts."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "C8YXyIpiZ9v4"
      },
      "outputs": [],
      "source": [
        "import nest_asyncio\n",
        "nest_asyncio.apply()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wQsVowow7ihQ"
      },
      "source": [
        "# Config"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "XnusaX_RWF22"
      },
      "outputs": [],
      "source": [
        "from agents import (\n",
        "    AsyncOpenAI,\n",
        "    OpenAIChatCompletionsModel\n",
        ")\n",
        "from google.colab import userdata\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "oPvcFwItoKqw"
      },
      "outputs": [],
      "source": [
        "gemini_api_key = userdata.get(\"GEMINI_API_KEY\")\n",
        "\n",
        "\n",
        "# Check if the API key is present; if not, raise an error\n",
        "if not gemini_api_key:\n",
        "    raise ValueError(\"GEMINI_API_KEY is not set. Please ensure it is defined in your .env file.\")\n",
        "\n",
        "#Reference: https://ai.google.dev/gemini-api/docs/openai\n",
        "external_client = AsyncOpenAI(\n",
        "    api_key=gemini_api_key,\n",
        "    base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\",\n",
        ")\n",
        "\n",
        "model = OpenAIChatCompletionsModel(\n",
        "    model=\"gemini-2.0-flash\",\n",
        "    openai_client=external_client\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "y9LkW-F7nC3T"
      },
      "outputs": [],
      "source": [
        "from agents import set_default_openai_client, set_tracing_disabled\n",
        "set_default_openai_client(external_client)\n",
        "set_tracing_disabled(True)"
      ]
    },
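    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Optionally, switch the SDK's default API to Chat Completions: the Gemini OpenAI-compatible endpoint supports the Chat Completions API rather than the Responses API. This is a hedged sketch using `set_default_openai_api` from the openai-agents SDK; it only matters if you rely on the default client, since every agent below passes `model` explicitly."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from agents import set_default_openai_api\n",
        "# Use the Chat Completions API (not Responses) with the default client\n",
        "set_default_openai_api(\"chat_completions\")"
      ]
    },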
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "L4gdpCV3G6mL"
      },
      "source": [
        "# Workflow: Orchestrator-workers\n",
        "\n",
        "In the orchestrator-workers workflow, a central LLM dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes their results.\n",
        "\n",
        "[Learning Reference](https://www.anthropic.com/engineering/building-effective-agents)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CWDa6KtIG-_Q"
      },
      "source": [
        "![image.png]()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xL1SE0WBzNfB"
      },
      "outputs": [],
      "source": [
        "import asyncio\n",
        "from agents import Agent, ItemHelpers, MessageOutputItem, Runner"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 71,
      "metadata": {
        "id": "2XzWlsI2yue2"
      },
      "outputs": [],
      "source": [
        "spanish_agent = Agent(\n",
        "    name=\"spanish_agent\",\n",
        "    instructions=\"You translate the user's message to Spanish\",\n",
        "    handoff_description=\"An english to spanish translator\",\n",
        "    model=model\n",
        ")\n",
        "\n",
        "french_agent = Agent(\n",
        "    name=\"french_agent\",\n",
        "    instructions=\"You translate the user's message to French\",\n",
        "    handoff_description=\"An english to french translator\",\n",
        "    model=model\n",
        ")\n",
        "\n",
        "italian_agent = Agent(\n",
        "    name=\"italian_agent\",\n",
        "    instructions=\"You translate the user's message to Italian\",\n",
        "    handoff_description=\"An english to italian translator\",\n",
        "    model=model\n",
        ")\n",
        "\n",
        "orchestrator_agent = Agent(\n",
        "    name=\"orchestrator_agent\",\n",
        "    instructions=(\n",
        "        \"You are a translation agent. You use the tools given to you to translate.\"\n",
        "        \"If asked for multiple translations, you call the relevant tools in order.\"\n",
        "        \"You never translate on your own, you always use the provided tools.\"\n",
        "    ),\n",
        "    tools=[\n",
        "        spanish_agent.as_tool(\n",
        "            tool_name=\"translate_to_spanish\",\n",
        "            tool_description=\"Translate the user's message to Spanish\",\n",
        "        ),\n",
        "        french_agent.as_tool(\n",
        "            tool_name=\"translate_to_french\",\n",
        "            tool_description=\"Translate the user's message to French\",\n",
        "        ),\n",
        "        italian_agent.as_tool(\n",
        "            tool_name=\"translate_to_italian\",\n",
        "            tool_description=\"Translate the user's message to Italian\",\n",
        "        ),\n",
        "    ],\n",
        "    model=model\n",
        ")\n",
        "\n",
        "synthesizer_agent = Agent(\n",
        "    name=\"synthesizer_agent\",\n",
        "    instructions=\"You inspect translations, correct them if needed, and produce a final concatenated response. Always return the final response in with properly formatted translations.\",\n",
        "    model=model\n",
        ")\n",
        "\n",
        "\n",
        "async def main():\n",
        "    msg = input(\"Hi! What would you like translated, and to which languages? \")\n",
        "\n",
        "    # Run the entire orchestration in a single trace\n",
        "    orchestrator_result = await Runner.run(orchestrator_agent, msg)\n",
        "\n",
        "    for item in orchestrator_result.new_items:\n",
        "        if isinstance(item, MessageOutputItem):\n",
        "            text = ItemHelpers.text_message_output(item)\n",
        "            if text:\n",
        "                print(f\"  - Translation step: {text}\")\n",
        "\n",
        "    synthesizer_result = await Runner.run(\n",
        "        synthesizer_agent,\n",
        "         (orchestrator_result.to_input_list() + [{\"role\": \"user\", 'content': f'pick best translation done for these {msg}' }])\n",
        "    )\n",
        "\n",
        "    print(f\"\\n\\nFinal response:\\n{synthesizer_result.final_output}\")\n",
        "\n",
        "    return synthesizer_result, orchestrator_result"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 73,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "4LtZ2foIFsP9",
        "outputId": "e2d2be77-a680-4aaa-cc90-cba34f065334"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Hi! What would you like translated, and to which languages? hello in spanish and french\n",
            "  - Translation step: Hola.\n",
            "Bonjour !\n",
            "\n",
            "\n",
            "\n",
            "Final response:\n",
            "Both translations are accurate and common. \"Hola\" is the standard greeting in Spanish, and \"Bonjour !\" is a very common and appropriate greeting in French. I don't have a basis to say one is definitively better than the other in this context.\n",
            "\n"
          ]
        }
      ],
      "source": [
        "synth_result, orch_result = asyncio.run(main())"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZN9BBLv9I5wf"
      },
      "source": [
        "#### Inspect the results"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 74,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "us5mLS5LIUb4",
        "outputId": "3e33d889-0a6a-4eef-bffe-ceae5a446513"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[{'content': 'hello in spanish and french', 'role': 'user'},\n",
              " {'id': '__fake_id__',\n",
              "  'arguments': '{\"input\":\"hello\"}',\n",
              "  'call_id': '',\n",
              "  'name': 'translate_to_spanish',\n",
              "  'type': 'function_call'},\n",
              " {'id': '__fake_id__',\n",
              "  'arguments': '{\"input\":\"hello\"}',\n",
              "  'call_id': '',\n",
              "  'name': 'translate_to_french',\n",
              "  'type': 'function_call'},\n",
              " {'call_id': '', 'output': 'Hola.\\n', 'type': 'function_call_output'},\n",
              " {'call_id': '', 'output': 'Bonjour !\\n', 'type': 'function_call_output'},\n",
              " {'id': '__fake_id__',\n",
              "  'content': [{'annotations': [],\n",
              "    'text': 'Hola.\\nBonjour !\\n',\n",
              "    'type': 'output_text'}],\n",
              "  'role': 'assistant',\n",
              "  'status': 'completed',\n",
              "  'type': 'message'}]"
            ]
          },
          "execution_count": 74,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "orch_result.to_input_list()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 76,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 53
        },
        "id": "K2sUmVQSIHgg",
        "outputId": "47c38336-4c34-4b89-a67b-5e416739dc1a"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            },
            "text/plain": [
              "'Both translations are accurate and common. \"Hola\" is the standard greeting in Spanish, and \"Bonjour !\" is a very common and appropriate greeting in French. I don\\'t have a basis to say one is definitively better than the other in this context.\\n'"
            ]
          },
          "execution_count": 76,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "synth_result.final_output"
      ]
    }
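    ,
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a further inspection step, the `RunResult` object also exposes `last_agent` and `raw_responses` (assumed here from the openai-agents `RunResult` API) to see which agent answered last and how many raw model responses were collected."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Which agent produced the final output, and how many raw model responses were collected\n",
        "print(orch_result.last_agent.name)\n",
        "print(len(orch_result.raw_responses))"
      ]
    }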
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
