{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PdKwzEluDBN7"
      },
      "source": [
        "# Install openai-agents SDK"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 42,
      "metadata": {
        "id": "3QdkOviEB2ay"
      },
      "outputs": [],
      "source": [
        "!pip install -Uq openai-agents"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7yD91lz4DIAx"
      },
      "source": [
        "# Make your notebook capable of running asynchronous functions\n",
        "Jupyter notebooks already run their own `asyncio` event loop, so calling `asyncio.run()` inside a cell normally fails with `RuntimeError: asyncio.run() cannot be called from a running event loop`.\n",
        "\n",
        "The `nest_asyncio` library patches the running loop so it can be re-entered, which lets `asyncio` code (including `asyncio.run()`) execute inside environments that already have an event loop, such as Jupyter notebooks."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 43,
      "metadata": {
        "id": "C8YXyIpiZ9v4"
      },
      "outputs": [],
      "source": [
        "import nest_asyncio\n",
        "nest_asyncio.apply()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wQsVowow7ihQ"
      },
      "source": [
        "# Config"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 44,
      "metadata": {
        "id": "XnusaX_RWF22"
      },
      "outputs": [],
      "source": [
        "from agents import (\n",
        "    AsyncOpenAI,\n",
        "    OpenAIChatCompletionsModel,\n",
        ")\n",
        "from google.colab import userdata\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 45,
      "metadata": {
        "id": "oPvcFwItoKqw"
      },
      "outputs": [],
      "source": [
        "gemini_api_key = userdata.get(\"GEMINI_API_KEY\")\n",
        "\n",
        "\n",
        "# Check if the API key is present; if not, raise an error\n",
        "if not gemini_api_key:\n",
        "    raise ValueError(\"GEMINI_API_KEY is not set. Please add it to your Colab secrets (the key icon in the left sidebar).\")\n",
        "\n",
        "# Reference: https://ai.google.dev/gemini-api/docs/openai\n",
        "external_client = AsyncOpenAI(\n",
        "    api_key=gemini_api_key,\n",
        "    base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\",\n",
        ")\n",
        "\n",
        "model = OpenAIChatCompletionsModel(\n",
        "    model=\"gemini-2.0-flash\",\n",
        "    openai_client=external_client\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 46,
      "metadata": {
        "id": "y9LkW-F7nC3T"
      },
      "outputs": [],
      "source": [
        "from agents import set_default_openai_client, set_tracing_disabled\n",
        "set_default_openai_client(external_client)\n",
        "set_tracing_disabled(True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rWL7EnI_7mIF"
      },
      "source": [
        "# Workflow: Parallelization\n",
        "\n",
        "LLMs can sometimes work simultaneously on a task and have their outputs aggregated programmatically.\n",
        "\n",
        "[Learning Reference](https://www.anthropic.com/engineering/building-effective-agents)"
      ]
    },
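    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The fan-out/aggregate shape of this workflow can be sketched with plain `asyncio`, independent of the SDK. The `translate` coroutine below is a hypothetical stand-in for a model call (it is not part of `openai-agents`); the point is that three independent attempts run concurrently via `asyncio.gather` and are aggregated afterwards."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import asyncio\n",
        "\n",
        "async def translate(text: str, variant: int) -> str:\n",
        "    # Hypothetical stand-in for a model call: returns one candidate translation\n",
        "    await asyncio.sleep(0.01)\n",
        "    return f\"candidate {variant}: {text}\"\n",
        "\n",
        "async def fan_out(msg: str) -> list[str]:\n",
        "    # Fan out three independent attempts concurrently, then aggregate the results\n",
        "    return list(await asyncio.gather(\n",
        "        translate(msg, 1),\n",
        "        translate(msg, 2),\n",
        "        translate(msg, 3),\n",
        "    ))\n",
        "\n",
        "print(asyncio.run(fan_out(\"hello\")))"
      ]
    },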
    {
      "cell_type": "code",
      "execution_count": 26,
      "metadata": {
        "id": "xL1SE0WBzNfB"
      },
      "outputs": [],
      "source": [
        "# Imports\n",
        "import asyncio\n",
        "\n",
        "from agents import Agent, ItemHelpers, Runner"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 49,
      "metadata": {
        "id": "2XzWlsI2yue2"
      },
      "outputs": [],
      "source": [
        "urdu_agent = Agent(\n",
        "    name=\"urdu_agent\",\n",
        "    instructions=\"You translate the user's message to Urdu.\",\n",
        "    model=model\n",
        ")\n",
        "\n",
        "translation_picker = Agent(\n",
        "    name=\"translation_picker\",\n",
        "    instructions=\"You pick the best Urdu translation from the given options.\",\n",
        "    model=model\n",
        ")\n",
        "\n",
        "\n",
        "async def main():\n",
        "    msg = input(\"Hi! Enter a message, and we'll translate it to Urdu.\\n\\n\")\n",
        "\n",
        "    # Run three independent translation attempts in parallel\n",
        "    res_1, res_2, res_3 = await asyncio.gather(\n",
        "        Runner.run(\n",
        "            urdu_agent,\n",
        "            msg,\n",
        "        ),\n",
        "        Runner.run(\n",
        "            urdu_agent,\n",
        "            msg,\n",
        "        ),\n",
        "        Runner.run(\n",
        "            urdu_agent,\n",
        "            msg,\n",
        "        ),\n",
        "    )\n",
        "\n",
        "    outputs = [\n",
        "        ItemHelpers.text_message_outputs(res_1.new_items),\n",
        "        ItemHelpers.text_message_outputs(res_2.new_items),\n",
        "        ItemHelpers.text_message_outputs(res_3.new_items),\n",
        "    ]\n",
        "\n",
        "    translations = \"\\n\\n\".join(outputs)\n",
        "    print(f\"\\n\\nTranslations:\\n\\n{translations}\")\n",
        "\n",
        "    best_translation = await Runner.run(\n",
        "        translation_picker,\n",
        "        f\"Input: {msg}\\n\\nTranslations:\\n{translations}\",\n",
        "    )\n",
        "\n",
        "    print(\"\\n\\n-----\")\n",
        "\n",
        "    print(f\"Best translation: {best_translation.final_output}\")\n",
        "\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 50,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "4LtZ2foIFsP9",
        "outputId": "49597b0d-107c-4b0f-b828-7252af995756"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Hi! Enter a message, and we'll translate it to Urdu.\n",
            "\n",
            "What is the Agentic AI and what will it be like at end of this year\n",
            "\n",
            "\n",
            "Translations:\n",
            "\n",
            "ایجنٹک اے آئی کیا ہے اور اس سال کے آخر تک یہ کیسا ہو گا؟\n",
            "\n",
            "\n",
            "Agentic AI کیا ہے اور اس سال کے آخر تک یہ کیسا ہوگا؟\n",
            "\n",
            "\n",
            "ایجنٹک اے آئی (Agentic AI) کیا ہے اور اس سال کے آخر تک یہ کیسا ہو گا؟\n",
            "\n",
            "\n",
            "\n",
            "-----\n",
            "Best translation: The best option is:\n",
            "\n",
            "**Agentic AI کیا ہے اور اس سال کے آخر تک یہ کیسا ہوگا؟**\n",
            "\n",
            "Here's why:\n",
            "\n",
            "*   **Clarity and Simplicity:** It's straightforward and easy to understand.\n",
            "*   **Correct Spelling:** All words are spelled correctly.\n",
            "*   **Natural Flow:** The sentence structure is natural for Urdu.\n",
            "\n",
            "The other options are also acceptable, but this one is slightly cleaner. The third option including \"(Agentic AI)\" adds unnecessary English when the question is clearly about Agentic AI within the context.\n",
            "\n"
          ]
        }
      ],
      "source": [
        "asyncio.run(main())"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
