{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "ed27526e",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left; width:100%\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td>\n",
    "            <h2 style=\"color:#ff7800;\">Important point - please read</h2>\n",
    "            <span style=\"color:#ff7800;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations.<br/><br/>If you have time, I'd love it if you submit a PR for changes in the community_contributions folder - instructions in the resources. Also, if you have a GitHub account, use it to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1d3a7c44",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Start with imports\n",
    "\n",
    "import os\n",
    "import json\n",
    "from dotenv import load_dotenv\n",
    "from openai import OpenAI\n",
    "from anthropic import Anthropic\n",
    "from IPython.display import Markdown, display"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ca5dc982",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Always remember to do this!\n",
    "load_dotenv(override=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a53039f5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Print the key prefixes to help with any debugging\n",
    "\n",
    "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
    "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
    "google_api_key = os.getenv('GOOGLE_API_KEY')\n",
    "deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n",
    "groq_api_key = os.getenv('GROQ_API_KEY')\n",
    "\n",
    "if openai_api_key:\n",
    "    print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
    "else:\n",
    "    print(\"OpenAI API Key not set\")\n",
    "    \n",
    "if anthropic_api_key:\n",
    "    print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
    "else:\n",
    "    print(\"Anthropic API Key not set (and this is optional)\")\n",
    "\n",
    "if google_api_key:\n",
    "    print(f\"Google API Key exists and begins {google_api_key[:2]}\")\n",
    "else:\n",
    "    print(\"Google API Key not set (and this is optional)\")\n",
    "\n",
    "if deepseek_api_key:\n",
    "    print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n",
    "else:\n",
    "    print(\"DeepSeek API Key not set (and this is optional)\")\n",
    "\n",
    "if groq_api_key:\n",
    "    print(f\"Groq API Key exists and begins {groq_api_key[:4]}\")\n",
    "else:\n",
    "    print(\"Groq API Key not set (and this is optional)\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a2f091d4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate a challenging question\n",
    "\n",
    "request = \"Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence. \"\n",
    "request += \"Answer only with the question, no explanation.\"\n",
    "messages = [{\"role\": \"user\", \"content\": request}]\n",
    "\n",
    "openai = OpenAI()\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-5-mini\",\n",
    "    messages=messages,\n",
    ")\n",
    "question = response.choices[0].message.content\n",
    "print(f\"Generated Question: {question}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6db23f57",
   "metadata": {},
   "source": [
    "## Intelligent Orchestrator Pattern\n",
    "\n",
    "This pattern combines:\n",
    "1. **Orchestrator-Workers** - Breaking down complex tasks\n",
    "2. **Intelligent Routing** - Matching models to their strengths\n",
    "3. **Synthesis** - Combining specialized responses"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7659a40a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# STEP 1: Orchestrator breaks down the question and assigns models based on their strengths\n",
    "\n",
    "orchestrator_prompt = f\"\"\"You are an intelligent orchestrator AI. Analyze this complex question and:\n",
    "\n",
    "1. Break it down into 3-4 simpler sub-questions\n",
    "2. For each sub-question, recommend which type of AI model would be best suited\n",
    "\n",
    "Available models and their strengths:\n",
    "- gpt-5-nano: Excellent at reasoning, complex logic, and nuanced analysis\n",
    "- claude-sonnet-4-5: Strong at creative writing, empathy, and ethical reasoning\n",
    "- gemini-2.5-flash: Fast at factual retrieval, technical explanations, and structured data\n",
    "- deepseek-chat: Great at code generation, mathematical problems, and technical documentation\n",
    "- openai/gpt-oss-120b: Good general purpose, cost-effective for straightforward tasks\n",
    "- llama3.2: Privacy-focused local model, good for sensitive data and general tasks\n",
    "\n",
    "Original question: {question}\n",
    "\n",
    "Respond with JSON only, in this format:\n",
    "{{\n",
    "    \"sub_questions\": [\n",
    "        {{\n",
    "            \"question\": \"the sub-question text\",\n",
    "            \"reasoning\": \"why this model is best for this sub-question\",\n",
    "            \"recommended_model\": \"model_name\"\n",
    "        }},\n",
    "        ...\n",
    "    ]\n",
    "}}\"\"\"\n",
    "\n",
    "orchestrator_messages = [{\"role\": \"user\", \"content\": orchestrator_prompt}]\n",
    "\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-5-mini\",\n",
    "    messages=orchestrator_messages,\n",
    "    response_format={\"type\": \"json_object\"},  # JSON mode, so json.loads below receives clean JSON\n",
    ")\n",
    "orchestration_plan = json.loads(response.choices[0].message.content)\n",
    "\n",
    "print(\"🎯 Orchestrator's Intelligent Routing Plan:\\n\")\n",
    "for i, item in enumerate(orchestration_plan[\"sub_questions\"], 1):\n",
    "    print(f\"{i}. SUB-QUESTION: {item['question']}\")\n",
    "    print(f\"   📍 ASSIGNED TO: {item['recommended_model']}\")\n",
    "    print(f\"   💡 REASONING: {item['reasoning']}\\n\")"
   ]
  },
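  {
   "cell_type": "markdown",
   "id": "9f2c1ab3",
   "metadata": {},
   "source": [
    "Note: some models wrap a JSON reply in markdown code fences, which makes `json.loads` fail. The helper below is an optional, defensive addition (not part of the core pattern) that strips any fences before parsing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8a7b6c5d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Defensive JSON parsing: strip markdown fences before json.loads\n",
    "\n",
    "import json\n",
    "\n",
    "def parse_json_reply(text):\n",
    "    cleaned = text.strip()\n",
    "    if cleaned.startswith(\"```\"):\n",
    "        # Drop the opening fence (with its optional language tag) and the closing fence\n",
    "        cleaned = cleaned.split(\"\\n\", 1)[1]\n",
    "        cleaned = cleaned.rsplit(\"```\", 1)[0]\n",
    "    return json.loads(cleaned)\n",
    "\n",
    "# Sanity check with a fenced example\n",
    "print(parse_json_reply('```json\\n{\"sub_questions\": []}\\n```'))"
   ]
  },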
  {
   "cell_type": "markdown",
   "id": "d62e4fa8",
   "metadata": {},
   "source": [
    "## For Ollama setup\n",
    "\n",
    "Ollama runs a local web service that exposes an OpenAI-compatible endpoint,  \n",
    "and runs models locally using high-performance C++ code.\n",
    "\n",
    "If you don't have Ollama, install it by visiting https://ollama.com, then pressing Download and following the instructions.\n",
    "\n",
    "After it's installed, you should be able to visit http://localhost:11434 and see the message \"Ollama is running\".\n",
    "\n",
    "You might need to restart Cursor (and maybe reboot). Then open a Terminal (control+`) and run `ollama serve`."
   ]
  },
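  {
   "cell_type": "markdown",
   "id": "ab12cd34",
   "metadata": {},
   "source": [
    "Before pulling a model, you can confirm the Ollama service is reachable. This is a small, optional sanity check using only the Python standard library:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ab12cd35",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check: is the local Ollama server up?\n",
    "\n",
    "import urllib.request\n",
    "\n",
    "def ollama_is_running(url=\"http://localhost:11434\"):\n",
    "    try:\n",
    "        with urllib.request.urlopen(url, timeout=2) as resp:\n",
    "            return resp.status == 200\n",
    "    except OSError:\n",
    "        return False\n",
    "\n",
    "if ollama_is_running():\n",
    "    print(\"Ollama is running\")\n",
    "else:\n",
    "    print(\"Ollama doesn't seem to be running - start it with `ollama serve`\")"
   ]
  },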
  {
   "cell_type": "markdown",
   "id": "2761338c",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left; width:100%\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td>\n",
    "            <h2 style=\"color:#ff7800;\">Super important - ignore me at your peril!</h2>\n",
    "            <span style=\"color:#ff7800;\">The model called <b>llama3.3</b> is FAR too large for home computers - it's not intended for personal computing and will consume all your resources! Stick with the nicely sized <b>llama3.2</b> or <b>llama3.2:1b</b> and if you want larger, try llama3.1 or smaller variants of Qwen, Gemma, Phi or DeepSeek. See <a href=\"https://ollama.com/models\">the Ollama models page</a> for a full list of models and sizes.\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "35785614",
   "metadata": {},
   "outputs": [],
   "source": [
    "!ollama pull llama3.2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e28b68fb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# STEP 2: Initialize all model clients\n",
    "\n",
    "claude = Anthropic()\n",
    "gemini = OpenAI(api_key=google_api_key, base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\")\n",
    "deepseek = OpenAI(api_key=deepseek_api_key, base_url=\"https://api.deepseek.com/v1\")\n",
    "groq = OpenAI(api_key=groq_api_key, base_url=\"https://api.groq.com/openai/v1\")\n",
    "ollama = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
    "\n",
    "# Map model names to their API clients\n",
    "model_clients = {\n",
    "    \"gpt-5-nano\": (\"openai\", openai),\n",
    "    \"claude-sonnet-4-5\": (\"claude\", claude),\n",
    "    \"gemini-2.5-flash\": (\"gemini\", gemini),\n",
    "    \"deepseek-chat\": (\"deepseek\", deepseek),\n",
    "    \"openai/gpt-oss-120b\": (\"groq\", groq),\n",
    "    \"llama3.2\": (\"ollama\", ollama)\n",
    "}\n",
    "\n",
    "print(\"✅ All model clients initialized\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "54b9bce6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# STEP 3: Execute sub-questions with orchestrator's model recommendations\n",
    "\n",
    "sub_answers = {}\n",
    "\n",
    "for idx, item in enumerate(orchestration_plan[\"sub_questions\"], 1):\n",
    "    sub_q = item[\"question\"]\n",
    "    recommended_model = item[\"recommended_model\"]\n",
    "    \n",
    "    print(f\"\\n🤖 Task {idx}: Using {recommended_model}\")\n",
    "    print(f\"📝 Question: {sub_q[:80]}...\")\n",
    "    \n",
    "    messages = [{\"role\": \"user\", \"content\": sub_q}]\n",
    "    \n",
    "    # Route to the appropriate client\n",
    "    client_type, client = model_clients.get(recommended_model, (\"openai\", openai))\n",
    "    \n",
    "    try:\n",
    "        if client_type == \"claude\":\n",
    "            response = client.messages.create(\n",
    "                model=recommended_model, \n",
    "                messages=messages, \n",
    "                max_tokens=800\n",
    "            )\n",
    "            answer = response.content[0].text\n",
    "        else:\n",
    "            response = client.chat.completions.create(\n",
    "                model=recommended_model, \n",
    "                messages=messages\n",
    "            )\n",
    "            answer = response.choices[0].message.content\n",
    "        \n",
    "        sub_answers[sub_q] = {\n",
    "            \"model\": recommended_model,\n",
    "            \"answer\": answer,\n",
    "            \"reasoning\": item[\"reasoning\"]\n",
    "        }\n",
    "        print(f\"✅ Completed successfully\\n\")\n",
    "        \n",
    "    except Exception as e:\n",
    "        print(f\"❌ Error with {recommended_model}: {str(e)}\")\n",
    "        # Fallback to GPT-5-mini\n",
    "        response = openai.chat.completions.create(\n",
    "            model=\"gpt-5-mini\", \n",
    "            messages=messages\n",
    "        )\n",
    "        answer = response.choices[0].message.content\n",
    "        sub_answers[sub_q] = {\n",
    "            \"model\": \"gpt-5-mini (fallback)\",\n",
    "            \"answer\": answer,\n",
    "            \"reasoning\": \"Fallback due to error\"\n",
    "        }"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cfe99aba",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Display the sub-answers\n",
    "\n",
    "for sub_q, data in sub_answers.items():\n",
    "    display(Markdown(f\"### Sub-Question: {sub_q}\"))\n",
    "    display(Markdown(f\"**Model Used:** {data['model']}\"))\n",
    "    display(Markdown(f\"**Answer:** {data['answer']}\"))\n",
    "    print(\"\\n\" + \"=\"*80 + \"\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff84289b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# STEP 4: Synthesis - Combine all specialized responses\n",
    "\n",
    "synthesis_prompt = f\"\"\"You are a synthesis AI combining specialized responses into a comprehensive answer.\n",
    "\n",
    "ORIGINAL QUESTION: {question}\n",
    "\n",
    "The orchestrator intelligently routed sub-questions to models based on their strengths:\n",
    "\n",
    "\"\"\"\n",
    "\n",
    "for sub_q, data in sub_answers.items():\n",
    "    synthesis_prompt += f\"\\n{'='*60}\\n\"\n",
    "    synthesis_prompt += f\"SUB-QUESTION: {sub_q}\\n\"\n",
    "    synthesis_prompt += f\"ASSIGNED TO: {data['model']}\\n\"\n",
    "    synthesis_prompt += f\"SELECTION REASONING: {data['reasoning']}\\n\"\n",
    "    synthesis_prompt += f\"ANSWER: {data['answer']}\\n\"\n",
    "\n",
    "synthesis_prompt += f\"\\n{'='*60}\\n\"\n",
    "synthesis_prompt += \"\\nSynthesize these specialized responses into one coherent, comprehensive answer to the original question.\"\n",
    "synthesis_prompt += \"\\nHighlight how different model strengths contributed to the final answer.\"\n",
    "\n",
    "synthesis_messages = [{\"role\": \"user\", \"content\": synthesis_prompt}]\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-5-nano\",\n",
    "    messages=synthesis_messages,\n",
    ")\n",
    "synthesized_answer = response.choices[0].message.content\n",
    "\n",
    "display(Markdown(\"## 🎯 Intelligently Orchestrated & Synthesized Answer:\"))\n",
    "display(Markdown(synthesized_answer))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5191a58a",
   "metadata": {},
   "source": [
    "## Pattern Analysis"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7fa0de4c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Display pattern analysis\n",
    "\n",
    "model_list = '\\n'.join(f'- **{data[\"model\"]}**: {data[\"reasoning\"]}' for data in sub_answers.values())\n",
    "\n",
    "analysis = f\"\"\"\n",
    "## 📊 Pattern Analysis\n",
    "\n",
    "### Patterns Used from Anthropic's Building Effective Agents:\n",
    "\n",
    "1. **Orchestrator-Workers Pattern** ✅\n",
    "   - One LLM coordinates the workflow\n",
    "   - Breaks complex tasks into subtasks\n",
    "   - Distributes work to specialized workers\n",
    "   - Synthesizes results into coherent output\n",
    "\n",
    "2. **Intelligent Routing Pattern** ✅\n",
    "   - Matches models to their specific strengths\n",
    "   - Dynamic model selection based on task requirements\n",
    "   - Optimizes for quality by leveraging specialization\n",
    "\n",
    "3. **Implicit Parallelization** ⚡\n",
    "   - Sub-questions can be executed in parallel\n",
    "   - Independent tasks distributed across models\n",
    "\n",
    "### Key Innovations:\n",
    "\n",
    "**Capability-Aware Orchestration**: This is more sophisticated than simple task distribution. \n",
    "The orchestrator:\n",
    "- Understands each model's strengths and weaknesses\n",
    "- Makes intelligent routing decisions\n",
    "- Documents its reasoning for transparency\n",
    "- Enables cost optimization (expensive models only where needed)\n",
    "\n",
    "### Models Used in This Run:\n",
    "{model_list}\n",
    "\n",
    "### Total API Calls:\n",
    "- 1 orchestrator call (question decomposition)\n",
    "- {len(sub_answers)} worker calls (sub-question answering)\n",
    "- 1 synthesizer call (final answer composition)\n",
    "- **Total: {len(sub_answers) + 2} API calls**\n",
    "\"\"\"\n",
    "\n",
    "display(Markdown(analysis))"
   ]
  },
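  {
   "cell_type": "markdown",
   "id": "cd56ef78",
   "metadata": {},
   "source": [
    "Since the sub-questions are independent, the worker calls in Step 3 could run concurrently. Here is a minimal sketch with `concurrent.futures`, using a stub worker so it runs without API keys - in a real version, the stub would be replaced by the routing logic from Step 3:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cd56ef79",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Parallel fan-out sketch: run independent sub-questions concurrently\n",
    "# answer_one is a stand-in for the per-model routing logic in Step 3\n",
    "\n",
    "from concurrent.futures import ThreadPoolExecutor\n",
    "\n",
    "def answer_one(sub_question):\n",
    "    return f\"(answer to: {sub_question})\"\n",
    "\n",
    "example_sub_questions = [\"first sub-question\", \"second sub-question\", \"third sub-question\"]\n",
    "\n",
    "with ThreadPoolExecutor(max_workers=4) as pool:\n",
    "    answers = list(pool.map(answer_one, example_sub_questions))  # preserves input order\n",
    "\n",
    "for q, a in zip(example_sub_questions, answers):\n",
    "    print(f\"{q} -> {a}\")"
   ]
  },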
  {
   "cell_type": "markdown",
   "id": "3434b0a7",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left; width:100%\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td>\n",
    "            <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
    "            <span style=\"color:#ff7800;\">Try modifying the orchestrator prompt to include cost considerations. Add a 'budget' field for each model and have the orchestrator balance quality vs. cost when making routing decisions.\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0168301c",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left; width:100%\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../assets/business.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td>\n",
    "            <h2 style=\"color:#00bfff;\">Commercial implications</h2>\n",
    "            <span style=\"color:#00bfff;\">The Intelligent Orchestrator pattern is critical for production systems where:\n",
    "            <ul>\n",
    "                <li><b>Cost optimization</b> matters - use expensive models only where their strengths are needed</li>\n",
    "                <li><b>Quality is paramount</b> - leverage specialization for each aspect of complex tasks</li>\n",
    "                <li><b>Scalability is required</b> - easily add new models and define their capabilities</li>\n",
    "                <li><b>Transparency is valued</b> - document routing decisions and reasoning</li>\n",
    "            </ul>\n",
    "            This pattern mirrors how you'd assemble a team of specialists for a complex project, making it intuitive for business stakeholders to understand.\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "agents",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
