{
  "cells": [
    {
      "cell_type": "markdown",
      "id": "6bb9d969",
      "metadata": {},
      "source": [
        "# 🤖 Sidekick: Your AI Assistant with Built-in Quality Control\n",
        "\n",
        "## 🎯 Project Overview\n",
        "\n",
        "**Sidekick** is an intelligent AI assistant that doesn't just answer questions—it **validates its own work** and **improves iteratively** until it meets your success criteria. Think of it as an AI with a built-in quality assurance system that ensures every response is accurate, complete, and truly helpful.\n",
        "\n",
        "## 🚀 What Makes This Special?\n",
        "\n",
        "Unlike traditional chatbots that give you a single answer and hope for the best, Sidekick uses a **Worker-Evaluator feedback loop**:\n",
        "\n",
        "1. **🔨 Worker Agent**: Receives your task, uses tools (web search, notifications), and generates an answer\n",
        "2. **🔍 Evaluator Agent**: Critically reviews the worker's output against your success criteria\n",
        "3. **🔄 Self-Improvement Loop**: If the evaluator finds issues, the worker gets detailed feedback and tries again\n",
        "4. **✅ Quality Guarantee**: The loop continues until the work meets your standards OR it needs clarification from you\n",
        "\n",
        "## 🛠️ Capabilities\n",
        "\n",
        "- **🌐 Web Search**: Finds real-time information using Google Search API\n",
        "- **📱 Push Notifications**: Sends alerts to your phone via Pushover\n",
        "- **💾 Conversation Memory**: Remembers context across interactions\n",
        "- **🎯 Custom Success Criteria**: Define what \"good\" means for each task\n",
        "- **🔄 Automatic Retries**: Self-corrects based on evaluator feedback\n",
        "\n",
        "## 📚 Learning Objective\n",
        "\n",
        "This notebook rebuilds the Sidekick project using **Microsoft Agent Framework** (originally built with LangGraph). You'll learn:\n",
        "\n",
        "- How to orchestrate multi-agent workflows with explicit control flow\n",
        "- Implementing feedback loops for self-improving AI systems\n",
        "- Building production-ready agents with tool integration\n",
        "- Creating quality assurance mechanisms in AI workflows\n",
        "- Working with async/await patterns in agent frameworks\n",
        "\n",
        "## 🎓 The Architecture\n",
        "\n",
        "```\n",
        "User Request → Worker Agent → Tools (Search/Notify) → Evaluator Agent\n",
        "                                                              ↓\n",
        "                                                         Meets Criteria?\n",
        "                                                              ↓\n",
        "                                                    ┌─────────┴─────────┐\n",
        "                                                    ↓                   ↓\n",
        "                                              ✅ Success         ❌ Retry with Feedback\n",
        "```\n",
        "\n",
        "Let's dive in! 🚀\n",
        "\n",
        "\n",
        "---\n",
        "📢 Discover more Agentic AI notebooks on my [GitHub repository](https://github.com/lisekarimi/agentverse) and explore additional AI projects on my [portfolio](https://lisekarimi.com)."
      ]
    },
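    {
      "cell_type": "markdown",
      "id": "a1f00001",
      "metadata": {},
      "source": [
        "Before building this with Agent Framework, here is a minimal, framework-free sketch of the Worker → Evaluator loop in plain Python. `draft_answer` and `judge` are hypothetical stubs standing in for the LLM-backed agents defined later; only the control flow matters here."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "a1f00002",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Framework-free sketch of the Worker → Evaluator feedback loop.\n",
        "# draft_answer and judge are stubs standing in for the LLM-backed agents.\n",
        "def draft_answer(task: str, feedback: str | None) -> str:\n",
        "    \"\"\"Stub worker: adds detail once it receives evaluator feedback.\"\"\"\n",
        "    base = f\"Answer to: {task}\"\n",
        "    return base + \" (with sources)\" if feedback else base\n",
        "\n",
        "def judge(answer: str) -> tuple[bool, str]:\n",
        "    \"\"\"Stub evaluator: demands sources before approving.\"\"\"\n",
        "    if \"(with sources)\" in answer:\n",
        "        return True, \"Looks good\"\n",
        "    return False, \"Please cite sources\"\n",
        "\n",
        "def sidekick_loop(task: str, max_iterations: int = 5) -> str:\n",
        "    feedback = None\n",
        "    for _ in range(max_iterations):\n",
        "        answer = draft_answer(task, feedback)\n",
        "        ok, feedback = judge(answer)\n",
        "        if ok:\n",
        "            return answer\n",
        "    return answer  # give up after max_iterations\n",
        "\n",
        "print(sidekick_loop(\"What are the top AI trends?\"))"
      ]
    },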
    {
      "cell_type": "markdown",
      "id": "263779a7",
      "metadata": {},
      "source": [
        "## === IMPORTS AND SETUP ==="
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "22e134d7",
      "metadata": {},
      "outputs": [],
      "source": [
        "# uv add agent-framework --pre"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "18a9d833",
      "metadata": {},
      "outputs": [],
      "source": [
        "import asyncio\n",
        "import gradio as gr\n",
        "from datetime import datetime\n",
        "import os\n",
        "from dotenv import load_dotenv\n",
        "from agent_framework.openai import OpenAIChatClient\n",
        "from typing import Annotated\n",
        "import requests\n",
        "from agent_framework import (\n",
        "    ChatAgent,\n",
        "    ChatMessageStore,\n",
        "    WorkflowBuilder,\n",
        "    Executor,\n",
        "    WorkflowContext,\n",
        "    handler,\n",
        "    Case,\n",
        "    Default\n",
        ")\n",
        "from pydantic import BaseModel, Field\n",
        "from typing_extensions import Never"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "8746179d",
      "metadata": {},
      "outputs": [],
      "source": [
        "load_dotenv(override=True)\n",
        "MODEL_ID = \"gpt-4o-mini\"\n",
        "pushover_user = os.getenv(\"PUSHOVER_USER\")\n",
        "pushover_token = os.getenv(\"PUSHOVER_TOKEN\")\n",
        "serper_api_key = os.getenv(\"SERPER_API_KEY\")\n",
        "\n",
        "# Validate required environment variables\n",
        "if not serper_api_key:\n",
        "    print(\"Warning: SERPER_API_KEY is not configured. Please set it in your .env file.\")\n",
        "if not pushover_user or not pushover_token:\n",
        "    print(\"Warning: PUSHOVER_USER and PUSHOVER_TOKEN are required. Please set them in your .env file.\")"
      ]
    },
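    {
      "cell_type": "markdown",
      "id": "a1f00003",
      "metadata": {},
      "source": [
        "If you would rather fail fast than continue with warnings when a key is missing, a small `require` helper (hypothetical, not part of the original project) does the job:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "a1f00004",
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# Stricter alternative to the warnings above: raise if a key is missing.\n",
        "# require() is a hypothetical helper, not part of the original project.\n",
        "def require(name: str) -> str:\n",
        "    value = os.getenv(name)\n",
        "    if not value:\n",
        "        raise RuntimeError(f\"Missing required environment variable: {name}\")\n",
        "    return value\n",
        "\n",
        "# serper_api_key = require(\"SERPER_API_KEY\")  # uncomment to enforce"
      ]
    },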
    {
      "cell_type": "markdown",
      "id": "c806ae2e",
      "metadata": {},
      "source": [
        "## === TOOLS ==="
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "4a8de4ae",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Define the Pushover tool\n",
        "def send_pushover_notification(\n",
        "    message: Annotated[str, Field(description=\"The message to send\")],\n",
        "    title: Annotated[str, Field(description=\"The notification title\")] = \"Agent Alert\"\n",
        ") -> str:\n",
        "    \"\"\"Send a push notification via Pushover.\"\"\"\n",
        "\n",
        "\n",
        "\n",
        "    response = requests.post(\n",
        "        \"https://api.pushover.net/1/messages.json\",\n",
        "        data={\n",
        "            \"token\": pushover_token,\n",
        "            \"user\": pushover_user,\n",
        "            \"message\": message,\n",
        "            \"title\": title\n",
        "        }\n",
        "    )\n",
        "\n",
        "    return \"✅ Notification sent\" if response.status_code == 200 else \"❌ Failed\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "3518dc23",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Define the serper search tool\n",
        "def google_search(\n",
        "    query: Annotated[str, Field(description=\"Search query\")]\n",
        ") -> str:\n",
        "    \"\"\"Search Google using Serper API.\"\"\"\n",
        "    response = requests.post(\n",
        "        \"https://google.serper.dev/search\",\n",
        "        headers={\"X-API-KEY\": serper_api_key, \"Content-Type\": \"application/json\"},\n",
        "        json={\"q\": query}\n",
        "    )\n",
        "\n",
        "    if response.status_code == 200:\n",
        "        data = response.json()\n",
        "        results = []\n",
        "        for item in data.get(\"organic\", [])[:3]:\n",
        "            results.append(f\"• {item['title']}: {item['snippet']}\")\n",
        "        return \"\\n\".join(results) if results else \"No results found\"\n",
        "    return \"Search failed\""
      ]
    },
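    {
      "cell_type": "markdown",
      "id": "a1f00005",
      "metadata": {},
      "source": [
        "The result formatting above can be checked offline with a mocked Serper-style payload. The `organic`/`title`/`snippet` shape used here is an assumption based on Serper's documented response format:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "a1f00006",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Offline check of the result formatting, using a mocked Serper-style payload.\n",
        "# The \"organic\"/\"title\"/\"snippet\" shape is assumed from Serper's response format.\n",
        "sample = {\n",
        "    \"organic\": [\n",
        "        {\"title\": \"AI Trends\", \"snippet\": \"Agents everywhere.\"},\n",
        "        {\"title\": \"LLM News\", \"snippet\": \"Smaller, faster models.\"},\n",
        "        {\"title\": \"Multimodal\", \"snippet\": \"Now table stakes.\"},\n",
        "        {\"title\": \"Extra\", \"snippet\": \"Dropped by the [:3] slice.\"},\n",
        "    ]\n",
        "}\n",
        "\n",
        "results = [f\"• {item['title']}: {item['snippet']}\" for item in sample.get(\"organic\", [])[:3]]\n",
        "formatted = \"\\n\".join(results) if results else \"No results found\"\n",
        "print(formatted)"
      ]
    },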
    {
      "cell_type": "markdown",
      "id": "6f0f4696",
      "metadata": {},
      "source": [
        "## === DATA MODELS ==="
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "cac7c734",
      "metadata": {},
      "outputs": [],
      "source": [
        "class UserRequest(BaseModel):\n",
        "    question: str\n",
        "    success_criteria: str\n",
        "\n",
        "class WorkerOutput(BaseModel):\n",
        "    answer: str\n",
        "    iteration: int = 0\n",
        "\n",
        "class EvaluatorFeedback(BaseModel):\n",
        "    success_criteria_met: bool\n",
        "    user_input_needed: bool\n",
        "    feedback: str\n",
        "    worker_answer: str\n",
        "    iteration: int"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6a97083b",
      "metadata": {},
      "source": [
        "## === EXECUTORS ==="
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "a116bac7",
      "metadata": {},
      "outputs": [],
      "source": [
        "message_store = ChatMessageStore()\n",
        "\n",
        "# Worker: Does the actual work\n",
        "class WorkerExecutor(Executor):\n",
        "    def __init__(self):\n",
        "        super().__init__(id=\"worker\")\n",
        "        self.agent = ChatAgent(\n",
        "            chat_client=OpenAIChatClient(model_id=\"gpt-4o-mini\"),\n",
        "            instructions=f\"\"\"You are a helpful AI assistant with access to tools.\n",
        "\n",
        "Current date/time: {datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")}\n",
        "\n",
        "Your job:\n",
        "1. If the user's request is unclear or missing details, ask for clarification\n",
        "2. If clear, use google_search to find information and answer thoroughly\n",
        "3. If user asks to send notification, use send_pushover_notification\n",
        "\n",
        "Be detailed and use search results to provide accurate answers.\"\"\",\n",
        "            tools=[google_search, send_pushover_notification],\n",
        "            chat_message_store_factory=lambda: message_store\n",
        "        )\n",
        "        self.iteration_count = 0\n",
        "\n",
        "    @handler\n",
        "    async def work(self, data: UserRequest | EvaluatorFeedback, ctx: WorkflowContext[WorkerOutput]) -> None:\n",
        "        self.iteration_count += 1\n",
        "\n",
        "        # Build prompt based on input\n",
        "        if isinstance(data, UserRequest):\n",
        "            prompt = f\"\"\"Task: {data.question}\n",
        "\n",
        "Success criteria: {data.success_criteria}\n",
        "\n",
        "Please complete this task.\"\"\"\n",
        "        else:  # EvaluatorFeedback\n",
        "            prompt = f\"\"\"Your previous answer didn't meet the success criteria.\n",
        "\n",
        "Evaluator feedback: {data.feedback}\n",
        "\n",
        "Please improve your answer.\"\"\"\n",
        "\n",
        "        print(f\"\\n🔨 WORKER (Iteration {self.iteration_count}): Working on task...\")\n",
        "        result = await self.agent.run(prompt, thread_id=\"sidekick_memory\")\n",
        "        print(f\"✅ WORKER: Generated answer ({len(result.text)} chars)\")\n",
        "\n",
        "        output = WorkerOutput(answer=result.text, iteration=self.iteration_count)\n",
        "        await ctx.send_message(output)\n",
        "\n",
        "# Evaluator: Checks quality\n",
        "class EvaluatorExecutor(Executor):\n",
        "    def __init__(self):\n",
        "        super().__init__(id=\"evaluator\")\n",
        "        self.agent = ChatAgent(\n",
        "            chat_client=OpenAIChatClient(model_id=\"gpt-4o-mini\"),\n",
        "            instructions=\"\"\"You are an evaluator that judges if work meets success criteria.\n",
        "\n",
        "Evaluate the worker's answer and respond with:\n",
        "SUCCESS: yes/no\n",
        "USER_INPUT_NEEDED: yes/no (if question is unclear and needs user clarification)\n",
        "FEEDBACK: [detailed feedback on what needs improvement]\n",
        "\n",
        "Criteria for SUCCESS:\n",
        "- Directly answers the user's question\n",
        "- Meets the success criteria\n",
        "- Is accurate and complete\n",
        "- Used search results properly (if needed)\n",
        "\n",
        "Criteria for USER_INPUT_NEEDED:\n",
        "- Question is too vague or unclear\n",
        "- Missing critical information to proceed\n",
        "\n",
        "If SUCCESS=yes OR USER_INPUT_NEEDED=yes, we stop.\n",
        "If both are no, worker must retry with your feedback.\"\"\",\n",
        "            chat_message_store_factory=lambda: message_store\n",
        "        )\n",
        "\n",
        "    @handler\n",
        "    async def evaluate(\n",
        "        self,\n",
        "        worker_output: WorkerOutput,\n",
        "        ctx: WorkflowContext[EvaluatorFeedback | str]\n",
        "    ) -> None:\n",
        "        # Get the original request from context (would be passed through in real impl)\n",
        "        # For now, we evaluate based on the answer quality\n",
        "\n",
        "        print(\"\\n🔍 EVALUATOR: Reviewing answer...\")\n",
        "        result = await self.agent.run(\n",
        "            f\"Evaluate this answer:\\n\\n{worker_output.answer}\",\n",
        "            thread_id=\"sidekick_memory\"\n",
        "        )\n",
        "\n",
        "        # Parse evaluator response\n",
        "        text = result.text\n",
        "        success = \"SUCCESS: yes\" in text or \"success: yes\" in text.lower()\n",
        "        needs_input = \"USER_INPUT_NEEDED: yes\" in text or \"user_input_needed: yes\" in text.lower()\n",
        "\n",
        "        # Extract feedback\n",
        "        feedback_lines = [line for line in text.split(\"\\n\") if \"FEEDBACK:\" in line]\n",
        "        feedback = feedback_lines[0].split(\"FEEDBACK:\")[-1].strip() if feedback_lines else text\n",
        "\n",
        "        print(\"📊 EVALUATOR RESULTS:\")\n",
        "        print(f\"   Success: {success}\")\n",
        "        print(f\"   Needs user input: {needs_input}\")\n",
        "        print(f\"   Feedback: {feedback[:100]}...\")\n",
        "\n",
        "        if success or needs_input:\n",
        "            # Task complete - send final answer\n",
        "            final_msg = worker_output.answer\n",
        "            if needs_input:\n",
        "                final_msg = f\"❓ CLARIFICATION NEEDED:\\n\\n{worker_output.answer}\"\n",
        "            else:\n",
        "                final_msg = f\"✅ TASK COMPLETE:\\n\\n{worker_output.answer}\"\n",
        "\n",
        "            await ctx.send_message(final_msg)\n",
        "        else:\n",
        "            # Send back to worker with feedback\n",
        "            print(\"↩️  EVALUATOR: Sending back to worker for improvement...\")\n",
        "            feedback_obj = EvaluatorFeedback(\n",
        "                success_criteria_met=False,\n",
        "                user_input_needed=False,\n",
        "                feedback=feedback,\n",
        "                worker_answer=worker_output.answer,\n",
        "                iteration=worker_output.iteration\n",
        "            )\n",
        "            await ctx.send_message(feedback_obj)\n",
        "\n",
        "# Final Output\n",
        "class FinalOutputExecutor(Executor):\n",
        "    @handler\n",
        "    async def output(self, message: str, ctx: WorkflowContext[Never, str]) -> None:\n",
        "        await ctx.yield_output(message)\n"
      ]
    },
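    {
      "cell_type": "markdown",
      "id": "a1f00007",
      "metadata": {},
      "source": [
        "The evaluator's verdict is free text, so it is parsed with simple string checks. Here is that parsing rerun standalone on a sample reply:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "a1f00008",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Standalone rerun of the verdict parsing used in EvaluatorExecutor.\n",
        "sample_reply = \"\"\"SUCCESS: no\n",
        "USER_INPUT_NEEDED: no\n",
        "FEEDBACK: The answer lists trends but cites no sources.\"\"\"\n",
        "\n",
        "text = sample_reply\n",
        "success = \"success: yes\" in text.lower()\n",
        "needs_input = \"user_input_needed: yes\" in text.lower()\n",
        "feedback_lines = [line for line in text.split(\"\\n\") if \"FEEDBACK:\" in line]\n",
        "feedback = feedback_lines[0].split(\"FEEDBACK:\")[-1].strip() if feedback_lines else text\n",
        "\n",
        "print(success, needs_input)\n",
        "print(feedback)"
      ]
    },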
    {
      "cell_type": "markdown",
      "id": "122a34f2",
      "metadata": {},
      "source": [
        "## == BUILD WORKFLOW ==="
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "39b5eaae",
      "metadata": {},
      "outputs": [],
      "source": [
        "worker = WorkerExecutor()\n",
        "evaluator = EvaluatorExecutor()\n",
        "final_output = FinalOutputExecutor(id=\"final_output\")\n",
        "\n",
        "workflow = (\n",
        "    WorkflowBuilder()\n",
        "    .set_start_executor(worker)\n",
        "    .add_edge(worker, evaluator)\n",
        "    .add_switch_case_edge_group(\n",
        "        evaluator,\n",
        "        [\n",
        "            Case(condition=lambda x: isinstance(x, str), target=final_output),\n",
        "            Default(target=worker)\n",
        "        ]\n",
        "    )\n",
        "    .build()\n",
        ")"
      ]
    },
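    {
      "cell_type": "markdown",
      "id": "a1f00009",
      "metadata": {},
      "source": [
        "A framework-free analogue of the `Case`/`Default` routing above, assuming first-match-wins semantics with a catch-all default (the string targets here are just labels for illustration):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "a1f0000a",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Framework-free analogue of the Case/Default routing above.\n",
        "# Assumes first-match-wins semantics with a catch-all default.\n",
        "def route(message, cases, default):\n",
        "    for condition, target in cases:\n",
        "        if condition(message):\n",
        "            return target\n",
        "    return default\n",
        "\n",
        "cases = [(lambda x: isinstance(x, str), \"final_output\")]\n",
        "\n",
        "print(route(\"✅ TASK COMPLETE: ...\", cases, \"worker\"))   # str → final_output\n",
        "print(route({\"feedback\": \"try again\"}, cases, \"worker\"))  # non-str → worker"
      ]
    },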
    {
      "cell_type": "markdown",
      "id": "e9a3e24a",
      "metadata": {},
      "source": [
        "## === GRADIO ==="
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "af6b2f76",
      "metadata": {},
      "outputs": [],
      "source": [
        "async def run_task(question, success_criteria):\n",
        "    if not success_criteria:\n",
        "        success_criteria = \"Provide a clear, accurate, and complete answer\"\n",
        "\n",
        "    print(\"\\n\" + \"=\"*60)\n",
        "    print(f\"NEW TASK: {question}\")\n",
        "    print(f\"SUCCESS CRITERIA: {success_criteria}\")\n",
        "    print(\"=\"*60)\n",
        "\n",
        "    request = UserRequest(question=question, success_criteria=success_criteria)\n",
        "\n",
        "    result = \"\"\n",
        "    async for event in workflow.run_stream(request):\n",
        "        if event.__class__.__name__ == 'WorkflowOutputEvent':\n",
        "            result = event.data\n",
        "\n",
        "    return result\n",
        "\n",
        "def run(question, success_criteria):\n",
        "    return asyncio.run(run_task(question, success_criteria))\n",
        "\n",
        "demo = gr.Interface(\n",
        "    fn=run,\n",
        "    inputs=[\n",
        "        gr.Textbox(label=\"Your Question/Task\", lines=3, placeholder=\"What do you need?\"),\n",
        "        gr.Textbox(\n",
        "            label=\"Success Criteria (optional)\",\n",
        "            lines=2,\n",
        "            placeholder=\"What makes a good answer? (leave blank for default)\",\n",
        "            value=\"\"\n",
        "        )\n",
        "    ],\n",
        "    outputs=gr.Textbox(label=\"Result\", lines=15),\n",
        "    title=\"🤖 AI Sidekick (Worker → Evaluator Loop)\",\n",
        "    description=\"Worker does the task → Evaluator checks quality → Loops until approved. Has memory, Google search, and Pushover!\",\n",
        "    examples=[\n",
        "        [\"What are the top 3 AI trends in 2024?\", \"List 3 specific trends with brief explanations\"],\n",
        "        [\"Send me 3 jobs that I can do\", \"Provide exactly 3 specific job listings that match the user's skills and preferences\"],\n",
        "        [\"Find best restaurants in Paris and notify me\", \"Provide specific restaurant names with ratings\"]\n",
        "    ]\n",
        ")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "f4ad058f",
      "metadata": {},
      "outputs": [],
      "source": [
        "demo.launch()"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": ".venv",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.11"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}
