{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/bitkira/Colab/blob/main/tutorial_notebooks/aflow_optimizer.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install git+https://github.com/EvoAgentX/EvoAgentX.git"
      ],
      "metadata": {
        "id": "HwzYf1O8wVzR"
      },
      "id": "HwzYf1O8wVzR",
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install PyPDF2 selenium html2text fastmcp"
      ],
      "metadata": {
        "id": "YbS0NVQ_wWqx"
      },
      "id": "YbS0NVQ_wWqx",
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "id": "66e96cd9",
      "metadata": {
        "id": "66e96cd9"
      },
      "source": [
        "# AFlow Optimizer Tutorial\n",
        "\n",
        "This tutorial guides you through setting up and running the [AFlow](https://arxiv.org/abs/2410.10762) optimizer in EvoAgentX. We'll use the HumanEval benchmark to demonstrate how to optimize a multi-agent workflow for code generation tasks.\n",
        "\n",
        "## 1. Overview\n",
        "\n",
        "The AFlow optimizer in EvoAgentX enables you to:\n",
        "\n",
        "- Automatically optimize multi-agent workflows for specific task types (code generation, QA, math, etc.)\n",
        "- Choose among different types of operators (Custom, CustomCodeGenerate, Test, ScEnsemble, etc.)\n",
        "- Evaluate optimization results on benchmark datasets\n",
        "- Use different LLMs for optimization and execution\n",
        "\n",
        "## 2. Setting Up the Environment\n",
        "\n",
        "First, let's import the necessary modules for setting up the AFlow optimizer:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "892efaf2",
      "metadata": {
        "id": "892efaf2"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "from dotenv import load_dotenv\n",
        "from evoagentx.optimizers import AFlowOptimizer\n",
        "from evoagentx.models import LiteLLMConfig, LiteLLM, OpenAILLMConfig, OpenAILLM\n",
        "from evoagentx.benchmark import AFlowHumanEval\n",
        "\n",
        "try:\n",
        "    from google.colab import userdata\n",
        "    OPENAI_API_KEY = userdata.get(\"OPENAI_API_KEY\")\n",
        "    ANTHROPIC_API_KEY = userdata.get(\"ANTHROPIC_API_KEY\")\n",
        "except ImportError:\n",
        "    OPENAI_API_KEY = None\n",
        "    ANTHROPIC_API_KEY = None\n",
        "\n",
        "if not OPENAI_API_KEY or not ANTHROPIC_API_KEY:\n",
        "    load_dotenv()\n",
        "    OPENAI_API_KEY = OPENAI_API_KEY or os.getenv(\"OPENAI_API_KEY\")\n",
        "    ANTHROPIC_API_KEY = ANTHROPIC_API_KEY or os.getenv(\"ANTHROPIC_API_KEY\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5d966a1e",
      "metadata": {
        "id": "5d966a1e"
      },
      "source": [
        "\n",
        "### Configure the LLM Models\n",
        "\n",
        "Following the settings in the [original AFlow implementation](https://github.com/FoundationAgents/MetaGPT/tree/main/examples/aflow), the AFlow optimizer uses two different LLMs:\n",
        "1. An optimizer LLM (e.g., Claude 3.5 Sonnet) for workflow optimization\n",
        "2. An executor LLM (e.g., GPT-4o-mini) for task execution\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "a5eb6072",
      "metadata": {
        "id": "a5eb6072"
      },
      "outputs": [],
      "source": [
        "# Configure the optimizer LLM (Claude 3.5 Sonnet)\n",
        "claude_config = LiteLLMConfig(\n",
        "    model=\"anthropic/claude-3-5-sonnet-20240620\",\n",
        "    anthropic_key=ANTHROPIC_API_KEY\n",
        ")\n",
        "optimizer_llm = LiteLLM(config=claude_config)\n",
        "\n",
        "# Configure the executor LLM (GPT-4o-mini)\n",
        "openai_config = OpenAILLMConfig(\n",
        "    model=\"gpt-4o-mini\",\n",
        "    openai_key=OPENAI_API_KEY\n",
        ")\n",
        "executor_llm = OpenAILLM(config=openai_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e5b101e8",
      "metadata": {
        "id": "e5b101e8"
      },
      "source": [
        "\n",
        "## 3. Setting Up the Components\n",
        "\n",
        "### Step 1: Define Task Configuration\n",
        "\n",
        "The AFlow optimizer requires a configuration that specifies the task type and available operators. Here's an example configuration for different task types:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "ef33b1bf",
      "metadata": {
        "id": "ef33b1bf"
      },
      "outputs": [],
      "source": [
        "EXPERIMENTAL_CONFIG = {\n",
        "    \"humaneval\": {\n",
        "        \"question_type\": \"code\",\n",
        "        \"operators\": [\"Custom\", \"CustomCodeGenerate\", \"Test\", \"ScEnsemble\"]\n",
        "    },\n",
        "    \"mbpp\": {\n",
        "        \"question_type\": \"code\",\n",
        "        \"operators\": [\"Custom\", \"CustomCodeGenerate\", \"Test\", \"ScEnsemble\"]\n",
        "    },\n",
        "    \"hotpotqa\": {\n",
        "        \"question_type\": \"qa\",\n",
        "        \"operators\": [\"Custom\", \"AnswerGenerate\", \"QAScEnsemble\"]\n",
        "    },\n",
        "    \"gsm8k\": {\n",
        "        \"question_type\": \"math\",\n",
        "        \"operators\": [\"Custom\", \"ScEnsemble\", \"Programmer\"]\n",
        "    },\n",
        "    \"math\": {\n",
        "        \"question_type\": \"math\",\n",
        "        \"operators\": [\"Custom\", \"ScEnsemble\", \"Programmer\"]\n",
        "    }\n",
        "}"
      ]
    },
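    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Each entry is later unpacked into keyword arguments via `**EXPERIMENTAL_CONFIG[\"humaneval\"]` when constructing the optimizer. As a quick illustration of the unpacking (plain Python, no EvoAgentX required):\n",
        "\n",
        "```python\n",
        "# f(**{\"a\": 1, \"b\": 2}) is equivalent to f(a=1, b=2)\n",
        "kwargs = EXPERIMENTAL_CONFIG[\"humaneval\"]\n",
        "print(kwargs[\"question_type\"])  # code\n",
        "print(kwargs[\"operators\"])      # ['Custom', 'CustomCodeGenerate', 'Test', 'ScEnsemble']\n",
        "```"
      ]
    },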
    {
      "cell_type": "markdown",
      "id": "9ff2e90a",
      "metadata": {
        "id": "9ff2e90a"
      },
      "source": [
        "\n",
        "### Step 2: Define the Initial Workflow\n",
        "\n",
        "The AFlow optimizer requires two files:\n",
        "- `graph.py`: defines the initial workflow graph in Python code.\n",
        "- `prompt.py`: defines the prompts used in the workflow.\n",
        "\n",
        "Below is an example of the `graph.py` file for the HumanEval benchmark:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "2d980f17",
      "metadata": {
        "id": "2d980f17"
      },
      "outputs": [],
      "source": [
        "import evoagentx.workflow.operators as operator\n",
        "import examples.aflow.code_generation.prompt as prompt_custom # noqa: F401\n",
        "from evoagentx.models.model_configs import LLMConfig\n",
        "from evoagentx.benchmark.benchmark import Benchmark\n",
        "from evoagentx.models.model_utils import create_llm_instance\n",
        "\n",
        "class Workflow:\n",
        "\n",
        "    def __init__(\n",
        "        self,\n",
        "        name: str,\n",
        "        llm_config: LLMConfig,\n",
        "        benchmark: Benchmark\n",
        "    ):\n",
        "        self.name = name\n",
        "        self.llm = create_llm_instance(llm_config)\n",
        "        self.benchmark = benchmark\n",
        "        self.custom = operator.Custom(self.llm)\n",
        "        self.custom_code_generate = operator.CustomCodeGenerate(self.llm)\n",
        "\n",
        "    async def __call__(self, problem: str, entry_point: str):\n",
        "        \"\"\"\n",
        "        Implementation of the workflow.\n",
        "        Use the Custom operator for free-form generation; use the\n",
        "        CustomCodeGenerate operator when you need well-formed code.\n",
        "        \"\"\"\n",
        "        # await self.custom(input=..., instruction=\"\")\n",
        "        solution = await self.custom_code_generate(problem=problem, entry_point=entry_point, instruction=prompt_custom.GENERATE_PYTHON_CODE_PROMPT)\n",
        "        return solution['response']"
      ]
    },
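    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the call pattern concrete, here is a sketch of how such a workflow could be driven directly. Normally the optimizer does this for you; the `name`, `problem`, and `entry_point` values below are hypothetical:\n",
        "\n",
        "```python\n",
        "# Hypothetical direct invocation -- mirrors the __init__ and __call__ signatures above\n",
        "workflow = Workflow(name=\"code_generation\", llm_config=openai_config, benchmark=humaneval)\n",
        "solution = await workflow(problem=\"Write a function add(a, b) that returns a + b.\", entry_point=\"add\")\n",
        "print(solution)\n",
        "```"
      ]
    },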
    {
      "cell_type": "markdown",
      "id": "42d48326",
      "metadata": {
        "id": "42d48326"
      },
      "source": [
        "\n",
        "!!! note\n",
        "    When defining your workflow, please pay attention to the following key points:\n",
        "\n",
        "    1. **Prompt Import Path**: Ensure the import path for `prompt.py` is correctly specified (e.g., `examples.aflow.code_generation.prompt`). This path should match your project structure to enable proper prompt loading.\n",
        "\n",
        "    2. **Operator Initialization**: In the `__init__` function, you must initialize all operators that will be used in the workflow. Each operator should be instantiated with the appropriate LLM instance.\n",
        "\n",
        "    3. **Workflow Execution**: The `__call__` function serves as the main entry point for workflow execution. It should define the complete execution logic of your workflow and return the final output that will be used for evaluation.\n",
        "\n",
        "\n",
        "Below is an example of the `prompt.py` file for the HumanEval benchmark:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "48f54f85",
      "metadata": {
        "id": "48f54f85"
      },
      "outputs": [],
      "source": [
        "GENERATE_PYTHON_CODE_PROMPT = \"\"\"\n",
        "Generate functional and correct Python code for the given problem.\n",
        "\n",
        "Problem: \"\"\""
      ]
    },
    {
      "cell_type": "markdown",
      "id": "925cdbc9",
      "metadata": {
        "id": "925cdbc9"
      },
      "source": [
        "\n",
        "!!! note\n",
        "    If the workflow does not require any prompts, the `prompt.py` file can be empty.\n",
        "\n",
        "### Step 3: Prepare the Benchmark\n",
        "\n",
        "For this tutorial, we'll use the AFlowHumanEval benchmark, which follows the same data split and format as the [original AFlow implementation](https://github.com/FoundationAgents/MetaGPT/tree/main/examples/aflow).\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "069a71b9",
      "metadata": {
        "id": "069a71b9"
      },
      "outputs": [],
      "source": [
        "# Initialize the benchmark\n",
        "humaneval = AFlowHumanEval()"
      ]
    },
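    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sanity check, you can confirm the benchmark actually loaded development and test splits before optimizing. This assumes the splits are stored in the private `_dev_data` / `_test_data` attributes mentioned in the notes below; the attribute names may differ across versions:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Hedged sanity check -- `_dev_data` / `_test_data` are assumed attribute names\n",
        "print(f\"dev examples: {len(humaneval._dev_data or [])}\")\n",
        "print(f\"test examples: {len(humaneval._test_data or [])}\")"
      ]
    },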
    {
      "cell_type": "markdown",
      "id": "30c7f82d",
      "metadata": {
        "id": "30c7f82d"
      },
      "source": [
        "\n",
        "## 4. Configuring and Running the AFlow Optimizer\n",
        "\n",
        "The AFlow optimizer can be configured with various parameters to control the optimization process:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "39de345d",
      "metadata": {
        "id": "39de345d"
      },
      "outputs": [],
      "source": [
        "optimizer = AFlowOptimizer(\n",
        "    graph_path=\"examples/aflow/code_generation\",  # Path to the initial workflow graph\n",
        "    optimized_path=\"examples/aflow/humaneval/optimized\",  # Path to save optimized workflows\n",
        "    optimizer_llm=optimizer_llm,  # LLM for optimization\n",
        "    executor_llm=executor_llm,    # LLM for execution\n",
        "    validation_rounds=3,  # Validation runs on the dev set at each optimization step\n",
        "    eval_rounds=3,        # Evaluation runs on the test set during testing\n",
        "    max_rounds=20,        # Maximum number of optimization rounds\n",
        "    **EXPERIMENTAL_CONFIG[\"humaneval\"]  # Task type and available operators for this task\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6bd3e567",
      "metadata": {
        "id": "6bd3e567"
      },
      "source": [
        "\n",
        "### Running the Optimization\n",
        "\n",
        "To start the optimization process:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "5170d51a",
      "metadata": {
        "id": "5170d51a"
      },
      "outputs": [],
      "source": [
        "# Optimize the workflow\n",
        "optimizer.optimize(humaneval)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c7007e56",
      "metadata": {
        "id": "c7007e56"
      },
      "source": [
        "\n",
        "!!! note\n",
        "    During optimization, the workflow will be validated on the development set `validation_rounds` times at each step. Make sure the benchmark `humaneval` contains a development set (i.e., `self._dev_data` is not empty).\n",
        "\n",
        "### Test the Optimized Workflow\n",
        "\n",
        "To test the optimized workflow:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "55737ce5",
      "metadata": {
        "id": "55737ce5"
      },
      "outputs": [],
      "source": [
        "optimizer.test(humaneval)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "b341f014",
      "metadata": {
        "id": "b341f014"
      },
      "source": [
        "By default, the optimizer will choose the workflow with the highest validation performance to test. You can also specify the test rounds using the `test_rounds: List[int]` parameter. For example, to evaluate the second round and the third round, you can use `optimizer.test(humaneval, test_rounds=[2, 3])`.\n",
        "\n",
        "!!! note\n",
        "    During testing, the workflow will be evaluated on the test set `eval_rounds` times. Make sure the benchmark `humaneval` contains a test set (i.e., `self._test_data` is not empty).\n",
        "\n",
        "For a complete working example, please refer to [aflow_humaneval.py](https://github.com/EvoAgentX/EvoAgentX/blob/main/examples/optimization/aflow/aflow_humaneval.py)."
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    },
    "colab": {
      "provenance": [],
      "include_colab_link": true
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}