{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/bitkira/Colab/blob/main/tutorial_notebooks/quickstart.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "1d66b4a3",
      "metadata": {
        "id": "1d66b4a3"
      },
      "source": [
        "# EvoAgentX Quickstart Guide\n",
        "\n",
        "This quickstart guide will walk you through the essential steps to set up and start using EvoAgentX. In this tutorial, you'll learn how to:\n",
        "\n",
        "1. Configure your API keys to access LLMs\n",
        "2. Automatically create and execute workflows\n",
        "\n",
        "\n",
        "## Installation"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "cf318c87",
      "metadata": {
        "id": "cf318c87"
      },
      "outputs": [],
      "source": [
        "%pip install git+https://github.com/EvoAgentX/EvoAgentX.git"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "bff8f9e9",
      "metadata": {
        "id": "bff8f9e9"
      },
      "source": [
        "Please refer to the [Installation Guide](./installation.md) for more details about the installation.\n",
        "\n",
        "## API Key & LLM Setup\n",
        "\n",
        "The first step to execute a workflow in EvoAgentX is configuring your API keys to access LLMs. There are two recommended methods to configure your API keys:\n",
        "\n",
        "### Method 1: Set Environment Variables in the Terminal\n",
        "\n",
        "This method sets the API key directly in your system environment.\n",
        "\n",
        "For Linux/macOS:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "75d2f250",
      "metadata": {
        "id": "75d2f250"
      },
      "outputs": [],
      "source": [
        "export OPENAI_API_KEY=<your-openai-api-key>"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "ceb3e853",
      "metadata": {
        "id": "ceb3e853"
      },
      "source": [
        "\n",
        "For Windows Command Prompt:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "2cd2d9e9",
      "metadata": {
        "id": "2cd2d9e9"
      },
      "outputs": [],
      "source": [
        "set OPENAI_API_KEY=<your-openai-api-key>"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "70a56751",
      "metadata": {
        "id": "70a56751"
      },
      "source": [
        "\n",
        "For Windows PowerShell:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "9d5b946e",
      "metadata": {
        "id": "9d5b946e"
      },
      "outputs": [],
      "source": [
        "$env:OPENAI_API_KEY = \"<your-openai-api-key>\"  # the quotes are required in PowerShell"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "84f08019",
      "metadata": {
        "id": "84f08019"
      },
      "source": [
        "\n",
        "Once set, you can access the key in your Python code with:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "7d486b57",
      "metadata": {
        "id": "7d486b57"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5e4daa18",
      "metadata": {
        "id": "5e4daa18"
      },
      "source": [
        "\n",
        "### Method 2: Use a `.env` File\n",
        "\n",
        "You can also store your API key in a `.env` file inside the root folder of your project.\n",
        "\n",
        "Create a file named `.env` with the following content:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "ad8cb735",
      "metadata": {
        "id": "ad8cb735"
      },
      "outputs": [],
      "source": [
        "OPENAI_API_KEY=<your-openai-api-key>"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "74f0507c",
      "metadata": {
        "id": "74f0507c"
      },
      "source": [
        "\n",
        "Then, in your Python code, you can load the environment settings using `python-dotenv`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "47c5c640",
      "metadata": {
        "id": "47c5c640"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "from dotenv import load_dotenv\n",
        "\n",
        "load_dotenv()  # Load environment variables from the .env file\n",
        "OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "076a2bca",
      "metadata": {
        "id": "076a2bca"
      },
      "source": [
        "🔐 Tip: Never commit your `.env` file to a public platform (e.g., GitHub). Add it to `.gitignore`.\n",
        "\n",
        "### Configure and Use the LLM in EvoAgentX\n",
        "Once your API key is configured, you can initialize and use the LLM as follows:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "ac30e365",
      "metadata": {
        "id": "ac30e365"
      },
      "outputs": [],
      "source": [
        "from evoagentx.models import OpenAILLMConfig, OpenAILLM\n",
        "import os\n",
        "\n",
        "# Load the API key from environment\n",
        "OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n",
        "\n",
        "# Define LLM configuration\n",
        "openai_config = OpenAILLMConfig(\n",
        "    model=\"gpt-4o-mini\",       # Specify the model name\n",
        "    openai_key=OPENAI_API_KEY, # Pass the key directly\n",
        "    stream=True,               # Enable streaming response\n",
        "    output_response=True       # Print response to stdout\n",
        ")\n",
        "\n",
        "# Initialize the language model\n",
        "llm = OpenAILLM(config=openai_config)\n",
        "\n",
        "# Generate a response from the LLM\n",
        "response = llm.generate(prompt=\"What is Agentic Workflow?\")"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Method 3: Configure in Colab\n",
        "If you are running this notebook in Google Colab, the code below reads the API key from Colab's Secrets (`google.colab.userdata`), falling back to a `.env` file when `userdata` is unavailable:"
      ],
      "metadata": {
        "id": "nzNXOUnTi6og"
      },
      "id": "nzNXOUnTi6og"
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "from dotenv import load_dotenv\n",
        "from evoagentx.models import OpenAILLMConfig, OpenAILLM\n",
        "\n",
        "try:\n",
        "    from google.colab import userdata\n",
        "    OPENAI_API_KEY = userdata.get(\"OPENAI_API_KEY\")\n",
        "except ImportError:\n",
        "    OPENAI_API_KEY = None\n",
        "\n",
        "if not OPENAI_API_KEY:\n",
        "    load_dotenv()\n",
        "    OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n",
        "\n",
        "# Configure LLM\n",
        "openai_config = OpenAILLMConfig(\n",
        "    model=\"gpt-4o-mini\",\n",
        "    openai_key=OPENAI_API_KEY,\n",
        "    stream=True\n",
        ")\n",
        "\n",
        "llm = OpenAILLM(config=openai_config)"
      ],
      "metadata": {
        "id": "nD8gskzEjZMm"
      },
      "id": "nD8gskzEjZMm",
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "id": "d03c4131",
      "metadata": {
        "id": "d03c4131"
      },
      "source": [
        "\n",
        "You can find more details about supported LLM types and their parameters in the [LLM module guide](./modules/llm.md).\n",
        "\n",
        "\n",
        "## Automatic WorkFlow Generation and Execution\n",
        "\n",
        "Once your API key and language model are configured, you can automatically generate and execute agentic workflows in EvoAgentX. This section walks you through the core steps: generating a workflow from a goal, instantiating agents, and running the workflow to get results.\n",
        "\n",
        "First, let's import the necessary modules:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "c93d2b1e",
      "metadata": {
        "id": "c93d2b1e"
      },
      "outputs": [],
      "source": [
        "from evoagentx.workflow import WorkFlowGenerator, WorkFlowGraph, WorkFlow\n",
        "from evoagentx.agents import AgentManager"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "f3af80c0",
      "metadata": {
        "id": "f3af80c0"
      },
      "source": [
        "\n",
        "### Step 1: Generate WorkFlow and Agents\n",
        "Use the `WorkFlowGenerator` to automatically create a workflow based on a natural language goal:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "c19ffee3",
      "metadata": {
        "id": "c19ffee3"
      },
      "outputs": [],
      "source": [
        "goal = \"Generate html code for the Tetris game that can be played in the browser.\"\n",
        "wf_generator = WorkFlowGenerator(llm=llm)\n",
        "workflow_graph: WorkFlowGraph = wf_generator.generate_workflow(goal=goal)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "263fb8bf",
      "metadata": {
        "id": "263fb8bf"
      },
      "source": [
        "`WorkFlowGraph` is a data structure that stores the overall workflow plan — including task nodes and their relationships — but does not yet include executable agents.\n",
        "\n",
        "You can optionally **visualize** or **save** the generated workflow:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "b4597286",
      "metadata": {
        "id": "b4597286"
      },
      "outputs": [],
      "source": [
        "# Visualize the workflow structure (optional)\n",
        "workflow_graph.display()\n",
        "\n",
        "# Save the workflow to a JSON file (optional)\n",
        "workflow_graph.save_module(\"/path/to/save/workflow_demo.json\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c0ec801f",
      "metadata": {
        "id": "c0ec801f"
      },
      "source": [
        "We provide an example generated workflow [here](https://github.com/EvoAgentX/EvoAgentX/blob/main/examples/output/tetris_game/workflow_demo_4o_mini.json). You can **reload** the saved workflow:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "b2cda749",
      "metadata": {
        "id": "b2cda749"
      },
      "outputs": [],
      "source": [
        "workflow_graph = WorkFlowGraph.from_file(\"/path/to/save/workflow_demo.json\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "d05eddbd",
      "metadata": {
        "id": "d05eddbd"
      },
      "source": [
        "\n",
        "### Step 2: Create and Manage Executable Agents\n",
        "\n",
        "Use `AgentManager` to instantiate and manage agents based on the workflow graph:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "440d8979",
      "metadata": {
        "id": "440d8979"
      },
      "outputs": [],
      "source": [
        "agent_manager = AgentManager()\n",
        "agent_manager.add_agents_from_workflow(workflow_graph, llm_config=openai_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5bcf7fcd",
      "metadata": {
        "id": "5bcf7fcd"
      },
      "source": [
        "\n",
        "### Step 3: Execute the Workflow\n",
        "Once agents are ready, you can create a `WorkFlow` instance and run it:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "aa0e4921",
      "metadata": {
        "id": "aa0e4921"
      },
      "outputs": [],
      "source": [
        "workflow = WorkFlow(graph=workflow_graph, agent_manager=agent_manager, llm=llm)\n",
        "output = workflow.execute()\n",
        "print(output)"
      ]
    },
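    {
      "cell_type": "markdown",
      "id": "b7f3a2c1",
      "metadata": {
        "id": "b7f3a2c1"
      },
      "source": [
        "As a minimal follow-up sketch (assuming `output` is the generated text and, for the Tetris goal above, contains the HTML source), you can write the result to a file and open it in your browser. The filename here is just an example:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "c8e4b3d2",
      "metadata": {
        "id": "c8e4b3d2"
      },
      "outputs": [],
      "source": [
        "# Save the workflow output to an HTML file (filename is illustrative)\n",
        "with open(\"tetris_game.html\", \"w\", encoding=\"utf-8\") as f:\n",
        "    f.write(output)"
      ]
    },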
    {
      "cell_type": "markdown",
      "source": [
        "If the code above raises an error about an already-running event loop (common in notebook environments such as Colab), apply `nest_asyncio` and execute the workflow again:"
      ],
      "metadata": {
        "id": "W3xVm-VsnDzm"
      },
      "id": "W3xVm-VsnDzm"
    },
    {
      "cell_type": "code",
      "source": [
        "import nest_asyncio\n",
        "\n",
        "nest_asyncio.apply()\n",
        "workflow = WorkFlow(graph=workflow_graph, agent_manager=agent_manager, llm=llm)\n",
        "output = workflow.execute()\n",
        "print(output)"
      ],
      "metadata": {
        "id": "iU2124jInCdy"
      },
      "id": "iU2124jInCdy",
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "id": "035e410d",
      "metadata": {
        "id": "035e410d"
      },
      "source": [
        "\n",
        "For a complete working example, check out the [full workflow demo](https://github.com/EvoAgentX/EvoAgentX/blob/main/examples/workflow_demo.py).\n"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    },
    "colab": {
      "provenance": [],
      "include_colab_link": true
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}