{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "77i3shTcHB6O"
   },
   "source": [
    "## Introduction\n",
    "\n",
    "The goal of the BeeAI project is to make AI agents interoperable, regardless of their underlying implementation. The project consists of two key components:\n",
    "- **BeeAI Platform**: The platform to easily discover, run, and compose AI agents from any framework.\n",
    "- **BeeAI Framework**: A production-grade framework for building AI agents in either Python or TypeScript.\n",
    "\n",
    "Detailed information on BeeAI can be found [here](https://beeai.dev/).\n",
    "\n",
    "### What's in this notebook?\n",
    "\n",
    "In this notebook we will learn about BeeAI Workflows, which are available in the BeeAI Framework. An agent's behavior is defined through workflow steps and the transitions between them. You can think of a workflow as a graph that outlines an agent's behavior.\n",
    "\n",
     "You can run this notebook on [**Google Colab**](https://colab.research.google.com/). The notebook uses **Ollama** to serve a foundation model locally within the notebook environment. The notebook will run faster on Colab if you use the free *T4 GPU* option by selecting *Runtime / Change runtime type* in the Colab system menu."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "IC76fDZJl6eK"
   },
   "source": [
     "Run the next cell to wrap notebook output so that long lines remain readable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "XOnHGl8vl8JI"
   },
   "outputs": [],
   "source": [
    "from IPython.display import HTML, display\n",
    "\n",
    "\n",
    "def set_css():\n",
    "    display(HTML(\"\\n<style>\\n pre{\\n white-space: pre-wrap;\\n}\\n</style>\\n\"))\n",
    "\n",
    "\n",
    "get_ipython().events.register(\"pre_run_cell\", set_css)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "21BXJs668SSz"
   },
   "source": [
    "### Install Libraries\n",
     "We start by installing the required dependencies and starting the Ollama server."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "57q7sQN7Yu9J"
   },
   "outputs": [],
   "source": [
    "%pip install -q langchain_community wikipedia requests==2.32.4 beeai-framework\n",
    "\n",
    "!curl -fsSL https://ollama.com/install.sh | sh > /dev/null\n",
    "!nohup ollama serve >/dev/null 2>&1 &\n",
    "!ollama pull granite3.3:8b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "z_ojnzy98X1D"
   },
   "source": [
    "### Import Libraries"
   ]
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "# Imports\n",
    "\n",
    "import traceback\n",
    "import warnings\n",
    "\n",
    "import wikipedia\n",
    "from bs4 import GuessedAtParserWarning\n",
    "from langchain_community.utilities import SearxSearchWrapper\n",
    "from pydantic import BaseModel, Field, InstanceOf, ValidationError\n",
    "\n",
    "from beeai_framework.backend.chat import ChatModel, ChatModelOutput\n",
    "from beeai_framework.backend.message import AssistantMessage, SystemMessage, UserMessage\n",
    "from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory\n",
    "from beeai_framework.template import PromptTemplate, PromptTemplateInput\n",
    "from beeai_framework.workflows.workflow import Workflow, WorkflowError\n",
    "\n",
    "# Suppress parser warnings\n",
    "warnings.filterwarnings(\"ignore\", category=GuessedAtParserWarning)\n",
    "\n",
    "\n",
    "def object_on_screen(obj):\n",
    "    display(obj)\n",
    "\n",
    "\n",
    "print(\"Imports completed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "qVkYRxoFHB6g"
   },
   "source": [
    "## Basics of BeeAI Workflows\n",
    "\n",
    "The main components of a BeeAI Workflow are state, defined as a Pydantic model, and steps, which are Python functions.\n",
    "\n",
    "- State: Think of state as structured memory that the workflow can read from and write to during execution. It holds the data that flows through the workflow.\n",
     "- Steps: These are the functional components of the workflow; they link to one another to carry out the agent’s actions.\n",
    "\n",
    "The following simple workflow example highlights these key features:\n",
    "\n",
    "- The state definition includes a required message field.\n",
    "- The step (my_first_step) is defined as a function that takes the state instance as a parameter.\n",
    "- The state can be modified within a step, and changes to the state are preserved across steps and workflow executions.\n",
    "- The step function returns a string (Workflow.END), indicating the name of the next step (this is how step transitions are handled).\n",
    "- Workflow.END signifies the end of the workflow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "xMjp1e_EHB6h"
   },
   "outputs": [],
   "source": [
    "print(\"We will build a MessageState workflow.\")\n",
    "\n",
    "\n",
    "# Define global state that is accessible to each step in the workflow graph\n",
    "# The message field is required when instantiating the state object\n",
    "class MessageState(BaseModel):\n",
    "    message: str\n",
    "\n",
    "\n",
    "# Each step in the workflow is defined as a python function\n",
    "async def my_first_step(state: MessageState) -> None:\n",
    "    state.message += \" World\"  # Modify the state\n",
    "    print(\"\\nRunning Step 1: adding 'World' to state.\")\n",
    "    return Workflow.END\n",
    "\n",
    "\n",
    "try:\n",
    "    # Define the structure of the workflow graph\n",
    "    basic_workflow = Workflow(schema=MessageState, name=\"MyWorkflow\")\n",
    "\n",
    "    # Add a step, each step has a name and a function that implements the step\n",
    "    basic_workflow.add_step(\"my_first_step\", my_first_step)\n",
    "    print(\"Setting initial workflow state to 'Hello'.\")\n",
    "    print(\"Each step in a workflow is implemented as a Python function.\")\n",
    "\n",
    "    # Execute the workflow\n",
    "    basic_response = await basic_workflow.run(MessageState(message=\"Hello\"))\n",
    "    print(\"State after workflow run: \" + basic_response.state.message)\n",
    "\n",
    "except WorkflowError:\n",
    "    traceback.print_exc()\n",
    "except ValidationError:\n",
    "    traceback.print_exc()"
   ]
  },
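  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Transitions between steps work by returning the *name* of the next step. As a minimal sketch (the step names below are our own choices, not part of the framework), the following workflow chains two steps over the same MessageState schema:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal two-step sketch; the step names here are arbitrary choices\n",
    "async def add_world(state: MessageState) -> str:\n",
    "    state.message += \" World\"\n",
    "    return \"add_exclamation\"  # Hand control to the step registered under this name\n",
    "\n",
    "\n",
    "async def add_exclamation(state: MessageState) -> str:\n",
    "    state.message += \"!\"\n",
    "    return Workflow.END  # No further steps; the workflow finishes here\n",
    "\n",
    "\n",
    "two_step_workflow = Workflow(schema=MessageState, name=\"TwoStepWorkflow\")\n",
    "two_step_workflow.add_step(\"add_world\", add_world)\n",
    "two_step_workflow.add_step(\"add_exclamation\", add_exclamation)\n",
    "\n",
    "two_step_response = await two_step_workflow.run(MessageState(message=\"Hello\"))\n",
    "print(two_step_response.state.message)  # Should print: Hello World!"
   ]
  },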
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Tsx13Ag1HB6i"
   },
   "source": [
    "## A Multi-Step Workflow with Tools\n",
    "\n",
    "Now that you understand the basic components of a BeeAI Workflow, let’s explore the power of BeeAI Workflows by building a simple web search workflow.\n",
    "\n",
    "This workflow creates a search query based on an input question, runs the query to retrieve search results, and then generates an answer to the question based on the results.\n",
    "\n",
    "Let’s begin by defining our workflow State.\n",
    "\n",
    "In this case, the question field is required when instantiating the State. The other fields, search_results and answer, are optional during construction (defaulting to None), but they will be populated by the workflow steps as the execution progresses."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Jm5ZLdsTHB6k"
   },
   "outputs": [],
   "source": [
    "print(\"We will build a SearchAgentState workflow.\")\n",
    "\n",
    "\n",
    "# Workflow State\n",
    "class SearchAgentState(BaseModel):\n",
    "    question: str\n",
    "    search_results: str | None = None\n",
    "    answer: str | None = None"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "jMcIxcGcHB6l"
   },
   "source": [
     "Next, we define the ChatModel instance that will handle interaction with our LLM. For this example, we'll use IBM Granite 3.3 8B served locally by Ollama. This model will be used to process the search query and generate answers based on the retrieved results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "knaK0eX4HB6l"
   },
   "outputs": [],
   "source": [
    "# Construct ChatModel\n",
    "model = ChatModel.from_name(\"ollama:granite3.3:8b\")"
   ]
  },
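  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before wiring the model into a workflow, you can send it a single message directly. The model's run method takes a list of messages and returns a ChatModelOutput (the prompt below is just an illustrative example):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick sanity check of the model (requires the Ollama server started earlier)\n",
    "sanity_output: ChatModelOutput = await model.run([UserMessage(\"Say hello in one short sentence.\")])\n",
    "print(sanity_output.get_text_content())"
   ]
  },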
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "WEKDZb9lHB6m"
   },
   "source": [
    "Since this is a web search workflow, we need a way to run web searches.\n",
    "\n",
    "- Option 1 is to use the SearxSearchWrapper from the Langchain community tools project. To use the SearxSearchWrapper, you need to set up a local SearXNG service, as described [here](https://github.com/i-am-bee/beeai-framework/blob/411a76558c1ddf42115e601baf8d5d1922a04695/python/examples/notebooks/searXNG.md) to configure your local SearXNG instance before proceeding.\n",
    "\n",
    "- Option 2 is to use Wikipedia search.\n",
    "\n",
    "Currently option 1 doesn't work in this implementation, so we will use option 2."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "tNYaUL4IHB6m"
   },
   "outputs": [],
   "source": [
    "search_option = \"Wikipedia\"\n",
    "# search_option = \"Searx\"\n",
    "\n",
    "# Search tool option 1 (currently doesn't work in this implementation)\n",
    "\n",
    "search_tool = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\")\n",
    "\n",
    "# Search tool option 2\n",
    "\n",
    "\n",
     "def search_wikipedia(question, sentences_per_page=10):\n",
     "    try:\n",
     "        # Search for relevant pages\n",
     "        results = wikipedia.search(question)\n",
     "        if not results:\n",
     "            return \"Page not found.\"\n",
     "\n",
     "        # Collect a summary for each search result\n",
     "        summaries = []\n",
     "        for idx, result in enumerate(results):\n",
     "            print(\"\\nResult \" + str(idx + 1) + \": \" + result)\n",
     "            try:\n",
     "                page = wikipedia.page(result)\n",
     "                summary = wikipedia.summary(page.title, sentences=sentences_per_page)\n",
     "                print(\"Summary: \" + summary)\n",
     "                summaries.append(summary)\n",
     "            except wikipedia.exceptions.PageError:\n",
     "                print(\"No summary found\")\n",
     "            except wikipedia.exceptions.DisambiguationError:\n",
     "                print(\"Too many results\")\n",
     "            except Exception:\n",
     "                print(\"An error occurred\")\n",
     "\n",
     "        # Return the collected summaries so the caller can store them in state\n",
     "        return \"\\n\".join(summaries) if summaries else \"No summaries found.\"\n",
     "\n",
     "    except wikipedia.exceptions.DisambiguationError as e:\n",
     "        return f\"Too many results. Try to be more specific: {e.options[:5]}\"\n",
     "    except wikipedia.exceptions.PageError:\n",
     "        return \"Page not found.\"\n",
     "    except Exception as e:\n",
     "        return f\"An error occurred: {e!s}\""
   ]
  },
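  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, network-dependent sanity check, you can call the Wikipedia helper on its own with a sample query before using it inside the workflow:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Try the Wikipedia helper directly (requires network access; the query is illustrative)\n",
    "print(search_wikipedia(\"hedgehog\", sentences_per_page=2))"
   ]
  },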
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "zGTJinZNHB6n"
   },
   "source": [
    "In this workflow, we make extensive use of PromptTemplates and structured outputs.\n",
    "\n",
    "Here, we define the various templates, input schemas, and structured output schemas that are essential for implementing the workflow. These templates will allow us to generate the search query and structure the results in a way that the workflow can process effectively."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "8GsX-88CHB6o"
   },
   "outputs": [],
   "source": [
    "# PromptTemplate Input Schemas\n",
    "class QuestionInput(BaseModel):\n",
    "    question: str\n",
    "\n",
    "\n",
    "class SearchRAGInput(BaseModel):\n",
    "    question: str\n",
    "    search_results: str\n",
    "\n",
    "\n",
    "# Prompt Templates\n",
    "search_query_template = PromptTemplate(\n",
    "    PromptTemplateInput(\n",
    "        schema=QuestionInput,\n",
    "        template=\"\"\"Convert the following question into a concise, effective web search query using keywords and operators for accuracy.\n",
    "Question: {{question}}\"\"\",\n",
    "    )\n",
    ")\n",
    "\n",
    "search_rag_template = PromptTemplate(\n",
    "    PromptTemplateInput(\n",
    "        schema=SearchRAGInput,\n",
    "        template=\"\"\"Search results:\n",
    "{{search_results}}\n",
    "\n",
    "Question: {{question}}\n",
    "Provide a concise answer based on the search results provided. If the results are irrelevant or insufficient, say 'I don't know.' Avoid phrases such as 'According to the results...'.\"\"\",\n",
    "    )\n",
    ")\n",
    "\n",
    "\n",
    "# Structured output Schemas\n",
    "class WebSearchQuery(BaseModel):\n",
    "    query: str = Field(description=\"The web search query.\")"
   ]
  },
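  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Rendering a template is a plain string operation with no LLM call involved, so you can inspect the exact prompt the model will see. A quick sketch with an illustrative question:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Render the search-query template with a sample question (no model call here)\n",
    "sample_prompt = search_query_template.render(QuestionInput(question=\"Where do hedgehogs live?\"))\n",
    "print(sample_prompt)"
   ]
  },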
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "W16diH2NHB6o"
   },
   "source": [
    "Now, we can define the first step of the workflow, named web_search.\n",
    "\n",
    "In this step:\n",
    "\n",
    "- The LLM is prompted to generate an effective search query using the search_query_template.\n",
    "- The generated search query is then used to run a web search via the web search tool.\n",
    "- The search results are stored in the search_results field of the workflow state.\n",
    "- Finally, the step returns generate_answer, passing control to the next step, named generate_answer."
   ]
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "async def web_search(state: SearchAgentState) -> str:\n",
    "    print(\"\\nRunning Step 1: web_search\")\n",
    "\n",
    "    # Generate a search query\n",
    "    prompt = search_query_template.render(QuestionInput(question=state.question))\n",
    "    response: ChatModelOutput = await model.run([UserMessage(prompt)], response_format=WebSearchQuery)\n",
    "    generated_search_query = response.output_structured.query\n",
    "    print(\"Search query: \" + generated_search_query)\n",
    "\n",
    "    # Run search and store results in state\n",
    "    try:\n",
    "        if search_option == \"Searx\":\n",
    "            state.search_results = str(search_tool.run(generated_search_query))\n",
    "        else:\n",
    "            state.search_results = str(search_wikipedia(generated_search_query))\n",
    "\n",
    "    except Exception:\n",
    "        print(\"Search tool failed! Agent will answer from memory.\")\n",
    "        state.search_results = \"No search results available.\"\n",
    "\n",
    "    return \"generate_answer\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "MNIckLh4HB6p"
   },
   "source": [
    "The next step in the workflow is generate_answer.\n",
    "\n",
    "This step:\n",
    "\n",
    "- Takes the question and search_results from the workflow state.\n",
    "- Uses the search_rag_template to generate an answer based on the provided data.\n",
    "- The generated answer is stored in the workflow state.\n",
    "- Finally, the workflow ends by returning Workflow.END."
   ]
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "async def generate_answer(state: SearchAgentState) -> str:\n",
    "    print(\"\\nRunning Step 2: generate_answer\")\n",
    "\n",
    "    # Generate answer based on question and search results from previous step.\n",
    "    prompt = search_rag_template.render(\n",
    "        SearchRAGInput(question=state.question, search_results=state.search_results or \"No results available.\")\n",
    "    )\n",
    "    output: ChatModelOutput = await model.run([UserMessage(prompt)])\n",
    "\n",
    "    # Store answer in state\n",
    "    state.answer = output.get_text_content()\n",
    "    return Workflow.END"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "aSwEoK7mHB6q"
   },
   "source": [
     "With both steps defined, we can now assemble and run the workflow:\n",
     "\n",
     "- The workflow is created with SearchAgentState as its schema.\n",
     "- The web_search and generate_answer steps are registered by name.\n",
     "- Execution begins at web_search, which hands control to generate_answer by returning its name.\n",
     "- The workflow is run with an initial state containing the question, and the final answer is read from the returned state."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "hwqEHUz7HB6q"
   },
   "outputs": [],
   "source": [
    "try:\n",
    "    # Define the structure of the workflow graph\n",
    "    search_agent_workflow = Workflow(schema=SearchAgentState, name=\"WebSearchAgent\")\n",
    "    search_agent_workflow.add_step(\"web_search\", web_search)\n",
    "    search_agent_workflow.add_step(\"generate_answer\", generate_answer)\n",
    "\n",
    "    # Execute the workflow\n",
    "    message = \"What is a hedgehog?\"\n",
    "    print(\"Original Question: \" + message)\n",
    "    search_response = await search_agent_workflow.run(SearchAgentState(question=message))\n",
    "    print(\"\\nOriginal Question: \" + search_response.state.question)\n",
    "    print(\"\\nFinal Answer: \" + search_response.state.answer)\n",
    "\n",
    "except WorkflowError:\n",
    "    traceback.print_exc()\n",
    "except ValidationError:\n",
    "    traceback.print_exc()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "T7kED6PSHB6r"
   },
   "source": [
    "# Adding Memory to a Workflow\n",
    "\n",
    "The web search workflow from the previous example can answer questions, but it cannot engage in a conversation because it doesn't maintain message history.\n",
    "\n",
    "In the next example, we'll show you how to add memory to your workflow, allowing it to interactively chat while keeping track of the conversation history. This will enable the workflow to remember previous interactions and provide more context-aware responses."
   ]
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "# Workflow State\n",
    "\n",
    "\n",
    "class ChatState(BaseModel):\n",
    "    memory: InstanceOf[UnconstrainedMemory]\n",
    "    output: str | None = None\n",
    "\n",
    "\n",
    "async def chat(state: ChatState) -> str:\n",
    "    output: ChatModelOutput = await model.run(state.memory.messages)\n",
    "    state.output = output.get_text_content()\n",
    "    return Workflow.END\n",
    "\n",
    "\n",
    "memory = UnconstrainedMemory()\n",
    "await memory.add(SystemMessage(content=\"You are a helpful and friendly AI assistant.\"))\n",
    "\n",
    "try:\n",
    "    # Define the structure of the workflow graph\n",
    "    chat_workflow = Workflow(ChatState)\n",
    "    chat_workflow.add_step(\"chat\", chat)\n",
    "\n",
    "    while True:\n",
    "        user_input = input(\"Type a message to the Assistant (type exit to stop): \")\n",
    "        if user_input == \"exit\":\n",
    "            break\n",
    "        print(user_input)\n",
    "        # Add user message to memory\n",
    "        await memory.add(UserMessage(content=user_input))\n",
    "        # Run workflow with memory\n",
    "        response = await chat_workflow.run(ChatState(memory=memory))\n",
    "        # Add assistant response to memory\n",
    "        await memory.add(AssistantMessage(content=response.state.output))\n",
    "        print(\"Assistant: \" + response.state.output)\n",
    "\n",
    "except WorkflowError:\n",
    "    traceback.print_exc()\n",
    "except ValidationError:\n",
    "    traceback.print_exc()\n",
    "\n",
    "print(\"Demo complete\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "CbdCXYSxHB6r"
   },
   "source": [
    "## Learn More\n",
    "\n",
    "Detailed information on BeeAI can be found [here](https://beeai.dev/).\n",
    "\n",
    "In this notebook, you learned how to build BeeAI Workflows in the BeeAI Framework.\n"
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "gpuType": "T4",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
