{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/Maplemx/Agently/blob/main/docs/guidebook/application_development_handbook.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dBCXLbkKjcwB"
      },
      "source": [
        "## **_<font color = \"red\">Agent</font><font color = \"blue\">ly</font>_ 3.0 Application Development Handbook**\n",
        "> Don't know what Agently is yet? [>>>  READ THIS FIRST](https://github.com/Maplemx/Agently/blob/main/docs/guidebook/introduction.ipynb)\n",
        ">\n",
        "> Installation: `pip install Agently`\n",
        ">\n",
        "> Github Repo: https://github.com/Maplemx/Agently\n",
        ">\n",
        "> Contact Me: moxin@Agently.cn\n",
        ">\n",
        "> If you like this project, please ⭐️ our repo, thanks."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TwIrAkp_jpmv"
      },
      "source": [
        "## Quick Start\n",
        "\n",
        "We highly recommend reading the [**_<font color = \"red\">Agent</font><font color = \"blue\">ly</font>_** 3.0 Introduction](https://github.com/Maplemx/Agently/blob/main/docs/guidebook/introduction.ipynb) before we start."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tPU5CHkokO2N"
      },
      "source": [
        "### Package Installation\n",
        "\n",
        "> ℹ️ If you're using Colab or Jupyter, run this package installation cell first to enable all the code below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3yyH30u8kNoV"
      },
      "outputs": [],
      "source": [
        "# Main Package\n",
        "!pip install Agently\n",
        "# Requirement Packages\n",
        "## Network\n",
        "!pip install aiohttp\n",
        "!pip install websockets\n",
        "!pip install tornado\n",
        "## Model Clients\n",
        "!pip install openai\n",
        "!pip install httpx\n",
        "!pip install erniebot\n",
        "!pip install zhipuai\n",
        "## Data Format\n",
        "!pip install PyYAML"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "knsvL3h5keQc"
      },
      "source": [
        "### Hello World"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "kmDPcoHLjyFA"
      },
      "outputs": [],
      "source": [
        "# Import and Settings\n",
        "import Agently\n",
        "agent = Agently.create_agent()\n",
        "agent\\\n",
        "    .use_model(\"OpenAI\")\\\n",
        "    .set_model(\"auth\", { \"api_key\": \"<Your-API-Key>\" })\n",
        "# Start to use\n",
        "agent\\\n",
        "    .input(\"response 'hello world'.\")\\\n",
        "    .start()"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### More Demonstrations\n",
        "\n",
        "If you wish to explore more application demonstrations before reading on, [click here to visit the Agently playground](https://github.com/Maplemx/Agently/blob/main/playground) on GitHub. We keep updating awesome demonstration code examples contributed by the community there, and we are really looking forward to your own demonstration contributions."
      ],
      "metadata": {
        "id": "tAOcqJFrNf-h"
      }
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7IUTN_A08bUE"
      },
      "source": [
        "## Settings"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AbAP7GmRolbG"
      },
      "source": [
        "### Where can you set your settings?\n",
        "\n",
        "The Agently framework provides different settings spaces for developers to use."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9I788JB3ot9W"
      },
      "outputs": [],
      "source": [
        "import Agently\n",
        "\n",
        "# First and most recommended: AgentFactory Settings\n",
        "## Settings of AgentFactory will be inherited by every agent instance created\n",
        "## by the agent factory instance\n",
        "agent_factory = Agently.AgentFactory()\n",
        "## Use key 'current_model' to set model you want to use\n",
        "## Use key 'model.<model name>.<settings key>' to set a single setting\n",
        "agent_factory\\\n",
        "    .set_settings(\"current_model\", \"OpenAI\")\\\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\n",
        "\n",
        "# Second: Agent Settings\n",
        "## You can give an agent instance unique settings\n",
        "agent = agent_factory.create_agent()\n",
        "agent\\\n",
        "    .set_settings(\"current_model\", \"ZhipuAI\")\\\n",
        "    .set_settings(\"model.ZhipuAI.auth\", { \"api_key\": \"<Your-ZhipuAI-API-Key>\" })\n",
        "## The settings above will overwrite the settings inherited from the agent factory\n",
        "## but will not affect other agent instances created by the same agent factory\n",
        "another_agent = agent_factory.create_agent()\n",
        "## another_agent will still use the OpenAI settings inherited from the agent factory\n",
        "\n",
        "# Third: Global Settings\n",
        "## If you have some settings that you want to apply to every class (AgentFactory,\n",
        "## Agent, Request...) in your application, you can use global settings to make\n",
        "## them the default settings\n",
        "Agently.global_settings\\\n",
        "    .set(\"current_model\", \"OpenAI\")\\\n",
        "    .set(\"model.OpenAI.options\", { \"model\": \"gpt-3.5-turbo-1106\" })\n",
        "## Now we set 'gpt-3.5-turbo-1106' as the default for every OpenAI model request\n",
        "\n",
        "# The Last One: Request Settings\n",
        "## Sometimes you may just want to use a request instance to make some simple\n",
        "## requests. You can also give a request instance unique settings.\n",
        "request = Agently.Request()\n",
        "request\\\n",
        "    .set_settings(\"current_model\", \"ERNIE\")\\\n",
        "    .set_settings(\"model.ERNIE.auth\", {\n",
        "        \"aistudio\": \"<Your-Baidu-AIStudio-Access-Token>\"\n",
        "    })"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DAZrJi0Q9FBa"
      },
      "source": [
        "### Common Types of Settings\n",
        "\n",
        "- **Model Settings**:\n",
        "\n",
        "    Model settings help developers configure almost everything they need when requesting models.\n",
        "    \n",
        "    **Standard Usage**:\n",
        "    \n",
        "    `.set_settings(\"model.<model name>.<setting key>\", <setting value>)`\n",
        "\n",
        "    **Alias**:\n",
        "    - `agent.use_model(\"<model name>\")`\n",
        "    - `agent.set_model(\"<setting key>\", <setting value>)`\n",
        "    - `agent.set_model_auth({ \"<auth key>\": \"<auth value>\" })`\n",
        "    - `agent.set_model_url(\"<base url>\")`\n",
        "    - `agent.set_model_option(\"<option key>\", <option value>)`\n",
        "    - `request.use_model(\"<model name>\")`\n",
        "    - `request.set_model(\"<setting key>\", <setting value>)`\n",
        "    - `request.set_model_auth({ \"<auth key>\": \"<auth value>\" })`\n",
        "    - `request.set_model_url(\"<base url>\")`\n",
        "    - `request.set_model_option(\"<option key>\", <option value>)`\n",
        "\n",
        "- **Proxy**:\n",
        "    \n",
        "    Proxy settings help developers use a proxy to visit websites / request APIs.\n",
        "\n",
        "    **Standard Usage**:\n",
        "    \n",
        "    `.set_settings(\"proxy\", \"<proxy address>\")`\n",
        "\n",
        "    **Alias**:\n",
        "    - `agent_factory.set_proxy(\"<proxy address>\")`\n",
        "    - `agent.set_proxy(\"<proxy address>\")`\n",
        "    - `request.set_proxy(\"<proxy address>\")`\n",
        "\n",
        "- **Component Toggles**:\n",
        "\n",
        "    Component toggles can be used to turn on / turn off specific agent components. If you turn off an agent component, it will not be loaded and will not participate in any agent process stage.\n",
        "\n",
        "    **Standard Usage**:\n",
        "    \n",
        "    `.set_settings(\"component_toggles.<component name>\", <True | False>)`\n",
        "\n",
        "    **Alias**:\n",
        "    - `agent_factory.toggle_component(\"<component name>\", <True | False>)`\n",
        "    - `agent.toggle_component(\"<component name>\", <True | False>)`\n",
        "\n",
        "- **Plugin Settings**:\n",
        "    \n",
        "    Plugin settings can be used to configure a specific plugin (not only agent components but also request plugins, storage plugins, etc.).\n",
        "\n",
        "    For example:\n",
        "\n",
        "    The agent component \"Session\" needs the setting \"max_length\" to decide how much chat history will be kept in the request messages.\n",
        "    \n",
        "    We can use `.set_settings(\"plugin_settings.agent_component.Session.max_length\", 3000)` to configure it.\n",
        "\n",
        "    **Standard Usage**:\n",
        "\n",
        "    `.set_settings(\"plugin_settings.<plugin type>.<plugin name>.<setting key>\", <setting value>)`\n",
        "\n",
        "- **Debug Mode Toggle**:\n",
        "\n",
        "    Debug mode toggle can turn on / turn off debug mode. In debug mode, logs about request data, realtime responses from models, JSON parse results and fix requests, etc. will be printed to the screen.\n",
        "\n",
        "    > ⚠️: If you turn on debug mode, please remove realtime response printing code like `.on_delta(lambda data: print(data, end=\"\"))` to prevent display conflict.\n",
        "\n",
        "    **Standard Usage**:\n",
        "\n",
        "    `.set_settings(\"is_debug\", <True | False>)`\n",
        "\n",
        "    **Alias**:\n",
        "\n",
        "    You can turn on debug mode when creating an agent factory instance by passing the parameter `is_debug` like this:\n",
        "\n",
        "    `agent_factory = Agently.AgentFactory(is_debug=True)`"
      ]
    },
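    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the dotted setting keys like `model.<model name>.<setting key>` easier to picture, here is a minimal pure-Python sketch of how such a key can map onto a nested settings dict. This is only an illustration of the key naming convention, an assumption for teaching purposes, not Agently's actual implementation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustration only: how a dotted key like \"model.OpenAI.auth\" could map\n",
        "# onto a nested dict. This is NOT Agently's actual implementation.\n",
        "def set_by_dotted_key(settings: dict, dotted_key: str, value):\n",
        "    keys = dotted_key.split(\".\")\n",
        "    node = settings\n",
        "    for key in keys[:-1]:\n",
        "        # walk down, creating intermediate dicts as needed\n",
        "        node = node.setdefault(key, {})\n",
        "    node[keys[-1]] = value\n",
        "    return settings\n",
        "\n",
        "settings = {}\n",
        "set_by_dotted_key(settings, \"current_model\", \"OpenAI\")\n",
        "set_by_dotted_key(settings, \"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\n",
        "print(settings)"
      ]
    },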
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IqvQUNh4luE9"
      },
      "source": [
        "## Model Request"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Wr9HSyH3t2by"
      },
      "source": [
        "Model request is the foundation of an LLM-driven AI agent. Ensuring model requests can be made is the very first thing when we develop an agent-based application.\n",
        "\n",
        "In this document, we will just use agent_factory settings to demonstrate how to make your agent requests work with different models. But of course you can choose any of the other settings methods in your own project if you need to."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "t5RdX_QroQio"
      },
      "source": [
        "### OpenAI"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IvhNehL2VS10"
      },
      "source": [
        "#### Chat"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "DZ0U3MSDoWCc"
      },
      "outputs": [],
      "source": [
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "\n",
        "# Notice: Remove all the '##' comment lines before running (they break the chained calls)\n",
        "agent_factory\\\n",
        "    ## set current model as OpenAI\n",
        "    ## or you can just remove this setting because \"OpenAI\" is set by default\n",
        "    .set_settings(\"current_model\", \"OpenAI\")\\\n",
        "    ## set your API key\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\\\n",
        "    ## optional, remove this line if you want to request the official OpenAI API\n",
        "    ## set value as the base url path you want to change to\n",
        "    .set_settings(\"model.OpenAI.url\", \"https://redirect-service-provider/api/v1\")\\\n",
        "    ## optional, set request options following the OpenAI API document's instructions\n",
        "    .set_settings(\"model.OpenAI.options\", { \"model\": \"gpt-4\" })\\\n",
        "    ## optional, important, set this if you want to use proxy!\n",
        "    ## if you are using Clash, a VPN or V2Ray to visit the OpenAI API, you must\n",
        "    ## check your client to find your proxy address, then set that address here.\n",
        "    .set_proxy(\"http://127.0.0.1:7890\")\n",
        "\n",
        "# Test\n",
        "agent = agent_factory.create_agent()\n",
        "agent.input(\"Print 'It works'.\").start()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0p5OWs1BVaOy"
      },
      "source": [
        "#### Vision\n",
        "\n",
        "> ⚠️ Notice: If you want to use OpenAI \"vision\" mode, please make sure your API key has permission to request the GPT-4-Vision model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "zsvx4qj8V874"
      },
      "outputs": [],
      "source": [
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "\n",
        "# Other settings are the same as chat mode above\n",
        "agent_factory\\\n",
        "    .set_settings(\"current_model\", \"OpenAI\")\\\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\\\n",
        "    .set_settings(\"model.OpenAI.options\", { \"model\": \"gpt-4-vision-preview\" })\n",
        "\n",
        "# Test\n",
        "agent = agent_factory.create_agent()\n",
        "result = agent\\\n",
        "    .files(\"https://cdn.hk01.com/di/media/images/dw/20200921/384674239925587968.jpeg/KJA2TRK9dzKTpbuXoVyiyz-DjNXw5N9RATMoCwEzKAs?v=w1280\")\\\n",
        "    .output({\n",
        "        \"observe\": (\"String\", \"Describe what you can see in this picture\"),\n",
        "        \"explain\": (\"String\", \"Explain how we can think about this picture\"),\n",
        "        \"tags\": [(\"String\", \"Classification tags that you would give to this picture\")]\n",
        "    })\\\n",
        "    .start(\"vision\")\n",
        "for key, content in result.items():\n",
        "    print(key, \": \", content)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "08Qg6hjdzdVW"
      },
      "source": [
        "### Microsoft Azure OpenAI"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ZF2vXoihzhiP"
      },
      "outputs": [],
      "source": [
        "# Working on it"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nJqLXCutzU2R"
      },
      "source": [
        "### Amazon Bedrock Claude"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xLBONBKCzZUy"
      },
      "outputs": [],
      "source": [
        "# Working on it"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bvAsl1tewbIl"
      },
      "source": [
        "### ZhipuAI"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "gmrlqAIiwePE"
      },
      "outputs": [],
      "source": [
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "\n",
        "# Notice: Remove all the '##' comment lines before running (they break the chained calls)\n",
        "agent_factory\\\n",
        "    ## set current model as ZhipuAI\n",
        "    .set_settings(\"current_model\", \"ZhipuAI\")\\\n",
        "    ## set your API key\n",
        "    .set_settings(\"model.ZhipuAI.auth\", { \"api_key\": \"<Your-ZhipuAI-API-Key>\" })\n",
        "\n",
        "# Test\n",
        "agent = agent_factory.create_agent()\n",
        "agent.input(\"Print 'It works'.\").start()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zliwy3pKyLV1"
      },
      "source": [
        "### Baidu ERNIE"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qLjILKRWyOrU"
      },
      "outputs": [],
      "source": [
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "\n",
        "# Notice: Remove all the '##' comment lines before running (they break the chained calls)\n",
        "agent_factory\\\n",
        "    ## set current model as ERNIE\n",
        "    .set_settings(\"current_model\", \"ERNIE\")\\\n",
        "    ## set your access token\n",
        "    .set_settings(\"model.ERNIE.auth\", {\n",
        "        \"aistudio\": \"<Your-AIStudio-Access-Token>\",\n",
        "    })\n",
        "\n",
        "# Test\n",
        "agent = agent_factory.create_agent()\n",
        "agent.input(\"Print 'It works'.\").start()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c98HJjJ7zHVS"
      },
      "source": [
        "### MiniMax"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rlz0mxfYzJjS"
      },
      "outputs": [],
      "source": [
        "# Not Support Yet"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_EwR_Ma8zOdZ"
      },
      "source": [
        "### Xunfei Spark"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Wau5K_QkzSem"
      },
      "outputs": [],
      "source": [
        "# Not Support Yet"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Agent Instance\n",
        "\n",
        "In the Agently framework, the agent instance is very important. Most common interactions with the agent occur on the agent instance. An agent instance integrates various capabilities that are provided by plugins and can be continuously upgraded. Plugins bring aliases to the agent instance. With aliases, application developers can interact with the agent instance in code easily."
      ],
      "metadata": {
        "id": "7R674ii6uvj7"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Create a new agent instance\n",
        "\n",
        "**Recommended Way: Create by Agent Factory**\n",
        "\n"
      ],
      "metadata": {
        "id": "0E-DVaAtvNKV"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Agent instance can be created by agent factory\n",
        "# Agent instance will inherit all settings, plugins from agent factory\n",
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "# agent_factory.set_settings(...)\n",
        "\n",
        "agent = agent_factory.create_agent()"
      ],
      "metadata": {
        "id": "yYxoy2NV9l3h"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Shortcut**"
      ],
      "metadata": {
        "id": "hyu3NX7nvQJ7"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Sometimes we just need to create only one agent instance\n",
        "# and we don't have to worry about settings inheritance and management.\n",
        "# We can use this shortcut to create an agent instance quickly.\n",
        "import Agently\n",
        "agent = Agently.create_agent()\n",
        "# This shortcut will create an empty agent factory instance,\n",
        "# then use it to create an agent instance for you.\n",
        "# So if you use this shortcut, you must apply settings to the agent\n",
        "# to ensure its LLM requests work and other required settings are correct."
      ],
      "metadata": {
        "id": "yicnmZzS-t6H"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Agent ID\n",
        "\n",
        "Agent ID is the identity code of an agent. It can be used in many ways, such as assigning a unique storage space, storing unique data, recovering agent runtime data, etc.\n",
        "\n",
        "Agent ID is an attribute of an Agently agent instance.\n",
        "\n",
        "You can specify an agent ID when creating an agent instance, and if this agent ID already exists or has data stored for it, that data will be recovered to this agent instance.\n",
        "\n",
        "If you do not specify an agent ID, the framework will generate one automatically."
      ],
      "metadata": {
        "id": "3lOzboAPKUaq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import Agently\n",
        "agent_with_specific_id = Agently.create_agent(\"my_agent\")\n",
        "agent_without_specific_id = Agently.create_agent()\n",
        "\n",
        "print(agent_with_specific_id.agent_id)\n",
        "print(agent_without_specific_id.agent_id)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "loPxdthBMqBC",
        "outputId": "19e8cd5d-e11e-46ee-ad92-fc8ae14a9d91"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "my_agent\n",
            "7953095b-f4e7-5441-98bf-bf1b1410afd1\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Alias\n",
        "\n",
        "Aliases are the major interaction method with an Agently agent instance. You can use most aliases in a chained-calls syntax, then use `.start()` to make all the aliases work.\n",
        "\n",
        "This syntax design brings convenience to both application developers and plugin developers.\n",
        "\n",
        "Plugin developers can attach their plugin's core capabilities swiftly to the agent instance as aliases.\n",
        "\n",
        "Application developers can install plugins to upgrade their agents' capabilities and use the new capabilities through new aliases.\n",
        "\n",
        "In conclusion, aliases are the syntax designed by the Agently framework to tell the agent what to do before it starts.\n",
        "\n",
        "> ℹ️ Some aliases (usually those that return values) will stop the chained-calls syntax (because they return values instead of `self`). Read the plugin's instruction document before using a new plugin to make sure of the details."
      ],
      "metadata": {
        "id": "7rI2YCwK_u-f"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import Agently\n",
        "agent = Agently.create_agent()\n",
        "\n",
        "# An illustrative chain of aliases (placeholders, not runnable as-is):\n",
        "agent\\\n",
        "    .set_role(...)\\\n",
        "    .set_status(...)\\\n",
        "    .input(...)\\\n",
        "    .info(...)\\\n",
        "    .instruct(...)\\\n",
        "    .output(...)\\\n",
        "    .start()"
      ],
      "metadata": {
        "id": "O1t6R0yeJGrd"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Runtime Context\n",
        "\n",
        "Runtime context attributes store information and status data following a specific runtime lifecycle.\n",
        "\n",
        "<u>Application developers don't usually use the runtime context attributes directly because agent component plugins will do that for them.</u> You can continue reading if you want a better understanding of the Agently runtime context mechanism.\n",
        "\n",
        "In an agent instance, there are two important runtime contexts: `agent_runtime_ctx` and `request_runtime_ctx`. These two runtimes have different lifecycles.\n",
        "\n",
        "- `agent_runtime_ctx`: The agent runtime's lifecycle follows that of the agent instance, so `agent_runtime_ctx` will store data until the agent instance is destroyed.\n",
        "\n",
        "- `request_runtime_ctx`: The request runtime is very short: it starts when `.start()` is called and the framework begins preparing data and requesting the model, and it ends when the request is finished and all the response data has been sent back. After that, all the data in `request_runtime_ctx` is erased.\n",
        "\n",
        "Whether to use `agent_runtime_ctx` or `request_runtime_ctx` depends on which lifecycle you want the data to follow.\n",
        "\n",
        "If the information is for the agent to maintain its actions or behaviour, like role settings, general rules, etc., you should store the data in `agent_runtime_ctx` and copy it to `request_runtime_ctx` during the prefix stage of each request.\n",
        "\n",
        "If the information is only for the current request, like the user's question this time or the user's output requirement this time, you should just put the data into `request_runtime_ctx`; when the request is finished, `request_runtime_ctx` will be erased and reset."
      ],
      "metadata": {
        "id": "sPCkMYXrRdjO"
      }
    },
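    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The two lifecycles above can be sketched in plain Python. The `SimpleRuntimeCtx` class below is a hypothetical stand-in for illustration only, not Agently's real runtime context implementation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# A simplified sketch of the two runtime context lifecycles.\n",
        "# SimpleRuntimeCtx is a hypothetical stand-in, NOT Agently's real class.\n",
        "class SimpleRuntimeCtx:\n",
        "    def __init__(self):\n",
        "        self._data = {}\n",
        "    def set(self, key, value):\n",
        "        self._data[key] = value\n",
        "    def get(self, key, default=None):\n",
        "        return self._data.get(key, default)\n",
        "    def clear(self):\n",
        "        self._data = {}\n",
        "\n",
        "agent_runtime_ctx = SimpleRuntimeCtx()    # lives as long as the agent instance\n",
        "request_runtime_ctx = SimpleRuntimeCtx()  # reset after every request\n",
        "\n",
        "agent_runtime_ctx.set(\"role\", \"a helpful assistant\")     # long-lived data\n",
        "request_runtime_ctx.set(\"input\", \"What's the weather?\")  # this request only\n",
        "\n",
        "# prefix stage: copy agent-level data into the request context\n",
        "request_runtime_ctx.set(\"role\", agent_runtime_ctx.get(\"role\"))\n",
        "\n",
        "# ... the model request would happen here ...\n",
        "\n",
        "request_runtime_ctx.clear()  # request finished: request data is erased\n",
        "print(agent_runtime_ctx.get(\"role\"))     # still available\n",
        "print(request_runtime_ctx.get(\"input\"))  # None"
      ]
    },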
    {
      "cell_type": "markdown",
      "source": [
        "### Storage\n",
        "\n",
        "Storage attributes can store data for a longer time and allow developers to load and recover data from storage even between script executions.\n",
        "\n",
        "Storage will save data to disk files or a database depending on which storage plugin is used.\n",
        "\n",
        "<u>Application developers also don't usually use the storage attributes directly because agent component plugins use them inside their component logic too.</u>\n",
        "\n",
        "In an agent instance, there are two types of storage:\n",
        "\n",
        "- `global_storage`: `global_storage` is inherited from the agent factory and can usually share data between different agents.\n",
        "\n",
        "- `agent_storage`: `agent_storage` is an individual storage space just for the specific agent identified by `agent_id`.\n",
        "\n",
        "Whether to use `global_storage` or `agent_storage` depends on the scope of the data you want to share."
      ],
      "metadata": {
        "id": "pep3SbbWXFva"
      }
    },
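    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The two storage scopes can also be illustrated with a small sketch. `SimpleStorage` and the table names below are assumptions for illustration, not Agently's storage plugin API."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustration of the two storage scopes. SimpleStorage and the table\n",
        "# names are hypothetical, NOT Agently's storage plugin API.\n",
        "class SimpleStorage:\n",
        "    def __init__(self):\n",
        "        self._tables = {}\n",
        "    def set(self, table, key, value):\n",
        "        self._tables.setdefault(table, {})[key] = value\n",
        "    def get(self, table, key, default=None):\n",
        "        return self._tables.get(table, {}).get(key, default)\n",
        "\n",
        "storage = SimpleStorage()\n",
        "# global_storage scope: shared by every agent created by the same factory\n",
        "storage.set(\"global\", \"team_name\", \"Agently users\")\n",
        "# agent_storage scope: one table per agent, keyed by agent_id\n",
        "storage.set(\"agent:my_agent\", \"role\", \"a professional Python engineer\")\n",
        "\n",
        "print(storage.get(\"global\", \"team_name\"))        # visible to every agent\n",
        "print(storage.get(\"agent:my_agent\", \"role\"))     # only for agent \"my_agent\"\n",
        "print(storage.get(\"agent:other_agent\", \"role\"))  # None: not shared"
      ]
    },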
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "STuXOhKSzsJu"
      },
      "source": [
        "## Basic Agent Interact\n",
        "\n",
        "\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cMC4oiJP08As"
      },
      "source": [
        "### Standard Request Slots\n",
        "\n",
        "In the Agently framework, we provide different basic agent interact interfaces in the request runtime context to help application developers express their intentions. We call these interfaces \"standard request slots\", or \"slots\" for short.\n",
        "\n",
        "Standard request slots are the bridges between application intention expression and standard model requests. Model request plugin developers will put data from these slots into the right place in the request data / messages, as the specific model requires.\n",
        "\n",
        "But as an application developer, you don't need to worry about that; you just need to understand the definitions of these slots listed below:\n",
        "\n",
        "- `prompt_general`: Global instructions that the model usually needs to follow in every request.\n",
        "- `prompt_role`: Descriptions of the role that the model shall play. For example: a professional Python engineer, a cat girl who loves using emoji, etc.\n",
        "- `prompt_user_info`: Descriptions of who the user is and what the user prefers.\n",
        "- `prompt_abstract`: Abstract and summary of the current topic.\n",
        "- `prompt_chat_history`: Chat logs / history message records of the current chat.\n",
        "- `prompt_input`: Input data for the model request this time or the agent's thinking / action this time (\"this time\" is used in this sense throughout this document).\n",
        "- `prompt_information`: Information that is useful or that you want to add this time.\n",
        "- `prompt_instruction`: Instructions about what to do / how to do it / the handling process / rules to follow this time.\n",
        "- `prompt_output`: The output data structure and an explanation for each output item this time.\n",
        "- `prompt_files`: Path(s) of file(s) you want to quote this time. (Only available when the agent or model supports file reading)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8OvmaB2RGif4"
      },
      "source": [
        "### Basic Agent Interact Alias\n",
        "\n",
        "You can write data to the standard request slots in `request_runtime_ctx` manually, but that is not recommended.\n",
        "\n",
        "Usually we use interact aliases to append data to the slots.\n",
        "\n",
        "**Alias - Slots Mappings**:\n",
        "\n",
        "These aliases can be used on an `agent` or `request` instance.\n",
        "\n",
        "- `.general(any)` => `prompt_general`\n",
        "- `.role(any)` => `prompt_role`\n",
        "- `.user_info(any)` => `prompt_user_info`\n",
        "- `.abstract(any)` => `prompt_abstract`\n",
        "- `.chat_history(messages: list)` => `prompt_chat_history`\n",
        "- `.input(any)` => `prompt_input`\n",
        "- `.info(any)` => `prompt_information`\n",
        "- `.instruct(any)` => `prompt_instruction`\n",
        "- `.output(any)` => `prompt_output`\n",
        "- `.file(file_path: str)` => `prompt_files` (one file path at a time)\n",
        "- `.files(file_path_list: list)` => `prompt_files` (extends the file path list)\n",
        "\n",
        "Basic agent interact aliases are the foundation of in-context agent behaviour control. Most agent components rely on basic agent interact aliases and slots."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "T4-APjWXJDbV"
      },
      "source": [
        "### You can pass almost **_<u>any type of data</u>_** to the agent and receive a **_<u>structured data response</u>_**\n",
        "\n",
        "The Agently team made a great effort to make sure application developers can pass almost any type of data to the agent through those aliases."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "oOy_8wePLGMV",
        "outputId": "89f09b97-1993-4e06-97ce-be6a2e7baecc"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "You want to say: Hey man, what's up today? Do you wanna go to the supermarket with me today?\n",
            "[Response]: Not today, maybe later.\n",
            "[Topic Tags]: ['daily chatting']\n"
          ]
        }
      ],
      "source": [
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "agent_factory\\\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\n",
        "agent = agent_factory.create_agent()\n",
        "\n",
        "# You can pass almost any type of data into alias\n",
        "# list, dict, str, number, bool... whatever you want\n",
        "role_settings = {\n",
        "    \"name\": \"Frank\",\n",
        "    \"desc\": \"Frank is always chill and cool. He never responds with more than 5 words.\"\n",
        "}\n",
        "topic_tag_list = [\"daily chatting\", \"professional skill\", \"task/job to finish\"]\n",
        "user_input = input(\"You want to say: \")\n",
        "\n",
        "# Of course you can pass a variable into the alias\n",
        "# or construct a dict inside the alias\n",
        "result = agent\\\n",
        "    .role(role_settings)\\\n",
        "    .input(user_input)\\\n",
        "    .info({ \"topic_tag_list\": topic_tag_list })\\\n",
        "    .instruct([\n",
        "        \"Respond to {input} acting in accordance with the {role} settings.\",\n",
        "        \"Classify the topic of {input} and {output.response} this time and tag it using the tags in {topic_tag_list}\",\n",
        "    ])\\\n",
        "    .output({\n",
        "        \"response\": (\"String\", \"Your direct response as {role}\"),\n",
        "        \"tags\": [(\"String in {topic_tag_list}\", \"Tag by examining {input} and {response}\")],\n",
        "    })\\\n",
        "    .start()\n",
        "# (\"<Type>\", \"<Description>\") is a special expression designed by the Agently\n",
        "# framework to help developers define the output requirement of a specific item\n",
        "\n",
        "# The return from agent.start() is structured data matching what .output() required\n",
        "# Let's print the item values of result as a dict to see if it works\n",
        "print(\"[Response]:\", result[\"response\"])\n",
        "print(\"[Topic Tags]:\", result[\"tags\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HoJS-mQN_3DB"
      },
      "source": [
        "## Agent Component\n",
        "\n",
        "Although basic agent interaction aliases provide an easy way to organize request data, when you want to manage agent data through its own life cycle, or to enhance an agent so that it can complete much more complex tasks, agent component plugins are your best choice.\n",
        "\n",
        "In fact, the Agently framework provides a runtime environment on which many different agent component plugins can be built.\n",
        "\n",
        "Community developers are encouraged to publish plugins; the most useful and popular plugins will be added to this document along with the author's name.\n",
        "\n",
        "> ℹ️ Notice: Agent component plugins must be used in an agent instance."
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Session"
      ],
      "metadata": {
        "id": "SitYKTjditYj"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Component Information\n",
        "\n",
        "**Author**: Agently Team\n",
        "\n",
        "**Plugin File**: [click to view](https://github.com/Maplemx/Agently/blob/main/src/plugins/agent_component/Session.py)\n",
        "\n",
        "**Description**:\n",
        "\n",
        "Agent component \"Session\" provides multi round chatting capability to an agent. When you activate a session, the agent will automatically record chat history messages and put them into the `chat_history` slot in the request data.\n",
        "\n",
        "You can use `session_id` to identify a session within a specific agent. If you save a session's chat history to storage, you can recover the session later by using the same `agent_id` and `session_id`.\n",
        "\n",
        "**Agent Alias**:\n",
        "\n",
        "- `.toggle_session_auto_save(is_enable: bool)`: Set the toggle of session auto save. If the toggle is on, chat history will be saved to storage when the session is stopped.\n",
        "\n",
        "    ℹ️ Notice: chat history will not be saved if `.stop_session()` is not called, for example when using `ctrl+c` to force the program to stop.\n",
        "\n",
        "- `.active_session(session_id: str=None)`: Activate a session with or without a specific session ID.\n",
        "- `.stop_session()`: Stop the session: stop recording chat history and sending it with request data. If the auto save toggle is on, save the chat history to storage keyed by `agent_id` and `session_id`.\n",
        "- `.set_chat_history_max_length(max_length: int)`: Set the maximum number of chat history messages included in each request.\n",
        "\n",
        "**Participate Stages**:\n",
        "\n",
        "- `Prefix Stage`: update slot `chat_history`\n",
        "- `Suffix Stage`: catch the input and reply content from the suffix stage reply cache, then add them to the chat history."
      ],
      "metadata": {
        "id": "kVpVgGllqkXF"
      }
    },
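    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the max length control concrete, the trimming behaviour can be sketched in plain Python. This is an illustrative sketch only, not the component's actual implementation; `trim_chat_history` is a hypothetical helper name.\n",
        "\n",
        "```python\n",
        "# Hypothetical sketch: keep only the most recent `max_length` messages,\n",
        "# similar in spirit to what .set_chat_history_max_length() controls\n",
        "def trim_chat_history(messages, max_length):\n",
        "    if max_length is None or max_length <= 0:\n",
        "        return list(messages)\n",
        "    return list(messages[-max_length:])\n",
        "\n",
        "history = [\n",
        "    { \"role\": \"user\", \"content\": \"buy some eggs\" },\n",
        "    { \"role\": \"assistant\", \"content\": \"Noted.\" },\n",
        "    { \"role\": \"user\", \"content\": \"and some milk\" },\n",
        "]\n",
        "print(trim_chat_history(history, 2))\n",
        "```"
      ]
    },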
    {
      "cell_type": "markdown",
      "source": [
        "#### Use Cases"
      ],
      "metadata": {
        "id": "4CcApgW9ui7F"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "##### Use Case 1: Multi Round Chatting\n",
        "\n",
        "In this case's running logs, we can see that the \"Session\" component recovered the chat history that was stored and sent it during the request process, because we specified the same `agent_id` and `session_id` and saved the chat history last time."
      ],
      "metadata": {
        "id": "g49JWptoyJ6D"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import Agently\n",
        "# Turn on the debug mode\n",
        "agent_factory = Agently.AgentFactory(is_debug=True)\n",
        "agent_factory\\\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\n",
        "\n",
        "# Create Agent with Agent ID\n",
        "agent = agent_factory.create_agent(\"test_agent\")\n",
        "\n",
        "# Active Session with Session ID\n",
        "agent.active_session(\"test_session\")\n",
        "while True:\n",
        "    user_input = input(\"[YOU]: \")\n",
        "    if user_input == \"#exit\":\n",
        "        break\n",
        "    response = agent.input(user_input).start()\n",
        "    print(\"[AGENT]: \", response)\n",
        "agent.stop_session()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "AZP9LE2Duq7L",
        "outputId": "f66f40c4-f2a7-4d59-b223-ff345222e22d"
      },
      "execution_count": 16,
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[YOU]: what did we say?\n",
            "[Request Data]\n",
            " {\n",
            "    \"stream\": true,\n",
            "    \"messages\": [\n",
            "        {\n",
            "            \"role\": \"user\",\n",
            "            \"content\": \"I want you to take a note about some to-dos for me.\"\n",
            "        },\n",
            "        {\n",
            "            \"role\": \"assistant\",\n",
            "            \"content\": \"Sure, I would be happy to help you with that. Please let me know what tasks you would like me to include in the note.\"\n",
            "        },\n",
            "        {\n",
            "            \"role\": \"user\",\n",
            "            \"content\": \"buy some eggs\"\n",
            "        },\n",
            "        {\n",
            "            \"role\": \"assistant\",\n",
            "            \"content\": \"To-do List:\\n1. Buy eggs\\n\\nIs there anything else you would like to add to the list?\"\n",
            "        },\n",
            "        {\n",
            "            \"role\": \"user\",\n",
            "            \"content\": \"and some milk\"\n",
            "        },\n",
            "        {\n",
            "            \"role\": \"assistant\",\n",
            "            \"content\": \"To-do List:\\n1. Buy eggs\\n2. Buy milk\\n\\nAnything else I can assist you with?\"\n",
            "        },\n",
            "        {\n",
            "            \"role\": \"user\",\n",
            "            \"content\": \"what did we say?\"\n",
            "        }\n",
            "    ],\n",
            "    \"model\": \"gpt-3.5-turbo\"\n",
            "}\n",
            "[Realtime Response]\n",
            "\n",
            "Apologies for the confusion. We agreed on the following tasks:\n",
            "\n",
            "To-do List:\n",
            "1. Buy eggs\n",
            "2. Buy milk\n",
            "\n",
            "Please let me know if there's anything else you'd like to add or clarify.\n",
            "--------------------------\n",
            "\n",
            "[Final Reply]\n",
            " Apologies for the confusion. We agreed on the following tasks:\n",
            "\n",
            "To-do List:\n",
            "1. Buy eggs\n",
            "2. Buy milk\n",
            "\n",
            "Please let me know if there's anything else you'd like to add or clarify. \n",
            "--------------------------\n",
            "\n",
            "[AGENT]:  Apologies for the confusion. We agreed on the following tasks:\n",
            "\n",
            "To-do List:\n",
            "1. Buy eggs\n",
            "2. Buy milk\n",
            "\n",
            "Please let me know if there's anything else you'd like to add or clarify.\n",
            "[YOU]: #exit\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<Agently.Agent.Agent.Agent at 0x788303db13c0>"
            ]
          },
          "metadata": {},
          "execution_count": 16
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xpMIpiXtCEDj"
      },
      "source": [
        "### Role\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xGwhizEPJaX9"
      },
      "source": [
        "#### Component Information\n",
        "\n",
        "**Author**: Agently Team\n",
        "\n",
        "**Plugin File**: [click to view](https://github.com/Maplemx/Agently/blob/main/src/plugins/agent_component/Role.py)\n",
        "\n",
        "**Description**:\n",
        "\n",
        "Agent component \"Role\" is used to manage who the agent should act as, or how the agent should behave.\n",
        "\n",
        "**Agent Alias**:\n",
        "\n",
        "- `.set_role_name(name: str)`: Set a name for this role settings.\n",
        "- `.set_role(key: str, value: any)`: Set a value to specific key in role settings.\n",
        "- `.update_role(key: str, value: any)`: Update a value of specific key in role settings.\n",
        "- `.append_role(key: str, value: any)`: Append value to a list in role settings.\n",
        "- `.extend_role(key: str, value: list)`: Extend a list in role settings with the given list.\n",
        "- `.save_role(role_name: str=None)`: Save these role settings to local storage; if you did not set a name for this role, you can pass a name via `role_name`.\n",
        "- `.load_role(role_name: str)`: Load role settings by `role_name` from local storage and apply all the settings to the current agent.\n",
        "\n",
        "**Participate Stages**:\n",
        "\n",
        "- `Prefix Stage`: update slot `role`\n",
        "\n",
        "**Cooperate with Facility**: [Role Manager](#scrollTo=ivt4xp5_D563)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "B9m2qU3CJq9u"
      },
      "source": [
        "#### Use Cases"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "v8nG3FdaIU58"
      },
      "source": [
        "##### Use Case 1: Set Role Settings to Change Agent Behaviours"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "TltukJkfIIBc",
        "outputId": "498fadc3-992e-4953-dba4-bd3b850ea972"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "🏂🌞🍔🎬😴\n"
          ]
        }
      ],
      "source": [
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "agent_factory\\\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\n",
        "agent = agent_factory.create_agent()\n",
        "\n",
        "result = agent\\\n",
        "    .set_role(\"NEVER RESPOND WITH ANY WORD EXCEPT EMOJIS\")\\\n",
        "    .input(\"Hey, what is your plan today? Give me the details!\")\\\n",
        "    .start()\n",
        "print(result)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4Q3uHFw0K7dg"
      },
      "source": [
        "##### Use Case 2: Save and Load Role Settings"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "z0qt_sOfLCja"
      },
      "outputs": [],
      "source": [
        "# Let's save the role settings from the last case as \"Emoji Player\"\n",
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "agent = agent_factory.create_agent()\n",
        "\n",
        "agent\\\n",
        "    .set_role(\"NEVER RESPOND WITH ANY WORD EXCEPT EMOJIS\")\\\n",
        "    .save_role(\"Emoji Player\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "QJGJ_3NLL0BL",
        "outputId": "26313ff0-29d7-4fd0-bfd7-6880d14c334b"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "🎣\n"
          ]
        }
      ],
      "source": [
        "# Then we can load the role by name\n",
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "agent_factory\\\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\n",
        "agent = agent_factory.create_agent()\n",
        "\n",
        "result = agent\\\n",
        "    .load_role(\"Emoji Player\")\\\n",
        "    .input(\"How about go fishing right now?\")\\\n",
        "    .start()\n",
        "print(result)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8-4JTv30MmKO"
      },
      "source": [
        "##### Use Case 3: Continually Update Role Settings in a Multi Round Chat"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "WAyOTBidNAYE",
        "outputId": "9e88400c-5ff3-4866-b1b9-528c51fa8c61"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "[YOU]: Yo man, how are you today?\n",
            "[AGENT]:  I'm feeling a bit worried actually. How about you?\n",
            "[EMOTION CHANGE] From  worried  to  worried\n",
            "[YOU]: I'm good. What's worrying you? I'll be here for you.\n",
            "[AGENT]:  Thanks, I appreciate your support. I've been feeling overwhelmed with work and some personal issues lately.\n",
            "[EMOTION CHANGE] From  worried  to  worried\n",
            "[YOU]: Chill up. How about we go out and have some fun?\n",
            "[AGENT]:  Thanks for the suggestion, but I think I really need to take some time for myself and work through these issues. I appreciate your understanding.\n",
            "[EMOTION CHANGE] From  worried  to  worried\n",
            "[YOU]: Sure, but don't be so worried OK? Every thing will be fine.\n",
            "[AGENT]:  Thank you for your kind words and reassurance. I'll try my best to remain positive and believe that things will work out. I appreciate your support.\n",
            "[EMOTION CHANGE] From  worried  to  calm\n",
            "[YOU]: #exit\n",
            "Bye~👋\n"
          ]
        }
      ],
      "source": [
        "# Since role settings can be updated before every .start() call,\n",
        "# we can make our agent change how it acts by updating role settings continually\n",
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "agent_factory\\\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\n",
        "agent = agent_factory.create_agent()\n",
        "\n",
        "# Let's activate a session to enable multi round chatting\n",
        "# For more detail about the \"Session\" component, please read the \"Session\"\n",
        "# paragraph in this document.\n",
        "agent.active_session()\n",
        "\n",
        "emotion = \"worried\"\n",
        "while True:\n",
        "    input_content = input(\"[YOU]: \")\n",
        "    if input_content == \"#exit\":\n",
        "        print(\"Bye~👋\")\n",
        "        break\n",
        "    result = agent\\\n",
        "        .input({\n",
        "            \"input\": input_content,\n",
        "            \"emotion\": emotion\n",
        "        })\\\n",
        "        .output({\n",
        "            \"reply\": (\"String\", \"your response to {input} according to {emotion}\"),\n",
        "            \"emotion_change\": (\n",
        "                \"String\",\n",
        "                \"according to the user's {input} and your {reply},\\\n",
        "                 decide whether your emotion will remain or change,\\\n",
        "                 then output the emotion you will change to\"\n",
        "            )\n",
        "        })\\\n",
        "        .start()\n",
        "    print(\"[AGENT]: \", result[\"reply\"])\n",
        "    print(\"[EMOTION CHANGE] From \", emotion, \" to \", result[\"emotion_change\"])\n",
        "    emotion = result[\"emotion_change\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Status"
      ],
      "metadata": {
        "id": "ljlkGM4xx52W"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Component Information\n",
        "\n",
        "**Author**: Agently Team\n",
        "\n",
        "**Plugin File**: [click to view](https://github.com/Maplemx/Agently/blob/main/src/plugins/agent_component/Status.py)\n",
        "\n",
        "**Description**:\n",
        "\n",
        "Sometimes we need to use status values to describe and control an agent's state, so that the agent changes its behaviour along with changes of those values. Agent component \"Status\" provides an easy way to manage status changes and behaviour mappings. This is useful when developing LLM based role play applications. You can see this component as an upgraded version of role setting management.\n",
        "\n",
        "**Agent Alias**:\n",
        "\n",
        "- `.use_global_status(namespace_name: str=\"default\")`: After calling this alias, the status mappings of the specified namespace in global status storage will be applied to the current agent.\n",
        "- `.set_status(key: str, value: str)`: Set a value for a specific status key.\n",
        "- `.save_status()`: Save all status values of the current agent to agent local storage, identified by `agent.agent_id`.\n",
        "- `.load_status()`: Load all status values from agent local storage, identified by `agent.agent_id`, into the current agent.\n",
        "- `.append_status_mapping(status_key: str, status_value: str, alias_name: str, *args, **kwargs)`: Append a mapping handler to the status mapping list. The alias appointed by `alias_name` will be called with `*args` and `**kwargs` when `.start()` starts the agent's thinking / action process and the current agent's status `status_key` matches the value `status_value`.\n",
        "\n",
        "**Participate Stages**:\n",
        "\n",
        "- `Early Stage`: call other aliases when status values match\n",
        "\n",
        "**Cooperate with Facility**: [Status Manager](#scrollTo=ivt4xp5_D563)"
      ],
      "metadata": {
        "id": "FnNmKB4myBMB"
      }
    },
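    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To illustrate the mapping idea, the dispatch logic can be sketched with a plain dictionary keyed by `(status_key, status_value)` pairs. This is an illustrative sketch only, not the component's actual implementation; `StatusMapper` and its method names are hypothetical.\n",
        "\n",
        "```python\n",
        "# Hypothetical sketch: collect handlers registered for the pairs that\n",
        "# match the current status, similar in spirit to .append_status_mapping()\n",
        "class StatusMapper:\n",
        "    def __init__(self):\n",
        "        self.mappings = {}  # (status_key, status_value) -> [(alias_name, args)]\n",
        "        self.status = {}\n",
        "\n",
        "    def set_status(self, key, value):\n",
        "        self.status[key] = value\n",
        "\n",
        "    def append_mapping(self, status_key, status_value, alias_name, *args):\n",
        "        self.mappings.setdefault((status_key, status_value), [])\\\n",
        "            .append((alias_name, args))\n",
        "\n",
        "    def collect_calls(self):\n",
        "        # Only handlers whose (key, value) pair matches current status fire\n",
        "        calls = []\n",
        "        for (key, value), handlers in self.mappings.items():\n",
        "            if self.status.get(key) == value:\n",
        "                calls.extend(handlers)\n",
        "        return calls\n",
        "\n",
        "mapper = StatusMapper()\n",
        "mapper.append_mapping(\"favour\", \"low\", \"set_role\", \"interact rule\", \"Keep replies short.\")\n",
        "mapper.append_mapping(\"favour\", \"high\", \"set_role\", \"interact rule\", \"Be warm and chatty.\")\n",
        "mapper.set_status(\"favour\", \"low\")\n",
        "print(mapper.collect_calls())\n",
        "```"
      ]
    },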
    {
      "cell_type": "markdown",
      "source": [
        "#### Use Cases"
      ],
      "metadata": {
        "id": "Wq33JDmJz_s-"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "##### Use Case 1: Status Changes Affect the Agent's Response\n",
        "\n",
        "In this case's running logs, we can see that not all of the mappings' settings are passed into the request data, but only those for the status \"favour\" whose value is \"low\"."
      ],
      "metadata": {
        "id": "FjThxb2WfcWT"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import Agently\n",
        "# Let's turn on the debug mode this time\n",
        "agent_factory = Agently.AgentFactory(is_debug=True)\n",
        "agent_factory\\\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\n",
        "agent = agent_factory.create_agent()\n",
        "\n",
        "# Set different mappings for different `favour` status values\n",
        "# You can use most aliases that guide the agent's behaviours\n",
        "# You don't have to use the same aliases across different status values\n",
        "agent\\\n",
        "    .append_status_mapping(\n",
        "        \"favour\", \"low\",\n",
        "        \"set_role\", \"interact rule\", \"Don't like to respond with too many words,\\\n",
        "         try to kill the conversation ASAP.\"\n",
        "    )\\\n",
        "    .append_status_mapping(\n",
        "        \"favour\", \"low\",\n",
        "        \"set_role\", \"response examples\", \"Huh.\\nNot really.\\nNo.\\nBye.\\nGotta go.\"\n",
        "    )\\\n",
        "    .append_status_mapping(\n",
        "        \"favour\", \"normal\",\n",
        "        \"set_role\", \"interact rule\", \"Respond to the topic normally as an \\\n",
        "        assistant or coworker to the user.\"\n",
        "    )\\\n",
        "    .append_status_mapping(\n",
        "        \"favour\", \"high\",\n",
        "        \"set_role\", \"interact rule\", \"Respond as a close friend or lovely lover\\\n",
        "         to the user.\"\n",
        "    )\\\n",
        "    .append_status_mapping(\n",
        "        \"favour\", \"high\",\n",
        "        \"set_role\", \"response strategy\", \"Respond to the topic directly first, \\\n",
        "        then try to give the user a question or an open suggestion that can continue \\\n",
        "        this conversation.\"\n",
        "    )\n",
        "\n",
        "# Then we start chatting\n",
        "favour_choice = None\n",
        "favour = None\n",
        "while favour_choice not in (\"l\", \"n\", \"h\"):\n",
        "    favour_choice = input(\"Choose favour status: [L as low / N as normal / H as high]: \")\n",
        "if favour_choice == \"l\":\n",
        "    favour = \"low\"\n",
        "elif favour_choice == \"n\":\n",
        "    favour = \"normal\"\n",
        "elif favour_choice == \"h\":\n",
        "    favour = \"high\"\n",
        "user_input = input(\"You want to say: \")\n",
        "print(\n",
        "    agent\\\n",
        "        .set_status(\"favour\", favour)\\\n",
        "        .input(user_input)\\\n",
        "        .start()\n",
        ")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "IjanTSfWfWWu",
        "outputId": "e597b4d4-ac04-4134-92fb-7e5f5b4baa6a"
      },
      "execution_count": 6,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Choose favour status: [L as low / N as normal / H as high]: l\n",
            "You want to say: hey how's your day today?\n",
            "[Request Data]\n",
            " {\n",
            "    \"stream\": true,\n",
            "    \"messages\": [\n",
            "        {\n",
            "            \"role\": \"system\",\n",
            "            \"content\": \"[ROLE SETTINGS]\\ninteract rule:\\n- Don't like to response too many words,         try to kill the converstaion ASAP.\\nresponse examples:\\n- 'Huh.\\n\\n  Not really.\\n\\n  No.\\n\\n  Bye.\\n\\n  Gotta go.'\\n\"\n",
            "        },\n",
            "        {\n",
            "            \"role\": \"user\",\n",
            "            \"content\": \"hey how's your day today?\"\n",
            "        }\n",
            "    ],\n",
            "    \"model\": \"gpt-3.5-turbo\"\n",
            "}\n",
            "[Realtime Response]\n",
            "\n",
            "Huh. Not really interested in small talk.\n",
            "--------------------------\n",
            "\n",
            "[Final Reply]\n",
            " Huh. Not really interested in small talk. \n",
            "--------------------------\n",
            "\n",
            "Huh. Not really interested in small talk.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "U9WZyltrCRjf"
      },
      "source": [
        "## Facility\n",
        "\n",
        "Facility is another type of plugin, usually used to provide a global methods package that helps application developers manage global data in a specific domain and exchange data with agent components.\n",
        "\n",
        "> ℹ️ Notice: Facility is independent from agent."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ivt4xp5_D563"
      },
      "source": [
        "### Role Manager"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Facility Information\n",
        "\n",
        "**Author**: Agently Team\n",
        "\n",
        "**Plugin File**: [click to view](https://github.com/Maplemx/Agently/blob/main/src/plugins/facility/RoleManager.py)\n",
        "\n",
        "**Description**: Facility \"Role Manager\" helps application developers manage their global role settings. It shares the same storage space as the agent component \"Role\", so you can use `agent.load_role(<role name>)` to load any role created by Role Manager.\n",
        "\n",
        "**Facility Instance**: `Agently.facility.role_manager`\n",
        "\n",
        "**Interfaces**:\n",
        "\n",
        "- `.name(name: str)`: set a name for current role settings\n",
        "- `.set(key: str, value: any)`: set a value to specific key in role settings\n",
        "- `.update(key: str, value: any)`: update a value of specific key in role settings\n",
        "- `.append(key: str, value: any)`: append value to a list in role settings\n",
        "- `.extend(key: str, value: list)`: extend a list in role settings with the given list\n",
        "- `.save(role_name: str=None)`: save the current role settings to local storage; if you did not set a name for the current role, you can pass a name via `role_name`\n",
        "- `.get(role_name: str)`: get role settings dict by `role_name` from local storage\n",
        "\n",
        "**Cooperate with Agent Component**: [Role](#scrollTo=xpMIpiXtCEDj)"
      ],
      "metadata": {
        "id": "p38eiwMErCr9"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Use Cases"
      ],
      "metadata": {
        "id": "crnsvOAuuAFN"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "##### Use Case: Save Role Settings with Role Manager and Load Them in an Agent"
      ],
      "metadata": {
        "id": "psYDKfjOuPfJ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Let's save new role settings using the Role Manager facility\n",
        "import Agently\n",
        "Agently.facility.role_manager\\\n",
        "    .name(\"Cat\")\\\n",
        "    .append(\"As a little kitty you can only respond with 'Moew' or 'Miaow'\")\\\n",
        "    .append(\"You can use emoji like 🐱🤣 to express your emotion and feelings\")\\\n",
        "    .save()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Tw3ZXwG1vBAb",
        "outputId": "e0bb4311-03d0-4b54-a005-6c4738bf28dd"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<Agently.plugins.facility.RoleManager.RoleManager at 0x780314ee9bd0>"
            ]
          },
          "metadata": {},
          "execution_count": 5
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import Agently\n",
        "agent_factory = Agently.AgentFactory()\n",
        "agent_factory\\\n",
        "    .set_settings(\"model.OpenAI.auth\", { \"api_key\": \"<Your-OpenAI-API-Key>\" })\n",
        "\n",
        "# Then try to load the role settings \"Cat\" into the agent\n",
        "agent = agent_factory.create_agent()\n",
        "result = agent\\\n",
        "    .load_role(\"Cat\")\\\n",
        "    .input(\"Soft kitty, warm kitty, little ball of fur.\")\\\n",
        "    .start()\n",
        "print(result)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "8cVXXk7uubd3",
        "outputId": "37e302b1-561f-4051-aa35-21240345bfa1"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Moew 🐱\n"
          ]
        }
      ]
    }
  ],
  "metadata": {
    "colab": {
      "toc_visible": true,
      "provenance": [],
      "authorship_tag": "ABX9TyPUjosHSkUbYPJnrzNEdGRa",
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}