{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "authorship_tag": "ABX9TyOGrSKaDXPcRi1LTBtpDbFF",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/sugarforever/LangChain-Advanced/blob/main/claude_3_xml_agent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Claude 3 XML Agent\n",
        "\n",
        "## Why Use an Agent?\n",
        "\n",
        "The core idea of agents is to use a language model to choose a sequence of actions to take.\n",
        "\n",
        "## What Is an XML Agent?\n",
        "\n",
        "XML tags are a powerful tool for structuring prompts and guiding Claude's responses. Claude is particularly familiar with XML-tagged prompts because it was exposed to them during training. By wrapping key parts of your prompt (such as instructions, examples, or input data) in XML tags, you can help Claude better understand the context and generate more accurate outputs. This technique is especially useful when working with complex prompts or variable inputs.\n",
        "\n",
        "An agent that drives a Claude 3 model with XML-tagged prompts is therefore called an XML agent.\n",
        "\n",
        "## Why Use XML Tags?\n",
        "\n",
        "Please refer to the following user guide for more details:\n",
        "\n",
        "https://docs.anthropic.com/claude/docs/use-xml-tags#why-use-xml-tags\n",
        "\n",
        "## About This Tutorial\n",
        "\n",
        "In this tutorial, we will build an XML agent powered by Claude 3 using the following tools:\n",
        "\n",
        "1. Claude 3 (Opus or Sonnet)\n",
        "2. LangChain\n",
        "3. Faiss\n",
        "4. PyPDF"
      ],
      "metadata": {
        "id": "FrZhgiDo3a0H"
      }
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "LSnWIDBfo0GG",
        "outputId": "f487db9a-adaa-4bd8-ce80-e140b4789eaf"
      },
      "outputs": [],
      "source": [
        "!pip install -q -U langchain langchainhub langchain-anthropic langchain-openai faiss-cpu pypdf"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!wget -O aesopagent.pdf https://arxiv.org/pdf/2403.07952.pdf"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "e5IS-9eEpPYl",
        "outputId": "830f4f49-c6c6-45dc-ae83-eecd7fc00bdd"
      },
      "execution_count": 10,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "--2024-03-18 21:24:20--  https://arxiv.org/pdf/2403.07952.pdf\n",
            "Resolving arxiv.org (arxiv.org)... 151.101.131.42, 151.101.67.42, 151.101.195.42, ...\n",
            "Connecting to arxiv.org (arxiv.org)|151.101.131.42|:443... connected.\n",
            "HTTP request sent, awaiting response... 403 Forbidden\n",
            "2024-03-18 21:24:20 ERROR 403: Forbidden.\n",
            "\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "from google.colab import userdata\n",
        "\n",
        "os.environ[\"OPENAI_API_KEY\"] = userdata.get('OPENAI_API_KEY')\n",
        "os.environ[\"ANTHROPIC_API_KEY\"] = userdata.get('ANTHROPIC_API_KEY')"
      ],
      "metadata": {
        "id": "yNHAfXOsqUZA"
      },
      "execution_count": 3,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain_community.document_loaders.pdf import PyPDFLoader\n",
        "from langchain_community.vectorstores.faiss import FAISS\n",
        "from langchain_openai import OpenAIEmbeddings"
      ],
      "metadata": {
        "id": "egUwhsNEpYY9"
      },
      "execution_count": 4,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "splits = PyPDFLoader(\"./aesopagent.pdf\").load_and_split()"
      ],
      "metadata": {
        "id": "k_paDvAgp08A"
      },
      "execution_count": 11,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "vectorstore = FAISS.from_documents(splits, OpenAIEmbeddings())"
      ],
      "metadata": {
        "id": "6VWuaAc0p7w2"
      },
      "execution_count": 12,
      "outputs": []
    },
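    {
      "cell_type": "markdown",
      "source": [
        "Rebuilding the embeddings on every run re-calls the OpenAI embedding API. As a minimal sketch (assuming LangChain's `FAISS.save_local` / `FAISS.load_local` API; the `aesopagent_index` directory name is arbitrary), the index can be persisted and reloaded:\n",
        "\n",
        "```python\n",
        "# Persist the FAISS index to disk so embeddings are not recomputed.\n",
        "vectorstore.save_local(\"aesopagent_index\")\n",
        "\n",
        "# Reload it later; recent LangChain versions require opting in to\n",
        "# pickle deserialization for locally saved indexes.\n",
        "vectorstore = FAISS.load_local(\n",
        "    \"aesopagent_index\",\n",
        "    OpenAIEmbeddings(),\n",
        "    allow_dangerous_deserialization=True,\n",
        ")\n",
        "```"
      ],
      "metadata": {}
    },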
    {
      "cell_type": "code",
      "source": [
        "from langchain.agents import tool\n",
        "\n",
        "@tool\n",
        "def aesop_agent_search(query: str) -> str:\n",
        "    \"Use this tool when answering questions about Aesop Agent\"\n",
        "\n",
        "    similar_docs = vectorstore.similarity_search(query)\n",
        "    results_str = \"\\n\\n\".join(\n",
        "        [doc.page_content for doc in similar_docs]\n",
        "    )\n",
        "    return results_str\n",
        "\n",
        "tools = [aesop_agent_search]"
      ],
      "metadata": {
        "id": "rXN7XdBfoOCj"
      },
      "execution_count": 13,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "print(\n",
        "    aesop_agent_search.run(tool_input={\"query\": \"What is Aesop Agent?\"})\n",
        ")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "eB-h0l98rLpx",
        "outputId": "2986044c-f756-4fd5-985d-5684a749d952"
      },
      "execution_count": 52,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "AesopAgent: Agent-driven Evolutionary System on\n",
            "Story-to-Video Production\n",
            "Jiuniu Wang∗Zehua Du∗Yuyuan Zhao∗Bo Yuan Kexiang Wang Jian Liang\n",
            "Yaxi Zhao Yihen Lu Gengliang Li Junlong Gao Xin Tu Zhenyu Guo†\n",
            "DAMO Academy, Alibaba Group\n",
            "Abstract\n",
            "The Agent and AIGC (Artificial Intelligence Generated Content) technologies have\n",
            "recently made significant progress. We propose Aesop Agent, an Agent-driven\n",
            "Evolutionary System on Story-to-Video Production. AesopAgent is a practical\n",
            "application of agent technology for multimodal content generation. The system\n",
            "integrates multiple generative capabilities within a unified framework, so that\n",
            "individual users can leverage these modules easily. This innovative system would\n",
            "convert user story proposals into scripts, images, and audio, and then integrate\n",
            "these multimodal contents into videos. Additionally, the animating units (e.g.,\n",
            "Gen-2 and Sora) could make the videos more infectious. The AesopAgent system\n",
            "could orchestrate task workflow for video generation, ensuring that the generated\n",
            "video is both rich in content and coherent. This system mainly contains two\n",
            "layers, i.e., the Horizontal Layer and the Utility Layer. In the Horizontal Layer, we\n",
            "introduce a novel RAG-based evolutionary system that optimizes the whole video\n",
            "generation workflow and the steps within the workflow. It continuously evolves and\n",
            "iteratively optimizes workflow by accumulating expert experience and professional\n",
            "knowledge, including optimizing the LLM prompts and utilities usage. The Utility\n",
            "Layer provides multiple utilities, leading to consistent image generation that is\n",
            "visually coherent in terms of composition, characters, and style. Meanwhile, it\n",
            "provides audio and special effects, integrating them into expressive and logically\n",
            "arranged videos. Overall, our AesopAgent achieves state-of-the-art performance\n",
            "compared with many previous works in visual storytelling. Our AesopAgent is\n",
            "designed for convenient service for individual users, which is available on the\n",
            "following page: https://aesopai.github.io/ .\n",
            "1 Introduction\n",
            "The recent development of base Large Language Models (LLMs) [ 3,38,61,63], and Multimodal\n",
            "Models [ 69,33,39,30] has catalyzed significant changes in Artificial Intelligence Generated Content\n",
            "(AIGC). This advancement has led to the effective integration of generative AI technology with\n",
            "traditional Professional Generated Content (PGC) and User Generated Content (UGC), addressing a\n",
            "wide range of user requirements. Notably, technologies such as Stable Diffusion [ 46], DALL-E 3 [ 2],\n",
            "and ControlNet [ 70] have excelled in generating and editing high-quality images, attracting significant\n",
            "interests in both academy and industry. Video generation, in contrast to simple static image generation,\n",
            "necessitates managing complex semantic and temporal information, presenting great challenges.\n",
            "Recent initiatives, such as Make-A-Video [ 50], Imagen Video [ 17], PikaLab [ 27], and Gen-2 [ ?],\n",
            "have demonstrated the ability to produce short videos from textual descriptions. And Sora [ 40] is\n",
            "capable of generating high-definition videos up to one minute in length. However, story-to-video\n",
            "∗Equal Contribution.\n",
            "†Corresponding author.arXiv:2403.07952v1  [cs.CV]  12 Mar 2024\n",
            "\n",
            "representation, and logic through RAG techniques. Manual evaluations against ComicAI [ 15] in\n",
            "image element restoration, rationality, and composition indicate AesopAgent’s superiority. When\n",
            "compared with parallel video generation research such as NUWA-XL [ 64] and AutoStory [ 59], our\n",
            "system demonstrates leading performance in image complexity and narrative depth. Furthermore,\n",
            "AesopAgent’s adaptability is evident in its ability to integrate external software like Runway, catering\n",
            "to specific user needs and underscoring its extensive scalability.\n",
            "In summary, the contributions of our AesopAgent are threefold:\n",
            "1) We have leveraged agent techniques to automate task workflow orchestration for video generation,\n",
            "enabling the effective process of converting story proposals into final videos. The workflow is\n",
            "designed by the agent and includes three main steps (script generation, image generation, and video\n",
            "assembly).\n",
            "2) We proposed a RAG-based evolutionary system , collecting expert experience (E-RAG) and\n",
            "professional knowledge (K-RAG) to improve the system performance. The prompts of LLM would\n",
            "be optimized iteratively according to the experience and knowledge in the RAG database, generating\n",
            "narrative coherence text (e.g., script, image descriptions). The RAG techniques also guide the utilities\n",
            "usage, consequently enhancing the visual quality of the generated videos.\n",
            "3) We achieve consistent image generation since our AesopAgent skillfully applies specialized\n",
            "utilities, thus the generated images maintain a cohesive and professional visual expression throughout\n",
            "the shots. The main components of consistent image generation are the image composition module,\n",
            "the multiple characters consistency module, and the image style consistency module. There are many\n",
            "utilities in these modules, and some of them have been optimized by our AesopAgent.\n",
            "2 Related Works\n",
            "2.1 AI Agent\n",
            "Recent advancements in AI agent technology stand at the vanguard of transformative developments\n",
            "within the artificial intelligence domain. A plethora of commendable works involving AI agents\n",
            "have emerged, showcasing groundbreaking progress and innovation. Notable works include: Ope-\n",
            "nAGI [ 12], AutoGPT [ 62], V oyager [ 57], RET-LLM [ 37], ChatDev [ 42], AutoGen [ 60], etc. These\n",
            "agents exhibit diverse performances in aspects like memory function configuration, task planning\n",
            "capabilities, and modifications to the base Large Language Model (LLM).\n",
            "As for memory function configuration, recent multi-agent systems (e.g., AutoGen [ 60], MetaGPT [ 19],\n",
            "ChatDev [ 42], AutoGPT [ 62]) adopted a shared memory pool for top-level information synchro-\n",
            "nization, with different technical implementation solutions. Specifically, MetaGPT [ 19] use sub/pub\n",
            "mechanisms, AutoGen [ 60] use dialogue delivery. The single-agent systems (e.g., V oyager [ 57],\n",
            "GITM [ 74], ChatDB [ 21]) have consensus capabilities (reading, writing, and thinking), combining\n",
            "long-term and short-term memory. In this context, long-term memory structures mostly use historical\n",
            "decision sequences, memory streams, or vector databases, while short-term memory is primarily\n",
            "based on contextual information.\n",
            "With regard to task planning and utility usage capabilities. The technical solutions for planning\n",
            "capabilities range from prompts and chain mechanisms to standard operating procedures (SOP).\n",
            "Some models, such as Generative Agents and MetaGPT [ 19], enhance execution accuracy and\n",
            "controllability through refined planning and structured output, while models like CAMEL [ 29] and\n",
            "ViperGPT [ 51] focus on using historical information and context to guide the behavior of the Agents.\n",
            "AutoGPT [ 62] and AutoGen [ 60] models emphasize dialogue and cooperation among multiple Agents\n",
            "to accomplish more complex tasks.\n",
            "When it comes to modifications of the base model (LLM), The majority of agents (e.g., CAMEL [ 29],\n",
            "ViperGPT [ 51], AutoGPT [ 62], ChatDev [ 42], MetaGPT [ 19], AutoGen [ 60]) did not modify the\n",
            "\n",
            "Agent\n",
            "UserAesopAgentI want a video for this story: In an ancient forest, a tiny dragon discovered its fiery breath, kindling a spark of courage that set ablaze its destiny, becoming the legendary guardian of the realm.\n",
            "script generationimagegenerationvideoassemblyvideo for this story: \n",
            "ttsgenerationvideoclippingcharacterdesignscriptgeneration\n",
            "action1-shot1:Image description:In the heart of an ancient forest …narration: In a dimly-lit corner of……imagecreatingimageediting\n",
            "voice description:deeply resonant and formidable     generate tts：\n",
            "frame1frame2frame3RAGDatabaseFigure 1: Overview of our AesopAgent. This system would convert the user story proposal into a\n",
            "video assembled with images, audio, narration, and special effects. The video generation workflow\n",
            "suggested by AesopAgent, utilizing agent-based approaches and RAG techniques, encompasses script\n",
            "generation, image generation, and video assembly.\n",
            "production requires the cooperation of various modules, so gaps remain in image expressiveness,\n",
            "narrative quality, and user engagement. To bridge these gaps, we propose AesopAgent, an agent-\n",
            "driven evolutionary system on story-to-video production. With agent technology, we could effectively\n",
            "harness advanced visual and innovative narrative generation technologies to tackle the complex video\n",
            "generation task, thereby creating videos with compelling narratives and attractive visual effects. Our\n",
            "system could understand user intentions and identify suitable AI utilities to fulfill user requirements.\n",
            "This feature enables users to effortlessly employ AI capabilities.\n",
            "AesopAgent utilizes agent-based approaches and RAG techniques, coupled with the incorporation\n",
            "of expert insights, to facilitate an iterative evolutionary process that results in an efficient workflow.\n",
            "This workflow creates high-quality videos from user story proposals automatically. As illustrated in\n",
            "Figure 1, upon receiving a user’s “Dragon Story” proposal, AesopAgent employs the well-designed\n",
            "workflow by agents implemented with RAGs, including script generation, image generation, and\n",
            "video assembly, ultimately generates a high-quality dragon story video.\n",
            "In this paper, our system features a comprehensive process orchestration, including a Horizontal Layer\n",
            "and a Utility Layer. The Horizontal Layer systematically executes high-level strategic management\n",
            "and optimization in the video generation workflow continuously and iteratively. The Utility Layer\n",
            "provides a suite of fully functional utilities for each task of the workflow, ensuring the effective\n",
            "execution of the video generation process. In the Horizontal Layer, we introduce two types of\n",
            "Retrieval-Augmented Generation (RAG) techniques[ 11]: Knowledge-RAG and Experience-RAG\n",
            "(denoted as K-RAG and E-RAG). E-RAG collects expert experience, and K-RAG leverages existing\n",
            "professional knowledge. Our system constructs and updates the database of E-RAG and K-RAG\n",
            "with the guidance of experts, leading to our evolution ability. The Utility Layer assembles multiple\n",
            "utilities based on each unit’s fundamental capabilities, optimizing their usage based on feedback to\n",
            "ensure stable, rational image composition in terms of character portrayal and style.\n",
            "The effectiveness of our system is evaluated in three key points: storytelling, image expressiveness,\n",
            "and user engagement. A comparative analysis with systems like ComicAI [ 15] and Artflow [ 14]\n",
            "shows that AesopAgent achieves state-of-the-art visual storytelling ability, excelling in coherence,\n",
            "2\n",
            "\n",
            "generating the shot with actions (e.g., walking, dialogues involving multiple characters, and driving\n",
            "a car), all of which are more vivid compared with NUWA-XL [ 64]. In the story “The Cat and the\n",
            "Dog”, our method successfully captures the playfulness and friendship between the cat and dog,\n",
            "whereas AutoStory [ 59] simply places the cat and the dog in the same frame without capturing their\n",
            "interaction.\n",
            "5 Conclusion\n",
            "We propose AesopAgent, an agent system capable of converting user story proposals into videos.\n",
            "This system consists of two primary components: the Horizontal Layer and the Utility Layer. In\n",
            "the Horizontal Layer, we iteratively construct E-RAG and K-RAG based on expert experience and\n",
            "professional knowledge, leading a RAG-based evolutionary system . This approach facilitates efficient\n",
            "task workflow orchestration , prompt optimization, and utilities usage. The Utility Layer focuses\n",
            "on providing utilities to ensure consistent image generation through image composition rationality,\n",
            "consistency across multiple characters, and image style consistency. Furthermore, AesopAgent\n",
            "assembles images, audio, and narration into videos enhanced with special effects. Experiments\n",
            "demonstrate that it is feasible to use agent techniques to optimize workflow design and utilities usage\n",
            "for complex tasks like video generation. The generated scripts and images of our AesopAgent obtain\n",
            "higher scores than other similar systems, such as ComicAI [ 15] and Artflow [ 14]. And its overall\n",
            "narrative capability surpasses the previous research, including NUWA-XL [ 64] and AutoStory [ 59].\n",
            "Our system can be further integrated with additional downstream AI utilitise (e.g., Gen-2 [ 9] and\n",
            "Sora [ 40]) to meet the requirements of individual users. Our future work aims to meet a broader\n",
            "range of user requirements, such as scripts of various themes and genres, a greater diversity of styles,\n",
            "and user-defined characters. This endeavor could even extend to assist the movie generation. At the\n",
            "same time, we will develop a more intelligent agent system, which can better understand the user’s\n",
            "intention and complete the user’s complex tasks quickly and conveniently.\n",
            "References\n",
            "[1]Andrea Agostinelli, Timo I. Denk, Zalán Borsos, Jesse H. Engel, Mauro Verzetti, Antoine\n",
            "Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matthew Sharifi,\n",
            "Neil Zeghidour, and Christian Havnø Frank. MusicLM: Generating music from text. CoRR ,\n",
            "abs/2301.11325, 2023.\n",
            "[2]James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang,\n",
            "Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions.\n",
            "[3]Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,\n",
            "Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel\n",
            "Herbert-V oss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler,\n",
            "Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott\n",
            "Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya\n",
            "Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle,\n",
            "M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information\n",
            "Processing Systems , volume 33, pages 1877–1901. Curran Associates, Inc., 2020.\n",
            "[4]Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, and Nanyun Peng. Character-centric\n",
            "story visualization via visual planning and token alignment. In Yoav Goldberg, Zornitsa\n",
            "Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods\n",
            "in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December\n",
            "7-11, 2022 , pages 8259–8272. Association for Computational Linguistics, 2022.\n",
            "[5]Xin Cheng, Di Luo, Xiuying Chen, Lemao Liu, Dongyan Zhao, and Rui Yan. Lift yourself up:\n",
            "Retrieval-augmented text generation with self memory, 2023.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain import hub\n",
        "\n",
        "prompt = hub.pull(\"hwchase17/xml-agent-convo\")\n",
        "prompt"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "XppEE1iFrngN",
        "outputId": "839cc9db-e4a1-4899-92d1-175baf1f436f"
      },
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "ChatPromptTemplate(input_variables=['agent_scratchpad', 'input', 'tools'], partial_variables={'chat_history': ''}, metadata={'lc_hub_owner': 'hwchase17', 'lc_hub_repo': 'xml-agent-convo', 'lc_hub_commit_hash': '00f6b7470fa25a24eef6e4e3c1e44ba07189f3e91c4d987223ad232490673be8'}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['agent_scratchpad', 'chat_history', 'input', 'tools'], template=\"You are a helpful assistant. Help the user answer any questions.\\n\\nYou have access to the following tools:\\n\\n{tools}\\n\\nIn order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>\\nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\\n\\n<tool>search</tool><tool_input>weather in SF</tool_input>\\n<observation>64 degrees</observation>\\n\\nWhen you are done, respond with a final answer between <final_answer></final_answer>. For example:\\n\\n<final_answer>The weather in SF is 64 degrees</final_answer>\\n\\nBegin!\\n\\nPrevious Conversation:\\n{chat_history}\\n\\nQuestion: {input}\\n{agent_scratchpad}\"))])"
            ]
          },
          "metadata": {},
          "execution_count": 14
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain_anthropic import ChatAnthropic\n",
        "\n",
        "'''\n",
        "Claude 3\n",
        "- Opus: most powerful model, delivering state-of-the-art performance on highly complex tasks and demonstrating fluency and human-like understanding\n",
        "- Sonnet: most balanced model between intelligence and speed, a great choice for enterprise workloads and scaled AI deployments\n",
        "- Haiku: fastest and most compact model, designed for near-instant responsiveness and seamless AI experiences that mimic human interactions\n",
        "\n",
        "'''\n",
        "llm = ChatAnthropic(\n",
        "    model_name=\"claude-3-sonnet-20240229\",\n",
        "    temperature=0.0\n",
        ")"
      ],
      "metadata": {
        "id": "Vo1YYnfsvFru"
      },
      "execution_count": 15,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "llm.invoke(\"Who developed Aesop Agent?\")"
      ],
      "metadata": {
        "id": "sJhPAEUe4jwY"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "def convert_intermediate_steps(intermediate_steps):\n",
        "    log = \"\"\n",
        "    for action, observation in intermediate_steps:\n",
        "        log += (\n",
        "            f\"<tool>{action.tool}</tool><tool_input>{action.tool_input}\"\n",
        "            f\"</tool_input><observation>{observation}</observation>\"\n",
        "        )\n",
        "    return log"
      ],
      "metadata": {
        "id": "AmdE7BCzvfiW"
      },
      "execution_count": 16,
      "outputs": []
    },
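    {
      "cell_type": "markdown",
      "source": [
        "For illustration, given one hypothetical intermediate step, `convert_intermediate_steps` renders the agent scratchpad in the same XML vocabulary the prompt teaches the model:\n",
        "\n",
        "```python\n",
        "from langchain_core.agents import AgentAction\n",
        "\n",
        "steps = [(\n",
        "    AgentAction(tool=\"aesop_agent_search\",\n",
        "                tool_input=\"What is AesopAgent?\", log=\"\"),\n",
        "    \"AesopAgent is an agent-driven story-to-video system ...\",\n",
        ")]\n",
        "print(convert_intermediate_steps(steps))\n",
        "# <tool>aesop_agent_search</tool><tool_input>What is AesopAgent?</tool_input><observation>...</observation>\n",
        "```"
      ],
      "metadata": {}
    },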
    {
      "cell_type": "code",
      "source": [
        "def convert_tools(tools):\n",
        "    return \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])"
      ],
      "metadata": {
        "id": "fAG_-67uvhsv"
      },
      "execution_count": 17,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain.agents.output_parsers import XMLAgentOutputParser\n",
        "\n",
        "agent = (\n",
        "    {\n",
        "        \"input\": lambda x: x[\"input\"],\n",
        "        \"chat_history\": lambda x: x[\"chat_history\"],\n",
        "        \"agent_scratchpad\": lambda x: convert_intermediate_steps(\n",
        "            x[\"intermediate_steps\"]\n",
        "        ),\n",
        "    }\n",
        "    | prompt.partial(tools=convert_tools(tools))\n",
        "    | llm.bind(stop=[\"</tool_input>\", \"</final_answer>\"])\n",
        "    | XMLAgentOutputParser()\n",
        ")"
      ],
      "metadata": {
        "id": "Or5T5hwsvko0"
      },
      "execution_count": 18,
      "outputs": []
    },
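    {
      "cell_type": "markdown",
      "source": [
        "The `stop` sequences make Claude stop generating as soon as it has emitted a complete `<tool_input>` or `<final_answer>` block. The `XMLAgentOutputParser` therefore receives text shaped like one of the following (illustrative; the closing tag is consumed as the stop sequence, which the parser tolerates):\n",
        "\n",
        "```\n",
        "<tool>aesop_agent_search</tool><tool_input>Who developed AesopAgent\n",
        "```\n",
        "\n",
        "or\n",
        "\n",
        "```\n",
        "<final_answer>AesopAgent was developed by ...\n",
        "```"
      ],
      "metadata": {}
    },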
    {
      "cell_type": "code",
      "source": [
        "from langchain.chains.conversation.memory import ConversationBufferWindowMemory\n",
        "\n",
        "conversational_memory = ConversationBufferWindowMemory(\n",
        "    memory_key='chat_history',\n",
        "    k=4,\n",
        "    return_messages=True\n",
        ")"
      ],
      "metadata": {
        "id": "X7afPUVywRwH"
      },
      "execution_count": 26,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain_core.messages.human import HumanMessage\n",
        "\n",
        "def memory_to_chat_history(memory: ConversationBufferWindowMemory):\n",
        "    messages = memory.chat_memory.messages\n",
        "    memory_list = [\n",
        "        f\"Human: {mem.content}\" if isinstance(mem, HumanMessage)\n",
        "        else f\"AI: {mem.content}\" for mem in messages\n",
        "    ]\n",
        "    memory_str = \"\\n\".join(memory_list)\n",
        "    return memory_str"
      ],
      "metadata": {
        "id": "-0CtBGCOwXpj"
      },
      "execution_count": 20,
      "outputs": []
    },
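`memory_to_chat_history` flattens the buffered messages into plain `Human:`/`AI:` lines for the prompt's `{chat_history}` slot. A self-contained sketch with minimal stand-in message classes (illustrative; the notebook uses LangChain's actual message types):

```python
class HumanMessage:
    """Stand-in for langchain_core.messages.human.HumanMessage."""
    def __init__(self, content):
        self.content = content

class AIMessage:
    """Stand-in for the AI message type."""
    def __init__(self, content):
        self.content = content

def to_chat_history(messages):
    # Prefix each turn by speaker, one line per message.
    return "\n".join(
        f"Human: {m.content}" if isinstance(m, HumanMessage)
        else f"AI: {m.content}"
        for m in messages
    )

history = to_chat_history([HumanMessage("Hi"), AIMessage("Hello!")])
print(history)
# Human: Hi
# AI: Hello!
```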
    {
      "cell_type": "code",
      "source": [
        "from langchain.agents import AgentExecutor\n",
        "\n",
        "agent_executor = AgentExecutor(\n",
        "    agent=agent, tools=tools, return_intermediate_steps=True, verbose=True\n",
        ")\n"
      ],
      "metadata": {
        "id": "haXVLO6nvnKu"
      },
      "execution_count": 27,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "def chat(text: str):\n",
        "    output = agent_executor.invoke({\n",
        "        \"input\": text,\n",
        "        \"chat_history\": memory_to_chat_history(conversational_memory)\n",
        "    })\n",
        "    conversational_memory.chat_memory.add_user_message(text)\n",
        "    conversational_memory.chat_memory.add_ai_message(output[\"output\"])\n",
        "\n",
        "    return output"
      ],
      "metadata": {
        "id": "CXrgBvVCwfhO"
      },
      "execution_count": 22,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "output = chat(\"Who developed Aesop Agent?\")\n",
        "answer = output[\"output\"]"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "kNSAgrttvqtw",
        "outputId": "aba30eb4-ed1f-47b6-939c-6ff06802489d"
      },
      "execution_count": 28,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\n",
            "\n",
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
            "\u001b[32;1m\u001b[1;3mTo find information about who developed Aesop Agent, I will use the provided tool:\n",
            "\n",
            "<tool>aesop_agent_search</tool>\n",
            "<tool_input>Who developed Aesop Agent?\u001b[0m\u001b[36;1m\u001b[1;3mAesopAgent: Agent-driven Evolutionary System on\n",
            "Story-to-Video Production\n",
            "Jiuniu Wang∗Zehua Du∗Yuyuan Zhao∗Bo Yuan Kexiang Wang Jian Liang\n",
            "Yaxi Zhao Yihen Lu Gengliang Li Junlong Gao Xin Tu Zhenyu Guo†\n",
            "DAMO Academy, Alibaba Group\n",
            "Abstract\n",
            "The Agent and AIGC (Artificial Intelligence Generated Content) technologies have\n",
            "recently made significant progress. We propose Aesop Agent, an Agent-driven\n",
            "Evolutionary System on Story-to-Video Production. AesopAgent is a practical\n",
            "application of agent technology for multimodal content generation. The system\n",
            "integrates multiple generative capabilities within a unified framework, so that\n",
            "individual users can leverage these modules easily. This innovative system would\n",
            "convert user story proposals into scripts, images, and audio, and then integrate\n",
            "these multimodal contents into videos. Additionally, the animating units (e.g.,\n",
            "Gen-2 and Sora) could make the videos more infectious. The AesopAgent system\n",
            "could orchestrate task workflow for video generation, ensuring that the generated\n",
            "video is both rich in content and coherent. This system mainly contains two\n",
            "layers, i.e., the Horizontal Layer and the Utility Layer. In the Horizontal Layer, we\n",
            "introduce a novel RAG-based evolutionary system that optimizes the whole video\n",
            "generation workflow and the steps within the workflow. It continuously evolves and\n",
            "iteratively optimizes workflow by accumulating expert experience and professional\n",
            "knowledge, including optimizing the LLM prompts and utilities usage. The Utility\n",
            "Layer provides multiple utilities, leading to consistent image generation that is\n",
            "visually coherent in terms of composition, characters, and style. Meanwhile, it\n",
            "provides audio and special effects, integrating them into expressive and logically\n",
            "arranged videos. Overall, our AesopAgent achieves state-of-the-art performance\n",
            "compared with many previous works in visual storytelling. Our AesopAgent is\n",
            "designed for convenient service for individual users, which is available on the\n",
            "following page: https://aesopai.github.io/ .\n",
            "1 Introduction\n",
            "The recent development of base Large Language Models (LLMs) [ 3,38,61,63], and Multimodal\n",
            "Models [ 69,33,39,30] has catalyzed significant changes in Artificial Intelligence Generated Content\n",
            "(AIGC). This advancement has led to the effective integration of generative AI technology with\n",
            "traditional Professional Generated Content (PGC) and User Generated Content (UGC), addressing a\n",
            "wide range of user requirements. Notably, technologies such as Stable Diffusion [ 46], DALL-E 3 [ 2],\n",
            "and ControlNet [ 70] have excelled in generating and editing high-quality images, attracting significant\n",
            "interests in both academy and industry. Video generation, in contrast to simple static image generation,\n",
            "necessitates managing complex semantic and temporal information, presenting great challenges.\n",
            "Recent initiatives, such as Make-A-Video [ 50], Imagen Video [ 17], PikaLab [ 27], and Gen-2 [ ?],\n",
            "have demonstrated the ability to produce short videos from textual descriptions. And Sora [ 40] is\n",
            "capable of generating high-definition videos up to one minute in length. However, story-to-video\n",
            "∗Equal Contribution.\n",
            "†Corresponding author. arXiv:2403.07952v1 [cs.CV] 12 Mar 2024\n",
            "\n",
            "representation, and logic through RAG techniques. Manual evaluations against ComicAI [ 15] in\n",
            "image element restoration, rationality, and composition indicate AesopAgent’s superiority. When\n",
            "compared with parallel video generation research such as NUWA-XL [ 64] and AutoStory [ 59], our\n",
            "system demonstrates leading performance in image complexity and narrative depth. Furthermore,\n",
            "AesopAgent’s adaptability is evident in its ability to integrate external software like Runway, catering\n",
            "to specific user needs and underscoring its extensive scalability.\n",
            "In summary, the contributions of our AesopAgent are threefold:\n",
            "1) We have leveraged agent techniques to automate task workflow orchestration for video generation,\n",
            "enabling the effective process of converting story proposals into final videos. The workflow is\n",
            "designed by the agent and includes three main steps (script generation, image generation, and video\n",
            "assembly).\n",
            "2) We proposed a RAG-based evolutionary system , collecting expert experience (E-RAG) and\n",
            "professional knowledge (K-RAG) to improve the system performance. The prompts of LLM would\n",
            "be optimized iteratively according to the experience and knowledge in the RAG database, generating\n",
            "narrative coherence text (e.g., script, image descriptions). The RAG techniques also guide the utilities\n",
            "usage, consequently enhancing the visual quality of the generated videos.\n",
            "3) We achieve consistent image generation since our AesopAgent skillfully applies specialized\n",
            "utilities, thus the generated images maintain a cohesive and professional visual expression throughout\n",
            "the shots. The main components of consistent image generation are the image composition module,\n",
            "the multiple characters consistency module, and the image style consistency module. There are many\n",
            "utilities in these modules, and some of them have been optimized by our AesopAgent.\n",
            "2 Related Works\n",
            "2.1 AI Agent\n",
            "Recent advancements in AI agent technology stand at the vanguard of transformative developments\n",
            "within the artificial intelligence domain. A plethora of commendable works involving AI agents\n",
            "have emerged, showcasing groundbreaking progress and innovation. Notable works include:\n",
            "OpenAGI [ 12], AutoGPT [ 62], Voyager [ 57], RET-LLM [ 37], ChatDev [ 42], AutoGen [ 60], etc. These\n",
            "agents exhibit diverse performances in aspects like memory function configuration, task planning\n",
            "capabilities, and modifications to the base Large Language Model (LLM).\n",
            "As for memory function configuration, recent multi-agent systems (e.g., AutoGen [ 60], MetaGPT [ 19],\n",
            "ChatDev [ 42], AutoGPT [ 62]) adopted a shared memory pool for top-level information synchro-\n",
            "nization, with different technical implementation solutions. Specifically, MetaGPT [ 19] use sub/pub\n",
            "mechanisms, AutoGen [ 60] use dialogue delivery. The single-agent systems (e.g., Voyager [ 57],\n",
            "GITM [ 74], ChatDB [ 21]) have consensus capabilities (reading, writing, and thinking), combining\n",
            "long-term and short-term memory. In this context, long-term memory structures mostly use historical\n",
            "decision sequences, memory streams, or vector databases, while short-term memory is primarily\n",
            "based on contextual information.\n",
            "With regard to task planning and utility usage capabilities. The technical solutions for planning\n",
            "capabilities range from prompts and chain mechanisms to standard operating procedures (SOP).\n",
            "Some models, such as Generative Agents and MetaGPT [ 19], enhance execution accuracy and\n",
            "controllability through refined planning and structured output, while models like CAMEL [ 29] and\n",
            "ViperGPT [ 51] focus on using historical information and context to guide the behavior of the Agents.\n",
            "AutoGPT [ 62] and AutoGen [ 60] models emphasize dialogue and cooperation among multiple Agents\n",
            "to accomplish more complex tasks.\n",
            "When it comes to modifications of the base model (LLM), The majority of agents (e.g., CAMEL [ 29],\n",
            "ViperGPT [ 51], AutoGPT [ 62], ChatDev [ 42], MetaGPT [ 19], AutoGen [ 60]) did not modify the\n",
            "\n",
            "[Figure 1 diagram: a user story proposal flowing through script generation, image generation, and video assembly; graphic text omitted by extraction]\n",
            "Figure 1: Overview of our AesopAgent. This system would convert the user story proposal into a\n",
            "video assembled with images, audio, narration, and special effects. The video generation workflow\n",
            "suggested by AesopAgent, utilizing agent-based approaches and RAG techniques, encompasses script\n",
            "generation, image generation, and video assembly.\n",
            "production requires the cooperation of various modules, so gaps remain in image expressiveness,\n",
            "narrative quality, and user engagement. To bridge these gaps, we propose AesopAgent, an agent-\n",
            "driven evolutionary system on story-to-video production. With agent technology, we could effectively\n",
            "harness advanced visual and innovative narrative generation technologies to tackle the complex video\n",
            "generation task, thereby creating videos with compelling narratives and attractive visual effects. Our\n",
            "system could understand user intentions and identify suitable AI utilities to fulfill user requirements.\n",
            "This feature enables users to effortlessly employ AI capabilities.\n",
            "AesopAgent utilizes agent-based approaches and RAG techniques, coupled with the incorporation\n",
            "of expert insights, to facilitate an iterative evolutionary process that results in an efficient workflow.\n",
            "This workflow creates high-quality videos from user story proposals automatically. As illustrated in\n",
            "Figure 1, upon receiving a user’s “Dragon Story” proposal, AesopAgent employs the well-designed\n",
            "workflow by agents implemented with RAGs, including script generation, image generation, and\n",
            "video assembly, ultimately generates a high-quality dragon story video.\n",
            "In this paper, our system features a comprehensive process orchestration, including a Horizontal Layer\n",
            "and a Utility Layer. The Horizontal Layer systematically executes high-level strategic management\n",
            "and optimization in the video generation workflow continuously and iteratively. The Utility Layer\n",
            "provides a suite of fully functional utilities for each task of the workflow, ensuring the effective\n",
            "execution of the video generation process. In the Horizontal Layer, we introduce two types of\n",
            "Retrieval-Augmented Generation (RAG) techniques[ 11]: Knowledge-RAG and Experience-RAG\n",
            "(denoted as K-RAG and E-RAG). E-RAG collects expert experience, and K-RAG leverages existing\n",
            "professional knowledge. Our system constructs and updates the database of E-RAG and K-RAG\n",
            "with the guidance of experts, leading to our evolution ability. The Utility Layer assembles multiple\n",
            "utilities based on each unit’s fundamental capabilities, optimizing their usage based on feedback to\n",
            "ensure stable, rational image composition in terms of character portrayal and style.\n",
            "The effectiveness of our system is evaluated in three key points: storytelling, image expressiveness,\n",
            "and user engagement. A comparative analysis with systems like ComicAI [ 15] and Artflow [ 14]\n",
            "shows that AesopAgent achieves state-of-the-art visual storytelling ability, excelling in coherence,\n",
            "2\n",
            "\n",
            "generating the shot with actions (e.g., walking, dialogues involving multiple characters, and driving\n",
            "a car), all of which are more vivid compared with NUWA-XL [ 64]. In the story “The Cat and the\n",
            "Dog”, our method successfully captures the playfulness and friendship between the cat and dog,\n",
            "whereas AutoStory [ 59] simply places the cat and the dog in the same frame without capturing their\n",
            "interaction.\n",
            "5 Conclusion\n",
            "We propose AesopAgent, an agent system capable of converting user story proposals into videos.\n",
            "This system consists of two primary components: the Horizontal Layer and the Utility Layer. In\n",
            "the Horizontal Layer, we iteratively construct E-RAG and K-RAG based on expert experience and\n",
            "professional knowledge, leading a RAG-based evolutionary system . This approach facilitates efficient\n",
            "task workflow orchestration , prompt optimization, and utilities usage. The Utility Layer focuses\n",
            "on providing utilities to ensure consistent image generation through image composition rationality,\n",
            "consistency across multiple characters, and image style consistency. Furthermore, AesopAgent\n",
            "assembles images, audio, and narration into videos enhanced with special effects. Experiments\n",
            "demonstrate that it is feasible to use agent techniques to optimize workflow design and utilities usage\n",
            "for complex tasks like video generation. The generated scripts and images of our AesopAgent obtain\n",
            "higher scores than other similar systems, such as ComicAI [ 15] and Artflow [ 14]. And its overall\n",
            "narrative capability surpasses the previous research, including NUWA-XL [ 64] and AutoStory [ 59].\n",
            "Our system can be further integrated with additional downstream AI utilities (e.g., Gen-2 [ 9] and\n",
            "Sora [ 40]) to meet the requirements of individual users. Our future work aims to meet a broader\n",
            "range of user requirements, such as scripts of various themes and genres, a greater diversity of styles,\n",
            "and user-defined characters. This endeavor could even extend to assist the movie generation. At the\n",
            "same time, we will develop a more intelligent agent system, which can better understand the user’s\n",
            "intention and complete the user’s complex tasks quickly and conveniently.\n",
            "References\n",
            "[1]Andrea Agostinelli, Timo I. Denk, Zalán Borsos, Jesse H. Engel, Mauro Verzetti, Antoine\n",
            "Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matthew Sharifi,\n",
            "Neil Zeghidour, and Christian Havnø Frank. MusicLM: Generating music from text. CoRR ,\n",
            "abs/2301.11325, 2023.\n",
            "[2]James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang,\n",
            "Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions.\n",
            "[3]Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,\n",
            "Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel\n",
            "Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler,\n",
            "Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott\n",
            "Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya\n",
            "Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle,\n",
            "M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information\n",
            "Processing Systems , volume 33, pages 1877–1901. Curran Associates, Inc., 2020.\n",
            "[4]Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, and Nanyun Peng. Character-centric\n",
            "story visualization via visual planning and token alignment. In Yoav Goldberg, Zornitsa\n",
            "Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods\n",
            "in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December\n",
            "7-11, 2022 , pages 8259–8272. Association for Computational Linguistics, 2022.\n",
            "[5]Xin Cheng, Di Luo, Xiuying Chen, Lemao Liu, Dongyan Zhao, and Rui Yan. Lift yourself up:\n",
            "Retrieval-augmented text generation with self memory, 2023.\u001b[0m\u001b[32;1m\u001b[1;3mBased on the information provided, it seems that Aesop Agent was developed by researchers at DAMO Academy, Alibaba Group. The key points are:\n",
            "\n",
            "<final_answer>\n",
            "- Aesop Agent is an agent-driven evolutionary system for story-to-video production, proposed by researchers at DAMO Academy, Alibaba Group.\n",
            "\n",
            "- It integrates multiple generative capabilities like script generation, image generation, and video assembly into a unified framework to convert user story proposals into videos.\n",
            "\n",
            "- The system utilizes agent-based approaches, retrieval-augmented generation (RAG) techniques, and incorporates expert knowledge and experience to iteratively optimize the video generation workflow.\n",
            "\n",
            "- Aesop Agent consists of a Horizontal Layer that manages the overall workflow, and a Utility Layer that provides specialized utilities for tasks like image composition, character consistency, style consistency etc.\n",
            "\n",
            "- It was developed by researchers including Jiuniu Wang, Zehua Du, Yuyuan Zhao, Bo Yuan, Kexiang Wang, Jian Liang, Yaxi Zhao, Yihen Lu, Gengliang Li, Junlong Gao, Xin Tu, and Zhenyu Guo at DAMO Academy, Alibaba Group.\n",
            "\u001b[0m\n",
            "\n",
            "\u001b[1m> Finished chain.\u001b[0m\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "print(answer)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "HfiFDBN4wRBq",
        "outputId": "a5a292ac-d1f9-4723-a223-01a94589b2d6"
      },
      "execution_count": 29,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\n",
            "- Aesop Agent is an agent-driven evolutionary system for story-to-video production, proposed by researchers at DAMO Academy, Alibaba Group.\n",
            "\n",
            "- It integrates multiple generative capabilities like script generation, image generation, and video assembly into a unified framework to convert user story proposals into videos.\n",
            "\n",
            "- The system utilizes agent-based approaches, retrieval-augmented generation (RAG) techniques, and incorporates expert knowledge and experience to iteratively optimize the video generation workflow.\n",
            "\n",
            "- Aesop Agent consists of a Horizontal Layer that manages the overall workflow, and a Utility Layer that provides specialized utilities for tasks like image composition, character consistency, style consistency etc.\n",
            "\n",
            "- It was developed by researchers including Jiuniu Wang, Zehua Du, Yuyuan Zhao, Bo Yuan, Kexiang Wang, Jian Liang, Yaxi Zhao, Yihen Lu, Gengliang Li, Junlong Gao, Xin Tu, and Zhenyu Guo at DAMO Academy, Alibaba Group.\n",
            "\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "print(output['intermediate_steps'])"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ll1fzWw41C77",
        "outputId": "6cdcf2ab-ce0a-450e-e33a-7e804500831a"
      },
      "execution_count": 30,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "[(AgentAction(tool='aesop_agent_search', tool_input='Who developed Aesop Agent?', log='To find information about who developed Aesop Agent, I will use the provided tool:\\n\\n<tool>aesop_agent_search</tool>\\n<tool_input>Who developed Aesop Agent?'), 'AesopAgent: Agent-driven Evolutionary System on\\nStory-to-Video Production\\nJiuniu Wang∗Zehua Du∗Yuyuan Zhao∗Bo Yuan Kexiang Wang Jian Liang\\nYaxi Zhao Yihen Lu Gengliang Li Junlong Gao Xin Tu Zhenyu Guo†\\nDAMO Academy, Alibaba Group\\nAbstract\\nThe Agent and AIGC (Artificial Intelligence Generated Content) technologies have\\nrecently made significant progress. We propose Aesop Agent, an Agent-driven\\nEvolutionary System on Story-to-Video Production. AesopAgent is a practical\\napplication of agent technology for multimodal content generation. The system\\nintegrates multiple generative capabilities within a unified framework, so that\\nindividual users can leverage these modules easily. This innovative system would\\nconvert user story proposals into scripts, images, and audio, and then integrate\\nthese multimodal contents into videos. Additionally, the animating units (e.g.,\\nGen-2 and Sora) could make the videos more infectious. The AesopAgent system\\ncould orchestrate task workflow for video generation, ensuring that the generated\\nvideo is both rich in content and coherent. This system mainly contains two\\nlayers, i.e., the Horizontal Layer and the Utility Layer. In the Horizontal Layer, we\\nintroduce a novel RAG-based evolutionary system that optimizes the whole video\\ngeneration workflow and the steps within the workflow. It continuously evolves and\\niteratively optimizes workflow by accumulating expert experience and professional\\nknowledge, including optimizing the LLM prompts and utilities usage. The Utility\\nLayer provides multiple utilities, leading to consistent image generation that is\\nvisually coherent in terms of composition, characters, and style. 
Meanwhile, it\\nprovides audio and special effects, integrating them into expressive and logically\\narranged videos. Overall, our AesopAgent achieves state-of-the-art performance\\ncompared with many previous works in visual storytelling. Our AesopAgent is\\ndesigned for convenient service for individual users, which is available on the\\nfollowing page: https://aesopai.github.io/ .\\n1 Introduction\\nThe recent development of base Large Language Models (LLMs) [ 3,38,61,63], and Multimodal\\nModels [ 69,33,39,30] has catalyzed significant changes in Artificial Intelligence Generated Content\\n(AIGC). This advancement has led to the effective integration of generative AI technology with\\ntraditional Professional Generated Content (PGC) and User Generated Content (UGC), addressing a\\nwide range of user requirements. Notably, technologies such as Stable Diffusion [ 46], DALL-E 3 [ 2],\\nand ControlNet [ 70] have excelled in generating and editing high-quality images, attracting significant\\ninterests in both academy and industry. Video generation, in contrast to simple static image generation,\\nnecessitates managing complex semantic and temporal information, presenting great challenges.\\nRecent initiatives, such as Make-A-Video [ 50], Imagen Video [ 17], PikaLab [ 27], and Gen-2 [ ?],\\nhave demonstrated the ability to produce short videos from textual descriptions. And Sora [ 40] is\\ncapable of generating high-definition videos up to one minute in length. However, story-to-video\\n∗Equal Contribution.\\n†Corresponding author.arXiv:2403.07952v1  [cs.CV]  12 Mar 2024\\n\\nrepresentation, and logic through RAG techniques. Manual evaluations against ComicAI [ 15] in\\nimage element restoration, rationality, and composition indicate AesopAgent’s superiority. When\\ncompared with parallel video generation research such as NUWA-XL [ 64] and AutoStory [ 59], our\\nsystem demonstrates leading performance in image complexity and narrative depth. 
Furthermore,\\nAesopAgent’s adaptability is evident in its ability to integrate external software like Runway, catering\\nto specific user needs and underscoring its extensive scalability.\\nIn summary, the contributions of our AesopAgent are threefold:\\n1) We have leveraged agent techniques to automate task workflow orchestration for video generation,\\nenabling the effective process of converting story proposals into final videos. The workflow is\\ndesigned by the agent and includes three main steps (script generation, image generation, and video\\nassembly).\\n2) We proposed a RAG-based evolutionary system , collecting expert experience (E-RAG) and\\nprofessional knowledge (K-RAG) to improve the system performance. The prompts of LLM would\\nbe optimized iteratively according to the experience and knowledge in the RAG database, generating\\nnarrative coherence text (e.g., script, image descriptions). The RAG techniques also guide the utilities\\nusage, consequently enhancing the visual quality of the generated videos.\\n3) We achieve consistent image generation since our AesopAgent skillfully applies specialized\\nutilities, thus the generated images maintain a cohesive and professional visual expression throughout\\nthe shots. The main components of consistent image generation are the image composition module,\\nthe multiple characters consistency module, and the image style consistency module. There are many\\nutilities in these modules, and some of them have been optimized by our AesopAgent.\\n2 Related Works\\n2.1 AI Agent\\nRecent advancements in AI agent technology stand at the vanguard of transformative developments\\nwithin the artificial intelligence domain. A plethora of commendable works involving AI agents\\nhave emerged, showcasing groundbreaking progress and innovation. Notable works include: Ope-\\nnAGI [ 12], AutoGPT [ 62], V oyager [ 57], RET-LLM [ 37], ChatDev [ 42], AutoGen [ 60], etc. 
These\\nagents exhibit diverse performances in aspects like memory function configuration, task planning\\ncapabilities, and modifications to the base Large Language Model (LLM).\\nAs for memory function configuration, recent multi-agent systems (e.g., AutoGen [ 60], MetaGPT [ 19],\\nChatDev [ 42], AutoGPT [ 62]) adopted a shared memory pool for top-level information synchro-\\nnization, with different technical implementation solutions. Specifically, MetaGPT [ 19] use sub/pub\\nmechanisms, AutoGen [ 60] use dialogue delivery. The single-agent systems (e.g., V oyager [ 57],\\nGITM [ 74], ChatDB [ 21]) have consensus capabilities (reading, writing, and thinking), combining\\nlong-term and short-term memory. In this context, long-term memory structures mostly use historical\\ndecision sequences, memory streams, or vector databases, while short-term memory is primarily\\nbased on contextual information.\\nWith regard to task planning and utility usage capabilities. The technical solutions for planning\\ncapabilities range from prompts and chain mechanisms to standard operating procedures (SOP).\\nSome models, such as Generative Agents and MetaGPT [ 19], enhance execution accuracy and\\ncontrollability through refined planning and structured output, while models like CAMEL [ 29] and\\nViperGPT [ 51] focus on using historical information and context to guide the behavior of the Agents.\\nAutoGPT [ 62] and AutoGen [ 60] models emphasize dialogue and cooperation among multiple Agents\\nto accomplish more complex tasks.\\nWhen it comes to modifications of the base model (LLM), The majority of agents (e.g., CAMEL [ 29],\\nViperGPT [ 51], AutoGPT [ 62], ChatDev [ 42], MetaGPT [ 19], AutoGen [ 60]) did not modify the\\n\\nAgent\\nUserAesopAgentI want a video for this story: In an ancient forest, a tiny dragon discovered its fiery breath, kindling a spark of courage that set ablaze its destiny, becoming the legendary guardian of the realm.\\nscript 
generationimagegenerationvideoassemblyvideo for this story: \\nttsgenerationvideoclippingcharacterdesignscriptgeneration\\naction1-shot1:Image description:In the heart of an ancient forest …narration: In a dimly-lit corner of……imagecreatingimageediting\\nvoice description:deeply resonant and formidable     generate tts：\\nframe1frame2frame3RAGDatabaseFigure 1: Overview of our AesopAgent. This system would convert the user story proposal into a\\nvideo assembled with images, audio, narration, and special effects. The video generation workflow\\nsuggested by AesopAgent, utilizing agent-based approaches and RAG techniques, encompasses script\\ngeneration, image generation, and video assembly.\\nproduction requires the cooperation of various modules, so gaps remain in image expressiveness,\\nnarrative quality, and user engagement. To bridge these gaps, we propose AesopAgent, an agent-\\ndriven evolutionary system on story-to-video production. With agent technology, we could effectively\\nharness advanced visual and innovative narrative generation technologies to tackle the complex video\\ngeneration task, thereby creating videos with compelling narratives and attractive visual effects. Our\\nsystem could understand user intentions and identify suitable AI utilities to fulfill user requirements.\\nThis feature enables users to effortlessly employ AI capabilities.\\nAesopAgent utilizes agent-based approaches and RAG techniques, coupled with the incorporation\\nof expert insights, to facilitate an iterative evolutionary process that results in an efficient workflow.\\nThis workflow creates high-quality videos from user story proposals automatically. 
As illustrated in Figure 1, upon receiving a user’s “Dragon Story” proposal, AesopAgent employs the well-designed workflow by agents implemented with RAGs, including script generation, image generation, and video assembly, and ultimately generates a high-quality dragon story video.\\n\\nIn this paper, our system features a comprehensive process orchestration, including a Horizontal Layer and a Utility Layer. The Horizontal Layer systematically executes high-level strategic management and optimization in the video generation workflow continuously and iteratively. The Utility Layer provides a suite of fully functional utilities for each task of the workflow, ensuring the effective execution of the video generation process. In the Horizontal Layer, we introduce two types of Retrieval-Augmented Generation (RAG) techniques [11]: Knowledge-RAG and Experience-RAG (denoted as K-RAG and E-RAG). E-RAG collects expert experience, and K-RAG leverages existing professional knowledge. Our system constructs and updates the database of E-RAG and K-RAG with the guidance of experts, leading to our evolution ability. The Utility Layer assembles multiple utilities based on each unit’s fundamental capabilities, optimizing their usage based on feedback to ensure stable, rational image composition in terms of character portrayal and style.\\n\\nThe effectiveness of our system is evaluated in three key points: storytelling, image expressiveness, and user engagement. A comparative analysis with systems like ComicAI [15] and Artflow [14] shows that AesopAgent achieves state-of-the-art visual storytelling ability, excelling in coherence,\\n\\ngenerating the shot with actions (e.g., walking, dialogues involving multiple characters, and driving a car), all of which are more vivid compared with NUWA-XL [64]. 
In the story “The Cat and the Dog”, our method successfully captures the playfulness and friendship between the cat and dog, whereas AutoStory [59] simply places the cat and the dog in the same frame without capturing their interaction.\\n\\n5 Conclusion\\n\\nWe propose AesopAgent, an agent system capable of converting user story proposals into videos. This system consists of two primary components: the Horizontal Layer and the Utility Layer. In the Horizontal Layer, we iteratively construct E-RAG and K-RAG based on expert experience and professional knowledge, leading to a RAG-based evolutionary system. This approach facilitates efficient task workflow orchestration, prompt optimization, and utilities usage. The Utility Layer focuses on providing utilities to ensure consistent image generation through image composition rationality, consistency across multiple characters, and image style consistency. Furthermore, AesopAgent assembles images, audio, and narration into videos enhanced with special effects. Experiments demonstrate that it is feasible to use agent techniques to optimize workflow design and utilities usage for complex tasks like video generation. The generated scripts and images of our AesopAgent obtain higher scores than those of other similar systems, such as ComicAI [15] and Artflow [14], and its overall narrative capability surpasses previous research, including NUWA-XL [64] and AutoStory [59]. Our system can be further integrated with additional downstream AI utilities (e.g., Gen-2 [9] and Sora [40]) to meet the requirements of individual users. Our future work aims to meet a broader range of user requirements, such as scripts of various themes and genres, a greater diversity of styles, and user-defined characters. This endeavor could even extend to assisting movie generation. 
At the same time, we will develop a more intelligent agent system, which can better understand the user’s intention and complete the user’s complex tasks quickly and conveniently.\\n\\nReferences\\n\\n[1] Andrea Agostinelli, Timo I. Denk, Zalán Borsos, Jesse H. Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matthew Sharifi, Neil Zeghidour, and Christian Havnø Frank. MusicLM: Generating music from text. CoRR, abs/2301.11325, 2023.\\n[2] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions.\\n[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020.\\n[4] Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, and Nanyun Peng. Character-centric story visualization via visual planning and token alignment. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 8259–8272. Association for Computational Linguistics, 2022.\\n[5] Xin Cheng, Di Luo, Xiuying Chen, Lemao Liu, Dongyan Zhao, and Rui Yan. Lift yourself up: Retrieval-augmented text generation with self memory, 2023.')]\n"
          ]
        }
      ]
    }
  ]
}