gradio_docs_rag_hfchattool/sources/agents-and-tool-usage.json
{"guide": {"name": "agents-and-tool-usage", "category": "chatbots", "pretty_category": "Chatbots", "guide_index": 3, "absolute_index": 27, "pretty_name": "Agents And Tool Usage", "content": "# Building a UI for an LLM Agent\n\n\n\n\nThe Gradio Chatbot can natively display intermediate thoughts and tool usage. This makes it perfect for creating UIs for LLM agents. This guide will show you how.\n\n## The metadata key\n\nIn addition to the `content` and `role` keys, the messages dictionary accepts a `metadata` key. At present, the `metadata` key accepts a dictionary with a single key called `title`. \nIf you specify a `title` for the message, it will be displayed in a collapsible box.\n\nHere is an example, were we display the agent's thought to use a weather API tool to answer the user query.\n\n```python\nwith gr.Blocks() as demo:\n chatbot = gr.Chatbot(type=\"messages\",\n value=[{\"role\": \"user\", \"content\": \"What is the weather in San Francisco?\"},\n {\"role\": \"assistant\", \"content\": \"I need to use the weather API tool\",\n \"metadata\": {\"title\": \"\ud83e\udde0 Thinking\"}}]\n )\n```\n\n![simple-metadat-chatbot](https://github.com/freddyaboulton/freddyboulton/assets/41651716/3941783f-6835-4e5e-89a6-03f850d9abde)\n\n\n## A real example using transformers.agents\n\nWe'll create a Gradio application simple agent that has access to a text-to-image tool.\n <p class='tip'>\n <span class=\"inline-flex\" style=\"align-items: baseline\">\n <svg class=\"self-center w-5 h-5 mx-1\" xmlns=\"http://www.w3.org/2000/svg\" width=\"800px\" height=\"800px\" viewBox=\"0 0 24 24\" fill=\"currentColor\">\n <path fill-rule=\"evenodd\" clip-rule=\"evenodd\" d=\"M9.25 18.7089C9.25 18.2894 9.58579 17.9494 10 17.9494H14C14.4142 17.9494 14.75 18.2894 14.75 18.7089C14.75 19.1283 14.4142 19.4684 14 19.4684H10C9.58579 19.4684 9.25 19.1283 9.25 18.7089ZM9.91667 21.2405C9.91667 20.821 10.2525 20.481 10.6667 20.481H13.3333C13.7475 20.481 14.0833 20.821 14.0833 21.2405C14.0833 21.66 13.7475 22 13.3333 22H10.6667C10.2525 22 9.91667 21.66 9.91667 21.2405Z\"/>\n <path d=\"M7.41058 13.8283L8.51463 14.8807C8.82437 15.1759 9 15.5875 9 16.0182C9 16.6653 9.518 17.1899 10.157 17.1899H13.843C14.482 17.1899 15 16.6653 15 16.0182C15 15.5875 15.1756 15.1759 15.4854 14.8807L16.5894 13.8283C18.1306 12.3481 18.9912 10.4034 18.9999 8.3817L19 8.29678C19 4.84243 15.866 2 12 2C8.13401 2 5 4.84243 5 8.29678L5.00007 8.3817C5.00875 10.4034 5.86939 12.3481 7.41058 13.8283Z\"/>\n </svg>\n <span><strong>Tip:</strong></span>\n </span>\n Make sure you read the transformers agent [documentation](https://huggingface.co/docs/transformers/en/agents) first\n </p>\n \n\nWe'll start by importing the necessary classes from transformers and gradio. \n\n```python\nimport gradio as gr\nfrom gradio import ChatMessage\nfrom transformers import load_tool, ReactCodeAgent, HfEngine\nfrom utils import stream_from_transformers_agent\n\n# Import tool from Hub\nimage_generation_tool = load_tool(\"m-ric/text-to-image\")\n\n\nllm_engine = HfEngine(\"meta-llama/Meta-Llama-3-70B-Instruct\")\n# Initialize the agent with both tools\nagent = ReactCodeAgent(tools=[image_generation_tool], llm_engine=llm_engine)\n```\n\nThen we'll build the UI. The bulk of the logic is handled by `stream_from_transformers_agent`. 
Then we'll build the UI. The bulk of the logic is handled by `stream_from_transformers_agent`. We won't cover it in this guide because it will soon be merged into transformers, but you can see its source code [here](https://huggingface.co/spaces/gradio/agent_chatbot/blob/main/utils.py).\n\n```python\ndef interact_with_agent(prompt, messages):\n    messages.append(ChatMessage(role=\"user\", content=prompt))\n    yield messages\n    for msg in stream_from_transformers_agent(agent, prompt):\n        messages.append(msg)\n        yield messages\n    yield messages\n\n\nwith gr.Blocks() as demo:\n    stored_message = gr.State([])\n    chatbot = gr.Chatbot(label=\"Agent\",\n                         type=\"messages\",\n                         avatar_images=(None, \"https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png\"))\n    text_input = gr.Textbox(lines=1, label=\"Chat Message\")\n    text_input.submit(lambda s: (s, \"\"), [text_input], [stored_message, text_input]).then(interact_with_agent, [stored_message, chatbot], [chatbot])\n```\n\nYou can see the full demo code [here](https://huggingface.co/spaces/gradio/agent_chatbot/blob/main/app.py).\n\n![transformers_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)\n\n## A real example using langchain agents\n\nWe'll create a UI for a langchain agent that has access to a search engine.\n\nWe'll begin with imports and setting up the langchain agent. Note that you'll need a `.env` file with the following environment variables set:\n\n```\nSERPAPI_API_KEY=\nHF_TOKEN=\nOPENAI_API_KEY=\n```\n\n```python\nfrom langchain import hub\nfrom langchain.agents import AgentExecutor, create_openai_tools_agent, load_tools\nfrom langchain_openai import ChatOpenAI\nfrom gradio import ChatMessage\nimport gradio as gr\n\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nmodel = ChatOpenAI(temperature=0, streaming=True)\n\ntools = load_tools([\"serpapi\"])\n\n# Get the prompt to use - you can modify this!\nprompt = hub.pull(\"hwchase17/openai-tools-agent\")\n# print(prompt.messages) -- to see the prompt\nagent = create_openai_tools_agent(\n    model.with_config({\"tags\": [\"agent_llm\"]}), tools, prompt\n)\nagent_executor = AgentExecutor(agent=agent, tools=tools).with_config(\n    {\"run_name\": \"Agent\"}\n)\n```\n\nThen we'll create the Gradio UI.\n\n```python\nasync def interact_with_langchain_agent(prompt, messages):\n    messages.append(ChatMessage(role=\"user\", content=prompt))\n    yield messages\n    async for chunk in agent_executor.astream(\n        {\"input\": prompt}\n    ):\n        if \"steps\" in chunk:\n            for step in chunk[\"steps\"]:\n                messages.append(ChatMessage(role=\"assistant\", content=step.action.log,\n                                            metadata={\"title\": f\"\ud83d\udee0\ufe0f Used tool {step.action.tool}\"}))\n                yield messages\n        if \"output\" in chunk:\n            messages.append(ChatMessage(role=\"assistant\", content=chunk[\"output\"]))\n            yield messages\n\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"# Chat with a LangChain Agent \ud83e\udd9c\u26d3\ufe0f and see its thoughts \ud83d\udcad\")\n    chatbot = gr.Chatbot(\n        type=\"messages\",\n        label=\"Agent\",\n        avatar_images=(\n            None,\n            \"https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png\",\n        ),\n    )\n    input = gr.Textbox(lines=1, label=\"Chat Message\")\n    input.submit(interact_with_langchain_agent, [input, chatbot], [chatbot])\n\ndemo.launch()\n```\n\n![langchain_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/762283e5-3937-47e5-89e0-79657279ea67)\n\n
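The same pattern works for any hand-rolled agent loop, not just transformers or langchain agents: whenever a tool runs, yield a `ChatMessage` whose `metadata` contains a `title`, then yield the final answer as a regular assistant message. Here is a minimal, self-contained sketch; the `fake_search` tool and its output are purely illustrative stand-ins:\n\n```python\nimport gradio as gr\nfrom gradio import ChatMessage\n\n\ndef fake_search(query):\n    # Stand-in for a real tool call (illustrative only)\n    return f\"Top result for {query!r}: the weather is sunny.\"\n\n\ndef interact_with_custom_agent(prompt, messages):\n    messages.append(ChatMessage(role=\"user\", content=prompt))\n    yield messages\n    # Surface the tool call in a collapsible box via the metadata title\n    result = fake_search(prompt)\n    messages.append(ChatMessage(role=\"assistant\", content=result,\n                                metadata={\"title\": \"\ud83d\udee0\ufe0f Used tool fake_search\"}))\n    yield messages\n    # Final answer as a normal assistant message\n    messages.append(ChatMessage(role=\"assistant\", content=f\"Based on my search: {result}\"))\n    yield messages\n\n\nwith gr.Blocks() as demo:\n    chatbot = gr.Chatbot(type=\"messages\", label=\"Custom Agent\")\n    msg = gr.Textbox(lines=1, label=\"Chat Message\")\n    msg.submit(interact_with_custom_agent, [msg, chatbot], [chatbot])\n\ndemo.launch()\n```\n\n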
That's it! See our finished langchain demo [here](https://huggingface.co/spaces/gradio/langchain-agent).\n", "tags": ["LLM", "AGENTS", "CHAT"], "spaces": ["https://huggingface.co/spaces/gradio/agent_chatbot", "https://huggingface.co/spaces/gradio/langchain-agent"], "url": "/guides/agents-and-tool-usage/", "contributor": null}}