{"guide": {"name": "agents-and-tool-usage", "category": "chatbots", "pretty_category": "Chatbots", "guide_index": 3, "absolute_index": 27, "pretty_name": "Agents And Tool Usage", "content": "# Building a UI for an LLM Agent\n\nThe Gradio Chatbot can natively display intermediate thoughts and tool usage. This makes it perfect for creating UIs for LLM agents. This guide will show you how.\n\n## The metadata key\n\nIn addition to the `content` and `role` keys, the messages dictionary accepts a `metadata` key. At present, the `metadata` key accepts a dictionary with a single key called `title`.\nIf you specify a `title` for the message, it will be displayed in a collapsible box.\n\nHere is an example, where we display the agent's thought to use a weather API tool to answer the user query.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    chatbot = gr.Chatbot(\n        type=\"messages\",\n        value=[\n            {\"role\": \"user\", \"content\": \"What is the weather in San Francisco?\"},\n            {\"role\": \"assistant\", \"content\": \"I need to use the weather API tool\",\n             \"metadata\": {\"title\": \"\ud83e\udde0 Thinking\"}},\n        ],\n    )\n```\n\n![simple-metadata-chatbot](https://github.com/freddyaboulton/freddyboulton/assets/41651716/3941783f-6835-4e5e-89a6-03f850d9abde)\n\n\n## A real example using transformers.agents\n\nWe'll create a Gradio application for a simple agent that has access to a text-to-image tool.\n
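The `metadata` format above is plain Python data, so an agent loop can factor it into a tiny helper that it emits for every intermediate step. Here is a minimal sketch; `thought_message` is a hypothetical helper name for illustration, not part of the Gradio API:\n\n```python\n# Hypothetical helper: wrap an agent's intermediate step as a\n# Chatbot \"messages\"-format dict with a collapsible title.\ndef thought_message(text, title=\"\ud83e\udde0 Thinking\"):\n    return {\n        \"role\": \"assistant\",\n        \"content\": text,\n        \"metadata\": {\"title\": title},\n    }\n\n# The list passed as gr.Chatbot(type=\"messages\", value=...) is then just:\nhistory = [\n    {\"role\": \"user\", \"content\": \"What is the weather in San Francisco?\"},\n    thought_message(\"I need to use the weather API tool\"),\n]\n```\n\nEach appended message that carries a `title` renders as its own collapsible box in the chat.\n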

Tip: Make sure you read the transformers agents [documentation](https://huggingface.co/docs/transformers/en/agents) first.\n

\nWe'll start by importing the necessary classes from transformers and gradio.\n\n```python\nimport gradio as gr\nfrom gradio import ChatMessage\nfrom transformers import load_tool, ReactCodeAgent, HfEngine\nfrom utils import stream_from_transformers_agent\n\n# Import tool from Hub\nimage_generation_tool = load_tool(\"m-ric/text-to-image\")\n\nllm_engine = HfEngine(\"meta-llama/Meta-Llama-3-70B-Instruct\")\n\n# Initialize the agent with the image-generation tool\nagent = ReactCodeAgent(tools=[image_generation_tool], llm_engine=llm_engine)\n```\n\nThen we'll build the UI. The bulk of the logic is handled by `stream_from_transformers_agent`. We won't cover it in this guide because it will soon be merged into transformers, but you can see its source code [here](https://huggingface.co/spaces/gradio/agent_chatbot/blob/main/utils.py).\n\n```python\ndef interact_with_agent(prompt, messages):\n    messages.append(ChatMessage(role=\"user\", content=prompt))\n    yield messages\n    for msg in stream_from_transformers_agent(agent, prompt):\n        messages.append(msg)\n        yield messages\n    yield messages\n\n\nwith gr.Blocks() as demo:\n    stored_message = gr.State(\"\")\n    chatbot = gr.Chatbot(\n        label=\"Agent\",\n        type=\"messages\",\n        avatar_images=(None, \"https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png\"),\n    )\n    text_input = gr.Textbox(lines=1, label=\"Chat Message\")\n    text_input.submit(\n        lambda s: (s, \"\"), [text_input], [stored_message, text_input]\n    ).then(interact_with_agent, [stored_message, chatbot], [chatbot])\n```\n\nYou can see the full demo code [here](https://huggingface.co/spaces/gradio/agent_chatbot/blob/main/app.py).\n\n![transformers_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)\n\n## A real example using langchain agents\n\nWe'll create a UI for a LangChain agent that has access to a search engine.\n\nWe'll begin with imports and setting up the LangChain agent. 
Note that you'll need a `.env` file with\nthe following environment variables set:\n\n```\nSERPAPI_API_KEY=\nHF_TOKEN=\nOPENAI_API_KEY=\n```\n\n```python\nfrom langchain import hub\nfrom langchain.agents import AgentExecutor, create_openai_tools_agent, load_tools\nfrom langchain_openai import ChatOpenAI\nfrom gradio import ChatMessage\nimport gradio as gr\n\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nmodel = ChatOpenAI(temperature=0, streaming=True)\n\ntools = load_tools([\"serpapi\"])\n\n# Get the prompt to use - you can modify this!\nprompt = hub.pull(\"hwchase17/openai-tools-agent\")\n# print(prompt.messages) -- to see the prompt\nagent = create_openai_tools_agent(\n    model.with_config({\"tags\": [\"agent_llm\"]}), tools, prompt\n)\nagent_executor = AgentExecutor(agent=agent, tools=tools).with_config(\n    {\"run_name\": \"Agent\"}\n)\n```\n\nThen we'll create the Gradio UI:\n\n```python\nasync def interact_with_langchain_agent(prompt, messages):\n    messages.append(ChatMessage(role=\"user\", content=prompt))\n    yield messages\n    async for chunk in agent_executor.astream({\"input\": prompt}):\n        if \"steps\" in chunk:\n            for step in chunk[\"steps\"]:\n                messages.append(ChatMessage(role=\"assistant\", content=step.action.log,\n                                            metadata={\"title\": f\"\ud83d\udee0\ufe0f Used tool {step.action.tool}\"}))\n                yield messages\n        if \"output\" in chunk:\n            messages.append(ChatMessage(role=\"assistant\", content=chunk[\"output\"]))\n            yield messages\n\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"# Chat with a LangChain Agent \ud83e\udd9c\u26d3\ufe0f and see its thoughts \ud83d\udcad\")\n    chatbot = gr.Chatbot(\n        type=\"messages\",\n        label=\"Agent\",\n        avatar_images=(\n            None,\n            \"https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png\",\n        ),\n    )\n    text_input = gr.Textbox(lines=1, label=\"Chat Message\")\n    text_input.submit(interact_with_langchain_agent, [text_input, chatbot], [chatbot])\n\ndemo.launch()\n```\n\n![langchain_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/762283e5-3937-47e5-89e0-79657279ea67)\n\nThat's it! See our finished LangChain demo [here](https://huggingface.co/spaces/gradio/langchain-agent).\n", "tags": ["LLM", "AGENTS", "CHAT"], "spaces": ["https://huggingface.co/spaces/gradio/agent_chatbot", "https://huggingface.co/spaces/gradio/langchain-agent"], "url": "/guides/agents-and-tool-usage/", "contributor": null}}