{"guide": {"name": "creating-a-custom-chatbot-with-blocks", "category": "chatbots", "pretty_category": "Chatbots", "guide_index": 2, "absolute_index": 26, "pretty_name": "Creating A Custom Chatbot With Blocks", "content": "# How to Create a Custom Chatbot with Gradio Blocks\n\n\n\n\n## Introduction\n\n**Important Note**: if you are getting started, we recommend using the `gr.ChatInterface` to create chatbots -- its a high-level abstraction that makes it possible to create beautiful chatbot applications fast, often with a single line of code. [Read more about it here](/guides/creating-a-chatbot-fast).\n\nThis tutorial will show how to make chatbot UIs from scratch with Gradio's low-level Blocks API. This will give you full control over your Chatbot UI. You'll start by first creating a a simple chatbot to display text, a second one to stream text responses, and finally a chatbot that can handle media files as well. The chatbot interface that we create will look something like this:\n\n
Tip: For better type hinting and auto-completion in your IDE, you can use the `gr.ChatMessage` dataclass:\n\n
```python\nfrom gradio import ChatMessage\n\ndef chat_function(message, history):\n    history.append(ChatMessage(role=\"user\", content=message))\n    history.append(ChatMessage(role=\"assistant\", content=\"Hello, how can I help you?\"))\n    return history\n```\n\n## Add Streaming to your Chatbot\n\nThere are several ways we can improve the user experience of the chatbot above. First, we can stream responses so the user doesn't have to wait as long for a message to be generated. Second, we can have the user message appear immediately in the chat history, while the chatbot's response is being generated. Here's the code to achieve that:\n\n```python\nimport gradio as gr\nimport random\nimport time\n\nwith gr.Blocks() as demo:\n    chatbot = gr.Chatbot(type=\"messages\")\n    msg = gr.Textbox()\n    clear = gr.Button(\"Clear\")\n\n    def user(user_message, history: list):\n        return \"\", history + [{\"role\": \"user\", \"content\": user_message}]\n\n    def bot(history: list):\n        bot_message = random.choice([\"How are you?\", \"I love you\", \"I'm very hungry\"])\n        history.append({\"role\": \"assistant\", \"content\": \"\"})\n        for character in bot_message:\n            history[-1]['content'] += character\n            time.sleep(0.05)\n            yield history\n\n    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(\n        bot, chatbot, chatbot\n    )\n    clear.click(lambda: None, None, chatbot, queue=False)\n\ndemo.launch()\n```\n\nYou'll notice that when a user submits their message, we now _chain_ two events with `.then()`:\n\n1. The first method `user()` updates the chatbot with the user message and clears the input field. Because we want this to happen instantly, we set `queue=False`, which skips the queue if it is enabled. The chatbot's history is appended with `{\"role\": \"user\", \"content\": user_message}`.\n\n2. The second method, `bot()`, updates the chatbot history with the bot's response. Then, we construct the message character by character and `yield` the intermediate outputs as they are being constructed. Gradio automatically turns any function with the `yield` keyword [into a streaming output interface](/guides/key-features/#iterative-outputs).\n\nOf course, in practice, you would replace `bot()` with your own more complex function, which might call a pretrained model or an API, to generate a response.\n\n## Adding Markdown, Images, Audio, or Videos\n\nThe `gr.Chatbot` component supports a subset of Markdown including bold, italics, and code. For example, we could write a function that responds to a user's message with a bold **That's cool!**, like this:\n\n```py\ndef bot(history):\n    response = {\"role\": \"assistant\", \"content\": \"**That's cool!**\"}\n    history.append(response)\n    return history\n```\n\nIn addition, it can handle media files, such as images, audio, and video. You can use the `MultimodalTextbox` component to easily upload all types of media files to your chatbot. To pass in a media file, we must pass in the file as a dictionary with a `path` key pointing to a local file and an `alt_text` key. 
The `alt_text` is optional, so you can also just pass in a dictionary with a single `path` key, `{\"path\": \"filepath\"}`, like this:\n\n```python\ndef add_message(history, message):\n    for x in message[\"files\"]:\n        history.append({\"role\": \"user\", \"content\": {\"path\": x}})\n    if message[\"text\"] is not None:\n        history.append({\"role\": \"user\", \"content\": message[\"text\"]})\n    return history, gr.MultimodalTextbox(value=None, interactive=False, file_types=[\"image\"])\n```\n\nPutting this together, we can create a _multimodal_ chatbot with a multimodal textbox for a user to submit text and media files. The rest of the code looks pretty much the same as before:\n\n```python\nimport gradio as gr\nimport time\n\n# Chatbot demo with multimodal input (text, markdown, LaTeX, code blocks, image, audio, & video). Plus shows support for streaming text.\n\n\ndef print_like_dislike(x: gr.LikeData):\n    print(x.index, x.value, x.liked)\n\n\ndef add_message(history, message):\n    for x in message[\"files\"]:\n        history.append({\"role\": \"user\", \"content\": {\"path\": x}})\n    if message[\"text\"] is not None:\n        history.append({\"role\": \"user\", \"content\": message[\"text\"]})\n    return history, gr.MultimodalTextbox(value=None, interactive=False)\n\n\ndef bot(history: list):\n    response = \"**That's cool!**\"\n    history.append({\"role\": \"assistant\", \"content\": \"\"})\n    for character in response:\n        history[-1][\"content\"] += character\n        time.sleep(0.05)\n        yield history\n\n\nwith gr.Blocks() as demo:\n    chatbot = gr.Chatbot(elem_id=\"chatbot\", bubble_full_width=False, type=\"messages\")\n\n    chat_input = gr.MultimodalTextbox(\n        interactive=True,\n        file_count=\"multiple\",\n        placeholder=\"Enter message or upload file...\",\n        show_label=False,\n    )\n\n    chat_msg = chat_input.submit(\n        add_message, [chatbot, chat_input], [chatbot, chat_input]\n    )\n    bot_msg = chat_msg.then(bot, chatbot, chatbot, api_name=\"bot_response\")\n    bot_msg.then(lambda: gr.MultimodalTextbox(interactive=True), None, [chat_input])\n\n    chatbot.like(print_like_dislike, None, None, like_user_message=True)\n\ndemo.launch()\n```\n
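The bot can reply with media as well, using the same dictionary format for the assistant message's `content`. As a rough sketch (the path `cool.png` below is just a placeholder for a real file on disk), a bot function that responds with an image could look like this:\n\n```python\ndef bot(history: list):\n    # \"cool.png\" is a placeholder -- point this at any local image, audio, or video file\n    history.append({\"role\": \"assistant\", \"content\": {\"path\": \"cool.png\", \"alt_text\": \"A cool image\"}})\n    return history\n```\n\nBecause the `content` is just a dictionary pointing at a file, the same pattern works for audio and video files.\n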