---
title: Multi Modal Omni Chatbot
emoji: 🐠
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: 5.20.1
app_file: app.py
pinned: false
license: mit
short_description: A multimodal chatbot that supports both text and image chat.
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference


Building a Multimodal Chatbot with Gradio and OpenAI

In recent years, the field of artificial intelligence (AI) has seen an exciting leap in multimodal capabilities. Multimodal systems can understand multiple types of input, like text and images, and combine them to produce richer, more dynamic responses. One such example is a multimodal chatbot that can process both text and image inputs using the OpenAI API.

In this article, we’ll walk through how to create a multimodal chatbot using Gradio and the OpenAI API that allows users to input both text and images, interact with the model, and receive insightful responses.

Key Components

Before we dive into the code, let's break down the core components of this chatbot:

  • Gradio: A simple, open-source Python library for building UIs for machine learning models. It allows you to quickly create and deploy interfaces for any ML model, including those that take images, text, or audio as input.

  • OpenAI API: This is the engine behind our chatbot. OpenAI provides reasoning models such as o1, which accepts both text and image input, and o3-mini, which is text-only; these are the two models this app uses for its multimodal tasks.

  • Python and PIL: To handle image preprocessing, we use PIL (the Python Imaging Library, installed via the pillow package) to convert uploaded images into a format that can be passed to the OpenAI model. See the install line after this list.
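
Before running any of the code below, the dependencies need to be installed. A plausible setup follows; the Space's metadata pins gradio 5.20.1, but the openai and pillow versions are assumptions:

pip install gradio==5.20.1 openai pillow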

The Chatbot Overview

The chatbot can take two main types of input:

  1. Text Input: Ask a question or give a prompt to the model.
  2. Image Input: Upload an image, and the model will interpret the image and provide a response based on its content.

The interface lets the user adjust two main settings:

  • Reasoning Effort: Controls how much internal reasoning the model performs before answering. The options are low, medium, and high.
  • Model Choice: Users can select between two models: o1 (accepts both text and image input) and o3-mini (text only).

The interface is simple, intuitive, and interactive, with the chat history displayed on the side.

Step-by-Step Code Explanation

1. Set Up Gradio UI

Gradio makes it easy to create beautiful interfaces for your AI models. We start by defining a custom interface with the following components:

  • Textbox for OpenAI API Key: Users provide their OpenAI API key to authenticate their requests.
  • Image Upload and Text Input Fields: Users can choose to upload an image or input text.
  • Dropdowns for Reasoning Effort and Model Selection: Choose the complexity of the responses and the model to use.
  • Submit and Clear Buttons: These trigger the logic to process user inputs and clear chat history, respectively.
import gradio as gr

with gr.Blocks(css=custom_css) as demo:
    gr.Markdown("""
        <div class="gradio-header">
            <h1>Multimodal Chatbot (Text + Image)</h1>
            <h3>Interact with a chatbot using text or image inputs</h3>
        </div>
    """)
    
    # User inputs and chat history
    openai_api_key = gr.Textbox(label="Enter OpenAI API Key", type="password", placeholder="sk-...", interactive=True)
    image_input = gr.Image(label="Upload an Image", type="pil")
    input_text = gr.Textbox(label="Enter Text Question", placeholder="Ask a question or provide text", lines=2)
    
    # Reasoning effort and model selection
    reasoning_effort = gr.Dropdown(label="Reasoning Effort", choices=["low", "medium", "high"], value="medium")
    model_choice = gr.Dropdown(label="Select Model", choices=["o1", "o3-mini"], value="o1")
    
    submit_btn = gr.Button("Ask!", elem_id="submit-btn")
    clear_btn = gr.Button("Clear History", elem_id="clear-history")
    
    # Chat history display
    chat_history = gr.Chatbot()

2. Handle Image and Text Inputs

The function generate_response processes both image and text inputs by sending them to OpenAI’s API. If an image is uploaded, it gets converted into a base64 string so it can be sent as part of the prompt.

For text inputs, the prompt is directly passed to the model.

import openai

def generate_response(input_text, image, openai_api_key, reasoning_effort="medium", model_choice="o1"):
    openai.api_key = openai_api_key

    # o1 accepts image input; o3-mini is text-only, so any uploaded image is
    # ignored for it. For o1, embed the image as a base64 data URL and keep
    # any accompanying text alongside it.
    if model_choice == "o1" and image is not None:
        image_b64 = get_base64_string_from_image(image)
        content = [{"type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"}}]
        if input_text:
            content.insert(0, {"type": "text", "text": input_text})
    else:
        content = [{"type": "text", "text": input_text}]

    messages = [{"role": "user", "content": content}]

    # API request
    response = openai.ChatCompletion.create(
        model=model_choice,
        messages=messages,
        reasoning_effort=reasoning_effort,
        max_completion_tokens=2000
    )
    return response["choices"][0]["message"]["content"]
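
Note that openai.ChatCompletion.create is the legacy interface from openai versions before 1.0. If you are running a 1.x client, a rough equivalent (a sketch, not the Space's actual code) would be:

from openai import OpenAI

client = OpenAI(api_key=openai_api_key)
response = client.chat.completions.create(
    model=model_choice,
    messages=messages,
    reasoning_effort=reasoning_effort,  # accepted by o-series reasoning models
    max_completion_tokens=2000,
)
reply = response.choices[0].message.content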

3. Image-to-Base64 Conversion

To ensure the image is properly formatted, we convert it into a base64 string. This string can then be embedded directly into the OpenAI request. This conversion is handled by the get_base64_string_from_image function.

import base64
import io

def get_base64_string_from_image(pil_image):
    # Serialize the PIL image to PNG in memory, then base64-encode the bytes
    buffered = io.BytesIO()
    pil_image.save(buffered, format="PNG")
    img_bytes = buffered.getvalue()
    base64_str = base64.b64encode(img_bytes).decode("utf-8")
    return base64_str
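
For example, this is how the helper's output composes into the data URL that generate_response sends to o1 (the file name here is just an illustration):

from PIL import Image

img = Image.open("example.png")  # hypothetical local image
data_url = f"data:image/png;base64,{get_base64_string_from_image(img)}"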

4. Chat History and Interaction

The chat history is stored and displayed using Gradio’s gr.Chatbot. Each time the user submits a question or image, the conversation history is updated, showing both user and assistant responses in an easy-to-read format.

def chatbot(input_text, image, openai_api_key, reasoning_effort, model_choice, history=None):
    # Avoid a mutable default argument: create a fresh list per call
    history = history or []
    response = generate_response(input_text, image, openai_api_key, reasoning_effort, model_choice)
    history.append((f"User: {input_text}", f"Assistant: {response}"))
    return "", history

5. Clear History Function

To reset the conversation, we include a simple function that clears the chat history when the "Clear History" button is clicked.

def clear_history():
    # Reset both the text input and the chat history display
    return "", []

6. Custom CSS for Styling

To ensure a visually appealing interface, custom CSS is applied. The design includes animations for chat messages and custom button styles to make the interaction smoother.

/* Custom CSS for the chat interface */
.gradio-container { ... }
.gradio-header { ... }
.gradio-chatbot { ... }
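
The actual stylesheet is elided above. Purely as an illustration of the kind of rules it contains, a fade-in animation for chat messages and a styled submit button, custom_css might be defined as a Python string like this:

# Illustrative only; not the Space's actual styles
custom_css = """
.gradio-container { font-family: 'Segoe UI', sans-serif; }
.gradio-chatbot .message { animation: fadeIn 0.3s ease-in; }
@keyframes fadeIn { from { opacity: 0; } to { opacity: 1; } }
#submit-btn { border-radius: 8px; }
"""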

7. Launch the Interface

Finally, we wrap the gr.Blocks code from step 1 in a create_interface() function that returns the demo, then call demo.launch() to start the app. Users can then interact with the chatbot by uploading images, entering text, and receiving responses based on the selected model and reasoning effort.

if __name__ == "__main__":
    demo = create_interface()
    demo.launch()

Conclusion

This multimodal chatbot can handle both text and image inputs, offering a rich conversational experience. By combining Gradio for building intuitive UIs with OpenAI's models for language and image understanding, this application shows how to integrate multiple forms of input into a single, easy-to-use interface.

Feel free to try it out yourself and experiment with different settings, including reasoning effort and model selection. Whether you're building a customer support bot or an image-based query system, this framework provides a flexible foundation for creating powerful, multimodal applications.