---
title: Groq MoA - Mixture of Agents
emoji: 💻
colorFrom: blue
colorTo: gray
sdk: docker
pinned: false
license: apache-2.0
app_port: 8051
short_description: Deployment of the skapadia3214/groq-moa repo
---

# Repo deployment

This Space deploys `skapadia3214/groq-moa`.

All of the following information comes directly from that repo; only the list of available models has been modified to incorporate the latest Llama 3.1 models.

# Mixture-of-Agents Demo Powered by Groq

This Streamlit application showcases the Mixture of Agents (MOA) architecture proposed by Together AI, powered by Groq LLMs. It allows users to interact with a configurable multi-agent system for enhanced AI-driven conversations.

*MOA architecture. Source: adaptation of the Together AI blog post on Mixture of Agents.*

## Acknowledgements

- Groq for providing the underlying language models
- Together AI for proposing the Mixture of Agents architecture and providing the conceptual image
- Streamlit for the web application framework

## Citation

This project implements the Mixture-of-Agents architecture proposed in the following paper:

```bibtex
@article{wang2024mixture,
  title={Mixture-of-Agents Enhances Large Language Model Capabilities},
  author={Wang, Junlin and Wang, Jue and Athiwaratkun, Ben and Zhang, Ce and Zou, James},
  journal={arXiv preprint arXiv:2406.04692},
  year={2024}
}
```

For more information about the Mixture-of-Agents concept, please refer to the original research paper and the Together AI blog post.


## App Structure

The app is structured into several main sections:

1. Configuration Sidebar (a minimal sketch follows this list):
   - Allows users to select the main model, number of cycles, and main model temperature.
   - Provides a JSON editor for configuring layer agents.
   - Includes a button to apply recommended configurations.
2. Main Chat Interface:
   - Displays the chat history and allows users to input questions.
   - Shows streamed responses from the MOA system, including intermediate outputs from layer agents.
3. Current Configuration Display:
   - Shows the current MOA configuration in an expandable section.
4. Information Sections:
   - Displays credits and related information.
   - Shows a visualization of the MOA workflow.
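
A minimal Streamlit sketch of what such a sidebar could look like is shown below. It is not the repo's actual code: the model list, the defaults, and the plain `st.text_area` standing in for the JSON editor are all assumptions made for illustration.

```python
import json
import streamlit as st

# Illustrative model list -- the deployed Space may expose different options.
AVAILABLE_MODELS = ["llama-3.1-70b-versatile", "llama-3.1-8b-instant", "gemma2-9b-it"]

with st.sidebar:
    st.header("MOA Configuration")
    main_model = st.selectbox("Main model", AVAILABLE_MODELS)
    cycles = st.number_input("Number of cycles", min_value=1, max_value=10, value=3)
    temperature = st.slider("Main model temperature", 0.0, 1.0, 0.1)

    # Plain text area standing in for the JSON editor described above.
    layer_config_text = st.text_area("Layer agent configuration (JSON)", value="{}", height=200)
    if st.button("Use recommended configuration"):
        st.info("A real app would load its recommended MOA settings here.")

    try:
        layer_agent_config = json.loads(layer_config_text)
    except json.JSONDecodeError:
        st.error("Layer agent configuration must be valid JSON.")
```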

## Configuration Options

- Main Model: Select from a list of available Groq LLMs.
- Number of Layers: Choose the number of intermediate processing layers (1-10).
- Main Model Temperature: Set the temperature for the main model's output (0.0-1.0).
- Layer Agent Configuration: Customize the behavior of each layer agent using a JSON editor (an example configuration is sketched after this list).
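
The README does not spell out the schema the JSON editor expects, so the field names below (`model_name`, `temperature`, `system_prompt`, and the `{helper_response}` placeholder) are assumptions used only to illustrate the idea of per-agent settings; the JSON you paste into the editor would be the equivalent of this Python dict.

```python
# Hypothetical layer-agent configuration; the schema used by the actual app may differ.
# Each entry describes one agent that runs in every cycle.
layer_agent_config = {
    "layer_agent_1": {
        "model_name": "llama-3.1-8b-instant",  # assumed field names and models
        "temperature": 0.3,
        "system_prompt": "Think step by step. {helper_response}",
    },
    "layer_agent_2": {
        "model_name": "gemma2-9b-it",
        "temperature": 0.7,
        "system_prompt": "Respond concisely. {helper_response}",
    },
}
```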

## How It Works

1. The app initializes with default or previously set configurations.
2. Users can update the configuration using the sidebar.
3. Questions are input through the chat interface.
4. The MOA system processes the input:
   - Layer agents process the input in parallel for each cycle.
   - The main model generates the final response.
5. Responses are streamed back to the user, showing both intermediate and final outputs. (A minimal sketch of the cycle logic follows this list.)
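
The sketch below illustrates the cycle logic described above using the official Groq Python client. It is not the repo's implementation: streaming is omitted, all layer agents share one prompt and temperature, and the `ask` / `moa_answer` helpers and model names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
from groq import Groq  # official Groq Python client

client = Groq()  # reads GROQ_API_KEY from the environment

def ask(model: str, system: str, user: str, temperature: float = 0.7) -> str:
    """One chat completion call; stands in for a single layer agent."""
    resp = client.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

def moa_answer(question: str, layer_models: list[str], main_model: str, cycles: int = 3) -> str:
    helper_response = ""
    for _ in range(cycles):
        # Layer agents run in parallel for each cycle; earlier answers are
        # injected into their system prompts as helper context.
        system = f"Answer the user. Previous agent responses:\n{helper_response}"
        with ThreadPoolExecutor() as pool:
            outputs = list(pool.map(lambda m: ask(m, system, question), layer_models))
        helper_response = "\n\n".join(outputs)
    # The main model aggregates the intermediate outputs into the final reply.
    return ask(main_model, f"Synthesize a final answer using:\n{helper_response}", question)
```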

## Notes

- The app uses session state to maintain conversation history and configuration across interactions (see the sketch below).
- The MOA system is reset when the configuration is updated.
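
A minimal sketch of how Streamlit session state can hold the conversation and configuration, and how applying a new configuration resets the MOA system, is shown below. `run_moa`, `apply_config`, and the default values are stand-ins, not the app's real functions.

```python
import streamlit as st

DEFAULT_CONFIG = {"cycles": 3, "main_model": "llama-3.1-70b-versatile"}  # illustrative

def run_moa(prompt: str, config: dict) -> str:
    """Stub standing in for the real MOA pipeline."""
    return f"(MOA answer for: {prompt})"

# Session state persists across Streamlit reruns, so history and config survive each interaction.
if "messages" not in st.session_state:
    st.session_state.messages = []
if "moa_config" not in st.session_state:
    st.session_state.moa_config = DEFAULT_CONFIG

def apply_config(new_config: dict) -> None:
    """Updating the configuration resets the MOA system and its history."""
    st.session_state.moa_config = new_config
    st.session_state.messages = []

# Replay the stored conversation, then handle new input.
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask a question"):
    st.chat_message("user").write(prompt)
    st.session_state.messages.append({"role": "user", "content": prompt})
    answer = run_moa(prompt, st.session_state.moa_config)
    st.chat_message("assistant").write(answer)
    st.session_state.messages.append({"role": "assistant", "content": answer})
```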