*Source: `autogen/website/docs/contributor-guide/file-bug-report.md`*
# File A Bug Report

When you submit an issue to [GitHub](https://github.com/microsoft/autogen/issues), please do your best to follow these guidelines! This will make it a lot easier to provide you with good feedback:

- The ideal bug report contains a short reproducible code snippet, so that anyone can try to reproduce the bug easily (see [this](https://stackoverflow.com/help/mcve) for more details). If your snippet is longer than around 50 lines, please link to a [gist](https://gist.github.com) or a GitHub repo.
- If an exception is raised, please **provide the full traceback**.
- Please include your **operating system type and version number**, as well as your **Python and autogen versions**. The version of autogen can be found by running the following code snippet:

```python
import autogen
print(autogen.__version__)
```

- Please ensure all **code snippets and error messages are formatted in appropriate code blocks**. See [Creating and highlighting code blocks](https://help.github.com/articles/creating-and-highlighting-code-blocks) for more details.
---

*Source: `autogen/website/docs/contributor-guide/maintainer.md`*
# Guidance for Maintainers
## General

- Be a member of the community and treat everyone as a member. Be inclusive.
- Help each other and encourage mutual help.
- Actively post and respond.
- Keep open communication.
- Identify good maintainer candidates from active contributors.
## Pull Requests

- For a new PR, decide whether to close it without review. If not, find the right reviewers. One source to refer to is the roles on Discord. Another consideration is to ask users who can benefit from the PR to review it.
- For an old PR, check the blocker: reviewer or PR creator. Try to unblock. Get additional help when needed.
- When requesting changes, make sure you can check back in time, because it blocks merging.
- Make sure all the checks pass.
- For changes that require running OpenAI tests, make sure the OpenAI tests pass too. Running these tests requires approval.
- In general, suggest small PRs instead of a giant PR.
- For documentation changes, request a snapshot of the compiled website, or compile it yourself to verify the format.
- For new contributors who have not signed the contributing agreement, remind them to sign before reviewing.
- For multiple PRs which may conflict, coordinate them to figure out the right order.
- Pay special attention to:
  - Breaking changes. Don't make breaking changes unless necessary. Don't merge to main until enough heads-up is provided and a new release is ready.
  - Test coverage decrease.
  - Changes that may cause performance degradation. Do regression tests when test suites are available.
  - Discourage **changes to the core library** when there is an alternative.
## Issues and Discussions

- For new issues, write a reply and apply a label if relevant. Ask on Discord when necessary. For roadmap issues, apply the roadmap label and encourage community discussion. Mention relevant experts when necessary.
- For old issues, provide an update or close them. Ask on Discord when necessary. Encourage PR creation when relevant.
- Use "good first issue" for easy fixes suitable for first-time contributors.
- Use "task list" for issues that require multiple PRs.
- For discussions, create an issue when relevant. Discuss on Discord when appropriate.
---

*Source: `autogen/website/docs/contributor-guide/tests.md`*
# Tests

Tests are automatically run via GitHub Actions. There are two workflows:

1. [build.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/build.yml)
1. [openai.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml)

The first workflow is required to pass for all PRs (and it doesn't make any OpenAI calls). The second workflow is required for changes that affect the OpenAI tests (it actually calls LLMs) and requires approval to run. When writing tests that require OpenAI calls, please use [`pytest.mark.skipif`](https://github.com/microsoft/autogen/blob/b1adac515931bf236ac59224269eeec683a162ba/test/oai/test_client.py#L19) to make them run only when the `openai` package is installed. If an additional dependency is required for a test, install the dependency for the corresponding Python version in [openai.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml).

Make sure all tests pass; this is required for the [build.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/build.yml) checks to pass.
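As a rough illustration of the `pytest.mark.skipif` pattern mentioned above (a minimal sketch, not the exact decorator used in the linked test file):

```python
import pytest

try:
    import openai  # noqa: F401

    skip_openai = False
except ImportError:
    skip_openai = True


@pytest.mark.skipif(skip_openai, reason="openai package is not installed")
def test_chat_completion():
    # A test body that makes a real OpenAI call would go here.
    ...
```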
## Running tests locally

To run tests, install the [test] option:

```bash
pip install -e ."[test]"
```

Then you can run the tests from the `test` folder using the following command:

```bash
pytest test
```

Tests for the `autogen.agentchat.contrib` module may be skipped automatically if the required dependencies are not installed. Please consult the documentation for each contrib module to see what dependencies are required.

See [here](https://github.com/microsoft/autogen/blob/main/notebook/contributing.md#testing) for how to run notebook tests.
## Skip flags for tests

- `--skip-openai` for skipping tests that require access to OpenAI services.
- `--skip-docker` for skipping tests that explicitly use Docker.
- `--skip-redis` for skipping tests that require a Redis server.

For example, the following command will skip tests that require access to OpenAI and Docker services:

```bash
pytest test --skip-openai --skip-docker
```
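Custom flags like these are typically registered in a `conftest.py`. The following is a minimal sketch of that standard pytest pattern; the flag name comes from the docs above, but the wiring details are an assumption for illustration, not AutoGen's actual implementation:

```python
# conftest.py -- illustrative sketch, not AutoGen's actual code
import pytest


def pytest_addoption(parser):
    # Register the command-line flag (assumed wiring, for illustration).
    parser.addoption(
        "--skip-openai",
        action="store_true",
        default=False,
        help="skip tests that require access to OpenAI services",
    )


def pytest_collection_modifyitems(config, items):
    if not config.getoption("--skip-openai"):
        return
    skip_marker = pytest.mark.skip(reason="--skip-openai was given")
    for item in items:
        # Assumes OpenAI-dependent tests carry a custom "openai" marker.
        if "openai" in item.keywords:
            item.add_marker(skip_marker)
```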
## Coverage

Any code you commit should not decrease coverage. To ensure your code maintains or increases coverage, use the following commands after installing the required test dependencies:

```bash
pip install -e ."[test]"
pytest test --cov-report=html
```

Pytest generates a code coverage report and creates an `htmlcov` directory containing an `index.html` file and other related files. Open `index.html` in any web browser to visualize and navigate through the coverage data interactively. This interactive visualization allows you to identify uncovered lines and review coverage statistics for individual files.
---

*Source: `autogen/website/docs/contributor-guide/documentation.md`*
# Documentation
## How to get a notebook rendered on the website

See [here](https://github.com/microsoft/autogen/blob/main/notebook/contributing.md#how-to-get-a-notebook-displayed-on-the-website) for instructions on how to get a notebook in the `notebook` directory rendered on the website.
## Build documentation locally

1\. To build and test documentation locally, first install [Node.js](https://nodejs.org/en/download/). For example,

```bash
nvm install --lts
```

Then, install `yarn` and other required packages:

```bash
npm install --global yarn
pip install pydoc-markdown pyyaml termcolor
```

2\. You also need to install quarto. Please click on the `Pre-release` tab from [this website](https://quarto.org/docs/download/) to download the latest version of `quarto` and install it. Ensure that the `quarto` version is `1.5.23` or higher.

3\. Finally, run the following commands to build:

```console
cd website
yarn install --frozen-lockfile --ignore-engines
pydoc-markdown
python process_notebooks.py render
yarn start
```

The last command starts a local development server and opens a browser window. Most changes are reflected live without having to restart the server.
## Build with Docker

To build and test documentation within a Docker container, use the Dockerfile in the `dev` folder as described above to build your image:

```bash
docker build -f .devcontainer/dev/Dockerfile -t autogen_dev_img https://github.com/microsoft/autogen.git#main
```

Then start the container as follows; this will log you in and ensure that Docker port 3000 is mapped to port 8081 on your local machine:

```bash
docker run -it -p 8081:3000 -v `pwd`/autogen-newcode:newstuff/ autogen_dev_img bash
```

Once at the CLI in Docker, run the following commands:

```bash
cd website
yarn install --frozen-lockfile --ignore-engines
pydoc-markdown
python process_notebooks.py render
yarn start --host 0.0.0.0 --port 3000
```

Once done, you should be able to access the documentation at `http://127.0.0.1:8081/autogen`.
---

*Source: `autogen/website/docs/contributor-guide/pre-commit.md`*
# Pre-commit

Run `pre-commit install` to install pre-commit into your git hooks. Before you commit, run `pre-commit run` to check if you meet the pre-commit requirements.

If you use Windows (without WSL) and can't commit after installing pre-commit, you can run `pre-commit uninstall` to uninstall the hook. In WSL or Linux this is supposed to work.
---

*Source: `autogen/website/docs/contributor-guide/contributing.md`*
# Contributing to AutoGen

The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. Together, we can build something truly remarkable.

Possible contributions include but are not limited to:

- Pushing patches.
- Code review of pull requests.
- Documentation, examples and test cases.
- Readability improvements, e.g., improvements to docstrings and comments.
- Community participation in [issues](https://github.com/microsoft/autogen/issues), [discussions](https://github.com/microsoft/autogen/discussions), [discord](https://aka.ms/autogen-dc), and [twitter](https://twitter.com/pyautogen).
- Tutorials, blog posts, and talks that promote the project.
- Sharing application scenarios and/or related research.

Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit <https://cla.opensource.microsoft.com>.

If you are new to GitHub, [here](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/) is a detailed help source on getting involved with development on GitHub.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Roadmaps

To see what we are working on and what we plan to work on, please check our [Roadmap Issues](https://aka.ms/autogen-roadmap).
## Becoming a Reviewer

There is currently no formal reviewer solicitation process. Current reviewers identify candidates from among active contributors. If you are willing to become a reviewer, you are welcome to let us know on Discord.
---

*Source: `autogen/website/docs/ecosystem/pgvector.md`*
# PGVector

[PGVector](https://github.com/pgvector/pgvector) is an open-source vector similarity search for Postgres.

- [PGVector + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb)
---

*Source: `autogen/website/docs/ecosystem/portkey.md`*
# Portkey Integration with AutoGen

<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Autogen.png?raw=true" alt="Portkey Metrics Visualization" width="70%" />

[Portkey](https://portkey.ai) is a 2-line upgrade to make your AutoGen agents reliable, cost-efficient, and fast. Portkey adds 4 core production capabilities to any AutoGen agent:

1. Routing to 200+ LLMs
2. Making each LLM call more robust
3. Full-stack tracing & cost, performance analytics
4. Real-time guardrails to enforce behavior
## Getting Started

1. **Install Required Packages:**

   ```bash
   pip install -qU autogen-agentchat~=0.2 portkey-ai
   ```

2. **Configure AutoGen with Portkey:**

   ```python
   from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
   from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

   config = [
       {
           "api_key": "OPENAI_API_KEY",
           "model": "gpt-3.5-turbo",
           "base_url": PORTKEY_GATEWAY_URL,
           "api_type": "openai",
           "default_headers": createHeaders(
               api_key="YOUR_PORTKEY_API_KEY",
               provider="openai",
           )
       }
   ]
   ```

   Generate your API key in the [Portkey Dashboard](https://app.portkey.ai/). And that's it! With just this, you can start logging all of your AutoGen requests and make them reliable.

3. **Let's Run your Agent**

   ```python
   import autogen

   # Create user proxy agent, coder, product manager
   user_proxy = autogen.UserProxyAgent(
       name="User_proxy",
       system_message="A human admin who will give the idea and run the code provided by Coder.",
       code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
       human_input_mode="ALWAYS",
   )

   coder = autogen.AssistantAgent(
       name="Coder",
       system_message="You are a Python developer who is good at developing games. You work with Product Manager.",
       llm_config={"config_list": config},
   )

   # Create groupchat
   groupchat = autogen.GroupChat(agents=[user_proxy, coder], messages=[])
   manager = autogen.GroupChatManager(groupchat=groupchat, llm_config={"config_list": config})

   # Start the conversation
   user_proxy.initiate_chat(manager, message="Build a classic & basic pong game with 2 players in python")
   ```

Here's the output from your Agent's run on Portkey's dashboard:

<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
## Key Features

Portkey offers a range of advanced features to enhance your AutoGen agents. Here's an overview:

| Feature | Description |
|---------|-------------|
| 🌐 [Multi-LLM Integration](#interoperability) | Access 200+ LLMs with simple configuration changes |
| 🛡️ [Enhanced Reliability](#reliability) | Implement fallbacks, load balancing, retries, and much more |
| 📊 [Advanced Metrics](#metrics) | Track costs, tokens, latency, and 40+ custom metrics effortlessly |
| 🔍 [Detailed Traces and Logs](#comprehensive-logging) | Gain insights into every agent action and decision |
| 🚧 [Guardrails](#guardrails) | Enforce agent behavior with real-time checks on inputs and outputs |
| 🔄 [Continuous Optimization](#continuous-improvement) | Capture user feedback for ongoing agent improvements |
| 💾 [Smart Caching](#caching) | Reduce costs and latency with built-in caching mechanisms |
| 🔐 [Enterprise-Grade Security](#security-and-compliance) | Set budget limits and implement fine-grained access controls |
## Colab Notebook

For a hands-on example of integrating Portkey with AutoGen, check out our notebook:

[![Google Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://git.new/Portkey-Autogen)
## Advanced Features

### Interoperability

Easily switch between **200+ LLMs** by changing the `provider` and API key in your configuration.

#### Example: Switching from OpenAI to Azure OpenAI

```python
config = [
    {
        "api_key": "api-key",
        "model": "gpt-3.5-turbo",
        "base_url": PORTKEY_GATEWAY_URL,
        "api_type": "openai",
        "default_headers": createHeaders(
            api_key="YOUR_PORTKEY_API_KEY",
            provider="azure-openai",
            virtual_key="AZURE_VIRTUAL_KEY"
        )
    }
]
```

Note: AutoGen messages will go through Portkey's AI Gateway following OpenAI's API signature. Some language models may not work properly because messages need to be in a specific role order.

### Reliability

Implement fallbacks, load balancing, and automatic retries to make your agents more resilient.

```python
{
    "strategy": {
        "mode": "fallback"  # Options: "loadbalance" or "fallback"
    },
    "targets": [
        {
            "provider": "openai",
            "api_key": "openai-api-key",
            "override_params": {
                "top_k": "0.4",
                "max_tokens": "100"
            }
        },
        {
            "provider": "anthropic",
            "api_key": "anthropic-api-key",
            "override_params": {
                "top_p": "0.6",
                "model": "claude-3-5-sonnet-20240620"
            }
        }
    ]
}
```

Learn more about the [Portkey Config object here](https://docs.portkey.ai/docs/product/ai-gateway-streamline-llm-integrations/configs). Be careful not to load-balance or fall back to providers that don't support tool calling when the request contains a function call.

### Metrics

Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.

<details>
  <summary><b>Portkey's Observability Dashboard</b></summary>
  <img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
</details>

### Comprehensive Logging

Access detailed logs and traces of agent activities, function calls, and errors. Filter logs based on multiple parameters for in-depth analysis.

<details>
  <summary><b>Traces</b></summary>
  <img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Logging Interface" width="70%" />
</details>

<details>
  <summary><b>Logs</b></summary>
  <img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Metrics Visualization" width="70%" />
</details>

### Guardrails

AutoGen agents, while powerful, can sometimes produce unexpected or undesired outputs. Portkey's Guardrails feature helps enforce agent behavior in real time, ensuring your AutoGen agents operate within specified parameters. Verify both the **inputs** to and **outputs** from your agents to ensure they adhere to specified formats and content guidelines. Learn more about Portkey's Guardrails [here](https://docs.portkey.ai/product/guardrails).

### Continuous Improvement

Capture qualitative and quantitative user feedback on your requests to continuously enhance your agent performance.

### Caching

Reduce costs and latency with Portkey's built-in caching system.

```python
portkey_config = {
    "cache": {
        "mode": "semantic"  # Options: "simple" or "semantic"
    }
}
```

### Security and Compliance

Set budget limits on provider API keys and implement fine-grained user roles and permissions for both your application and the Portkey APIs.
## Additional Resources

- [📘 Portkey Documentation](https://docs.portkey.ai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/JHPt4C7r)
- [📊 Portkey App](https://app.portkey.ai)

For more information on using these features and setting up your Config, please refer to the [Portkey documentation](https://docs.portkey.ai).
---

*Source: `autogen/website/docs/ecosystem/composio.md`*
# Composio

![Composio Example](img/ecosystem-composio.png)

Composio empowers AI agents to seamlessly connect with external tools, apps, and APIs to perform actions and receive triggers. With built-in support for AutoGen, Composio enables the creation of highly capable and adaptable AI agents that can autonomously execute complex tasks and deliver personalized experiences.

- [Composio + AutoGen Documentation with Code Examples](https://docs.composio.dev/framework/autogen)
---

*Source: `autogen/website/docs/ecosystem/ollama.md`*
# Ollama

![Ollama Example](img/ecosystem-ollama.png)

[Ollama](https://ollama.com/) allows users to run open-source large language models, such as Llama 2, locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.

- [Ollama + AutoGen instruction](https://ollama.ai/blog/openai-compatibility)
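Since Ollama exposes an OpenAI-compatible endpoint (per the link above), an AutoGen agent can point at it through `llm_config`. A minimal sketch, assuming a local Ollama server on its default port 11434 with a `llama2` model already pulled:

```python
import autogen

# Assumption: local Ollama server with the OpenAI-compatible API enabled.
config_list = [
    {
        "model": "llama2",                        # any model pulled into Ollama
        "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        "api_key": "ollama",                      # placeholder; Ollama does not check the key
    }
]

assistant = autogen.AssistantAgent(name="assistant", llm_config={"config_list": config_list})
```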
---

*Source: `autogen/website/docs/ecosystem/microsoft-fabric.md`*
# Microsoft Fabric

![Fabric Example](img/ecosystem-fabric.png)

[Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, real-time analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. In this notebook, we give a simple example of using AutoGen in Microsoft Fabric.

- [Microsoft Fabric + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_microsoft_fabric.ipynb)
---

*Source: `autogen/website/docs/ecosystem/llamaindex.md`*
# Llamaindex

![Llamaindex Example](img/ecosystem-llamaindex.png)

[Llamaindex](https://www.llamaindex.ai/) allows users to create Llamaindex agents and integrate them into AutoGen conversation patterns.

- [Llamaindex + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb)
---

*Source: `autogen/website/docs/ecosystem/databricks.md`*
# Databricks

![Databricks Data Intelligence Platform](img/ecosystem-databricks.png)

The [Databricks Data Intelligence Platform](https://www.databricks.com/product/data-intelligence-platform) allows your entire organization to use data and AI. It's built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. This example demonstrates how to use AutoGen alongside Databricks Foundation Model APIs and the open-source LLM DBRX.

- [Databricks + AutoGen Code Examples](/docs/notebooks/agentchat_databricks_dbrx)
---

*Source: `autogen/website/docs/ecosystem/mem0.md`*
# Mem0: Long-Term Memory and Personalization for Agents

<img src="https://github.com/mem0ai/mem0/blob/main/docs/images/mem0-bg.png?raw=true" alt="Mem0 logo" style="width: 40%;" />

[Mem0 Platform](https://www.mem0.ai/) provides a smart, self-improving memory layer for Large Language Models (LLMs), enabling developers to create personalized AI experiences that evolve with each user interaction.

At a high level, Mem0 Platform offers comprehensive memory management, self-improving memory capabilities, cross-platform consistency, and centralized memory control for AI applications. For more info, check out the [Mem0 Platform Documentation](https://docs.mem0.ai).

| | |
| --- | --- |
| 🧠 **Comprehensive Memory Management** | Manage long-term, short-term, semantic, and episodic memories |
| 🔄 **Self-Improving Memory** | Adaptive system that learns from user interactions |
| 🌐 **Cross-Platform Consistency** | Unified user experience across various AI platforms |
| 🎛️ **Centralized Memory Control** | Effortless storage, updating, and deletion of memories |
| 🚀 **Simplified Development** | API-first approach for streamlined integration |

<details open>
  <summary><b><u>Activity Dashboard</u></b></summary>
  <a href="https://app.mem0.ai/">
    <img src="https://github.com/mem0ai/mem0/blob/main/docs/images/platform/activity.png?raw=true" style="width: 70%;" alt="Activity Dashboard"/>
  </a>
</details>
## Installation

Mem0 Platform works seamlessly with various AI applications.

1. **Sign Up:** Create an account at [Mem0 Platform](https://app.mem0.ai/)
2. **Generate API Key:** Create an API key in your Mem0 dashboard
3. **Install Mem0 SDK:**

   ```bash
   pip install mem0ai
   ```

4. **Configure Your Environment:** Add your API key to your environment variables

   ```
   MEM0_API_KEY=<YOUR_MEM0_API_KEY>
   ```

5. **Initialize Mem0:**

   ```python
   import os

   from mem0 import MemoryClient

   memory = MemoryClient(api_key=os.getenv("MEM0_API_KEY"))
   ```

After initializing Mem0, you can start using its memory management features in your AI application.
## Features

- **Long-term Memory**: Store and retrieve information persistently across sessions
- **Short-term Memory**: Manage temporary information within a single interaction
- **Semantic Memory**: Organize and retrieve conceptual knowledge
- **Episodic Memory**: Store and recall specific events or experiences
- **Self-Improving System**: Continuously refine understanding based on user interactions
## Common Use Cases

- Personalized Learning Assistants
- Customer Support AI Agents
- Healthcare Assistants
- Virtual Companions
## Mem0 Platform Examples

### AutoGen with Mem0 Example

This example demonstrates how to use Mem0 with AutoGen to create a conversational AI system with memory capabilities.

```python
import os

from autogen import ConversableAgent
from mem0 import MemoryClient

# Set up environment variables
os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
os.environ["MEM0_API_KEY"] = "your_mem0_api_key"

# Initialize Agent and Memory
agent = ConversableAgent(
    "chatbot",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": os.environ.get("OPENAI_API_KEY")}]},
    code_execution_config=False,
    function_map=None,
    human_input_mode="NEVER",
)

memory = MemoryClient(api_key=os.environ.get("MEM0_API_KEY"))

# Insert a conversation into memory
conversation = [
    {
        "role": "assistant",
        "content": "Hi, I'm Best Buy's chatbot!\n\nThanks for being a My Best Buy TotalTM member.\n\nWhat can I help you with?"
    },
    {
        "role": "user",
        "content": "Seeing horizontal lines on our tv. TV model: Sony - 77\" Class BRAVIA XR A80K OLED 4K UHD Smart Google TV"
    },
]
memory.add(messages=conversation, user_id="customer_service_bot")

# Agent Inference
data = "Which TV am I using?"

relevant_memories = memory.search(data, user_id="customer_service_bot")
flatten_relevant_memories = "\n".join([m["memory"] for m in relevant_memories])

prompt = f"""Answer the user question considering the memories.
Memories:
{flatten_relevant_memories}

Question: {data}
"""

reply = agent.generate_reply(messages=[{"content": prompt, "role": "user"}])
print("Reply :", reply)

# Multi Agent Conversation
manager = ConversableAgent(
    "manager",
    system_message="You are a manager who helps in resolving customer issues.",
    llm_config={"config_list": [{"model": "gpt-4", "temperature": 0, "api_key": os.environ.get("OPENAI_API_KEY")}]},
    human_input_mode="NEVER"
)

customer_bot = ConversableAgent(
    "customer_bot",
    system_message="You are a customer service bot who gathers information on issues customers are facing.",
    llm_config={"config_list": [{"model": "gpt-4", "temperature": 0, "api_key": os.environ.get("OPENAI_API_KEY")}]},
    human_input_mode="NEVER"
)

data = "What appointment is booked?"

relevant_memories = memory.search(data, user_id="customer_service_bot")
flatten_relevant_memories = "\n".join([m["memory"] for m in relevant_memories])

prompt = f"""
Context:
{flatten_relevant_memories}

Question: {data}
"""

result = manager.send(prompt, customer_bot, request_reply=True)
```

Access the complete code from this notebook: [Mem0 with AutoGen](https://colab.research.google.com/drive/1NZEwC9w6V2S6hYmK7l2SQ9jhQrG1uKk8?usp=sharing)

This example showcases:

1. Setting up AutoGen agents and Mem0 memory
2. Adding a conversation to Mem0 memory
3. Using Mem0 to retrieve relevant memories for agent inference
4. Implementing a multi-agent conversation with memory-augmented context

For more Mem0 examples, visit our [documentation](https://docs.mem0.ai/examples).
---

*Source: `autogen/website/docs/ecosystem/azure_cosmos_db.md`*
# Azure Cosmos DB

> "OpenAI relies on Cosmos DB to dynamically scale their ChatGPT service – one of the fastest-growing consumer apps ever – enabling high reliability and low maintenance."
> – Satya Nadella, Microsoft chairman and chief executive officer

Azure Cosmos DB is a fully managed [NoSQL](https://learn.microsoft.com/en-us/azure/cosmos-db/distributed-nosql), [relational](https://learn.microsoft.com/en-us/azure/cosmos-db/distributed-relational), and [vector database](https://learn.microsoft.com/azure/cosmos-db/vector-database). It offers single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale. Your business continuity is assured with up to 99.999% availability backed by an SLA.

You can simplify your application development by using this single database service for all your AI agent memory system needs, from [geo-replicated distributed cache](https://medium.com/@marcodesanctis2/using-azure-cosmos-db-as-your-persistent-geo-replicated-distributed-cache-b381ad80f8a0) to tracing/logging to [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database).

Learn more about how Azure Cosmos DB enhances the performance of your [AI agent](https://learn.microsoft.com/en-us/azure/cosmos-db/ai-agents).

- [Try Azure Cosmos DB free](https://learn.microsoft.com/en-us/azure/cosmos-db/try-free)
- [Use Azure Cosmos DB lifetime free tier](https://learn.microsoft.com/en-us/azure/cosmos-db/free-tier)
---

*Source: `autogen/website/docs/ecosystem/memgpt.md`*
# MemGPT

![MemGPT Example](img/ecosystem-memgpt.png)

MemGPT enables LLMs to manage their own memory and overcome limited context windows. You can use MemGPT to create perpetual chatbots that learn about you and modify their own personalities over time. You can connect MemGPT to your own local filesystems and databases, as well as to your own tools and APIs. The MemGPT + AutoGen integration allows you to equip any AutoGen agent with MemGPT capabilities.

- [MemGPT + AutoGen Documentation with Code Examples](https://memgpt.readme.io/docs/autogen)
---

*Source: `autogen/website/docs/ecosystem/promptflow.md`*
# Promptflow

Promptflow is a comprehensive suite of tools that simplifies the development, testing, evaluation, and deployment of LLM-based AI applications. It also supports integration with Azure AI for cloud-based operations and is designed to streamline end-to-end development. Refer to the [Promptflow docs](https://microsoft.github.io/promptflow/) for more information.

Quick links:

- Why use Promptflow - [Link](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow)
- Quick start guide - [Link](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html)
- Sample application for Promptflow + AutoGen integration - [Link](https://github.com/microsoft/autogen/tree/main/samples/apps/promptflow-autogen)
## Sample Flow

![Sample Promptflow](./img/ecosystem-promptflow.png)
---

*Source: `autogen/website/docs/ecosystem/agentops.md`*
# Agent Monitoring and Debugging with AgentOps

<img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/logo/banner-badge.png?raw=true" style="width: 40%;" alt="AgentOps logo"/>

[AgentOps](https://agentops.ai/?=autogen) provides session replays, metrics, and monitoring for AI agents.

At a high level, AgentOps gives you the ability to monitor LLM calls, costs, latency, agent failures, multi-agent interactions, tool usage, session-wide statistics, and more. For more info, check out the [AgentOps Repo](https://github.com/AgentOps-AI/agentops).

| | |
| --- | --- |
| 📊 **Replay Analytics and Debugging** | Step-by-step agent execution graphs |
| 💸 **LLM Cost Management** | Track spend with LLM foundation model providers |
| 🧪 **Agent Benchmarking** | Test your agents against 1,000+ evals |
| 🔐 **Compliance and Security** | Detect common prompt injection and data exfiltration exploits |
| 🤝 **Framework Integrations** | Native integrations with CrewAI, AutoGen, & LangChain |

<details open>
  <summary><b><u>Agent Dashboard</u></b></summary>
  <a href="https://app.agentops.ai?ref=gh">
    <img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/app_screenshots/overview.png?raw=true" style="width: 70%;" alt="Agent Dashboard"/>
  </a>
</details>

<details>
  <summary><b><u>Session Analytics</u></b></summary>
  <a href="https://app.agentops.ai?ref=gh">
    <img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/app_screenshots/session-overview.png?raw=true" style="width: 70%;" alt="Session Analytics"/>
  </a>
</details>

<details>
  <summary><b><u>Session Replays</u></b></summary>
  <a href="https://app.agentops.ai?ref=gh">
    <img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/app_screenshots/session-replay.png?raw=true" style="width: 70%;" alt="Session Replays"/>
  </a>
</details>
## Installation

AgentOps works seamlessly with applications built using AutoGen.

1. **Install AgentOps**

   ```bash
   pip install agentops
   ```

2. **Create an API Key:** Create a user API key here: [Create API Key](https://app.agentops.ai/settings/projects)

3. **Configure Your Environment:** Add your API key to your environment variables

   ```
   AGENTOPS_API_KEY=<YOUR_AGENTOPS_API_KEY>
   ```

4. **Initialize AgentOps**

   To start tracking all available data on AutoGen runs, simply add two lines of code before implementing AutoGen.

   ```python
   import agentops

   agentops.init()  # Or: agentops.init(api_key="your-api-key-here")
   ```

After initializing AgentOps, AutoGen will now start automatically tracking your agent runs.
## Features

- **LLM Costs**: Track spend with foundation model providers
- **Replay Analytics**: Watch step-by-step agent execution graphs
- **Recursive Thought Detection**: Identify when agents fall into infinite loops
- **Custom Reporting**: Create custom analytics on agent performance
- **Analytics Dashboard**: Monitor high-level statistics about agents in development and production
- **Public Model Testing**: Test your agents against benchmarks and leaderboards
- **Custom Tests**: Run your agents against domain-specific tests
- **Time Travel Debugging**: Save snapshots of session states to rewind and replay agent runs from chosen checkpoints
- **Compliance and Security**: Create audit logs and detect potential threats such as profanity and PII leaks
- **Prompt Injection Detection**: Identify potential code injection and secret leaks
## AutoGen + AgentOps examples

- [AgentChat with AgentOps Notebook](/docs/notebooks/agentchat_agentops)
- [More AgentOps Examples](https://docs.agentops.ai/v1/quickstart)
## Extra links

- [🐦 Twitter](https://twitter.com/agentopsai/)
- [📢 Discord](https://discord.gg/JHPt4C7r)
- [🖇️ AgentOps Dashboard](https://app.agentops.ai/ref?=autogen)
- [📙 Documentation](https://docs.agentops.ai/introduction)
---

*Source: `autogen/website/docs/installation/Docker.md`*
# Docker

Docker, an indispensable tool in modern software development, offers a compelling solution for AutoGen's setup. Docker allows you to create consistent environments that are portable and isolated from the host OS. With Docker, everything AutoGen needs to run, from the operating system to specific libraries, is encapsulated in a container, ensuring uniform functionality across different systems. The Dockerfiles necessary for AutoGen are conveniently located in the project's GitHub repository at [https://github.com/microsoft/autogen/tree/main/.devcontainer](https://github.com/microsoft/autogen/tree/main/.devcontainer).

**Pre-configured Dockerfiles**: The AutoGen project offers pre-configured Dockerfiles for your use. These Dockerfiles will run as is; however, they can be modified to suit your development needs. Please see the README.md file in `autogen/.devcontainer`.

- **autogen_base_img**: For a basic setup, you can use the `autogen_base_img` to run simple scripts or applications. This is ideal for general users or those new to AutoGen.
- **autogen_full_img**: Advanced users or those requiring more features can use `autogen_full_img`. Be aware that this version loads ALL THE THINGS and thus is very large. Take this into consideration if you build your application off of it.
## Step 1: Install Docker

- **General Installation**: Follow the [official Docker installation instructions](https://docs.docker.com/get-docker/). This is your first step towards a containerized environment, ensuring a consistent and isolated workspace for AutoGen.
- **For Mac Users**: If you encounter issues with the Docker daemon, consider using [colima](https://smallsharpsoftwaretools.com/tutorials/use-colima-to-run-docker-containers-on-macos/). Colima offers a lightweight alternative to manage Docker containers efficiently on macOS.
## Step 2: Build a Docker Image

AutoGen now provides updated Dockerfiles tailored for different needs. Building a Docker image is akin to setting the foundation for your project's environment:

- **AutoGen Basic**: Ideal for general use, this setup includes common Python libraries and essential dependencies. Perfect for those just starting with AutoGen.

  ```bash
  docker build -f .devcontainer/Dockerfile -t autogen_base_img https://github.com/microsoft/autogen.git#main
  ```

- **AutoGen Advanced**: For advanced users or those requiring everything AutoGen has to offer, use `autogen_full_img`.

  ```bash
  docker build -f .devcontainer/full/Dockerfile -t autogen_full_img https://github.com/microsoft/autogen.git#main
  ```
## Step 3: Run AutoGen Applications from Docker Image

Here's how you can run an application built with AutoGen, using the Docker image:

1. **Mount Your Directory**: Use the Docker `-v` flag to mount your local application directory to the Docker container. This allows you to develop on your local machine while running the code in a consistent Docker environment. For example:

   ```bash
   docker run -it -v $(pwd)/myapp:/home/autogen/autogen/myapp autogen_base_img:latest python /home/autogen/autogen/myapp/main.py
   ```

   Here, `$(pwd)/myapp` is your local directory, and `/home/autogen/autogen/myapp` is the path in the Docker container where your code will be located.

2. **Mount your code**: Now suppose you have your application built with AutoGen in a main script named `twoagent.py` ([example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py)) in a folder named `myapp`. With the command line below, you can mount your folder and run the application in Docker.

   ```bash
   # Mount the local folder `myapp` into the Docker container and run the script named "twoagent.py".
   docker run -it -v `pwd`/myapp:/myapp autogen_base_img:latest python /myapp/twoagent.py
   ```

3. **Port Mapping**: If your application requires a specific port, use the `-p` flag to map the container's port to your host. For instance, if your app runs on port 3000 inside Docker and you want it accessible on port 8080 on your host machine:

   ```bash
   docker run -it -p 8080:3000 -v $(pwd)/myapp:/myapp autogen_base_img:latest python /myapp
   ```

   In this command, `-p 8080:3000` maps port 3000 from the container to port 8080 on your local machine.

4. **Examples of Running Different Applications**: Here is the basic format of the docker run command:

   ```bash
   docker run -it -p {WorkstationPortNum}:{DockerPortNum} -v {WorkStation_Dir}:{Docker_DIR} {name_of_the_image} {bash/python} {Docker_path_to_script_to_execute}
   ```

   - _Simple Script_: Run a Python script located in your local `myapp` directory.

     ```bash
     docker run -it -v `pwd`/myapp:/myapp autogen_base_img:latest python /myapp/my_script.py
     ```

   - _Web Application_: If your application includes a web server running on port 5000.

     ```bash
     docker run -it -p 8080:5000 -v $(pwd)/myapp:/myapp autogen_base_img:latest
     ```

   - _Data Processing_: For tasks that involve processing data stored in a local directory.

     ```bash
     docker run -it -v $(pwd)/data:/data autogen_base_img:latest python /myapp/process_data.py
     ```
## Additional Resources

- Details on all the Dockerfile options can be found in the [Dockerfile](https://github.com/microsoft/autogen/blob/main/.devcontainer/README.md) README.
- For more information on Docker usage and best practices, refer to the [official Docker documentation](https://docs.docker.com).
- Details on how to use the Dockerfile dev version can be found in the [Contributor Guide](/docs/contributor-guide/docker).
---

*Source: `autogen/website/docs/installation/Optional-Dependencies.md`*
# Optional Dependencies
## LLM Caching

To use LLM caching with Redis, you need to install the Python package with the option `redis`:

```bash
pip install "autogen-agentchat[redis]~=0.2"
```

See [LLM Caching](/docs/topics/llm-caching) for details.
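As a brief, hedged illustration of what this looks like in code, using the `Cache.redis` context manager documented in the LLM Caching guide linked above (the agent setup, message, and Redis URL here are assumptions):

```python
import os

from autogen import AssistantAgent, UserProxyAgent
from autogen.cache import Cache

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user = UserProxyAgent("user", code_execution_config=False, human_input_mode="NEVER")

# Cache LLM responses in a local Redis instance (assumed to be on the default port).
with Cache.redis(redis_url="redis://localhost:6379/0") as cache:
    user.initiate_chat(assistant, message="What is 123 * 456?", cache=cache)
```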
## IPython Code Executor

To use the IPython code executor, you need to install the `jupyter-client` and `ipykernel` packages:

```bash
pip install "autogen-agentchat[ipython]~=0.2"
```

To use the IPython code executor:

```python
from autogen import UserProxyAgent

proxy = UserProxyAgent(name="proxy", code_execution_config={"executor": "ipython-embedded"})
```
## blendsearch

`pyautogen<0.2` offers a cost-effective hyperparameter optimization technique, [EcoOptiGen](https://arxiv.org/abs/2303.04673), for tuning Large Language Models. Please install with the [blendsearch] option to use it.

```bash
pip install "pyautogen[blendsearch]<0.2"
```

Example notebooks:

- [Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb)
- [Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb)
## retrievechat

AutoGen 0.2 supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it with ChromaDB.

```bash
pip install "autogen-agentchat[retrievechat]"
```

*You'll need to install `chromadb<=0.5.0` if you see issues like [#3551](https://github.com/microsoft/autogen/issues/3551).*

Alternatively, AutoGen 0.2 also supports PGVector and Qdrant, which can be installed in place of ChromaDB, or alongside it:

```bash
pip install "autogen-agentchat[retrievechat-pgvector]~=0.2"
```

```bash
pip install "autogen-agentchat[retrievechat-qdrant]~=0.2"
```

RetrieveChat can handle various types of documents. By default, it can process plain text and PDF files, including formats such as 'txt', 'json', 'csv', 'tsv', 'md', 'html', 'htm', 'rtf', 'rst', 'jsonl', 'log', 'xml', 'yaml', 'yml' and 'pdf'. If you install [unstructured](https://unstructured-io.github.io/unstructured/installation/full_installation.html) (`pip install "unstructured[all-docs]"`), additional document types such as 'docx', 'doc', 'odt', 'pptx', 'ppt', 'xlsx', 'eml', 'msg', and 'epub' will also be supported. You can find a list of all supported document types by using `autogen.retrieve_utils.TEXT_FORMATS` (see the snippet after the notebook list below).

Example notebooks:

- [Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb)
- [Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_RAG.ipynb)
- [Automated Code Generation and Question Answering with Qdrant based Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb)
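As referenced above, you can inspect the supported formats at runtime:

```python
from autogen.retrieve_utils import TEXT_FORMATS

# Prints the list of document extensions RetrieveChat can ingest.
print(TEXT_FORMATS)
```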
## Teachability

To use Teachability, please install AutoGen with the [teachable] option.

```bash
pip install "autogen-agentchat[teachable]~=0.2"
```

Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb)
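A minimal sketch of wiring the capability onto an agent, based on the pattern in the linked notebook (the agent setup and database path here are assumptions):

```python
import os

from autogen import ConversableAgent
from autogen.agentchat.contrib.capabilities.teachability import Teachability

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]

agent = ConversableAgent("teachable_agent", llm_config={"config_list": config_list})

# Attach long-term memory; learned facts persist in a local database across sessions.
teachability = Teachability(reset_db=False, path_to_db_dir="./tmp/teachability_db")
teachability.add_to_agent(agent)
```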
## Large Multimodal Model (LMM) Agents

AutoGen offers a Multimodal Conversable Agent and a LLaVA Agent. Please install with the [lmm] option to use them.

```bash
pip install "autogen-agentchat[lmm]~=0.2"
```

Example notebook: [LLaVA Agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb)
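As a hedged sketch of how the multimodal agent is instantiated (the GPT-4V config below is an assumption; see the linked notebook for a complete example):

```python
import os

from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent

# Assumed vision-capable model config.
config_list_4v = [{"model": "gpt-4-vision-preview", "api_key": os.environ["OPENAI_API_KEY"]}]

# An agent that can reason over images referenced with <img ...> tags in messages.
image_agent = MultimodalConversableAgent(
    name="image-explainer",
    llm_config={"config_list": config_list_4v, "max_tokens": 300},
)
```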
## mathchat

`pyautogen<0.2` offers an experimental agent for math problem solving. Please install with the [mathchat] option to use it.

```bash
pip install "pyautogen[mathchat]<0.2"
```

Example notebook: [Using MathChat to Solve Math Problems](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat.ipynb)
## Graph

To use a graph in `GroupChat`, particularly for graph visualization, please install AutoGen with the [graph] option.

```bash
pip install "autogen-agentchat[graph]~=0.2"
```

Example notebook: [Finite State Machine graphs to set speaker transition constraints](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine)
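To illustrate the speaker-transition constraints the linked notebook covers, here is a minimal sketch (the agent names and transition graph are assumptions for illustration):

```python
import os

import autogen

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]
llm_config = {"config_list": config_list}

planner = autogen.AssistantAgent("planner", llm_config=llm_config)
coder = autogen.AssistantAgent("coder", llm_config=llm_config)
reviewer = autogen.AssistantAgent("reviewer", llm_config=llm_config)

# Directed graph of allowed speaker transitions: planner -> coder -> reviewer -> planner.
allowed_transitions = {
    planner: [coder],
    coder: [reviewer],
    reviewer: [planner],
}

groupchat = autogen.GroupChat(
    agents=[planner, coder, reviewer],
    messages=[],
    max_round=12,
    allowed_or_disallowed_speaker_transitions=allowed_transitions,
    speaker_transitions_type="allowed",
)
```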
## Long Context Handling

AutoGen includes support for handling long textual contexts by leveraging the LLMLingua library for text compression. To enable this functionality, please install AutoGen with the `[long-context]` option:

```bash
pip install "autogen-agentchat[long-context]~=0.2"
```
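A hedged sketch of attaching LLMLingua-based compression to an agent via AutoGen 0.2's message-transform capability; treat the exact class locations and the agent setup as assumptions based on the `transform_messages` documentation:

```python
import os

from autogen import AssistantAgent
from autogen.agentchat.contrib.capabilities.text_compressors import LLMLingua
from autogen.agentchat.contrib.capabilities.transform_messages import TransformMessages
from autogen.agentchat.contrib.capabilities.transforms import TextMessageCompressor

assistant = AssistantAgent(
    "assistant",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]},
)

# Compress long message histories with LLMLingua before they reach the LLM.
text_compressor = TextMessageCompressor(text_compressor=LLMLingua())
TransformMessages(transforms=[text_compressor]).add_to_agent(assistant)
```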
---

*Source: `autogen/website/docs/autogen-studio/getting-started.md`*
# AutoGen Studio - Getting Started

[![PyPI version](https://badge.fury.io/py/autogenstudio.svg)](https://badge.fury.io/py/autogenstudio) [![Downloads](https://static.pepy.tech/badge/autogenstudio/week)](https://pepy.tech/project/autogenstudio)

![ARA](./img/ara_stockprices.png)

AutoGen Studio is a low-code interface built to help you rapidly prototype AI agents, enhance them with skills, compose them into workflows and interact with them to accomplish tasks. It is built on top of the [AutoGen](https://microsoft.github.io/autogen) framework, which is a toolkit for building AI agents. Code for AutoGen Studio is on GitHub at [microsoft/autogen](https://github.com/microsoft/autogen/tree/main/samples/apps/autogen-studio).

> **Note**: AutoGen Studio is meant to help you rapidly prototype multi-agent workflows and demonstrate an example of end user interfaces built with AutoGen. It is not meant to be a production-ready app. Developers are encouraged to use the AutoGen framework to build their own applications, implementing authentication, security and other features required for deployed applications.

**Updates**

- April 17: The AutoGen Studio database layer is now rewritten to use [SQLModel](https://sqlmodel.tiangolo.com/) (Pydantic + SQLAlchemy). This provides entity linking (skills, models, agents and workflows are linked via association tables) and supports multiple [database backend dialects](https://docs.sqlalchemy.org/en/20/dialects/) supported in SQLAlchemy (SQLite, PostgreSQL, MySQL, Oracle, Microsoft SQL Server). The backend database can be specified with a `--database-uri` argument when running the application. For example, `autogenstudio ui --database-uri sqlite:///database.sqlite` for SQLite and `autogenstudio ui --database-uri postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL.
- March 12: The default directory for AutoGen Studio is now `/home/<user>/.autogenstudio`. You can also specify this directory using the `--appdir` argument when running the application, for example, `autogenstudio ui --appdir /path/to/folder`. This will store the database and other files in the specified directory, e.g. `/path/to/folder/database.sqlite`. `.env` files in that directory will be used to set environment variables for the app.

### Installation

There are two ways to install AutoGen Studio - from PyPi or from source. We **recommend installing from PyPi** unless you plan to modify the source code.

1. **Install from PyPi**

   We recommend using a virtual environment (e.g., conda) to avoid conflicts with existing Python packages. With Python 3.10 or newer active in your virtual environment, use pip to install AutoGen Studio:

   ```bash
   pip install autogenstudio
   ```

2. **Install from Source**

   > Note: This approach requires some familiarity with building interfaces in React.

   If you prefer to install from source, ensure you have Python 3.10+ and Node.js (version above 14.15.0) installed. Here's how you get started:

   - Clone the AutoGen Studio repository and install its Python dependencies:

     ```bash
     pip install -e .
     ```

   - Navigate to the `samples/apps/autogen-studio/frontend` directory, install dependencies, and build the UI:

     ```bash
     npm install -g gatsby-cli
     npm install --global yarn
     cd frontend
     yarn install
     yarn build
     ```

   For Windows users, you may need alternative commands to build the frontend:

   ```bash
   gatsby clean && rmdir /s /q ..\autogenstudio\web\ui 2>nul & (set "PREFIX_PATH_VALUE=" || ver>nul) && gatsby build --prefix-paths && xcopy /E /I /Y public ..\autogenstudio\web\ui
   ```

### Running the Application

Once installed, run the web UI by entering the following in your terminal:

```bash
autogenstudio ui --port 8081
```

This will start the application on the specified port. Open your web browser and go to `http://localhost:8081/` to begin using AutoGen Studio.

AutoGen Studio also takes several parameters to customize the application:

- `--host <host>` argument to specify the host address. By default, it is set to `localhost`.
- `--appdir <appdir>` argument to specify the directory where the app files (e.g., database and generated user files) are stored. By default, it is set to a `.autogenstudio` directory in the user's home directory.
- `--port <port>` argument to specify the port number. By default, it is set to `8080`.
- `--reload` argument to enable auto-reloading of the server when changes are made to the code. By default, it is set to `False`.
- `--database-uri` argument to specify the database URI. Example values include `sqlite:///database.sqlite` for SQLite and `postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL. If this is not specified, the database URI defaults to a `database.sqlite` file in the `--appdir` directory.

Now that you have AutoGen Studio installed and running, you are ready to explore its capabilities, including defining and modifying agent workflows, interacting with agents and sessions, and expanding agent skills.

### Capabilities / Roadmap

Some of the capabilities supported by the app frontend include the following:

- [x] Build / Configure agents (currently supports two agent workflows based on `UserProxyAgent` and `AssistantAgent`), modify their configuration (e.g. skills, temperature, model, agent system message, model etc) and compose them into workflows.
- [x] Chat with agent workflows and specify tasks.
- [x] View agent messages and output files in the UI from agent runs.
- [x] Support for more complex agent workflows (e.g. `GroupChat` and `Sequential` workflows).
- [x] Improved user experience (e.g., streaming intermediate model output, better summarization of agent responses, etc).

Review the project roadmap and issues [here](https://github.com/microsoft/autogen/issues/737).

Project Structure:

- _autogenstudio/_ code for the backend classes and web api (FastAPI)
- _frontend/_ code for the webui, built with Gatsby and TailwindCSS
## Contribution Guide

We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project:

- Review the overall AutoGen project [contribution guide](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing).
- Please review the AutoGen Studio [roadmap](https://github.com/microsoft/autogen/issues/737) to get a sense of the current priorities for the project. Help is appreciated especially with Studio issues tagged with `help-wanted`.
- Please initiate a discussion on the roadmap issue or a new issue to discuss your proposed contribution.
- Please review the autogenstudio [dev branch](https://github.com/microsoft/autogen/tree/autogenstudio) and use it as a base for your contribution. This way, your contribution will be aligned with the latest changes in the AutoGen Studio project.
- Submit a pull request with your contribution!
- If you are modifying AutoGen Studio, it has its own devcontainer. See instructions in `.devcontainer/README.md` to use it.
- Please use the tag `studio` for any issues, questions, and PRs related to Studio.
## A Note on Security

AutoGen Studio is a research prototype and is not meant to be used in a production environment. Some baseline practices are encouraged, e.g., using a Docker code-execution environment for your agents. However, other considerations, such as rigorous tests related to jailbreaking, ensuring LLMs only have access to the right keys of data given the end user's permissions, and other security features, are not implemented in AutoGen Studio. If you are building a production application, please use the AutoGen framework and implement the necessary security features.
## Acknowledgements

AutoGen Studio is based on the [AutoGen](https://microsoft.github.io/autogen) project. It was adapted from a research prototype built in October 2023 (original credits: Gagan Bansal, Adam Fourney, Victor Dibia, Piali Choudhury, Saleema Amershi, Ahmed Awadallah, Chi Wang).
---

*Source: `autogen/website/docs/autogen-studio/faqs.md`*
# AutoGen Studio FAQs
## Q: How do I specify the directory where files (e.g., the database) are stored?

A: You can specify the directory where files are stored by setting the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. This will store the database (default) and other files in the specified directory, e.g. `/path/to/folder/database.sqlite`.
## Q: Where can I adjust the default skills, agent and workflow configurations?

A: You can modify agent configurations directly from the UI or by editing the `init_db_samples` function in the `autogenstudio/database/utils.py` file, which is used to initialize the database.
## Q: If I want to reset the entire conversation with an agent, how do I go about it?

A: To reset your conversation history, you can delete the `database.sqlite` file in the `--appdir` directory. This will reset the entire conversation history. To delete user files, you can delete the `files` directory in the `--appdir` directory.
## Q: Is it possible to view the output and messages generated by the agents during interactions?

A: Yes, you can view the generated messages in the debug console of the web UI, providing insights into the agent interactions. Alternatively, you can inspect the `database.sqlite` file for a comprehensive record of messages.
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
## Q: Can I use other models with AutoGen Studio?

A: Yes. AutoGen standardizes on the OpenAI model API format, and you can use any API server that offers an OpenAI-compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can input your model endpoint details, including `model`, `api key`, `base url`, `model type` and `api version`. For Azure OpenAI models, you can find these details in the Azure portal. Note that for Azure OpenAI, the `model name` is the deployment id or engine, and the `model type` is "azure". For other OSS models, we recommend using a server such as vLLM, LMStudio, or Ollama to instantiate an OpenAI-compliant endpoint.
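As a rough sketch of how these fields map onto AutoGen's `config_list` convention, the entries below are illustrative; the deployment name, resource URL, API version, and local model name are placeholders, not values from this documentation:

```python
# hypothetical config entries illustrating the fields described above
config_list = [
    {
        # Azure OpenAI: "model" is the deployment id, "api_type" is "azure"
        "model": "my-gpt4-deployment",
        "api_key": "<azure-api-key>",
        "base_url": "https://<resource-name>.openai.azure.com",
        "api_type": "azure",
        "api_version": "2024-02-01",
    },
    {
        # local OSS model behind an OpenAI-compliant server (e.g., vLLM or Ollama)
        "model": "llama-3-8b-instruct",
        "api_key": "NULL",  # placeholder; local servers often ignore the key
        "base_url": "http://localhost:8000/v1",
    },
]
```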
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
## Q: The server starts but I can't access the UI

A: If you are running the server on a remote machine (or a local machine that fails to resolve localhost correctly), you may need to specify the host address. By default, the host address is set to `localhost`. You can specify the host address using the `--host <host>` argument. For example, to start the server on port 8081 and bind it to all network interfaces so that it is accessible from other machines on the network, you can run the following command:

```bash
autogenstudio ui --port 8081 --host 0.0.0.0
```
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
## Q: Can I export my agent workflows for use in a python app?

A: Yes. In the Build view, you can click the export button to save your agent workflow as a JSON file. This file can be imported in a python application using the `WorkflowManager` class. For example:

```python
from autogenstudio import WorkflowManager

# load workflow from an exported json workflow file.
workflow_manager = WorkflowManager(workflow="path/to/your/workflow_.json")

# run the workflow on a task
task_query = "What is the height of the Eiffel Tower? Don't write code, just respond to the question."
workflow_manager.run(message=task_query)
```
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
## Q: Can I deploy my agent workflows as APIs?

A: Yes. You can launch a workflow as an API endpoint from the command line using the `autogenstudio` command-line tool. For example:

```bash
autogenstudio serve --workflow=workflow.json --port=5000
```

Similarly, the workflow launch command above can be wrapped into a Dockerfile that can be deployed on cloud services like Azure Container Apps or Azure Web Apps.
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
## Q: Can I run AutoGen Studio in a Docker container?

A: Yes, you can run AutoGen Studio in a Docker container. You can build the Docker image using the provided [Dockerfile](https://github.com/microsoft/autogen/blob/autogenstudio/samples/apps/autogen-studio/Dockerfile), which looks like the following:

```Dockerfile
FROM python:3.10

WORKDIR /code

RUN pip install -U gunicorn autogenstudio

RUN useradd -m -u 1000 user

USER user

ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH \
    AUTOGENSTUDIO_APPDIR=/home/user/app

WORKDIR $HOME/app

COPY --chown=user . $HOME/app

CMD gunicorn -w $((2 * $(getconf _NPROCESSORS_ONLN) + 1)) --timeout 12600 -k uvicorn.workers.UvicornWorker autogenstudio.web.app:app --bind "0.0.0.0:8081"
```

Using Gunicorn as the application server is recommended for improved performance. To run AutoGen Studio with Gunicorn directly, you can use the following command:

```bash
gunicorn -w $((2 * $(getconf _NPROCESSORS_ONLN) + 1)) --timeout 12600 -k uvicorn.workers.UvicornWorker autogenstudio.web.app:app --bind "0.0.0.0:8081"
```
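To build the image and run the container, something along these lines should work; the image tag `autogenstudio` is arbitrary, and the port mapping assumes the `0.0.0.0:8081` bind used above:

```bash
# build the image from the directory containing the Dockerfile
docker build -t autogenstudio .

# run the container, mapping the bound port 8081 to the host
docker run -it -p 8081:8081 autogenstudio
```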
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
# Using AutoGen Studio

AutoGen Studio supports the declarative creation of agent workflows; tasks can be specified and run in a chat interface for the agents to complete. The expected usage behavior is that developers can create skills and models, _attach_ them to agents, and compose agents into workflows that can be tested interactively in the chat interface.
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
## Building an Agent Workflow

AutoGen Studio implements several entities that are ultimately composed into a workflow.

### Skills

A skill is a python function that implements the solution to a task. In general, a good skill has a descriptive name (e.g. `generate_images`), extensive docstrings and good defaults (e.g., writing out files to disk for persistence and reuse). Skills can be _associated with_ or _attached to_ agent specifications. A minimal sketch of what such a function might look like is shown at the end of this section.

![AutoGen Studio Skill Interface](./img/skill.png)

### Models

A model refers to the configuration of an LLM. Similar to skills, a model can be attached to an agent specification. The AutoGen Studio interface supports multiple model types, including OpenAI models (and any other model endpoint provider that supports the OpenAI endpoint specification), Azure OpenAI models and Gemini models.

![AutoGen Studio Create new model](./img/model_new.png)
![AutoGen Studio Create new model](./img/model_openai.png)

### Agents

An agent entity declaratively specifies properties for an AutoGen agent (it mirrors most, but not all, of the members of a base AutoGen conversable agent class). Currently the `UserProxyAgent`, `AssistantAgent` and `GroupChat` agent abstractions are supported.

![AutoGen Studio Create new agent](./img/agent_new.png)
![AutoGen Studio Create an assistant agent](./img/agent_groupchat.png)

Once agents have been created, existing models or skills can be _added_ to the agent.

![AutoGen Studio Add skills and models to agent](./img/agent_skillsmodel.png)

### Workflows

An agent workflow is a specification of a set of agents (a team of agents) that can work together to accomplish a task. AutoGen Studio supports two types of high-level workflow patterns:

#### Autonomous Chat

This workflow implements a paradigm where agents are defined and a chat is initiated between the agents to accomplish a task. AutoGen simplifies this into defining an `initiator` agent and a `receiver` agent, where the receiver agent is selected from a list of previously created agents. Note that when the receiver is a `GroupChat` agent (i.e., contains multiple agents), the communication pattern between those agents is determined by the `speaker_selection_method` parameter in the `GroupChat` agent configuration.

![AutoGen Studio Autonomous Chat Workflow](./img/workflow_chat.png)

#### Sequential Chat

This workflow allows users to specify a list of `AssistantAgent` agents that are executed in sequence to accomplish a task. The runtime behavior follows this pattern: at each step, each `AssistantAgent` is _paired_ with a `UserProxyAgent` and a chat is initiated between this pair to process the input task. The result of this exchange is summarized and provided to the next `AssistantAgent`, which is also paired with a `UserProxyAgent`, and their summarized result is passed to the next `AssistantAgent` in the sequence. This continues until the last `AssistantAgent` in the sequence is reached.

![AutoGen Studio Sequential Workflow](./img/workflow_sequential.png)
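As an illustration of the skill guidelines above, here is a minimal sketch of a skill. The function name, the placeholder logic, and the default output path are hypothetical and only demonstrate the shape of a well-formed skill, not anything shipped with AutoGen Studio:

```python
from pathlib import Path


def generate_and_save_poem(prompt: str, output_file: str = "poem.txt") -> str:
    """
    Generate a short poem-like text for the given prompt and save it to disk.

    A good skill, as described above, has a descriptive name, an extensive
    docstring (agents rely on it to decide when to call the skill), and
    sensible defaults such as writing results to a file for persistence.

    :param prompt: subject of the poem.
    :param output_file: path of the file to write the result to.
    :return: the path of the saved file.
    """
    # placeholder logic; a real skill would do something useful here
    poem = f"Ode to {prompt}:\nlines of generated text would go here."
    Path(output_file).write_text(poem, encoding="utf-8")
    return output_file
```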
<!--
```
Plot a chart of NVDA and TESLA stock price YTD. Save the result to a file named nvda_tesla.png
```

The agent workflow responds by _writing and executing code_ to create a python program to generate the chart with the stock prices.

> Note that there could be multiple turns between the `AssistantAgent` and the `UserProxyAgent` to produce and execute the code in order to complete the task.

![ARA](./img/ara_stockprices.png)

> Note: You can also view the debug console that generates useful information to see how the agents are interacting in the background.
-->

<!--
- Build: Users begin by constructing their workflows. They may incorporate previously developed skills/models into agents within the workflow. Users can immediately test their workflows in the same view or in a saved session in the playground.

- Playground: Users can start a new session, select an agent workflow, and engage in a "chat" with this agent workflow. It is important to note the significant differences between a traditional chat with a Large Language Model (LLM) and a chat with a group of agents. In the former, the response is typically a single formatted reply, while in the latter, it consists of a history of conversations among the agents.
-->
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
<!-- Entities and Concepts -->
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
## Testing an Agent Workflow

AutoGen Studio allows users to interactively test workflows on tasks and review resulting artifacts (such as images, code, and documents).

![AutoGen Studio Test Workflow](./img/workflow_test.png)

Users can also review the “inner monologue” of agent workflows as they address tasks, and view profiling information such as costs associated with the run (such as number of turns and number of tokens) and agent actions (such as whether tools were called and the outcomes of code execution).

![AutoGen Studio Profile Workflow Results](./img/workflow_profile.png)
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
## Exporting Agent Workflows

Users can download the skills, agents, and workflow configurations they create, as well as share and reuse these artifacts. AutoGen Studio also offers a seamless process to export workflows and deploy them as application programming interfaces (APIs) that can be consumed in other applications.

### Export Workflow

AutoGen Studio allows you to export a selected workflow as a JSON configuration file.

Build -> Workflows -> (On workflow card) -> Export

![AutoGen Studio Export Workflow](./img/workflow_export.png)

### Using AutoGen Studio Workflows in a Python Application

An exported workflow can be easily integrated into any Python application using the `WorkflowManager` class with just two lines of code. Underneath, the `WorkflowManager` rehydrates the workflow specification into AutoGen agents that are subsequently used to address tasks.

```python
from autogenstudio import WorkflowManager

# load workflow from an exported json workflow file.
workflow_manager = WorkflowManager(workflow="path/to/your/workflow_.json")

# run the workflow on a task
task_query = "What is the height of the Eiffel Tower? Don't write code, just respond to the question."
workflow_manager.run(message=task_query)
```

### Deploying AutoGen Studio Workflows as APIs

The workflow can be launched as an API endpoint from the command line using the `autogenstudio` command-line tool.

```bash
autogenstudio serve --workflow=workflow.json --port=5000
```

Similarly, the workflow launch command above can be wrapped into a Dockerfile that can be deployed on cloud services like Azure Container Apps or Azure Web Apps.
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
---
title: Does Model and Inference Parameter Matter in LLM Applications? - A Case Study for MATH
authors: sonichi
tags: [LLM, GPT, research]
---

![level 2 algebra](img/level2algebra.png)

**TL;DR:**

* **Just by tuning inference parameters like model, number of responses, temperature, etc., without changing any model weights or the prompt, the baseline accuracy of untuned gpt-4 can be improved by 20% on high school math competition problems.**
* **For easy problems, the tuned gpt-3.5-turbo model vastly outperformed untuned gpt-4 in accuracy (e.g., 90% vs. 70%) and cost efficiency. For hard problems, the tuned gpt-4 is much more accurate (e.g., 35% vs. 20%) and less expensive than untuned gpt-4.**
* **AutoGen can help with model selection, parameter tuning, and cost-saving in LLM applications.**

Large language models (LLMs) are powerful tools that can generate natural language texts for various applications, such as chatbots, summarization, translation, and more. GPT-4 is currently the state-of-the-art LLM in the world. Is model selection irrelevant? What about inference parameters?

In this blog post, we will explore how the model and inference parameters matter in LLM applications, using a case study on [MATH](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html), a benchmark for evaluating LLMs on advanced mathematical problem solving. MATH consists of 12K math competition problems from AMC-10, AMC-12 and AIME. Each problem is accompanied by a step-by-step solution.

We will use AutoGen to automatically find the best model and inference parameters for LLMs on a given task and dataset, subject to an inference budget, using a novel low-cost search & pruning strategy. AutoGen currently supports all the LLMs from OpenAI, such as GPT-3.5 and GPT-4.

We will use AutoGen to perform model selection and inference parameter tuning. Then we compare the performance and inference cost on solving algebra problems with the untuned gpt-4. We will also analyze how different difficulty levels affect the results.
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
## Experiment Setup

We use AutoGen to select between the following models, with a target inference budget of $0.02 per instance:

- gpt-3.5-turbo, a relatively cheap model that powers the popular ChatGPT app
- gpt-4, the state-of-the-art LLM that costs more than 10 times as much as gpt-3.5-turbo

We adapt the models using 20 examples from the train set, using the problem statement as the input and generating the solution as the output. We use the following inference parameters:

- temperature: The parameter that controls the randomness of the output text. A higher temperature means more diversity but less coherence. We search for the optimal temperature in the range of [0, 1].
- top_p: The parameter that controls the probability mass of the output tokens. Only tokens with a cumulative probability less than or equal to top_p are considered. A lower top_p means more diversity but less coherence. We search for the optimal top_p in the range of [0, 1].
- max_tokens: The maximum number of tokens that can be generated for each output. We search for the optimal max length in the range of [50, 1000].
- n: The number of responses to generate. We search for the optimal n in the range of [1, 100].
- prompt: We use the template: "{problem} Solve the problem carefully. Simplify your answer as much as possible. Put the final answer in \\boxed{{}}." where {problem} will be replaced by the math problem instance.

In this experiment, when n > 1, we find the answer with the highest number of votes among all the responses and then select it as the final answer to compare with the ground truth. For example, if n = 5 and 3 of the responses contain a final answer of 301 while 2 of the responses contain a final answer of 159, we choose 301 as the final answer. This can help with resolving potential errors due to randomness. A short sketch of this voting step appears after this list.

We use the average accuracy and average inference cost as the metrics to evaluate the performance over a dataset. The inference cost of a particular instance is measured by the price per 1K tokens and the number of tokens consumed.
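As a rough sketch of the voting step described above, the snippet below picks the most frequent final answer among n sampled responses; the answer-extraction helper is a simplified assumption that parses the last `\boxed{...}` expression, not the exact parser used in the study:

```python
from collections import Counter


def extract_final_answer(response: str) -> str:
    # simplified assumption: pull the content of the last \boxed{...}
    start = response.rfind(r"\boxed{")
    return response[start + len(r"\boxed{"):response.find("}", start)]


def majority_vote(responses: list[str]) -> str:
    """Pick the most frequent final answer among n sampled responses."""
    answers = [extract_final_answer(r) for r in responses]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer


# e.g., with n = 5: three responses answer 301, two answer 159 -> 301 wins
votes = [r"\boxed{301}", r"\boxed{301}", r"\boxed{301}", r"\boxed{159}", r"\boxed{159}"]
print(majority_vote(votes))  # 301
```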
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
## Experiment Results

The first figure in this blog post shows the average accuracy and average inference cost of each configuration on the level 2 Algebra test set.

Surprisingly, the tuned gpt-3.5-turbo model is selected as the better model and it vastly outperforms untuned gpt-4 in accuracy (92% vs. 70%) with equal or 2.5 times higher inference budget. The same observation can be made on the level 3 Algebra test set.

![level 3 algebra](img/level3algebra.png)

However, the selected model changes on level 4 Algebra.

![level 4 algebra](img/level4algebra.png)

This time gpt-4 is selected as the best model. The tuned gpt-4 achieves much higher accuracy (56% vs. 44%) and lower cost than the untuned gpt-4. On level 5 the result is similar.

![level 5 algebra](img/level5algebra.png)

We can see that AutoGen has found different optimal models and inference parameters for each subset of a particular level, which shows that these parameters matter in cost-sensitive LLM applications and need to be carefully tuned or adapted.

An example notebook to run these experiments can be found at: https://github.com/microsoft/FLAML/blob/v1.2.1/notebook/autogen_chatgpt.ipynb. The experiments were run when AutoGen was a subpackage in FLAML.
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
## Analysis and Discussion

While gpt-3.5-turbo demonstrates competitive accuracy with voted answers on relatively easy algebra problems under the same inference budget, gpt-4 is a better choice for the most difficult problems. In general, through parameter tuning and model selection, we can identify the opportunity to save the expensive model for more challenging tasks, and improve the overall effectiveness of a budget-constrained system.

There are many other alternative ways of solving math problems which we have not covered in this blog post. When there are choices beyond the inference parameters, they can generally be tuned via [`flaml.tune`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function).

The need for model selection, parameter tuning and cost saving is not specific to math problems. The [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) project is an example where high cost can easily prevent a generic complex task from being accomplished, as it needs many LLM inference calls.
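As a rough sketch of what tuning such a choice with `flaml.tune` can look like, the snippet below searches a toy space; the evaluation function and its dummy objective are placeholders, not the setup used in this study:

```python
from flaml import tune


def evaluate_config(config: dict) -> dict:
    # placeholder: score the configuration on a validation set
    # (e.g., run the solver with these settings and measure accuracy)
    accuracy = 1.0 - abs(config["temperature"] - 0.5)  # dummy objective
    return {"accuracy": accuracy}


analysis = tune.run(
    evaluate_config,
    config={
        "temperature": tune.uniform(0, 1),
        "n": tune.randint(1, 100),
    },
    metric="accuracy",
    mode="max",
    num_samples=20,
)
print(analysis.best_config)
```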
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
## For Further Reading

* [Research paper about the tuning technique](https://arxiv.org/abs/2303.04673)
* [Documentation about inference tuning](/docs/Use-Cases/enhanced_inference)

*Do you have any experience to share about LLM applications? Would you like to see more support or research on LLM optimization or automation? Please join our [Discord](https://aka.ms/autogen-dc) server for discussion.*
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
---
title: Use AutoGen for Local LLMs
authors: jialeliu
tags: [LLM]
---

**TL;DR:** We demonstrate how to use autogen for local LLM applications. As an example, we will initiate an endpoint using [FastChat](https://github.com/lm-sys/FastChat) and perform inference on [ChatGLMv2-6b](https://github.com/THUDM/ChatGLM2-6B).
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
## Preparations

### Clone FastChat

FastChat provides OpenAI-compatible APIs for its supported models, so you can use FastChat as a local drop-in replacement for OpenAI APIs. However, its code needs minor modification in order to function properly.

```bash
git clone https://github.com/lm-sys/FastChat.git
cd FastChat
```

### Download checkpoint

ChatGLM-6B is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters. ChatGLM2-6B is its second-generation version.

Before downloading from HuggingFace Hub, you need to have Git LFS [installed](https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage).

```bash
git clone https://huggingface.co/THUDM/chatglm2-6b
```
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
## Initiate server

First, launch the controller:

```bash
python -m fastchat.serve.controller
```

Then, launch the model worker(s):

```bash
python -m fastchat.serve.model_worker --model-path chatglm2-6b
```

Finally, launch the RESTful API server:

```bash
python -m fastchat.serve.openai_api_server --host localhost --port 8000
```

Normally this will work. However, if you encounter an error like [this](https://github.com/lm-sys/FastChat/issues/1641), commenting out all the lines containing `finish_reason` in `fastchat/protocol/api_protocol.py` and `fastchat/protocol/openai_api_protocol.py` will fix the problem. The modified code looks like:

```python
class CompletionResponseChoice(BaseModel):
    index: int
    text: str
    logprobs: Optional[int] = None
    # finish_reason: Optional[Literal["stop", "length"]]

class CompletionResponseStreamChoice(BaseModel):
    index: int
    text: str
    logprobs: Optional[float] = None
    # finish_reason: Optional[Literal["stop", "length"]] = None
```
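Once the three processes are up, you can sanity-check the endpoint with a plain OpenAI-style request before wiring it into autogen; this assumes the host and port used above:

```bash
# query the OpenAI-compatible endpoint started above
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "chatglm2-6b", "messages": [{"role": "user", "content": "Hi"}]}'
```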
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
## Interact with model using `oai.Completion` (requires openai<1)

Now the models can be directly accessed through the openai-python library, as well as `autogen.oai.Completion` and `autogen.oai.ChatCompletion`.

```python
from autogen import oai

# create a text completion request
response = oai.Completion.create(
    config_list=[
        {
            "model": "chatglm2-6b",
            "base_url": "http://localhost:8000/v1",
            "api_type": "openai",
            "api_key": "NULL",  # just a placeholder
        }
    ],
    prompt="Hi",
)
print(response)

# create a chat completion request
response = oai.ChatCompletion.create(
    config_list=[
        {
            "model": "chatglm2-6b",
            "base_url": "http://localhost:8000/v1",
            "api_type": "openai",
            "api_key": "NULL",
        }
    ],
    messages=[{"role": "user", "content": "Hi"}],
)
print(response)
```

If you would like to switch to different models, download their checkpoints and specify the model path when launching the model worker(s).
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
## Interacting with multiple local LLMs

If you would like to interact with multiple LLMs on your local machine, replace the `model_worker` step above with a multi-model variant:

```bash
python -m fastchat.serve.multi_model_worker \
    --model-path lmsys/vicuna-7b-v1.3 \
    --model-names vicuna-7b-v1.3 \
    --model-path chatglm2-6b \
    --model-names chatglm2-6b
```

The inference code would be:

```python
from autogen import oai

# create a chat completion request
response = oai.ChatCompletion.create(
    config_list=[
        {
            "model": "chatglm2-6b",
            "base_url": "http://localhost:8000/v1",
            "api_type": "openai",
            "api_key": "NULL",
        },
        {
            "model": "vicuna-7b-v1.3",
            "base_url": "http://localhost:8000/v1",
            "api_type": "openai",
            "api_key": "NULL",
        },
    ],
    messages=[{"role": "user", "content": "Hi"}],
)
print(response)
```
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
## For Further Reading

* [Documentation](/docs/Getting-Started) about `autogen`.
* [Documentation](https://github.com/lm-sys/FastChat) about FastChat.
GitHub
autogen
autogen/.github/PULL_REQUEST_TEMPLATE.md
autogen
<!-- Thank you for your contribution! Please review https://microsoft.github.io/autogen/docs/Contribute before opening a pull request. -->

<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have access to it, we will shortly find a reviewer and assign them to your PR. -->
GitHub
autogen
autogen/.github/PULL_REQUEST_TEMPLATE.md
autogen
## Why are these changes needed?

<!-- Please give a short summary of the change and the problem this solves. -->
GitHub
autogen
autogen/.github/PULL_REQUEST_TEMPLATE.md
autogen
## Related issue number

<!-- For example: "Closes #1234" -->
GitHub
autogen
autogen/.github/PULL_REQUEST_TEMPLATE.md
autogen
## Checks

- [ ] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [ ] I've made sure all auto checks have passed.
GitHub
autogen
autogen/.github/ISSUE_TEMPLATE.md
autogen
### Description

<!-- A clear and concise description of the issue or feature request. -->

### Environment

- AutoGen version: <!-- Specify the AutoGen version (e.g., v0.2.0) -->
- Python version: <!-- Specify the Python version (e.g., 3.8) -->
- Operating System: <!-- Specify the OS (e.g., Windows 10, Ubuntu 20.04) -->

### Steps to Reproduce (for bugs)

<!-- Provide detailed steps to reproduce the issue. Include code snippets, configuration files, or any other relevant information. -->

1. Step 1
2. Step 2
3. ...

### Expected Behavior

<!-- Describe what you expected to happen. -->

### Actual Behavior

<!-- Describe what actually happened. Include any error messages, stack traces, or unexpected behavior. -->

### Screenshots / Logs (if applicable)

<!-- If relevant, include screenshots or logs that help illustrate the issue. -->

### Additional Information

<!-- Include any additional information that might be helpful, such as specific configurations, data samples, or context about the environment. -->

### Possible Solution (if you have one)

<!-- If you have suggestions on how to address the issue, provide them here. -->

### Is this a Bug or Feature Request?

<!-- Choose one: Bug | Feature Request -->

### Priority

<!-- Choose one: High | Medium | Low -->

### Difficulty

<!-- Choose one: Easy | Moderate | Hard -->

### Any related issues?

<!-- If this is related to another issue, reference it here. -->

### Any relevant discussions?

<!-- If there are any discussions or forum threads related to this issue, provide links. -->

### Checklist

<!-- Please check the items that you have completed -->

- [ ] I have searched for similar issues and didn't find any duplicates.
- [ ] I have provided a clear and concise description of the issue.
- [ ] I have included the necessary environment details.
- [ ] I have outlined the steps to reproduce the issue.
- [ ] I have included any relevant logs or screenshots.
- [ ] I have indicated whether this is a bug or a feature request.
- [ ] I have set the priority and difficulty levels.

### Additional Comments

<!-- Any additional comments or context that you think would be helpful. -->
GitHub
autogen
autogen/notebook/contributing.md
autogen
# Contributing
GitHub
autogen
autogen/notebook/contributing.md
autogen
## How to get a notebook displayed on the website

In the notebook metadata, set the `tags` and `description` `front_matter` properties. For example:

```json
{
    "...": "...",
    "metadata": {
        "...": "...",
        "front_matter": {
            "tags": ["code generation", "debugging"],
            "description": "Use conversable language learning model agents to solve tasks and provide automatic feedback through a comprehensive example of writing, executing, and debugging Python code to compare stock price changes."
        }
    }
}
```

**Note**: Notebook metadata can be edited by opening the notebook in a text editor (or "Open With..." -> "Text Editor" in VSCode).

The `tags` field is a list of tags that will be used to categorize the notebook. The `description` field is a brief description of the notebook.
GitHub
autogen
autogen/notebook/contributing.md
autogen
## Best practices for authoring notebooks

The following points are best practices for authoring notebooks to ensure consistency and ease of use for the website.

- The Colab button will be automatically generated on the website for all notebooks where it is missing. Going forward, it is recommended to not include the Colab button in the notebook itself.
- Ensure the header is an `h1` header, i.e. `#`.
- Don't put anything between the yaml and the header.

### Consistency for installation and LLM config

You don't need to explain in depth how to install AutoGen. Unless there are specific instructions for the notebook, just use the following markdown snippet:

``````
````{=mdx}
:::info Requirements
Install `autogen-agentchat`:
```bash
pip install autogen-agentchat~=0.2
```

For more information, please refer to the [installation guide](/docs/installation/).
:::
````
``````

Or if extras are needed:

``````
````{=mdx}
:::info Requirements
Some extra dependencies are needed for this notebook, which can be installed via pip:

```bash
pip install autogen-agentchat[retrievechat]~=0.2 flaml[automl]
```

For more information, please refer to the [installation guide](/docs/installation/).
:::
````
``````

When specifying the config list, to ensure consistency it is best to use approximately the following code:

```python
import autogen

config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
)
```

Then, after the code cell where this is used, include the following markdown snippet:

``````
````{=mdx}
:::tip
Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).
:::
````
``````
GitHub
autogen
autogen/notebook/contributing.md
autogen
## Testing

Notebooks can be tested by running:

```sh
python website/process_notebooks.py test
```

This will automatically scan for all notebooks in the notebook/ and website/ dirs.

To test a specific notebook, pass its path:

```sh
python website/process_notebooks.py test notebook/agentchat_logging.ipynb
```

Options:

- `--timeout` - timeout for a single notebook
- `--exit-on-first-fail` - stop executing further notebooks after the first one fails

### Skip tests

If a notebook needs to be skipped then add to the notebook metadata:

```json
{
    "...": "...",
    "metadata": {
        "skip_test": "REASON"
    }
}
```
GitHub
autogen
autogen/notebook/contributing.md
autogen
## Metadata fields

All possible metadata fields are as follows:

```json
{
    "...": "...",
    "metadata": {
        "...": "...",
        "front_matter": {
            "tags": "List[str] - List of tags to categorize the notebook",
            "description": "str - Brief description of the notebook"
        },
        "skip_test": "str - Reason for skipping the test. If present, the notebook will be skipped during testing",
        "skip_render": "str - Reason for skipping rendering the notebook. If present, the notebook will be left out of the website.",
        "extra_files_to_copy": "List[str] - List of files to copy to the website. The paths are relative to the notebook directory"
    }
}
```
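For instance, a hypothetical notebook that ships an extra image and should not be executed in CI might carry metadata like this; the tag, skip reason, and file name are made up for illustration:

```json
{
    "metadata": {
        "front_matter": {
            "tags": ["code execution"],
            "description": "Demonstrates running generated code in a sandbox."
        },
        "skip_test": "Requires a local Docker daemon",
        "extra_files_to_copy": ["architecture.png"]
    }
}
```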
GitHub
autogen
autogen/dotnet/README.md
autogen
### AutoGen for .NET

[![dotnet-ci](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml/badge.svg)](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml)
[![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core)

> [!NOTE]
> Nightly build is available at:
> - ![Static Badge](https://img.shields.io/badge/public-blue?style=flat) ![Static Badge](https://img.shields.io/badge/nightly-yellow?style=flat) ![Static Badge](https://img.shields.io/badge/github-grey?style=flat): https://nuget.pkg.github.com/microsoft/index.json
> - ![Static Badge](https://img.shields.io/badge/public-blue?style=flat) ![Static Badge](https://img.shields.io/badge/nightly-yellow?style=flat) ![Static Badge](https://img.shields.io/badge/myget-grey?style=flat): https://www.myget.org/F/agentchat/api/v3/index.json
> - ![Static Badge](https://img.shields.io/badge/internal-blue?style=flat) ![Static Badge](https://img.shields.io/badge/nightly-yellow?style=flat) ![Static Badge](https://img.shields.io/badge/azure_devops-grey?style=flat): https://devdiv.pkgs.visualstudio.com/DevDiv/_packaging/AutoGen/nuget/v3/index.json

First, follow the [installation guide](./website/articles/Installation.md) to install the AutoGen packages. Then you can start with the following code snippet to create a conversable agent and chat with it.

```csharp
using AutoGen;
using AutoGen.OpenAI;

var openAIKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY") ?? throw new Exception("Please set OPENAI_API_KEY environment variable.");
var gpt35Config = new OpenAIConfig(openAIKey, "gpt-3.5-turbo");

var assistantAgent = new AssistantAgent(
    name: "assistant",
    systemMessage: "You are an assistant that help user to do some tasks.",
    llmConfig: new ConversableAgentConfig
    {
        Temperature = 0,
        ConfigList = [gpt35Config],
    })
    .RegisterPrintMessage(); // register a hook to print message nicely to console

// set human input mode to ALWAYS so that user always provides input
var userProxyAgent = new UserProxyAgent(
    name: "user",
    humanInputMode: ConversableAgent.HumanInputMode.ALWAYS)
    .RegisterPrintMessage();

// start the conversation
await userProxyAgent.InitiateChatAsync(
    receiver: assistantAgent,
    message: "Hey assistant, please do me a favor.",
    maxRound: 10);
```

#### Samples

You can find more examples under the [sample project](https://github.com/microsoft/autogen/tree/dotnet/dotnet/sample/AutoGen.BasicSamples).

#### Functionality

- ConversableAgent
  - [x] function call
  - [x] code execution (dotnet only, powered by [`dotnet-interactive`](https://github.com/dotnet/interactive))
- Agent communication
  - [x] Two-agent chat
  - [x] Group chat
- [ ] Enhanced LLM Inferences
- Exclusive for dotnet
  - [x] Source generator for type-safe function definition generation

#### Update log

##### Update on 0.0.11 (2024-03-26)

- Add link to Discord channel in nuget's readme.md
- Document improvements

##### Update on 0.0.10 (2024-03-12)

- Rename `Workflow` to `Graph`
- Rename `AddInitializeMessage` to `SendIntroduction`
- Rename `SequentialGroupChat` to `RoundRobinGroupChat`

##### Update on 0.0.9 (2024-03-02)

- Refactor over @AutoGen.Message and introduce `TextMessage`, `ImageMessage`, `MultiModalMessage` and so on. PR [#1676](https://github.com/microsoft/autogen/pull/1676)
- Add `AutoGen.SemanticKernel` to support seamless integration with Semantic Kernel
- Move the agent contract abstraction to the `AutoGen.Core` package. The `AutoGen.Core` package provides the abstraction for message type, agent and group chat, and doesn't contain dependencies on `Azure.AI.OpenAI` or `Semantic Kernel`. This is useful when you want to leverage AutoGen's abstraction only and want to avoid introducing any other dependencies.
- Move `GPTAgent`, `OpenAIChatAgent` and all openai-dependencies to `AutoGen.OpenAI`

##### Update on 0.0.8 (2024-02-28)

- Fix [#1804](https://github.com/microsoft/autogen/pull/1804)
- Streaming support for IAgent [#1656](https://github.com/microsoft/autogen/pull/1656)
- Streaming support for middleware via `MiddlewareStreamingAgent` [#1656](https://github.com/microsoft/autogen/pull/1656)
- Graph chat support with conditional transition workflow [#1761](https://github.com/microsoft/autogen/pull/1761)
- AutoGen.SourceGenerator: Generate `FunctionContract` from `FunctionAttribute` [#1736](https://github.com/microsoft/autogen/pull/1736)

##### Update on 0.0.7 (2024-02-11)

- Add `AutoGen.LMStudio` to support consuming openai-like APIs from a LMStudio local server

##### Update on 0.0.6 (2024-01-23)

- Add `MiddlewareAgent`
- Use `MiddlewareAgent` to implement existing agent hooks (RegisterPreProcess, RegisterPostProcess, RegisterReply)
- Remove `AutoReplyAgent`, `PreProcessAgent`, `PostProcessAgent` because they are replaced by `MiddlewareAgent`

##### Update on 0.0.5

- Simplify the `IAgent` interface by removing the `ChatLLM` property
- Add `GenerateReplyOptions` to `IAgent.GenerateReplyAsync`, which allows the user to specify or override the options when generating a reply

##### Update on 0.0.4

- Move out dependency on Semantic Kernel
- Add type `IChatLLM` as connector to LLM

##### Update on 0.0.3

- In AutoGen.SourceGenerator, rename FunctionAttribution to FunctionAttribute
- In AutoGen, refactor over ConversationAgent, UserProxyAgent, and AssistantAgent

##### Update on 0.0.2

- Update Azure.OpenAI.AI to 1.0.0-beta.12
- Update Semantic Kernel to 1.0.1
GitHub
autogen
autogen/dotnet/website/README.md
autogen
## How to build and run the website

### Prerequisites

- dotnet 7.0 or later

### Build

First, go to the autogen/dotnet folder and run the following commands to build the website:

```bash
dotnet tool restore
dotnet tool run docfx website/docfx.json --serve
```

After the commands are executed, you can open your browser and navigate to `http://localhost:8080` to view the website.