**Source**: `autogen/dotnet/website/articles/OpenAIChatAgent-support-more-messages.md`
By default, @AutoGen.OpenAI.OpenAIChatAgent only supports the @AutoGen.Core.IMessage<T> type, where `T` is the original request or response message type from `Azure.AI.OpenAI`. To support more AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage and so on, you can register the agent with @AutoGen.OpenAI.OpenAIChatRequestMessageConnector. The @AutoGen.OpenAI.OpenAIChatRequestMessageConnector converts messages from AutoGen built-in message types to `Azure.AI.OpenAI.ChatRequestMessage` and vice versa.

First, import the required namespaces:

[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=using_statement)]

Then register the agent with the connector:

[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=register_openai_chat_message_connector)]
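For orientation, the wiring typically looks like the sketch below. This is a minimal sketch rather than the snippet referenced above: the API key, agent name, and model name are placeholder assumptions, and the exact `OpenAIChatAgent` constructor signature may differ between AutoGen.Net versions.

```csharp
using AutoGen.Core;
using AutoGen.OpenAI;
using Azure.AI.OpenAI;

// placeholder credentials and model name
var openAIClient = new OpenAIClient("YOUR_OPENAI_API_KEY");

var agent = new OpenAIChatAgent(
        openAIClient: openAIClient,
        name: "assistant",
        modelName: "gpt-4o-mini")
    .RegisterMiddleware(new OpenAIChatRequestMessageConnector()) // translate built-in message types
    .RegisterPrintMessage(); // print replies nicely to the console

// With the connector registered, AutoGen built-in message types work directly.
var reply = await agent.SendAsync(new TextMessage(Role.User, "Hello"));
```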
---

**Source**: `autogen/dotnet/website/articles/AutoGen.Ollama/Chat-with-llava.md`
This sample shows how to use @AutoGen.Ollama.OllamaAgent to chat with the LLaVA model. To run this example, you need a running Ollama server with the `llava:latest` model installed. For how to set up an Ollama server, please refer to [Ollama](https://ollama.com/).

> [!NOTE]
> You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.Ollama.Sample/Chat_With_LLaVA.cs)

### Step 1: Install AutoGen.Ollama

First, install the AutoGen.Ollama package using the following command:

```bash
dotnet add package AutoGen.Ollama
```

For how to install from the nightly build, please refer to [Installation](../Installation.md).

### Step 2: Add using statements

[!code-csharp[](../../../sample/AutoGen.Ollama.Sample/Chat_With_LLaVA.cs?name=Using)]

### Step 3: Create @AutoGen.Ollama.OllamaAgent

[!code-csharp[](../../../sample/AutoGen.Ollama.Sample/Chat_With_LLaVA.cs?name=Create_Ollama_Agent)]

### Step 4: Start a multimodal chat

LLaVA is a multimodal model that supports both text and image inputs. In this step, we create an image message along with a question about the image.

[!code-csharp[](../../../sample/AutoGen.Ollama.Sample/Chat_With_LLaVA.cs?name=Send_Message)]
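If you don't have the sample to hand, the overall flow resembles the sketch below. It is a sketch under assumptions: the Ollama server address, the image path, and the exact `OllamaAgent` constructor shape are placeholders that may differ from the actual sample.

```csharp
using AutoGen.Core;
using AutoGen.Ollama;
using AutoGen.Ollama.Extension;

// assumes a local Ollama server with llava:latest pulled
using var httpClient = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

var ollamaAgent = new OllamaAgent(
        httpClient: httpClient,
        name: "llava",
        modelName: "llava:latest")
    .RegisterMessageConnector() // translate AutoGen built-in messages to Ollama messages
    .RegisterPrintMessage();

// build a multimodal message: a question plus an image (hypothetical path)
var imageBytes = await File.ReadAllBytesAsync("image.png");
var imageMessage = new ImageMessage(Role.User, BinaryData.FromBytes(imageBytes, "image/png"));
var textMessage = new TextMessage(Role.User, "What's in this image?");

var reply = await ollamaAgent.SendAsync(new MultiModalMessage(Role.User, [textMessage, imageMessage]));
```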
---

**Source**: `autogen/dotnet/website/articles/AutoGen.Ollama/Chat-with-llama.md`
This example shows how to use @AutoGen.Ollama.OllamaAgent to connect to an Ollama server and chat with the LLaMA model. To run this example, you need a running Ollama server with the `llama3:latest` model installed. For how to set up an Ollama server, please refer to [Ollama](https://ollama.com/).

> [!NOTE]
> You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.Ollama.Sample/Chat_With_LLaMA.cs)

### Step 1: Install AutoGen.Ollama

First, install the AutoGen.Ollama package using the following command:

```bash
dotnet add package AutoGen.Ollama
```

For how to install from the nightly build, please refer to [Installation](../Installation.md).

### Step 2: Add using statements

[!code-csharp[](../../../sample/AutoGen.Ollama.Sample/Chat_With_LLaMA.cs?name=Using)]

### Step 3: Create and chat with @AutoGen.Ollama.OllamaAgent

In this step, we create an @AutoGen.Ollama.OllamaAgent and connect it to the Ollama server.

[!code-csharp[](../../../sample/AutoGen.Ollama.Sample/Chat_With_LLaMA.cs?name=Create_Ollama_Agent)]
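As a rough orientation, the steps above boil down to something like the following sketch; the server address and model name are assumptions, and the constructor shape may differ between versions.

```csharp
using AutoGen.Core;
using AutoGen.Ollama;
using AutoGen.Ollama.Extension;

// assumes a local Ollama server with llama3:latest pulled
using var httpClient = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

var ollamaAgent = new OllamaAgent(
        httpClient: httpClient,
        name: "ollama",
        modelName: "llama3:latest")
    .RegisterMessageConnector() // enable AutoGen built-in message types
    .RegisterPrintMessage();

var reply = await ollamaAgent.SendAsync("Can you write a piece of C# code to calculate 100 factorial?");
```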
---

**Source**: `autogen/dotnet/website/articles/AutoGen.SemanticKernel/Use-kernel-plugin-in-other-agents.md`
In Semantic Kernel, a kernel plugin is a collection of kernel functions that can be invoked during LLM calls. Semantic Kernel provides a list of built-in plugins, like [core plugins](https://github.com/microsoft/semantic-kernel/tree/main/dotnet/src/Plugins/Plugins.Core), the [web search plugin](https://github.com/microsoft/semantic-kernel/tree/main/dotnet/src/Plugins/Plugins.Web) and many more. You can also create your own plugins and use them in Semantic Kernel. Kernel plugins greatly extend the capabilities of Semantic Kernel and can be used to perform various tasks like web search, image search, text summarization, etc.

`AutoGen.SemanticKernel` provides a middleware called @AutoGen.SemanticKernel.KernelPluginMiddleware that allows you to use Semantic Kernel plugins in other AutoGen agents like @AutoGen.OpenAI.OpenAIChatAgent. The following example shows how to define a simple plugin with a single `GetWeather` function and use it in @AutoGen.OpenAI.OpenAIChatAgent.

> [!NOTE]
> You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs)

### Step 1: Add using statements

[!code-csharp[](../../../sample/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs?name=Using)]

### Step 2: Create the plugin

In this step, we create a simple plugin with a single `GetWeather` function that takes a location as input and returns the weather information for that location.

[!code-csharp[](../../../sample/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs?name=Create_plugin)]

### Step 3: Create an OpenAIChatAgent and use the plugin

In this step, we first create a @AutoGen.SemanticKernel.KernelPluginMiddleware and register the previous plugin with it. The `KernelPluginMiddleware` loads the plugin and makes its functions available for use in other agents. We then create an @AutoGen.OpenAI.OpenAIChatAgent and register it with the `KernelPluginMiddleware`.

[!code-csharp[](../../../sample/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs?name=Use_plugin)]

### Step 4: Chat with the OpenAIChatAgent

In this final step, we start the chat with the @AutoGen.OpenAI.OpenAIChatAgent by asking for the weather in Seattle. The `OpenAIChatAgent` will use the `GetWeather` function from the plugin to get the weather information for Seattle.

[!code-csharp[](../../../sample/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs?name=Send_message)]
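Condensed, the whole flow looks roughly like the sketch below. The `GetWeather` implementation, API key, and model name are illustrative placeholders, and the exact constructor signatures may differ between versions.

```csharp
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.SemanticKernel;
using Azure.AI.OpenAI;
using Microsoft.SemanticKernel;

// define a single-function plugin (illustrative implementation)
var getWeather = KernelFunctionFactory.CreateFromMethod(
    method: (string location) => $"The weather in {location} is 75 degrees and sunny.",
    functionName: "GetWeather",
    description: "Get the weather for a location.");

var kernel = new Kernel();
var plugin = kernel.CreatePluginFromFunctions("weather", [getWeather]);
var pluginMiddleware = new KernelPluginMiddleware(kernel, plugin);

var agent = new OpenAIChatAgent(
        openAIClient: new OpenAIClient("YOUR_OPENAI_API_KEY"), // placeholder key
        name: "assistant",
        modelName: "gpt-4o-mini")
    .RegisterMessageConnector()           // enable AutoGen built-in message types
    .RegisterMiddleware(pluginMiddleware) // expose the plugin's functions to the agent
    .RegisterPrintMessage();

var reply = await agent.SendAsync("What is the weather in Seattle?");
```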
---

**Source**: `autogen/dotnet/website/articles/AutoGen.SemanticKernel/AutoGen-SemanticKernel-Overview.md`
## AutoGen.SemanticKernel Overview

AutoGen.SemanticKernel is a package that provides seamless integration with Semantic Kernel. It provides the following agents:

- @AutoGen.SemanticKernel.SemanticKernelAgent: A slim wrapper agent over `Kernel` that only supports the original `ChatMessageContent` type via `IMessage<ChatMessageContent>`. To support more AutoGen built-in message types, register the agent with @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector.
- @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent: A slim wrapper agent over `Microsoft.SemanticKernel.Agents.ChatCompletionAgent`.

AutoGen.SemanticKernel also provides the following middleware:

- @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector: A connector that converts messages from AutoGen built-in message types to `ChatMessageContent` and vice versa. At the current stage, it only supports conversation between @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage and @AutoGen.Core.MultiModalMessage. Function call message types like @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage are not supported yet.
- @AutoGen.SemanticKernel.KernelPluginMiddleware: A middleware that allows you to use Semantic Kernel plugins in other AutoGen agents like @AutoGen.OpenAI.OpenAIChatAgent.

### Get started with AutoGen.SemanticKernel

To get started with AutoGen.SemanticKernel, first follow the [installation guide](../Installation.md) to make sure you add the AutoGen feed correctly. Then add the `AutoGen.SemanticKernel` package to your project file.

```xml
<ItemGroup>
    <PackageReference Include="AutoGen.SemanticKernel" Version="AUTOGEN_VERSION" />
</ItemGroup>
```
---

**Source**: `autogen/dotnet/website/articles/AutoGen.SemanticKernel/SemanticKernelAgent-support-more-messages.md`
@AutoGen.SemanticKernel.SemanticKernelAgent only supports the original `ChatMessageContent` type via `IMessage<ChatMessageContent>`. To support more AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage, you can register the agent with @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector. The @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector converts messages from AutoGen built-in message types to `ChatMessageContent` and vice versa.

> [!NOTE]
> At the current stage, @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector only supports conversation for the following built-in @AutoGen.Core.IMessage types:
> - @AutoGen.Core.TextMessage
> - @AutoGen.Core.ImageMessage
> - @AutoGen.Core.MultiModalMessage
>
> Function call message types like @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage are not supported yet.

[!code-csharp[](../../../sample/AutoGen.BasicSamples/CodeSnippet/SemanticKernelCodeSnippet.cs?name=register_semantic_kernel_chat_message_content_connector)]
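In practice, the registration looks something like this sketch; the kernel setup, model id, and API key are assumptions.

```csharp
using AutoGen.Core;
using AutoGen.SemanticKernel;
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "YOUR_OPENAI_API_KEY") // placeholders
    .Build();

var agent = new SemanticKernelAgent(
        kernel: kernel,
        name: "assistant")
    .RegisterMiddleware(new SemanticKernelChatMessageContentConnector()) // translate built-in message types
    .RegisterPrintMessage();

// TextMessage now works because the connector converts it to ChatMessageContent.
var reply = await agent.SendAsync(new TextMessage(Role.User, "Hello"));
```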
---

**Source**: `autogen/dotnet/website/articles/AutoGen.SemanticKernel/SemanticKernelChatAgent-simple-chat.md`
`AutoGen.SemanticKernel` provides built-in support for `ChatCompletionAgent` via @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent. By default, the @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent only supports the original `ChatMessageContent` type via `IMessage<ChatMessageContent>`. To support more AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage, you can register the agent with @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector. The @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector converts messages from AutoGen built-in message types to `ChatMessageContent` and vice versa.

The following step-by-step example shows how to create an @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent and chat with it:

> [!NOTE]
> You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs).

### Step 1: Add using statements

[!code-csharp[](../../../sample/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Using)]

### Step 2: Create a kernel

[!code-csharp[](../../../sample/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Create_Kernel)]

### Step 3: Create a ChatCompletionAgent

[!code-csharp[](../../../sample/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Create_ChatCompletionAgent)]

### Step 4: Create @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent

In this step, we create an @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent and register it with @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector. The @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector converts messages from AutoGen built-in message types to `ChatMessageContent` and vice versa.

[!code-csharp[](../../../sample/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Create_SemanticKernelChatCompletionAgent)]

### Step 5: Chat with @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent

[!code-csharp[](../../../sample/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Send_Message)]
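Put together, steps 2 through 5 look roughly like the following sketch; the model id, API key, and the exact `SemanticKernelChatCompletionAgent` constructor are assumptions that may vary by version.

```csharp
using AutoGen.Core;
using AutoGen.SemanticKernel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "YOUR_OPENAI_API_KEY") // placeholders
    .Build();

var chatCompletionAgent = new ChatCompletionAgent
{
    Kernel = kernel,
    Name = "assistant",
    Instructions = "You are a helpful AI assistant.",
};

var agent = new SemanticKernelChatCompletionAgent(chatCompletionAgent)
    .RegisterMiddleware(new SemanticKernelChatMessageContentConnector()) // enable built-in message types
    .RegisterPrintMessage();

var reply = await agent.SendAsync("Tell me a joke.");
```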
---

**Source**: `autogen/dotnet/website/articles/AutoGen.SemanticKernel/SemanticKernelAgent-simple-chat.md`
You can chat with @AutoGen.SemanticKernel.SemanticKernelAgent using both streaming and non-streaming methods, working with the native `ChatMessageContent` type via `IMessage<ChatMessageContent>`.

The following example shows how to create an @AutoGen.SemanticKernel.SemanticKernelAgent and chat with it using the non-streaming method:

[!code-csharp[](../../../sample/AutoGen.BasicSamples/CodeSnippet/SemanticKernelCodeSnippet.cs?name=create_semantic_kernel_agent)]

@AutoGen.SemanticKernel.SemanticKernelAgent also supports streaming chat via @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync*.

[!code-csharp[](../../../sample/AutoGen.BasicSamples/CodeSnippet/SemanticKernelCodeSnippet.cs?name=create_semantic_kernel_agent_streaming)]
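The streaming consumption loop typically looks like the sketch below. It assumes an `agent` created as above and registered with @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector, so streamed chunks surface as `TextMessageUpdate`; without the connector, chunks would arrive as the native `IMessage<StreamingChatMessageContent>` instead.

```csharp
var question = new TextMessage(Role.User, "Write a short poem about the ocean.");

await foreach (var streamingReply in agent.GenerateStreamingReplyAsync([question]))
{
    // with the connector registered, each chunk is a TextMessageUpdate
    if (streamingReply is TextMessageUpdate update)
    {
        Console.Write(update.Content);
    }
}
```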
---

**Source**: `autogen/dotnet/website/articles/AutoGen.Gemini/Image-chat-with-gemini.md`
This example shows how to use @AutoGen.Gemini.GeminiChatAgent for image chat with the Gemini model. To run this example, you need to have a project on Google Cloud with access to the Vertex AI API. For more information, please refer to [Google Vertex AI](https://cloud.google.com/vertex-ai/docs).

> [!NOTE]
> You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.Gemini.Sample/Image_Chat_With_Vertex_Gemini.cs)

### Step 1: Install AutoGen.Gemini

First, install the AutoGen.Gemini package using the following command:

```bash
dotnet add package AutoGen.Gemini
```

### Step 2: Add using statements

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Image_Chat_With_Vertex_Gemini.cs?name=Using)]

### Step 3: Create a Gemini agent

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Image_Chat_With_Vertex_Gemini.cs?name=Create_Gemini_Agent)]

### Step 4: Send an image to Gemini

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Image_Chat_With_Vertex_Gemini.cs?name=Send_Image_Request)]
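In outline, the image chat resembles this sketch; the project id, location, model name, and image path are placeholders, and the constructor shape may differ between versions.

```csharp
using AutoGen.Core;
using AutoGen.Gemini;

var geminiAgent = new GeminiChatAgent(
        name: "gemini",
        model: "gemini-1.5-flash-001",   // placeholder model name
        location: "us-central1",         // placeholder region
        project: "your-gcp-project-id")  // placeholder project id
    .RegisterMessageConnector() // enable built-in message types, including ImageMessage
    .RegisterPrintMessage();

var imageBytes = await File.ReadAllBytesAsync("image.png"); // hypothetical image path
var imageMessage = new ImageMessage(Role.User, BinaryData.FromBytes(imageBytes, "image/png"));
var question = new TextMessage(Role.User, "What's in this image?");

var reply = await geminiAgent.SendAsync(new MultiModalMessage(Role.User, [question, imageMessage]));
```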
---

**Source**: `autogen/dotnet/website/articles/AutoGen.Gemini/Chat-with-vertex-gemini.md`
This example shows how to use @AutoGen.Gemini.GeminiChatAgent to connect to the Vertex AI Gemini API and chat with the Gemini model. To run this example, you need to have a project on Google Cloud with access to the Vertex AI API. For more information, please refer to [Google Vertex AI](https://cloud.google.com/vertex-ai/docs).

> [!NOTE]
> You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.Gemini.Sample/Chat_With_Vertex_Gemini.cs)

> [!NOTE]
> What's the difference between Google AI Gemini and Vertex AI Gemini?
>
> Gemini is a series of large language models developed by Google. You can use it either through the Google AI API or the Vertex AI API. If you are relatively new to Gemini and want to explore its features and build a prototype for your chatbot app, the Google AI API (with Google AI Studio) is a fast way to get started. As your app and idea mature and you'd like to leverage more MLOps tools that streamline the usage, deployment, and monitoring of models, you can move to Google Cloud Vertex AI, which provides the Gemini APIs along with many other features that help you productionize your app. ([reference](https://stackoverflow.com/questions/78007243/utilizing-gemini-through-vertex-ai-or-through-google-generative-ai))

### Step 1: Install AutoGen.Gemini

First, install the AutoGen.Gemini package using the following command:

```bash
dotnet add package AutoGen.Gemini
```

### Step 2: Add using statements

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Chat_With_Vertex_Gemini.cs?name=Using)]

### Step 3: Create a Gemini agent

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Chat_With_Vertex_Gemini.cs?name=Create_Gemini_Agent)]

### Step 4: Chat with Gemini

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Chat_With_Vertex_Gemini.cs?name=Chat_With_Vertex_Gemini)]
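The agent creation in step 3 generally looks like this sketch (the project id, location, and model name are placeholders):

```csharp
using AutoGen.Core;
using AutoGen.Gemini;

var geminiAgent = new GeminiChatAgent(
        name: "gemini",
        model: "gemini-1.5-flash-001",   // placeholder model name
        location: "us-central1",         // placeholder region
        project: "your-gcp-project-id")  // placeholder project id
    .RegisterMessageConnector() // enable AutoGen built-in message types
    .RegisterPrintMessage();

var reply = await geminiAgent.SendAsync("Hello, what can you do?");
```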
---

**Source**: `autogen/dotnet/website/articles/AutoGen.Gemini/Chat-with-google-gemini.md`
This example shows how to use @AutoGen.Gemini.GeminiChatAgent to connect to Google AI Gemini and chat with the Gemini model. To run this example, you need a Google AI Gemini API key. For how to get a Google Gemini API key, please refer to [Google Gemini](https://gemini.google.com/).

> [!NOTE]
> You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.Gemini.Sample/Chat_With_Google_Gemini.cs)

> [!NOTE]
> What's the difference between Google AI Gemini and Vertex AI Gemini?
>
> Gemini is a series of large language models developed by Google. You can use it either through the Google AI API or the Vertex AI API. If you are relatively new to Gemini and want to explore its features and build a prototype for your chatbot app, the Google AI API (with Google AI Studio) is a fast way to get started. As your app and idea mature and you'd like to leverage more MLOps tools that streamline the usage, deployment, and monitoring of models, you can move to Google Cloud Vertex AI, which provides the Gemini APIs along with many other features that help you productionize your app. ([reference](https://stackoverflow.com/questions/78007243/utilizing-gemini-through-vertex-ai-or-through-google-generative-ai))

### Step 1: Install AutoGen.Gemini

First, install the AutoGen.Gemini package using the following command:

```bash
dotnet add package AutoGen.Gemini
```

### Step 2: Add using statements

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Chat_With_Google_Gemini.cs?name=Using)]

### Step 3: Create a Gemini agent

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Chat_With_Google_Gemini.cs?name=Create_Gemini_Agent)]

### Step 4: Chat with Gemini

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Chat_With_Google_Gemini.cs?name=Chat_With_Google_Gemini)]
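For Google AI Gemini, the agent is constructed from an API key instead of a Google Cloud project. A minimal sketch, assuming the key is stored in an environment variable named `GOOGLE_GEMINI_API_KEY` (the variable name and model are placeholders):

```csharp
using AutoGen.Core;
using AutoGen.Gemini;

var apiKey = Environment.GetEnvironmentVariable("GOOGLE_GEMINI_API_KEY")
    ?? throw new InvalidOperationException("GOOGLE_GEMINI_API_KEY is not set");

var geminiAgent = new GeminiChatAgent(
        name: "gemini",
        model: "gemini-1.5-flash-001", // placeholder model name
        apiKey: apiKey)
    .RegisterMessageConnector()
    .RegisterPrintMessage();

var reply = await geminiAgent.SendAsync("Hello Gemini!");
```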
---

**Source**: `autogen/dotnet/website/articles/AutoGen.Gemini/Function-call-with-gemini.md`
This example shows how to use @AutoGen.Gemini.GeminiChatAgent to make function calls. It is adapted from the [gemini-api function call example](https://ai.google.dev/gemini-api/docs/function-calling).

To run this example, you need to have a project on Google Cloud with access to the Vertex AI API. For more information, please refer to [Google Vertex AI](https://cloud.google.com/vertex-ai/docs).

> [!NOTE]
> You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.Gemini.Sample/Function_Call_With_Gemini.cs)

### Step 1: Install AutoGen.Gemini and AutoGen.SourceGenerator

First, install the AutoGen.Gemini and AutoGen.SourceGenerator packages using the following commands:

```bash
dotnet add package AutoGen.Gemini
dotnet add package AutoGen.SourceGenerator
```

The AutoGen.SourceGenerator package is required to generate the @AutoGen.Core.FunctionContract. For more information, please refer to [Create-type-safe-function-call](../Create-type-safe-function-call.md).

### Step 2: Add using statements

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=Using)]

### Step 3: Create `MovieFunction`

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=MovieFunction)]

### Step 4: Create a Gemini agent

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=Create_Gemini_Agent)]

### Step 5: Single-turn function call

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=Single_turn)]

### Step 6: Multi-turn function call

[!code-csharp[](../../../sample/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=Multi_turn)]
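End to end, the pieces fit together roughly as below. This is a sketch under assumptions: the `MovieFunction` class is illustrative, and the `FindTheatersFunctionContract`/`FindTheatersWrapper` members follow the naming convention AutoGen.SourceGenerator uses for `[Function]`-marked methods; exact names and signatures may differ.

```csharp
using AutoGen.Core;
using AutoGen.Gemini;

var movieFunction = new MovieFunction();

// route the model's tool calls to the generated wrapper
var functionMiddleware = new FunctionCallMiddleware(
    functions: [movieFunction.FindTheatersFunctionContract],
    functionMap: new Dictionary<string, Func<string, Task<string>>>
    {
        [movieFunction.FindTheatersFunctionContract.Name!] = movieFunction.FindTheatersWrapper,
    });

var geminiAgent = new GeminiChatAgent(
        name: "gemini",
        model: "gemini-1.5-flash-001",   // placeholder model name
        location: "us-central1",         // placeholder region
        project: "your-gcp-project-id")  // placeholder project id
    .RegisterMessageConnector()
    .RegisterMiddleware(functionMiddleware)
    .RegisterPrintMessage();

var reply = await geminiAgent.SendAsync("Which theaters in Mountain View show the Barbie movie?");

// illustrative [Function]-marked method; the source generator produces the
// contract and wrapper members used above
public partial class MovieFunction
{
    /// <summary>
    /// Find theaters showing a movie in a given location.
    /// </summary>
    /// <param name="location">the city and state, e.g. San Francisco, CA</param>
    /// <param name="movie">the movie title</param>
    [Function]
    public Task<string> FindTheaters(string location, string movie)
    {
        return Task.FromResult($"AMC Mountain View 16 and Regal Edwards 14 are showing {movie} near {location}.");
    }
}
```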
---

**Source**: `autogen/dotnet/website/articles/AutoGen.Gemini/Overview.md`
# AutoGen.Gemini Overview

AutoGen.Gemini is a package that provides seamless integration with Google Gemini. It provides the following agent:

- @AutoGen.Gemini.GeminiChatAgent: The agent that connects to Google AI Gemini or Vertex AI Gemini. It supports chat, multimodal chat, and function calls.

AutoGen.Gemini also provides the following middleware:

- @AutoGen.Gemini.GeminiMessageConnector: The middleware that converts Gemini messages to AutoGen built-in message types.
## Examples

You can find more examples under the [gemini sample project](https://github.com/microsoft/autogen/tree/main/dotnet/sample/AutoGen.Gemini.Sample).
---

**Source**: `autogen/dotnet/website/release_note/0.0.16.md`
# AutoGen.Net 0.0.16 Release Notes

We are excited to announce the release of **AutoGen.Net 0.0.16**. This release includes several new features, bug fixes, improvements, and important updates. Below are the detailed release notes:

**[Milestone: AutoGen.Net 0.0.16](https://github.com/microsoft/autogen/milestone/4)**
## 📦 New Features

1. **Deprecate `IStreamingMessage`** ([#3045](https://github.com/microsoft/autogen/issues/3045))
   - Replaced `IStreamingMessage` and `IStreamingMessage<T>` with `IMessage` and `IMessage<T>`.
2. **Add example for using ollama + LiteLLM for function call** ([#3014](https://github.com/microsoft/autogen/issues/3014))
   - Added a new tutorial to the website for integrating ollama with LiteLLM for function calls.
3. **Add ReAct sample** ([#2978](https://github.com/microsoft/autogen/issues/2978))
   - Added a new sample demonstrating the ReAct pattern.
4. **Support tools for Anthropic models** ([#2771](https://github.com/microsoft/autogen/issues/2771))
   - Introduced tool support for Anthropic models via `AnthropicClient`, `AnthropicClientAgent`, and `AnthropicMessageConnector`.
5. **Propose Orchestrator for managing group chat/agentic workflow** ([#2695](https://github.com/microsoft/autogen/issues/2695))
   - Introduced a customizable orchestrator interface for managing group chats and agent workflows.
6. **Run Agent as Web API** ([#2519](https://github.com/microsoft/autogen/issues/2519))
   - Introduced the ability to start an OpenAI-chat-compatible web API from an arbitrary agent.
๐Ÿ› Bug Fixes 1. **SourceGenerator doesn't work when function's arguments are empty** ([#2976](https://github.com/microsoft/autogen/issues/2976)) - Fixed an issue where the SourceGenerator failed when function arguments were empty. 2. **Add content field in ToolCallMessage** ([#2975](https://github.com/microsoft/autogen/issues/2975)) - Added a content property in `ToolCallMessage` to handle text content returned by the OpenAI model during tool calls. 3. **AutoGen.SourceGenerator doesnโ€™t encode `"` in structural comments** ([#2872](https://github.com/microsoft/autogen/issues/2872)) - Fixed an issue where structural comments containing `"` were not properly encoded, leading to compilation errors.
## 🚀 Improvements

1. **Sample update: add getting-started samples for the BasicSample project** ([#2859](https://github.com/microsoft/autogen/issues/2859))
   - Re-organized the `AutoGen.BasicSample` project to include only essential getting-started examples, simplifying complex examples.
2. **Graph constructor should consider null transitions** ([#2708](https://github.com/microsoft/autogen/issues/2708))
   - Updated the Graph constructor to handle cases where transitions' values are null.
## ⚠️ API Breaking Changes

1. **Deprecate `IStreamingMessage`** ([#3045](https://github.com/microsoft/autogen/issues/3045))
   - **Migration guide:** Deprecating `IStreamingMessage` introduces breaking changes, particularly for `IStreamingAgent` and `IStreamingMiddleware`. Replace all usages of `IStreamingMessage` and `IStreamingMessage<T>` with `IMessage` and `IMessage<T>`.
## 📚 Document Update

1. **Add example for using ollama + LiteLLM for function call** ([#3014](https://github.com/microsoft/autogen/issues/3014))
   - Added a tutorial to the website for using ollama with LiteLLM.

Thank you to all the contributors for making this release possible. We encourage everyone to upgrade to AutoGen.Net 0.0.16 to take advantage of these new features and improvements. If you encounter any issues or have any feedback, please let us know.

Happy coding! 🚀
---

**Source**: `autogen/dotnet/website/release_note/0.0.17.md`
# AutoGen.Net 0.0.17 Release Notes
## 🌟 What's New

1. **.NET Core Target Framework Support** ([#3203](https://github.com/microsoft/autogen/issues/3203))
   - 🚀 Added support for .NET Core to ensure compatibility and enhanced performance of AutoGen packages across different platforms.
2. **Kernel Support in Interactive Service Constructor** ([#3181](https://github.com/microsoft/autogen/issues/3181))
   - 🧠 Enhanced the Interactive Service to accept a kernel in its constructor, facilitating usage in notebook environments.
3. **Constructor Options for OpenAIChatAgent** ([#3126](https://github.com/microsoft/autogen/issues/3126))
   - ⚙️ Added new constructor options for `OpenAIChatAgent` to allow full control over chat completion flags/options.
4. **Step-by-Step Execution for Group Chat** ([#3075](https://github.com/microsoft/autogen/issues/3075))
   - 🛠️ Introduced an `IAsyncEnumerable` extension API to run group chat step-by-step, enabling developers to observe internal processes or implement early stopping mechanisms.
## 🚀 Improvements

1. **Cancellation Token Addition in Graph APIs** ([#3111](https://github.com/microsoft/autogen/issues/3111))
   - 🔄 Added cancellation tokens to async APIs in the `AutoGen.Core.Graph` class to follow best practices and enhance the control flow.
โš ๏ธ API Breaking Changes 1. **FunctionDefinition Generation Stopped in Source Generator** ([#3133](https://github.com/microsoft/autogen/issues/3133)) - ๐Ÿ›‘ Stopped generating `FunctionDefinition` from `Azure.AI.OpenAI` in the source generator to eliminate unnecessary package dependencies. Migration guide: - โžก๏ธ Use `ToOpenAIFunctionDefinition()` extension from `AutoGen.OpenAI` for generating `FunctionDefinition` from `AutoGen.Core.FunctionContract`. - โžก๏ธ Use `FunctionContract` for metadata such as function name or parameters. 2. **Namespace Renaming for AutoGen.WebAPI** ([#3152](https://github.com/microsoft/autogen/issues/3152)) - โœ๏ธ Renamed the namespace of `AutoGen.WebAPI` from `AutoGen.Service` to `AutoGen.WebAPI` to maintain consistency with the project name. 3. **Semantic Kernel Version Update** ([#3118](https://github.com/microsoft/autogen/issues/3118)) - ๐Ÿ“ˆ Upgraded the Semantic Kernel version to 1.15.1 for enhanced functionality and performance improvements. This might introduce break change for those who use a lower-version semantic kernel.
## 📚 Documentation

1. **Consume AutoGen.Net Agent in AG Studio** ([#3142](https://github.com/microsoft/autogen/issues/3142))
   - Added detailed documentation on using an AutoGen.Net agent as a model in AG Studio, including examples of starting an OpenAI chat backend and integrating third-party OpenAI models.
2. **Middleware Overview Documentation Errors Fixed** ([#3129](https://github.com/microsoft/autogen/issues/3129))
   - Corrected logic and compile errors in the example code provided in the Middleware Overview documentation to ensure it runs without issues.

---

We hope you enjoy the new features and improvements in AutoGen.Net 0.0.17! If you encounter any issues or have feedback, please open a new issue on our [GitHub repository](https://github.com/microsoft/autogen/issues).
---

**Source**: `autogen/dotnet/website/release_note/0.1.0.md`
# 🎉 Release Notes: AutoGen.Net 0.1.0 🎉
## 📦 New Packages

1. **Add AutoGen.AzureAIInference Package**
   - **Issue**: [.Net][Feature Request] [#3323](https://github.com/microsoft/autogen/issues/3323)
   - **Description**: The new `AutoGen.AzureAIInference` package includes the `ChatCompletionClientAgent`.
## ✨ New Features

1. **Enable Step-by-Step Execution for Two Agent Chat API**
   - **Issue**: [.Net][Feature Request] [#3339](https://github.com/microsoft/autogen/issues/3339)
   - **Description**: `AgentExtension.SendAsync` now returns an `IAsyncEnumerable`, allowing conversations to be driven step by step, similar to how `GroupChatExtension.SendAsync` works.
2. **Support Python Code Execution in AutoGen.DotnetInteractive**
   - **Issue**: [.Net][Feature Request] [#3316](https://github.com/microsoft/autogen/issues/3316)
   - **Description**: `dotnet-interactive` now supports Jupyter kernel connections, allowing Python code execution in `AutoGen.DotnetInteractive`.
3. **Support Prompt Cache in Claude**
   - **Issue**: [.Net][Feature Request] [#3359](https://github.com/microsoft/autogen/issues/3359)
   - **Description**: Claude now supports prompt caching, which dramatically lowers the bill if the cache is hit. Added the corresponding option in the Claude client.
๐Ÿ› Bug Fixes 1. **GroupChatExtension.SendAsync Doesnโ€™t Terminate Chat When `IOrchestrator` Returns Null as Next Agent** - **Issue**: [.Net][Bug] [#3306](https://github.com/microsoft/autogen/issues/3306) - **Description**: Fixed an issue where `GroupChatExtension.SendAsync` would continue until the max_round is reached even when `IOrchestrator` returns null as the next speaker. 2. **InitializedMessages Are Added Repeatedly in GroupChatExtension.SendAsync Method** - **Issue**: [.Net][Bug] [#3268](https://github.com/microsoft/autogen/issues/3268) - **Description**: Fixed an issue where initialized messages from group chat were being added repeatedly in every iteration of the `GroupChatExtension.SendAsync` API. 3. **Remove `Azure.AI.OpenAI` Dependency from `AutoGen.DotnetInteractive`** - **Issue**: [.Net][Feature Request] [#3273](https://github.com/microsoft/autogen/issues/3273) - **Description**: Fixed an issue by removing the `Azure.AI.OpenAI` dependency from `AutoGen.DotnetInteractive`, simplifying the package and reducing dependencies.
## 📄 Documentation Updates

1. **Add Function Comparison Page Between Python AutoGen and AutoGen.Net**
   - **Issue**: [.Net][Document] [#3184](https://github.com/microsoft/autogen/issues/3184)
   - **Description**: Added comparative documentation for features between AutoGen and AutoGen.Net across various functionalities and platform supports.
---

**Source**: `autogen/dotnet/website/release_note/0.2.1.md`
# Release Notes for AutoGen.Net v0.2.1 🚀
## New Features 🌟

- **Support for OpenAI o1-preview**: Added support for the OpenAI o1-preview model ([#3522](https://github.com/microsoft/autogen/issues/3522))
## Example 📚

- **OpenAI o1-preview**: [Connect_To_OpenAI_o1_preview](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.OpenAI.Sample/Connect_To_OpenAI_o1_preview.cs)
---

**Source**: `autogen/dotnet/website/release_note/0.2.0.md`
# Release Notes for AutoGen.Net v0.2.0 🚀
## New Features 🌟

- **OpenAI Structural Format Output**: Added support for structural output format in the OpenAI integration. You can check out the example [here](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.OpenAI.Sample/Structural_Output.cs) ([#3482](https://github.com/microsoft/autogen/issues/3482)).
- **Structural Output Configuration**: Introduced a property for overriding the structural output schema when generating replies with `GenerateReplyOption` ([#3436](https://github.com/microsoft/autogen/issues/3436)).
## Bug Fixes 🐛

- **Fixed Error Code 500**: Resolved an issue where an error occurred when the message history contained multiple different tool calls with the `name` field ([#3437](https://github.com/microsoft/autogen/issues/3437)).
## Improvements 🔧

- **Leverage OpenAI v2.0 in the AutoGen.OpenAI package**: The `AutoGen.OpenAI` package now uses OpenAI v2.0, providing improved functionality and performance. In the meantime, the original `AutoGen.OpenAI` is still available as `AutoGen.OpenAI.V1`, which allows users who prefer it to continue using the `Azure.AI.OpenAI` v1 package in their projects ([#3193](https://github.com/microsoft/autogen/issues/3193)).
- **Deprecation of GPTAgent**: `GPTAgent` has been deprecated in favor of `OpenAIChatAgent` and `OpenAIMessageConnector` ([#3404](https://github.com/microsoft/autogen/issues/3404)).
## Documentation 📚

- **Tool Call Instructions**: Added detailed documentation on using tool calls with `ollama` and `OpenAIChatAgent` ([#3248](https://github.com/microsoft/autogen/issues/3248)).

### Migration Guides 🔄

#### For the Deprecation of `GPTAgent` ([#3404](https://github.com/microsoft/autogen/issues/3404)):

**Before:**

```csharp
var agent = new GPTAgent(...);
```

**After:**

```csharp
var agent = new OpenAIChatAgent(...)
    .RegisterMessageConnector();
```

#### For Using Azure.AI.OpenAI v2.0 ([#3193](https://github.com/microsoft/autogen/issues/3193)):

**Previous way of creating `OpenAIChatAgent`:**

```csharp
var openAIClient = new OpenAIClient(apiKey);
var openAIClientAgent = new OpenAIChatAgent(
    openAIClient: openAIClient,
    model: "gpt-4o-mini",
    // Other parameters...
);
```

**New way of creating `OpenAIChatAgent`:**

```csharp
var openAIClient = new OpenAIClient(apiKey);
var openAIClientAgent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("gpt-4o-mini"),
    // Other parameters...
);
```
---

**Source**: `autogen/dotnet/website/release_note/update.md`
##### Update on 0.0.15 (2024-06-13)

Milestone: [AutoGen.Net 0.0.15](https://github.com/microsoft/autogen/milestone/3)

###### Highlights

- [Issue 2851](https://github.com/microsoft/autogen/issues/2851) `AutoGen.Gemini` package for Gemini support. Examples can be found [here](https://github.com/microsoft/autogen/tree/main/dotnet/sample/AutoGen.Gemini.Sample)

##### Update on 0.0.14 (2024-05-28)

###### New features

- [Issue 2319](https://github.com/microsoft/autogen/issues/2319) Add `AutoGen.Ollama` package for Ollama support. Special thanks to @iddelacruz for the effort.
- [Issue 2608](https://github.com/microsoft/autogen/issues/2608) Add `AutoGen.Anthropic` package for Anthropic support. Special thanks to @DavidLuong98 for the effort.
- [Issue 2647](https://github.com/microsoft/autogen/issues/2647) Add `ToolCallAggregateMessage` for function call middleware.

###### API Breaking Changes

- [Issue 2648](https://github.com/microsoft/autogen/issues/2648) Deprecate `Message` type.
- [Issue 2649](https://github.com/microsoft/autogen/issues/2649) Deprecate `Workflow` type.

###### Bug Fixes

- [Issue 2735](https://github.com/microsoft/autogen/issues/2735) Fix tool call issue in the AutoGen.Mistral package.
- [Issue 2722](https://github.com/microsoft/autogen/issues/2722) Fix parallel function call in function call middleware.
- [Issue 2633](https://github.com/microsoft/autogen/issues/2633) Set up the `name` field in `OpenAIChatMessageConnector`.
- [Issue 2660](https://github.com/microsoft/autogen/issues/2660) Fix dotnet-interactive restoring issue when the system language is Chinese.
- [Issue 2687](https://github.com/microsoft/autogen/issues/2687) Add `global::` prefix to generated code to avoid conflicts with user-defined types.

##### Update on 0.0.13 (2024-05-09)

###### New features

- [Issue 2593](https://github.com/microsoft/autogen/issues/2593) Consume SK plugins in Agent.
- [Issue 1893](https://github.com/microsoft/autogen/issues/1893) Support inline-data in ImageMessage.
- [Issue 2481](https://github.com/microsoft/autogen/issues/2481) Introduce `ChatCompletionAgent` to `AutoGen.SemanticKernel`.

###### API Breaking Changes

- [Issue 2470](https://github.com/microsoft/autogen/issues/2470) Update the return type of `IStreamingAgent.GenerateStreamingReplyAsync` from `Task<IAsyncEnumerable<IStreamingMessage>>` to `IAsyncEnumerable<IStreamingMessage>`.
- [Issue 2470](https://github.com/microsoft/autogen/issues/2470) Update the return type of `IStreamingMiddleware.InvokeAsync` from `Task<IAsyncEnumerable<IStreamingMessage>>` to `IAsyncEnumerable<IStreamingMessage>`.
- Mark `RegisterReply`, `RegisterPreProcess` and `RegisterPostProcess` as obsolete. You can replace them with `RegisterMiddleware`.

###### Bug Fixes

- Fix [Issue 2609](https://github.com/microsoft/autogen/issues/2609): the constructor of `ConversableAgentConfig` does not accept `LMStudioConfig` as `ConfigList`.

##### Update on 0.0.12 (2024-04-22)

- Add AutoGen.Mistral package to support Mistral.AI models.

##### Update on 0.0.11 (2024-04-10)

- Add link to Discord channel in nuget's readme.md.
- Document improvements.
- In `AutoGen.OpenAI`, update `Azure.AI.OpenAI` to 1.0.0-beta.15 and add support for json mode and deterministic output in `OpenAIChatAgent` [Issue #2346](https://github.com/microsoft/autogen/issues/2346).
- In `AutoGen.SemanticKernel`, update `SemanticKernel` package to 1.7.1.
- [API Breaking Change] Rename `PrintMessageMiddlewareExtension.RegisterPrintFormatMessageHook` to `PrintMessageMiddlewareExtension.RegisterPrintMessage`.

##### Update on 0.0.10 (2024-03-12)

- Rename `Workflow` to `Graph`.
- Rename `AddInitializeMessage` to `SendIntroduction`.
- Rename `SequentialGroupChat` to `RoundRobinGroupChat`.

##### Update on 0.0.9 (2024-03-02)

- Refactor over @AutoGen.Message, introducing `TextMessage`, `ImageMessage`, `MultiModalMessage` and so on. PR [#1676](https://github.com/microsoft/autogen/pull/1676)
- Add `AutoGen.SemanticKernel` to support seamless integration with Semantic Kernel.
- Move the agent contract abstraction to the `AutoGen.Core` package. The `AutoGen.Core` package provides the abstraction for message types, agents and group chat, and doesn't carry dependencies on `Azure.AI.OpenAI` or Semantic Kernel. This is useful when you want to leverage AutoGen's abstraction only and avoid introducing any other dependencies.
- Move `GPTAgent`, `OpenAIChatAgent` and all OpenAI dependencies to `AutoGen.OpenAI`.

##### Update on 0.0.8 (2024-02-28)

- Fix [#1804](https://github.com/microsoft/autogen/pull/1804)
- Streaming support for IAgent [#1656](https://github.com/microsoft/autogen/pull/1656)
- Streaming support for middleware via `MiddlewareStreamingAgent` [#1656](https://github.com/microsoft/autogen/pull/1656)
- Graph chat support with conditional transition workflow [#1761](https://github.com/microsoft/autogen/pull/1761)
- AutoGen.SourceGenerator: Generate `FunctionContract` from `FunctionAttribute` [#1736](https://github.com/microsoft/autogen/pull/1736)

##### Update on 0.0.7 (2024-02-11)

- Add `AutoGen.LMStudio` to support consuming the OpenAI-like API from an LM Studio local server.

##### Update on 0.0.6 (2024-01-23)

- Add `MiddlewareAgent`.
- Use `MiddlewareAgent` to implement existing agent hooks (RegisterPreProcess, RegisterPostProcess, RegisterReply).
- Remove `AutoReplyAgent`, `PreProcessAgent`, `PostProcessAgent` because they are replaced by `MiddlewareAgent`.

##### Update on 0.0.5

- Simplify the `IAgent` interface by removing the `ChatLLM` property.
- Add `GenerateReplyOptions` to `IAgent.GenerateReplyAsync`, which allows users to specify or override options when generating a reply.

##### Update on 0.0.4

- Move out the dependency on Semantic Kernel.
- Add type `IChatLLM` as connector to LLM.

##### Update on 0.0.3

- In AutoGen.SourceGenerator, rename FunctionAttribution to FunctionAttribute.
- In AutoGen, refactor over ConversableAgent, UserProxyAgent, and AssistantAgent.

##### Update on 0.0.2

- Update Azure.AI.OpenAI to 1.0.0-beta.12.
- Update Semantic Kernel to 1.0.1.
---

**Source**: `autogen/dotnet/nuget/NUGET.md`
### About AutoGen for .NET

`AutoGen for .NET` is the official .NET SDK for [AutoGen](https://github.com/microsoft/autogen). It enables you to create LLM agents and construct multi-agent workflows with ease. It also provides integration with popular platforms like OpenAI, Semantic Kernel, and LM Studio.

### Getting started

- Find documents and examples on our [document site](https://microsoft.github.io/autogen-for-net/)
- Join our [Discord channel](https://discord.gg/pAbnFJrkgZ) to get help and discuss with the community
- Report a bug or request a feature by creating a new issue in our [GitHub repo](https://github.com/microsoft/autogen)
- Consume the nightly build package from one of the [nightly build feeds](https://microsoft.github.io/autogen-for-net/articles/Installation.html#nighly-build)
---

**Source**: `autogen/dotnet/src/AutoGen.LMStudio/README.md`
## AutoGen.LMStudio

This package provides support for consuming the OpenAI-like API from an LM Studio local server.
## Installation

To use `AutoGen.LMStudio`, add the following package to your `.csproj` file:

```xml
<ItemGroup>
    <PackageReference Include="AutoGen.LMStudio" Version="AUTOGEN_VERSION" />
</ItemGroup>
```
## Usage

```csharp
using AutoGen.LMStudio;

var localServerEndpoint = "localhost";
var port = 5000;
var lmStudioConfig = new LMStudioConfig(localServerEndpoint, port);
var agent = new LMStudioAgent(
    name: "agent",
    systemMessage: "You are an agent that help user to do some tasks.",
    lmStudioConfig: lmStudioConfig)
    .RegisterPrintMessage(); // register a hook to print message nicely to console

await agent.SendAsync("Can you write a piece of C# code to calculate 100th of fibonacci?");
```
## Update history

### Update on 0.0.7 (2024-02-11)

- Add `LMStudioAgent` to support consuming the OpenAI-like API from an LM Studio local server.
---

**Source**: `autogen/dotnet/src/AutoGen.SourceGenerator/README.md`
### AutoGen.SourceGenerator

This package carries a source generator that adds support for type-safe function definition generation. Simply mark a method with the `Function` attribute, and the source generator will generate a function definition and a function call wrapper for you.

### Get started

First, add the following to your project file and set the `GenerateDocumentationFile` property to true:

```xml
<PropertyGroup>
    <!-- This enables structural xml document support -->
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>
```

```xml
<ItemGroup>
    <PackageReference Include="AutoGen.SourceGenerator" />
</ItemGroup>
```

> Nightly Build feed: https://devdiv.pkgs.visualstudio.com/DevDiv/_packaging/AutoGen/nuget/v3/index.json

Then, for the methods you want to generate a function definition and a function call wrapper for, mark them with the `Function` attribute:

> Note: For best performance, try using primitive types for the parameters and return type.

```csharp
// file: MyFunctions.cs

using AutoGen;

// a partial class is required
// and the class must be public
public partial class MyFunctions
{
    /// <summary>
    /// Add two numbers.
    /// </summary>
    /// <param name="a">The first number.</param>
    /// <param name="b">The second number.</param>
    [Function]
    public Task<string> AddAsync(int a, int b)
    {
        return Task.FromResult($"{a} + {b} = {a + b}");
    }
}
```

The source generator will generate the following code based on the method signature and documentation. It saves you the effort of writing function definitions and keeps them up to date with the actual method signatures.

```csharp
// file: MyFunctions.generated.cs
public partial class MyFunctions
{
    private class AddAsyncSchema
    {
        public int a { get; set; }
        public int b { get; set; }
    }

    public Task<string> AddAsyncWrapper(string arguments)
    {
        var schema = JsonSerializer.Deserialize<AddAsyncSchema>(
            arguments,
            new JsonSerializerOptions
            {
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
            });

        return AddAsync(schema.a, schema.b);
    }

    public FunctionDefinition AddAsyncFunction
    {
        get => new FunctionDefinition
        {
            Name = @"AddAsync",
            Description = """
Add two numbers.
""",
            Parameters = BinaryData.FromObjectAsJson(new
            {
                Type = "object",
                Properties = new
                {
                    a = new
                    {
                        Type = @"number",
                        Description = @"The first number.",
                    },
                    b = new
                    {
                        Type = @"number",
                        Description = @"The second number.",
                    },
                },
                Required = new[]
                {
                    "a",
                    "b",
                },
            },
            new JsonSerializerOptions
            {
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
            })
        };
    }
}
```

For more examples, please check out the following projects:

- [AutoGen.BasicSamples](../sample/AutoGen.BasicSamples/)
- [AutoGen.SourceGenerator.Tests](../../test/AutoGen.SourceGenerator.Tests/)
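To see how the generated members are consumed, here is a small hedged usage sketch; the JSON payload mirrors the generated `AddAsyncSchema` above, and the surrounding program is hypothetical:

```csharp
var functions = new MyFunctions();

// The wrapper accepts the JSON argument payload a model would produce
// and dispatches it to the original AddAsync method.
var result = await functions.AddAsyncWrapper("{ \"a\": 1, \"b\": 2 }");
Console.WriteLine(result); // prints: 1 + 2 = 3
```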
---

**Source**: `autogen/samples/tools/finetuning/README.md`
# Tools for fine-tuning the local models that power agents

This directory aims to contain tools for fine-tuning the local models that power agents.
## Fine tune a custom model client

AutoGen supports the use of custom models to power agents ([see blog post here](https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models)). This directory contains a tool for providing feedback to that model, which can be used to fine-tune it.

The creator of the custom model client decides what kind of data is fed back and how it is used to fine-tune the model. This tool is designed to be flexible and allow for a wide variety of feedback mechanisms.

The custom model client must follow the protocol defined in `update_model.py`, `UpdateableModelClient`, which is a subclass of `ModelClient` and adds the following method:

```python
def update_model(
    self, preference_data: List[Dict[str, Any]], inference_messages: List[Dict[str, Any]], **kwargs: Any
) -> Dict[str, Any]:
    """Optional method to learn from the preference data, if the model supports learning. Can be omitted.

    Learn from the preference data.

    Args:
        preference_data: The preference data.
        inference_messages: The messages that were used during inference between the agent that is being updated and another agent.
        **kwargs: other arguments.

    Returns:
        Dict of learning stats.
    """
```

The function provided in the file `update_model.py` is called by passing these arguments:

- the agent whose model is to be updated
- the preference data
- the agent whose conversation is being used to provide the inference messages

The function will find the conversation thread that occurred between the "update agent" and the "other agent", and call the `update_model` method of the model client. It will return a dictionary containing the update stats, inference messages, and preference data:

```python
{
    "update_stats": <the dictionary returned by the custom model client implementation>,
    "inference_messages": <messages used for inference>,
    "preference_data": <the preference data passed in when update_model was called>
}
```

**NOTES**: `inference_messages` will contain the messages that were passed into the custom model client when `create` was called and a response was needed from the model. It is up to the author of the custom model client to decide which parts of the conversation are needed and how to use this data to fine-tune the model.

If a conversation has been long-running before `update_model` is called, then `inference_messages` will contain a conversation thread that was used across multiple inference steps. It is again up to the author of the custom model client to decide which parts of the conversation correspond to the preference data and how to use this data to fine-tune the model.

An example of how to use this tool is shown below:

```python
from finetuning.update_model import update_model

assistant = AssistantAgent(
    "assistant",
    system_message="You are a helpful assistant.",
    human_input_mode="NEVER",
    llm_config={
        "config_list": [<the config list containing the custom model>],
    },
)
assistant.register_model_client(model_client_cls=<TheCustomModelClientClass>)

user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config=False,
    llm_config=False,
)

res = user_proxy.initiate_chat(assistant, message="the message")
response_content = res.summary

# Evaluate the summary here and provide feedback. Pretending I am going to perform DPO on the response.

# preference_data will be passed on as-is to the custom model client's update_model implementation
# so it should be in the format that the custom model client expects and is completely up to the author of the custom model client
preference_data = [("this is what the response should have been like", response_content)]

update_model_stats = update_model(assistant, preference_data, user_proxy)
```
---

**Source**: `autogen/samples/tools/webarena/README.md`
# WebArena Benchmark

This directory helps run AutoGen agents on the [WebArena](https://arxiv.org/pdf/2307.13854.pdf) benchmark.
## Installing WebArena

WebArena can be installed by following the instructions in [WebArena's GitHub repository](git@github.com:web-arena-x/webarena.git).

If you use WebArena with AutoGen, there is a clash between the OpenAI package versions, and some code changes are needed in WebArena to make it compatible with AutoGen's OpenAI version:

- WebArena's OpenAI version is `openai==0.27.0`
- AutoGen's OpenAI version is `openai>=1.3`

Prior to installation, in the WebArena codebase, any file containing `openai.error` needs to be updated to reference `openai` instead.
## Running with AutoGen agents

You can use the `run.py` file in the `webarena` directory to run WebArena with AutoGen. The OpenAI (or AzureOpenAI, or other model) configuration can be set up via `OAI_CONFIG_LIST`. The config list will be filtered by whatever model is passed in the `--model` argument.

Example of running `run.py`:

```
mkdir myresultdir
python run.py --instruction_path agent/prompts/jsons/p_cot_id_actree_2s.json --test_start_idx 27 --test_end_idx 28 --model gpt-4 --result_dir myresultdir
```

The original `run.py` file has been modified to use AutoGen agents, which are defined in the `webarena_agents.py` file.
## References

**WebArena: A Realistic Web Environment for Building Autonomous Agents**<br/>
Zhou, Shuyan and Xu, Frank F and Zhu, Hao and Zhou, Xuhui and Lo, Robert and Sridhar, Abishek and Cheng, Xianyi and Bisk, Yonatan and Fried, Daniel and Alon, Uri and others<br/>
[https://arxiv.org/pdf/2307.13854.pdf](https://arxiv.org/pdf/2307.13854.pdf)
---

**Source**: `autogen/samples/tools/autogenbench/README.md`
# AutoGenBench

AutoGenBench is a tool for repeatedly running a set of pre-defined AutoGen tasks in a setting with tightly-controlled initial conditions. With each run, AutoGenBench will start from a blank slate. The agents being evaluated will need to work out what code needs to be written, and what libraries or dependencies to install, to solve tasks. The results of each run are logged and can be ingested by analysis or metrics scripts (such as `autogenbench tabulate`). By default, all runs are conducted in freshly-initialized Docker containers, providing the recommended level of consistency and safety.

AutoGenBench works with all AutoGen 0.1.* and 0.2.* versions.
## Technical Specifications

If you are already an AutoGenBench pro and want the full technical specifications, please review the [contributor's guide](CONTRIBUTING.md).
## Docker Requirement

AutoGenBench also requires Docker (Desktop or Engine). **It will not run in GitHub Codespaces**, unless you opt for native execution (which is strongly discouraged). To install Docker Desktop, see [https://www.docker.com/products/docker-desktop/](https://www.docker.com/products/docker-desktop/).
## Installation and Setup

**To get the most out of AutoGenBench, the `autogenbench` package should be installed**. At present, the easiest way to do this is to install it via `pip`:

```
pip install autogenbench
```

If you would prefer working from source code (e.g., for development, or to utilize an alternate branch), simply clone the [AutoGen](https://github.com/microsoft/autogen) repository, then install `autogenbench` via:

```
pip install -e autogen/samples/tools/autogenbench
```

After installation, you must configure your API keys. As with other AutoGen applications, AutoGenBench will look for the OpenAI keys in the OAI_CONFIG_LIST file in the current working directory, or in the OAI_CONFIG_LIST environment variable. This behavior can be overridden using a command-line parameter described later.

If you will be running multiple benchmarks, it is often most convenient to leverage the environment variable option. You can load your keys into the environment variable by executing:

```
export OAI_CONFIG_LIST=$(cat ./OAI_CONFIG_LIST)
```

If an OAI_CONFIG_LIST is *not* provided (by means of file or environment variable), AutoGenBench will use the OPENAI_API_KEY environment variable instead.

For some benchmark scenarios, additional keys may be required (e.g., keys for the Bing Search API). These can be added to an `ENV.json` file in the current working folder. An example `ENV.json` file is provided below:

```
{
    "BING_API_KEY": "xxxyyyzzz"
}
```
## A Typical Session

Once AutoGenBench and the necessary keys are installed, a typical session will look as follows:

```
autogenbench clone HumanEval
cd HumanEval
autogenbench run Tasks/r_human_eval_two_agents.jsonl
autogenbench tabulate results/r_human_eval_two_agents
```

Where:

- `autogenbench clone HumanEval` downloads and expands the HumanEval benchmark scenario.
- `autogenbench run Tasks/r_human_eval_two_agents.jsonl` runs the tasks defined in `Tasks/r_human_eval_two_agents.jsonl`
- `autogenbench tabulate results/r_human_eval_two_agents` tabulates the results of the run

Each of these commands has extensive in-line help via:

- `autogenbench --help`
- `autogenbench clone --help`
- `autogenbench run --help`
- `autogenbench tabulate --help`

**NOTE:** If you are running `autogenbench` from within the repository, you don't need to run `autogenbench clone`. Instead, navigate to the appropriate scenario folder (e.g., `scenarios/HumanEval`) and run the `Scripts/init_tasks.py` file.

More details on each command are provided in the sections that follow.
## Cloning Benchmarks

To clone an existing benchmark, simply run:

```
autogenbench clone [BENCHMARK]
```

For example,

```
autogenbench clone HumanEval
```

To see which existing benchmarks are available to clone, run:

```
autogenbench clone --list
```
## Running AutoGenBench

To run a benchmark (which executes the tasks but does not compute metrics), simply execute:

```
cd [BENCHMARK]
autogenbench run Tasks
```

For example,

```
cd HumanEval
autogenbench run Tasks
```

The default is to run each task once. To run each scenario 10 times, use:

```
autogenbench run --repeat 10 Tasks
```

The `autogenbench` command-line tool allows a number of command-line arguments to control various parameters of execution. Type `autogenbench -h` to explore these options:

```
'autogenbench run' will run the specified autogen scenarios for a given number of repetitions and record all logs and trace information. When running in a Docker environment (default), each run will begin from a common, tightly controlled, environment. The resultant logs can then be further processed by other scripts to produce metrics.

positional arguments:
  scenario              The JSONL scenario file to run. If a directory is specified,
                        then all JSONL scenarios in the directory are run. (default: ./scenarios)

options:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        The environment variable name or path to the OAI_CONFIG_LIST (default: OAI_CONFIG_LIST).
  -r REPEAT, --repeat REPEAT
                        The number of repetitions to run for each scenario (default: 1).
  -s SUBSAMPLE, --subsample SUBSAMPLE
                        Run on a subsample of the tasks in the JSONL file(s). If a decimal value is specified,
                        then run on the given proportion of tasks in each file. For example "0.7" would run on
                        70% of tasks, and "1.0" would run on 100% of tasks. If an integer value is specified,
                        then randomly select *that* number of tasks from each specified JSONL file. For example
                        "7" would run 7 tasks, while "1" would run only 1 task from each specified JSONL file.
                        (default: 1.0; which is 100%)
  -m MODEL, --model MODEL
                        Filters the config_list to include only models matching the provided model name
                        (default: None, which is all models).
  --requirements REQUIREMENTS
                        The requirements file to pip install before running the scenario.
  -d DOCKER_IMAGE, --docker-image DOCKER_IMAGE
                        The Docker image to use when running scenarios. Can not be used together with --native.
                        (default: 'autogenbench:default', which will be created if not present)
  --native              Run the scenarios natively rather than in docker. NOTE: This is not advisable, and
                        should be done with great caution.
```
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
Results By default, AutoGenBench stores results in a folder hierarchy with the following template:

``./results/[scenario]/[task_id]/[instance_id]``

For example, consider the following folders:

``./results/default_two_agents/two_agent_stocks/0``
``./results/default_two_agents/two_agent_stocks/1``
...
``./results/default_two_agents/two_agent_stocks/9``

This hierarchy holds the results for the ``two_agent_stocks`` task of the ``default_two_agents`` tasks file. The ``0`` folder contains the results of the first instance / run. The ``1`` folder contains the results of the second run, and so on. You can think of the _task_id_ as mapping to a prompt, or a unique set of parameters, while the _instance_id_ defines a specific attempt or run.

Within each folder, you will find the following files:

- *timestamp.txt*: records the date and time of the run, along with the version of the autogen-agentchat library installed
- *console_log.txt*: all console output produced by Docker when running AutoGen. Read this like you would a regular console.
- *[agent]_messages.json*: for each Agent, a log of their messages dictionaries
- *./coding*: A directory containing all code written by AutoGen, and all artifacts produced by that code.
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
Contributing or Defining New Tasks or Benchmarks If you would like to develop -- or even contribute -- your own tasks or benchmarks, please review the [contributor's guide](CONTRIBUTING.md) for complete technical details.
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
# Contributing to AutoGenBench As part of the broader AutoGen project, AutoGenBench welcomes community contributions. Contributions are subject to AutoGen's [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute), as well as a few additional AutoGenBench-specific requirements outlined here. You may also wish to develop your own private benchmark scenarios; the guidance in this document will help with such efforts as well. Below you will find the general requirements, followed by a detailed technical description.
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
General Contribution Requirements We ask that all contributions to AutoGenBench adhere to the following: - Follow AutoGen's broader [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute) - All AutoGenBench benchmarks should live in a subfolder of `/samples/tools/autogenbench/scenarios` alongside `HumanEval`, `GAIA`, etc. - Benchmark scenarios should include a detailed README.md, in the root of their folder, describing the benchmark and providing citations where warranted. - Benchmark data (tasks, ground truth, etc.) should be downloaded from their original sources rather than hosted in the AutoGen repository (unless the benchmark is original, and the repository *is* the original source) - You can use the `Scripts/init_tasks.py` file to automate this download. - Basic scoring should be compatible with the `autogenbench tabulate` command (e.g., by outputting logs compatible with the default tabulation mechanism, or by providing a `Scripts/custom_tabulate.py` file) - If you wish your benchmark to be compatible with the `autogenbench clone` command, include a `MANIFEST.json` file in the root of your folder. These requirements are further detailed below, but if you simply copy the `HumanEval` folder, you will already be off to a great start.
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
Implementing and Running Benchmark Tasks At the core of any benchmark is a set of tasks. To implement tasks that are runnable by AutoGenBench, you must adhere to AutoGenBench's templating and scenario expansion algorithms, as outlined below.

### Task Definitions

All tasks are stored in JSONL files (in subdirectories under `./Tasks`). Each line of a tasks file is a JSON object with the following schema:

```
{
   "id": string,
   "template": dirname,
   "substitutions": {
       "filename1": {
           "find_string1_1": replace_string1_1,
           "find_string1_2": replace_string1_2,
           ...
           "find_string1_M": replace_string1_M
       },
       "filename2": {
           "find_string2_1": replace_string2_1,
           "find_string2_2": replace_string2_2,
           ...
           "find_string2_N": replace_string2_N
       }
   }
}
```

For example:

```
{
    "id": "two_agent_stocks_gpt4",
    "template": "default_two_agents",
    "substitutions": {
        "scenario.py": {
            "__MODEL__": "gpt-4"
        },
        "prompt.txt": {
            "__PROMPT__": "Plot and save to disk a chart of NVDA and TESLA stock price YTD."
        }
    }
}
```

In this example, the string `__MODEL__` will be replaced in the file `scenario.py`, while the string `__PROMPT__` will be replaced in the `prompt.txt` file.

The `template` field can also take on a list value, but this usage is considered advanced and is not described here. See the `autogenbench/run_cmd.py` code, or the `GAIA` benchmark tasks files for additional information about this option.
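Since each line of a tasks file is an independent JSON object, tasks files are easy to inspect programmatically. The following is a small illustrative sketch (the file name is hypothetical), not part of AutoGenBench itself:

```python
import json

# Each line in a tasks file is one JSON task definition.
with open("Tasks/two_agent_stocks.jsonl", "rt") as fh:
    for line in fh:
        task = json.loads(line)
        print(task["id"], "->", task["template"])
        # List the files touched by this task's substitutions.
        for filename, replacements in task.get("substitutions", {}).items():
            print(f"  {filename}: {list(replacements)}")
```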
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
Task Instance Expansion Algorithm Once the tasks have been defined, as per above, they must be "instantiated" before they can be run. This instantiation happens automatically when the user issues the `autogenbench run` command and involves creating a local folder to share with Docker. Each instance and repetition gets its own folder along the path: `./results/[scenario]/[task_id]/[instance_id]`. For the sake of brevity we will refer to this folder as the `DEST_FOLDER`.

The algorithm for populating the `DEST_FOLDER` is as follows:

1. Pre-populate DEST_FOLDER with all the basic starter files for running a scenario (found in `autogenbench/template`).
2. Recursively copy the template folder specified in the JSONL line to DEST_FOLDER (if the JSON `template` attribute points to a folder). If the JSON's `template` attribute instead points to a file, copy the file, but rename it to `scenario.py`.
3. Apply any string replacements, as outlined in the prior section.
4. Write a `run.sh` file to DEST_FOLDER that will be executed by Docker when it is loaded. The `run.sh` is described below.
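As a rough illustration of steps 2 and 3, the sketch below copies a template and applies the string substitutions. It is a simplified stand-in for the real logic in `autogenbench/run_cmd.py`; the function name and arguments are hypothetical:

```python
import os
import shutil

def expand_task(task, template_root, dest_folder):
    # Step 2: copy the template folder, or copy a single template file
    # and rename it to scenario.py.
    template = os.path.join(template_root, task["template"])
    if os.path.isdir(template):
        shutil.copytree(template, dest_folder, dirs_exist_ok=True)
    else:
        shutil.copyfile(template, os.path.join(dest_folder, "scenario.py"))

    # Step 3: apply the per-file string replacements from the task definition.
    for filename, replacements in task.get("substitutions", {}).items():
        path = os.path.join(dest_folder, filename)
        with open(path, "rt") as fh:
            content = fh.read()
        for find_str, replace_str in replacements.items():
            content = content.replace(find_str, replace_str)
        with open(path, "wt") as fh:
            fh.write(content)
```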
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
Scenario Execution Algorithm Once the task has been instantiated, it is run (via `run.sh`). This script will execute the following steps:

1. If a file named `global_init.sh` is present, run it.
2. If a file named `scenario_init.sh` is present, run it.
3. Install the requirements.txt file (if running in Docker)
4. Run the task via `python scenario.py`
5. If the scenario.py exited cleanly (exit code 0), then print "SCENARIO.PY COMPLETE !#!#"
6. Clean up (delete cache, etc.)
7. If a file named `scenario_finalize.sh` is present, run it.
8. If a file named `global_finalize.sh` is present, run it.
9. echo "RUN.SH COMPLETE !#!#", signaling that all steps completed.

Notably, this means that scenarios can add custom init and teardown logic by including `scenario_init.sh` and `scenario_finalize.sh` files.

At the time of this writing, the run.sh file is as follows:

```sh
export AUTOGEN_TESTBED_SETTING="Docker"
umask 000

# Run the global init script if it exists
if [ -f global_init.sh ] ; then
    . ./global_init.sh
fi

# Run the scenario init script if it exists
if [ -f scenario_init.sh ] ; then
    . ./scenario_init.sh
fi

# Run the scenario
pip install -r requirements.txt
python scenario.py
EXIT_CODE=$?
if [ $EXIT_CODE -ne 0 ]; then
    echo SCENARIO.PY EXITED WITH CODE: $EXIT_CODE !#!#
else
    echo SCENARIO.PY COMPLETE !#!#
fi

# Clean up
if [ -d .cache ] ; then
    rm -Rf .cache
fi

# Run the scenario finalize script if it exists
if [ -f scenario_finalize.sh ] ; then
    . ./scenario_finalize.sh
fi

# Run the global finalize script if it exists
if [ -f global_finalize.sh ] ; then
    . ./global_finalize.sh
fi

echo RUN.SH COMPLETE !#!#
```

Be warned that this listing is provided here for illustration purposes, and may vary over time. The source of truth is the `run.sh` files found in the ``./results/[taskset]/[task_id]/[instance_id]`` folders.
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
Integrating with the `tabulate` and `clone` commands. The above details are sufficient for defining and running tasks, but if you wish to support the `autogenbench tabulate` and `autogenbench clone` commands, a few additional steps are required.

### Tabulations

If you wish to leverage the default tabulation logic, it is as simple as arranging your `scenario.py` file to output the string "ALL TESTS PASSED !#!#" to the console in the event that a task was solved correctly.

If you wish to implement your own tabulation logic, simply create the file `Scripts/custom_tabulate.py` and include a `main(args)` method. Here, the `args` parameter will be provided by AutoGenBench, and is a drop-in replacement for `sys.argv`. In particular, `args[0]` will be the invocation command (similar to the executable or script name in `sys.argv`), and the remaining values (`args[1:]`) are the command line parameters.

Should you provide a custom tabulation script, please implement `--help` and `-h` options for documenting your interface. A minimal skeleton is sketched at the end of this section.

The `scenarios/GAIA/Scripts/custom_tabulate.py` script is a great example of custom tabulation. It also shows how you can reuse some components of the default tabulator to speed up development.

### Cloning

If you wish your benchmark to be available via the `autogenbench clone` command, you will need to take three additional steps:

#### Manifest

First, provide a `MANIFEST.json` file in the root of your benchmark. An example is provided below, from which you can see the schema:

```json
{
    "files": {
        "Templates/TwoAgents/prompt.txt": "Templates/TwoAgents/prompt.txt",
        "Templates/TwoAgents/coding/my_tests.py": "Templates/TwoAgents/coding/my_tests.py",
        "Templates/TwoAgents/scenario.py": "Templates/TwoAgents/scenario.py",
        "README.md": "README.md",
        "Scripts/init_tasks.py": "Scripts/init_tasks.py",
        "Scripts/custom_tabulate.py": "Scripts/custom_tabulate.py"
    }
}
```

The keys of the `files` dictionary are local paths, relative to your benchmark's root directory. The values are relative paths in the AutoGen GitHub repository (relative to the folder where the MANIFEST.json file is located). In most cases, the keys and values will be identical.

#### SCENARIOS dictionary

Second, you must add an entry to the `scenarios` dictionary in `autogen/samples/tools/autogenbench/scenarios/MANIFEST.json`.

#### Scripts/init_tasks.py

Finally, you should provide a `Scripts/init_tasks.py` file, in your benchmark folder, and include a `main()` method therein. This method will be loaded and called automatically by `autogenbench clone` after all manifest files have been downloaded. This `init_tasks.py` script is a great place to download benchmarks from their original sources and convert them to the JSONL format required by AutoGenBench:

- See `HumanEval/Scripts/init_tasks.py` for an example of how to expand a benchmark from an original GitHub repository.
- See `GAIA/Scripts/init_tasks.py` for an example of how to expand a benchmark from `Hugging Face Hub`.
- See `MATH/Scripts/init_tasks.py` for an example of how to expand a benchmark from an author-hosted website.
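As referenced above, a minimal `Scripts/custom_tabulate.py` skeleton might look like the following. The `main(args)` argument handling mirrors the contract described in the Tabulations subsection; everything else (the directory walk and the success marker check) is an illustrative assumption, not the actual default tabulator:

```python
import os
import sys

SUCCESS_MARKER = "ALL TESTS PASSED !#!#"

def main(args):
    # args is a drop-in replacement for sys.argv: args[0] is the invocation
    # command, args[1:] are the command-line parameters.
    if len(args) > 1 and args[1] in ("-h", "--help"):
        print(f"usage: {args[0]} RESULTS_DIR")
        return
    results_dir = args[1] if len(args) > 1 else "results"
    # Walk the results hierarchy and report whether each run logged success.
    for root, _dirs, files in os.walk(results_dir):
        if "console_log.txt" in files:
            with open(os.path.join(root, "console_log.txt"), "rt") as fh:
                solved = SUCCESS_MARKER in fh.read()
            print(f"{root}\t{solved}")

if __name__ == "__main__":
    main(sys.argv)
```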
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/AutoGPT/README.md
autogen
# AutoGPT Benchmark This scenario implements an older subset of the [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/tree/master/agbenchmark#readme) benchmark. Tasks were selected in November 2023, and may have since been deprecated. They are nonetheless useful for comparison and development.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/AutoGPT/README.md
autogen
Running the tasks ``` autogenbench run Tasks/autogpt__two_agents.jsonl autogenbench tabulate Results/autogpt__two_agents ```
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/Examples/README.md
autogen
# Example Tasks Various AutoGen example tasks. Unlike other benchmark tasks, these tasks have no automated evaluation.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/Examples/README.md
autogen
Running the tasks ```
autogenbench run Tasks/default_two_agents
```

Some tasks require a Bing API key. Edit the ENV.json file to provide a valid BING_API_KEY, or simply allow the task to fail (only one task requires it).
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/GAIA/README.md
autogen
# GAIA Benchmark This scenario implements the [GAIA](https://arxiv.org/abs/2311.12983) agent benchmark.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/GAIA/README.md
autogen
Running the TwoAgents tasks Level 1 tasks: ```sh autogenbench run Tasks/gaia_test_level_1__two_agents.jsonl autogenbench tabulate Results/gaia_test_level_1__two_agents ``` Level 2 and 3 tasks are executed similarly.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/GAIA/README.md
autogen
Running the SocietyOfMind tasks Running the SocietyOfMind tasks is similar to the TwoAgents tasks, but requires an `ENV.json` file with a working Bing API key. This file should be located in the current working directory from which you are running autogenbench, and should have at least the following contents:

```json
{
    "BING_API_KEY": "Your_API_key"
}
```

Once created, simply run:

```sh
autogenbench run Tasks/gaia_test_level_1__soc.jsonl
autogenbench tabulate Results/gaia_test_level_1__soc
```

And similarly for levels 2 and 3.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/GAIA/README.md
autogen
References **GAIA: a benchmark for General AI Assistants**<br/> Grégoire Mialon, Clémentine Fourrier, Craig Swift, Thomas Wolf, Yann LeCun, Thomas Scialom<br/> [https://arxiv.org/abs/2311.12983](https://arxiv.org/abs/2311.12983)
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/HumanEval/README.md
autogen
# HumanEval Benchmark This scenario implements a modified version of the [HumanEval](https://arxiv.org/abs/2107.03374) benchmark. Compared to the original benchmark, there are **two key differences** here: - A chat model rather than a completion model is used. - The agents get pass/fail feedback about their implementations, and can keep trying until they succeed or run out of tokens or turns.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/HumanEval/README.md
autogen
Running the tasks ```
autogenbench run Tasks/human_eval_two_agents.jsonl
autogenbench tabulate Results/human_eval_two_agents
```

For faster development and iteration, a reduced HumanEval set is available in `Tasks/r_human_eval_two_agents.jsonl`; it contains only 26 problems of varying difficulty.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/HumanEval/README.md
autogen
References **Evaluating Large Language Models Trained on Code**<br/> Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba<br/> [https://arxiv.org/abs/2107.03374](https://arxiv.org/abs/2107.03374)
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/MATH/README.md
autogen
# MATH Benchmark This scenario implements the [MATH](https://arxiv.org/abs/2103.03874) benchmark.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/MATH/README.md
autogen
Running the tasks ```
autogenbench run Tasks/math_two_agents.jsonl
autogenbench tabulate Results/math_two_agents
```

By default, only a small subset (17 of 5000) of the MATH problems is exposed. Edit `Scripts/init_tasks.py` to expose more tasks.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/MATH/README.md
autogen
Note on automated evaluation In this scenario, we adopted an automated evaluation pipeline (from the [AutoGen](https://arxiv.org/abs/2308.08155) evaluation) that uses an LLM to compare the results. The metric above is therefore only an estimate of the agent's performance on math problems. A similar practice of using an LLM as a judge for the MATH dataset appears in the [Cumulative Reasoning](https://arxiv.org/abs/2308.04371) paper ([code](https://github.com/iiis-ai/cumulative-reasoning/blob/main/MATH/math-cr-4shot.py)).

The static checking shipped with the MATH dataset requires an exact match (comparing 2.0 and 2 results in False). We haven't found an established way to compare answers accurately, so human involvement is still needed to confirm the results. In AutoGen, the conversation ends at "TERMINATE" by default. To enable automated answer extraction and evaluation, we prompt an LLM with (1) the given problem, (2) the ground-truth answer, and (3) the last response from the solver, asking it to extract the answer and compare it with the ground truth.

We evaluated the 17 problems 3 times each and went through the problems manually to check the answers. Compared with the automated evaluation (using gpt-4-0613), we found that in 2 of the 3 trials the automated evaluation marked 1 correct answer as wrong (a false negative), meaning 49/51 problems were evaluated correctly. We also went through 200 randomly sampled problems from the whole dataset to check the results: there was 1 false negative and 2 false positives. Note that false positives are also possible, due to LLM hallucination and the variety of the problems.
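A rough sketch of the judging step described above is shown below. The prompt wording and the helper function are illustrative assumptions, not the actual evaluation code used in this scenario:

```python
def build_judge_prompt(problem: str, ground_truth: str, last_response: str) -> str:
    # The LLM judge sees the problem, the ground-truth answer, and the
    # solver's final message, and is asked to extract and compare the answer.
    return (
        f"Problem:\n{problem}\n\n"
        f"Ground-truth answer:\n{ground_truth}\n\n"
        f"Solver's last response:\n{last_response}\n\n"
        "Extract the solver's final answer from the last response, then reply "
        "with exactly CORRECT if it matches the ground-truth answer "
        "(treat mathematically equal forms such as 2.0 and 2 as a match); "
        "otherwise reply with exactly WRONG."
    )
```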
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/MATH/README.md
autogen
References **Measuring Mathematical Problem Solving With the MATH Dataset**<br/> Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, Jacob Steinhardt<br/> [https://arxiv.org/abs/2103.03874](https://arxiv.org/abs/2103.03874) **AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation**<br/> Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang and Chi Wang<br/> [https://arxiv.org/abs/2308.08155](https://arxiv.org/abs/2308.08155) **Cumulative Reasoning with Large Language Models**<br/> Yifan Zhang, Jingqin Yang, Yang Yuan, Andrew Chi-Chih Yao<br/> [https://arxiv.org/abs/2308.04371](https://arxiv.org/abs/2308.04371)
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
# What is Promptflow Promptflow is a comprehensive suite of tools that simplifies the development, testing, evaluation, and deployment of LLM-based AI applications. It also supports integration with Azure AI for cloud-based operations and is designed to streamline end-to-end development. Refer to [Promptflow docs](https://microsoft.github.io/promptflow/) for more information.

Quick links:

- Why use Promptflow - [Link](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow)
- Quick start guide - [Link](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html)
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
Getting Started - Install the required python packages

```bash
cd samples/apps/promptflow-autogen
pip install -r requirements.txt
```

- This example assumes a working Redis cache service is available. You can get started locally using this [guide](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) or use your favorite managed service.
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
Chat flow Chat flow is designed for conversational application development, building upon the capabilities of standard flow and providing enhanced support for chat inputs/outputs and chat history management. With chat flow, you can easily create a chatbot that handles chat input and output.
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
Create connection for LLM tool to use You can follow these steps to create a connection required by an LLM tool. Currently, there are two connection types supported by the LLM tool: "AzureOpenAI" and "OpenAI". If you want to use the "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use the "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details.

```bash
# Override keys with --set to avoid yaml file changes

# Create an Azure OpenAI connection
pf connection create --file azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection

# Create the custom connection for the Redis cache
pf connection create -f custom_conn.yaml --set secrets.redis_url=<your-redis-connection-url> --name redis_connection_url

# Sample Redis connection string: rediss://:PASSWORD@redis_host_name.redis.cache.windows.net:6380/0
```

Note that in [flow.dag.yaml](flow.dag.yaml) we use the connection named `aoai_connection` for Azure OpenAI and `redis_connection_url` for Redis.

```bash
# show registered connection
pf connection show --name open_ai_connection
```

Please refer to the connections [document](https://promptflow.azurewebsites.net/community/local/manage-connections.html) and [example](https://github.com/microsoft/promptflow/tree/main/examples/connections) for more details.
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
Develop a chat flow The most important elements that differentiate a chat flow from a standard flow are **Chat Input**, **Chat History**, and **Chat Output**.

- **Chat Input**: Chat input refers to the messages or queries submitted by users to the chatbot. Effectively handling chat input is crucial for a successful conversation, as it involves understanding user intentions, extracting relevant information, and triggering appropriate responses.
- **Chat History**: Chat history is the record of all interactions between the user and the chatbot, including both user inputs and AI-generated outputs. Maintaining chat history is essential for keeping track of the conversation context and ensuring the AI can generate contextually relevant responses. Chat History is a special type of chat flow input that stores chat messages in a structured format; an illustrative example of this format is shown below.
  - NOTE: Currently the sample flows do not send chat history messages to the agent workflow.
- **Chat Output**: Chat output refers to the AI-generated messages that are sent to the user in response to their inputs. Generating contextually appropriate and engaging chat outputs is vital for a positive user experience.

A chat flow can have multiple inputs, but Chat History and Chat Input are required inputs in a chat flow.
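For reference, Promptflow represents chat history as a list of past turns, each recording the flow's inputs and outputs. A minimal illustration is below; the `question` and `answer` field names are assumptions that depend on your flow's declared inputs and outputs:

```json
[
    {
        "inputs": { "question": "What is Promptflow?" },
        "outputs": { "answer": "A suite of tools for building LLM applications." }
    }
]
```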
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
Interact with chat flow Promptflow supports interacting with a chat flow from within VS Code, and the Promptflow CLI provides a way to start an interactive chat session from the terminal. You can use the command below to start an interactive chat session:

```bash
pf flow test --flow <flow_folder> --interactive
```
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
Autogen State Flow [Autogen State Flow](./autogen_stateflow.py) contains the StateFlow example shared at [StateFlow](https://microsoft.github.io/autogen/blog/2024/02/29/StateFlow/), adapted for Promptflow. All the interim messages are sent to a Redis channel; you can use these to stream to a frontend or take further actions. The output of Promptflow is the `summary` message from the group chat.
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
Agent Nested Chat [Autogen Nested Chat](./agentchat_nestedchat.py) contains Scenario 1 of the nested chat example shared at [Nested Chats](https://microsoft.github.io/autogen/docs/notebooks/agentchat_nestedchat), adapted for Promptflow. All the interim messages are sent to a Redis channel; you can use these to stream to a frontend or take further actions. The output of Promptflow is the `summary` message from the group chat.
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
Redis for Data cache and Interim Messages Autogen supports Redis for [data caching](https://microsoft.github.io/autogen/docs/reference/cache/redis_cache/), and since Redis also supports a pub/sub model, this Promptflow example is configured so that all agent callbacks send messages to a Redis channel. This is an optional feature, but it is essential for long-running workflows and provides your frontend with access to interim messages.

NOTE: Currently Promptflow only supports [SSE](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) for streaming data and does not support websockets.

NOTE: In a multi-user chatbot environment, please make the necessary changes to send messages to the corresponding channel.
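To consume these interim messages, any Redis client can subscribe to the channel the callbacks publish to. A minimal sketch using `redis-py` is shown below; the connection URL and the channel name `autogen_messages` are assumptions, so use whatever your flow is actually configured with:

```python
import redis

# Connect to the same Redis instance the flow publishes to.
r = redis.from_url("redis://localhost:6379/0")
pubsub = r.pubsub()
pubsub.subscribe("autogen_messages")  # hypothetical channel name

# Print each interim agent message as it arrives.
for message in pubsub.listen():
    if message["type"] == "message":
        print(message["data"])
```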
GitHub
autogen
autogen/samples/apps/cap/README.md
autogen
# Composable Actor Platform (CAP) for AutoGen
GitHub
autogen
autogen/samples/apps/cap/README.md
autogen
I just want to run the remote AutoGen agents! *Python Instructions (Windows, Linux, MacOS):*

0) cd py
1) pip install -r autogencap/requirements.txt
2) python ./demo/App.py
3) Choose (5) and follow the instructions to run standalone Agents
4) Choose other options for other demos

*Demo Notes:*
1) Options involving AutoGen require OAI_CONFIG_LIST. AutoGen python requirements: 3.8 <= python <= 3.11
2) For option 2, type something in and see who receives the message. Type 'quit' to quit.
3) To view any option that displays a chart (such as option 4), you will need to disable Docker code execution. You can do this by setting the environment variable `AUTOGEN_USE_DOCKER` to `False`.

*Demo Reference:*
```
Select the Composable Actor Platform (CAP) demo app to run:
(enter anything else to quit)
1. Hello World
2. Complex Agent (e.g. Name or Quit)
3. AutoGen Pair
4. AutoGen GroupChat
5. AutoGen Agents in different processes
6. List Actors in CAP (Registry)
Enter your choice (1-6):
```
GitHub
autogen
autogen/samples/apps/cap/README.md
autogen
What is Composable Actor Platform (CAP)? AutoGen is about Agents and Agent Orchestration. CAP extends AutoGen to allow Agents to communicate via a message bus. CAP, therefore, deals with the space between these components. CAP is a message-based actor platform that allows actors to be composed into arbitrary graphs. Actors can register themselves with CAP, find other agents, construct arbitrary graphs, send and receive messages independently, and many, many, many other things.

```python
# CAP Platform
network = LocalActorNetwork()
# Register an agent
network.register(GreeterAgent())
# Tell agents to connect to other agents
network.connect()
# Get a channel to the agent
greeter_link = network.lookup_agent("Greeter")
# Send a message to the agent
greeter_link.send_txt_msg("Hello World!")
# Cleanup
greeter_link.close()
network.disconnect()
```

### Check out other demos in the `py/demo` directory. We show the following:
1) Hello World shown above
2) Many CAP Actors interacting with each other
3) A pair of interacting AutoGen Agents wrapped in CAP Actors
4) CAP wrapped AutoGen Agents in a group chat
5) Two AutoGen Agents running in different processes and communicating through CAP
6) List all registered agents in CAP

### Coming soon. Stay tuned!
1) AutoGen integration to list all registered agents
GitHub
autogen
autogen/samples/apps/cap/TODO.md
autogen
- ~~Pretty print debug_logs~~ - ~~colors~~ - ~~messages to oai should be condensed~~ - ~~remove orchestrator in scenario 4 and have the two actors talk to each other~~ - ~~pass a complex multi-part message~~ - ~~protobuf for messages~~ - ~~make changes to autogen to enable scenario 3 to work with CAN~~ - ~~make groupchat work~~ - ~~actors instead of agents~~ - clean up for PR into autogen - ~~Create folder structure under Autogen examples~~ - ~~CAN -> CAP (Composable Actor Protocol)~~ - CAP actor lookup should use zmq - Add min C# actors & reorganize - Hybrid GroupChat with C# ProductManager - C++ Msg Layer - Rust Msg Layer - Node Msg Layer - Java Msg Layer - Investigate a standard logging framework that supports color in windows - structlog?
GitHub
autogen
autogen/samples/apps/cap/c#/Readme.md
autogen
Coming soon...
GitHub
autogen
autogen/samples/apps/cap/py/README.md
autogen
# Composable Actor Platform (CAP) for AutoGen
GitHub
autogen
autogen/samples/apps/cap/py/README.md
autogen
I just want to run the remote AutoGen agents! *Python Instructions (Windows, Linux, MacOS):*

```
pip install autogencap
```

1) AutoGen requires OAI_CONFIG_LIST. AutoGen python requirements: 3.8 <= python <= 3.11
GitHub
autogen
autogen/samples/apps/cap/py/README.md
autogen
What is Composable Actor Platform (CAP)? AutoGen is about Agents and Agent Orchestration. CAP extends AutoGen to allow Agents to communicate via a message bus. CAP, therefore, deals with the space between these components. CAP is a message-based actor platform that allows actors to be composed into arbitrary graphs. Actors can register themselves with CAP, find other agents, construct arbitrary graphs, send and receive messages independently, and many, many, many other things.

```python
# CAP Library
from autogencap.ComponentEnsemble import ComponentEnsemble
from autogencap.Actor import Actor

# A simple Agent
class GreeterAgent(Actor):
    def __init__(self):
        super().__init__(
            agent_name="Greeter",
            description="This is the greeter agent, who knows how to greet people.")

    # Prints out the message it receives
    def on_txt_msg(self, msg):
        print(f"Greeter received: {msg}")
        return True

ensemble = ComponentEnsemble()
# Create an agent
agent = GreeterAgent()
# Register an agent
ensemble.register(agent)
# Start message processing; calls on_connect() on all Agents
ensemble.connect()
# Get a channel to the agent
greeter_link = ensemble.find_by_name("Greeter")
# Send a message to the agent
greeter_link.send_txt_msg("Hello World!")
# Cleanup
greeter_link.close()
ensemble.disconnect()
```

### Check out other demos in the `py/demo` directory. We show the following:
1) Hello World shown above
2) Many CAP Actors interacting with each other
3) A pair of interacting AutoGen Agents wrapped in CAP Actors
4) CAP wrapped AutoGen Agents in a group chat
5) Two AutoGen Agents running in different processes and communicating through CAP
6) List all registered agents in CAP
7) Run Agent in user supplied message loop
GitHub
autogen
autogen/samples/apps/cap/node/Readme.md
autogen
Coming soon...
GitHub
autogen
autogen/samples/apps/cap/c++/Readme.md
autogen
Coming soon...