---
title: "Tools"
description: "Extend agent capabilities with built-in and custom tools"
icon: "wrench"
---

## Overview

Tools extend an agent's capabilities beyond text generation. They act as specialized modules the agent can call to interact with external systems, access information, and execute actions in response to user queries.

<Note>
Supported in Python and TypeScript.
</Note>

The framework provides ready-to-use tools that offer immediate functionality for common agent tasks:

| Tool             | Description                                                                                         |
| :--------------- |:----------------------------------------------------------------------------------------------------|
| [MCP](#mcp)        | Discover and use tools exposed by an arbitrary [MCP server](https://modelcontextprotocol.io/examples)  |
| [Think](#think)    | Gives an agent a place to think                                                                                              |
| [Handoff](#handoff)    | Delegates a task to an expert agent                                                                                             |
| [OpenAPI](#openapi)  | Consume external APIs with ease                                       |
| [OpenMeteo](#openmeteo)  | Retrieve weather information for specific locations and dates                                       |
| [DuckDuckGo](#duckduckgo) | Search for data on DuckDuckGo                                                                       |
| [Wikipedia](#wikipedia)  | Search for data on Wikipedia                                                                        |
| [VectorStoreSearch](#vector-store-search) | Search for documents in a vector database                         |
| [Python](#python)     | Run arbitrary Python code in a sandboxed environment                                                |
| [Sandbox](#sandbox)    | Run custom Python functions in a sandboxed environment                                              |

➕ [Request additional built-in tools](https://github.com/i-am-bee/beeai-framework/discussions)

<Tip>
Would you like to use a tool from LangChain? See the LangChain tool example in [Python](https://github.com/i-am-bee/beeai-framework/blob/main/python/examples/tools/langchain_tool.py) or [TypeScript](https://github.com/i-am-bee/beeai-framework/blob/main/typescript/examples/tools/langchain.ts).
</Tip>

## Usage

### Basic usage

The simplest way to use a tool is to instantiate it directly and call its `run()` method with appropriate input:

<CodeGroup>

{/* <!-- embedme python/examples/tools/base.py --> */}
```py Python [expandable]
import asyncio
import datetime
import sys

from beeai_framework.errors import FrameworkError
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools.weather import OpenMeteoTool, OpenMeteoToolInput


async def main() -> None:
    tool = OpenMeteoTool()
    result = await tool.run(
        input=OpenMeteoToolInput(
            location_name="New York", start_date=datetime.date.today(), end_date=datetime.date.today()
        )
    ).middleware(GlobalTrajectoryMiddleware())
    print(result.get_text_content())


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        sys.exit(e.explain())

```

{/* <!-- embedme typescript/examples/tools/base.ts --> */}
```ts TypeScript [expandable]
import { OpenMeteoTool } from "beeai-framework/tools/weather/openMeteo";

const tool = new OpenMeteoTool();

const today = new Date().toISOString().split("T")[0];
const result = await tool.run({
  location: { name: "New York" },
  start_date: today,
  end_date: today,
});
console.log(result.getTextContent());

```

</CodeGroup>

### Advanced usage

Tools often support additional configuration options to customize their behavior:

<CodeGroup>

{/* <!-- embedme python/examples/tools/advanced.py --> */}
```py Python [expandable]
import asyncio
import datetime
import sys
import traceback

from beeai_framework.errors import FrameworkError
from beeai_framework.tools.weather import OpenMeteoTool, OpenMeteoToolInput


async def main() -> None:
    tool = OpenMeteoTool()
    result = await tool.run(
        input=OpenMeteoToolInput(
            location_name="New York",
            start_date=datetime.date.today(),
            end_date=datetime.date.today(),
            temperature_unit="celsius",
        )
    )
    print(result.get_text_content())


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        traceback.print_exc()
        sys.exit(e.explain())

```

{/* <!-- embedme typescript/examples/tools/advanced.ts --> */}
```ts TypeScript [expandable]
import { OpenMeteoTool } from "beeai-framework/tools/weather/openMeteo";
import { UnconstrainedCache } from "beeai-framework/cache/unconstrainedCache";

const tool = new OpenMeteoTool({
  cache: new UnconstrainedCache(),
  retryOptions: {
    maxRetries: 3,
  },
});
console.log(tool.name); // OpenMeteo
console.log(tool.description); // Retrieve current, past, or future weather forecasts for a location.
console.log(tool.inputSchema()); // (zod/json schema)

await tool.cache.clear();

const today = new Date().toISOString().split("T")[0];
const result = await tool.run({
  location: { name: "New York" },
  start_date: today,
  end_date: today,
  temperature_unit: "celsius",
});
console.log(result.isEmpty()); // false
console.log(result.result); // prints raw data
console.log(result.getTextContent()); // prints data as text

```
</CodeGroup>

### Using tools with agents

The true power of tools emerges when integrating them with agents. Tools extend the agent's capabilities, allowing it to perform actions beyond text generation:

<CodeGroup>

{/* <!-- embedme python/examples/tools/agent.py --> */}
```py Python [expandable]
from beeai_framework.adapters.ollama import OllamaChatModel
from beeai_framework.agents.react import ReActAgent
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.weather import OpenMeteoTool

agent = ReActAgent(llm=OllamaChatModel("llama3.1"), tools=[OpenMeteoTool()], memory=UnconstrainedMemory())

```

{/* <!-- embedme typescript/examples/tools/agent.ts --> */}
```ts TypeScript [expandable]
import { ArXivTool } from "beeai-framework/tools/arxiv";
import { ReActAgent } from "beeai-framework/agents/react/agent";
import { UnconstrainedMemory } from "beeai-framework/memory/unconstrainedMemory";
import { OllamaChatModel } from "beeai-framework/adapters/ollama/backend/chat";

const agent = new ReActAgent({
  llm: new OllamaChatModel("llama3.1"),
  memory: new UnconstrainedMemory(),
  tools: [new ArXivTool()],
});

```

</CodeGroup>

<Heading level={3} id="tool-decorator">Creating a tool</Heading>

For simpler tools, you can use the `tool` decorator to quickly create a tool from a function:

{/* <!-- embedme python/examples/tools/decorator.py --> */}
```py Python [expandable]
import asyncio
import json
import sys
import traceback
from urllib.parse import quote

import requests

from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.backend import ChatModel
from beeai_framework.errors import FrameworkError
from beeai_framework.logger import Logger
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools import StringToolOutput, tool

logger = Logger(__name__)


# defining a tool using the `tool` decorator
@tool
def basic_calculator(expression: str) -> StringToolOutput:
    """
    A calculator tool that performs mathematical operations.

    Args:
        expression: The mathematical expression to evaluate (e.g., "2 + 3 * 4").

    Returns:
        The result of the mathematical expression
    """
    try:
        encoded_expression = quote(expression)
        math_url = f"https://newton.vercel.app/api/v2/simplify/{encoded_expression}"

        response = requests.get(
            math_url,
            headers={"Accept": "application/json"},
        )
        response.raise_for_status()

        return StringToolOutput(json.dumps(response.json()))
    except Exception as e:
        raise RuntimeError(f"Error evaluating expression: {e!s}") from e


async def main() -> None:
    # using the tool in an agent

    chat_model = ChatModel.from_name("ollama:granite4:micro")

    agent = RequirementAgent(llm=chat_model, tools=[basic_calculator], memory=UnconstrainedMemory())

    result = await agent.run("What is the square root of 36?", total_max_retries=10).middleware(
        GlobalTrajectoryMiddleware()
    )

    print(result.last_message.text)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        traceback.print_exc()
        sys.exit(e.explain())

```

## Built-in Tools

<Heading level={3} id="duckduckgo" name="duckduckgo">DuckDuckGo search tool</Heading>

Use the DuckDuckGo search tool to retrieve real-time search results from across the internet, including news, current events, or content from specific websites or domains.

<CodeGroup>

{/* <!-- embedme python/examples/tools/duckduckgo.py --> */}
```py Python [expandable]
import asyncio
import sys
import traceback

from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.backend import ChatModel
from beeai_framework.errors import FrameworkError
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool


async def main() -> None:
    chat_model = ChatModel.from_name("ollama:granite4:micro")
    agent = RequirementAgent(llm=chat_model, tools=[DuckDuckGoSearchTool()], memory=UnconstrainedMemory())

    result = await agent.run("How tall is the mount Everest?")

    print(result.last_message.text)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        traceback.print_exc()
        sys.exit(e.explain())

```

{/* <!-- embedme typescript/examples/tools/custom/extending.ts --> */}
```ts TypeScript [expandable]
import { z } from "zod";
import {
  DuckDuckGoSearchTool,
  DuckDuckGoSearchToolSearchType as SafeSearchType,
} from "beeai-framework/tools/search/duckDuckGoSearch";

const searchTool = new DuckDuckGoSearchTool();

const customSearchTool = searchTool.extend(
  z.object({
    query: z.string(),
    safeSearch: z.boolean().default(true),
  }),
  (input, options) => {
    if (!options.search) {
      options.search = {};
    }
    options.search.safeSearch = input.safeSearch ? SafeSearchType.STRICT : SafeSearchType.OFF;

    return { query: input.query };
  },
);

const response = await customSearchTool.run(
  {
    query: "News in the world!",
    safeSearch: true,
  },
  {
    signal: AbortSignal.timeout(10_000),
  },
);
console.info(response);

```

</CodeGroup>

<Heading level={3} id="openmeteo">OpenMeteo weather tool</Heading>

Use the OpenMeteo tool to retrieve real-time weather forecasts including detailed information on temperature, wind speed, and precipitation. Access forecasts predicting weather up to 16 days in the future and archived forecasts for weather up to 30 days in the past. Ideal for obtaining up-to-date weather predictions and recent historical weather trends.

<CodeGroup>

{/* <!-- embedme python/examples/tools/openmeteo.py --> */}
```py Python [expandable]
import asyncio
import sys
import traceback

from beeai_framework.agents.react import ReActAgent
from beeai_framework.backend import ChatModel
from beeai_framework.errors import FrameworkError
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.weather import OpenMeteoTool


async def main() -> None:
    llm = ChatModel.from_name("ollama:llama3.1")
    agent = ReActAgent(llm=llm, tools=[OpenMeteoTool()], memory=UnconstrainedMemory())

    result = await agent.run("What's the current weather in London?")

    print(result.last_message.text)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        traceback.print_exc()
        sys.exit(e.explain())

```
{/* <!-- embedme typescript/examples/tools/base.ts --> */}
```ts TypeScript [expandable]
import { OpenMeteoTool } from "beeai-framework/tools/weather/openMeteo";

const tool = new OpenMeteoTool();

const today = new Date().toISOString().split("T")[0];
const result = await tool.run({
  location: { name: "New York" },
  start_date: today,
  end_date: today,
});
console.log(result.getTextContent());

```
</CodeGroup>

<Heading level={3} id="wikipedia">Wikipedia tool</Heading>

Use the Wikipedia tool to retrieve detailed information from Wikipedia.org on a wide range of topics, including famous individuals, locations, organizations, and historical events. Ideal for obtaining comprehensive overviews or specific details on well-documented subjects. May not be suitable for lesser-known or more recent topics. The information is subject to community edits, which can be inaccurate.

<CodeGroup>
{/* <!-- embedme python/examples/tools/wikipedia.py --> */}
```py Python [expandable]
import asyncio
import sys
import traceback

from beeai_framework.errors import FrameworkError
from beeai_framework.tools.search.wikipedia import (
    WikipediaTool,
    WikipediaToolInput,
)


async def main() -> None:
    wikipedia_client = WikipediaTool({"full_text": True})
    tool_input = WikipediaToolInput(query="bee")
    result = await wikipedia_client.run(tool_input)
    print(result.get_text_content())


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        traceback.print_exc()
        sys.exit(e.explain())

```

{/* <!-- embedme typescript/examples/tools/custom/piping.ts --> */}
```ts TypeScript [expandable]
import { WikipediaTool } from "beeai-framework/tools/search/wikipedia";
import { SimilarityTool } from "beeai-framework/tools/similarity";
import { splitString } from "beeai-framework/internals/helpers/string";
import { z } from "zod";

const wikipedia = new WikipediaTool();
const similarity = new SimilarityTool({
  maxResults: 5,
  provider: async (input) =>
    input.documents.map((document) => ({
      score: document.text
        .toLowerCase()
        .split(" ")
        .reduce((acc, word) => acc + (input.query.toLowerCase().includes(word) ? 1 : 0), 0),
    })),
});

const wikipediaWithSimilarity = wikipedia
  .extend(
    z.object({
      page: z.string().describe("Wikipedia page"),
      query: z.string().describe("Search query"),
    }),
    (newInput) => ({ query: newInput.page }),
  )
  .pipe(similarity, (input, output) => ({
    query: input.query,
    documents: output.results.flatMap((document) =>
      Array.from(splitString(document.fields.markdown ?? "", { size: 1000, overlap: 50 })).map(
        (chunk) => ({
          text: chunk,
          source: document,
        }),
      ),
    ),
  }));

const response = await wikipediaWithSimilarity.run({
  page: "JavaScript",
  query: "engine",
});
console.info(response);

```
</CodeGroup>

<Heading level={3} id="vector-store-search">Vector Store Search Tool</Heading>

Use the VectorStoreSearchTool to perform semantic search against pre-populated vector stores. This tool enables agents to retrieve relevant documents from knowledge bases using semantic similarity, making it ideal for RAG (Retrieval-Augmented Generation) applications and knowledge-based question answering.

<Note>
This tool requires a pre-populated vector store. Vector store population (loading and chunking documents) is typically handled offline in production applications.
</Note>

{/* <!-- embedme python/examples/agents/requirement/rag.py --> */}
```py Python [expandable]
import asyncio
import os

from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.backend import ChatModel
from beeai_framework.backend.document_loader import DocumentLoader
from beeai_framework.backend.embedding import EmbeddingModel
from beeai_framework.backend.text_splitter import TextSplitter
from beeai_framework.backend.vector_store import VectorStore
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools import Tool
from beeai_framework.tools.search.retrieval import VectorStoreSearchTool

POPULATE_VECTOR_DB = True
VECTOR_DB_PATH_4_DUMP = ""  # Set this path for persistence


async def setup_vector_store() -> VectorStore | None:
    """
    Setup vector store with BeeAI framework documentation.
    """
    embedding_model = EmbeddingModel.from_name("watsonx:ibm/slate-125m-english-rtrvr-v2", truncate_input_tokens=500)

    # Load existing vector store if available
    if VECTOR_DB_PATH_4_DUMP and os.path.exists(VECTOR_DB_PATH_4_DUMP):
        print(f"Loading vector store from: {VECTOR_DB_PATH_4_DUMP}")
        from beeai_framework.adapters.beeai.backend.vector_store import TemporalVectorStore

        preloaded_vector_store: VectorStore = TemporalVectorStore.load(
            path=VECTOR_DB_PATH_4_DUMP, embedding_model=embedding_model
        )
        return preloaded_vector_store

    # Create new vector store if population is enabled
    # NOTE: Vector store population is typically done offline in production applications
    if POPULATE_VECTOR_DB:
        # Load documentation about BeeAI agents - this serves as our knowledge base
        # for answering questions about the different types of agents available
        loader = DocumentLoader.from_name(
            name="langchain:UnstructuredMarkdownLoader", file_path="docs/modules/agents.mdx"
        )
        try:
            documents = await loader.load()
        except Exception as e:
            print(f"Failed to load documents: {e}")
            return None

        # Split documents into chunks
        text_splitter = TextSplitter.from_name(
            name="langchain:RecursiveCharacterTextSplitter", chunk_size=1000, chunk_overlap=200
        )
        documents = await text_splitter.split_documents(documents)
        print(f"Loaded {len(documents)} document chunks")

        # Create vector store and add documents
        vector_store = VectorStore.from_name(name="beeai:TemporalVectorStore", embedding_model=embedding_model)
        await vector_store.add_documents(documents=documents)
        print("Vector store populated with documents")

        return vector_store

    return None


async def main() -> None:
    """
    Example demonstrating RequirementAgent using VectorStoreSearchTool.

    The agent will use the vector store search tool to find relevant information
    about BeeAI framework agents and provide comprehensive answers.

    Note: In typical applications, you would use a pre-populated vector store
    rather than populating it at runtime. This example includes population
    logic for demonstration purposes only.
    """
    # Setup vector store with BeeAI documentation
    vector_store = await setup_vector_store()
    if vector_store is None:
        raise FileNotFoundError(
            "Failed to instantiate Vector Store. "
            "Either set POPULATE_VECTOR_DB=True to create a new one, or ensure the database file exists."
        )

    # Create the vector store search tool
    search_tool = VectorStoreSearchTool(vector_store=vector_store)

    # Alternative: Create search tool using dynamic loading
    # embedding_model = EmbeddingModel.from_name("watsonx:ibm/slate-125m-english-rtrvr-v2", truncate_input_tokens=500)
    # search_tool = VectorStoreSearchTool.from_name(
    #     name="beeai:TemporalVectorStore",
    #     embedding_model=embedding_model
    # )

    # Create RequirementAgent with the vector store search tool
    llm = ChatModel.from_name("ollama:llama3.1:8b")
    agent = RequirementAgent(
        llm=llm,
        memory=UnconstrainedMemory(),
        instructions=(
            "You are a helpful assistant that answers questions about the BeeAI framework. "
            "Use the vector store search tool to find relevant information from the documentation "
            "before providing your answer. Always search for information first, then provide a "
            "comprehensive response based on what you found."
        ),
        tools=[search_tool],
        # Log all tool calls to the console for easier debugging
        middlewares=[GlobalTrajectoryMiddleware(included=[Tool])],
    )

    query = "What types of agents are available in BeeAI?"
    response = await agent.run(query)
    print(f"query: {query}\nResponse: {response.last_message.text}")


if __name__ == "__main__":
    asyncio.run(main())

```

<Heading level={3} id="mcp">MCP Tool</Heading>

Leverage the Model Context Protocol (MCP) to define, initialize, and utilize tools on compatible MCP servers. These servers expose executable functionalities, enabling AI models to perform tasks such as computations, API calls, or system operations.


<AccordionGroup>
	<Accordion title="Using stdio transport">

		<CodeGroup>

		{/* <!-- embedme python/examples/tools/mcp/mcp_stdio.py --> */}
		```py Python [expandable]
		import asyncio
		import os
		
		from mcp import StdioServerParameters, stdio_client
		
		from beeai_framework.agents.requirement import RequirementAgent
		from beeai_framework.backend import ChatModel
		from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
		from beeai_framework.tools import Tool
		from beeai_framework.tools.mcp import MCPTool
		
		server_params = StdioServerParameters(
		    command="npx", args=["-y", "@modelcontextprotocol/server-filesystem", os.getcwd()]
		)
		
		
		async def main() -> None:
		    """Using a local MCP file server to explore the file system."""
		
		    client = stdio_client(server_params)
		    mcp_tools = await MCPTool.from_client(client)
		
		    agent = RequirementAgent(
		        llm=ChatModel.from_name("ollama:granite4:micro"),
		        tools=mcp_tools,
		        middlewares=[GlobalTrajectoryMiddleware(included=[Tool, ChatModel])],
		    )
		
		    prompt = "What's the current working directory?"
		    print(f"User: {prompt}")
		    response = await agent.run(prompt)
		    print(f"Agent: {response.last_message.text}")
		
		
		if __name__ == "__main__":
		    asyncio.run(main())
		
		```

		{/* <!-- embedme typescript/examples/tools/mcp.ts --> */}
		```ts TypeScript [expandable]
		import { Client } from "@modelcontextprotocol/sdk/client/index.js";
		import { MCPTool } from "beeai-framework/tools/mcp";
		import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
		import { ReActAgent } from "beeai-framework/agents/react/agent";
		import { UnconstrainedMemory } from "beeai-framework/memory/unconstrainedMemory";
		import { OllamaChatModel } from "beeai-framework/adapters/ollama/backend/chat";
		
		// Create MCP Client
		const client = new Client(
		  {
		    name: "test-client",
		    version: "1.0.0",
		  },
		  {
		    capabilities: {},
		  },
		);
		
		// Connect the client to any MCP server with tools capability
		await client.connect(
		  new StdioClientTransport({
		    command: "npx",
		    args: ["-y", "@modelcontextprotocol/server-everything"],
		  }),
		);
		
		try {
		  // Server usually supports several tools, use the factory for automatic discovery
		  const tools = await MCPTool.fromClient(client);
		  const agent = new ReActAgent({
		    llm: new OllamaChatModel("llama3.1"),
		    memory: new UnconstrainedMemory(),
		    tools,
		  });
		  // @modelcontextprotocol/server-everything contains "add" tool
		  await agent.run({ prompt: "Find out how much is 4 + 7" }).observe((emitter) => {
		    emitter.on("update", async ({ data, update, meta }) => {
		      console.log(`Agent (${update.key}) 🤖 : `, update.value);
		    });
		  });
		} finally {
		  // Close the MCP connection
		  await client.close();
		}
		
		```
		</CodeGroup>

	</Accordion>

	<Accordion title="Using streamable-http transport">
		{/* <!-- embedme python/examples/tools/mcp/mcp_streamable_http.py --> */}
		```py Python [expandable]
		import asyncio
		
		from mcp.client.streamable_http import streamablehttp_client
		
		from beeai_framework.agents.requirement import RequirementAgent
		from beeai_framework.backend import ChatModel
		from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
		from beeai_framework.tools import Tool
		from beeai_framework.tools.mcp import MCPTool
		
		
		async def main() -> None:
		    """Using BeeAI Framework Documentation MCP Server to learn more."""
		
		    client = streamablehttp_client("https://framework.beeai.dev/mcp")
		    all_tools = await MCPTool.from_client(client)
		
		    agent = RequirementAgent(
		        llm=ChatModel.from_name("ollama:granite4:micro"),
		        tools=all_tools,
		        middlewares=[GlobalTrajectoryMiddleware(included=[Tool, ChatModel])],
		    )
		
		    prompt = "Why to use Requirement Agent?"
		    print(f"User: {prompt}")
		    response = await agent.run(prompt)
		    print(f"Agent: {response.last_message.text}")
		
		
		if __name__ == "__main__":
		    asyncio.run(main())
		
		```
	</Accordion>

	<Accordion title="Using sse transport">
		{/* <!-- embedme python/examples/tools/mcp/mcp_sse.py --> */}
		```py Python [expandable]
		import asyncio
		
		from mcp.client.sse import sse_client
		
		from beeai_framework.agents.requirement import RequirementAgent
		from beeai_framework.backend import ChatModel
		from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
		from beeai_framework.tools import Tool
		from beeai_framework.tools.mcp import MCPTool
		
		
		async def main() -> None:
		    """Using IBM Cloud MCP Server with BeeAI Framework.
		
		    ibmcloud login -r us-south --sso
		    ibmcloud --mcp-transport http://127.0.0.1:7777
		    """
		
		    all_ibm_tools = await MCPTool.from_client(sse_client("http://127.0.0.1:7777/sse"))
		    ibm_tools = [tool for tool in all_ibm_tools if tool.name in {"ibmcloud_account_show"}]
		
		    agent = RequirementAgent(
		        llm=ChatModel.from_name("ollama:granite3.3:8b"),
		        tools=[*ibm_tools],
		        instructions="Specify JSON as an output format for the tool calls if possible.",
		        middlewares=[GlobalTrajectoryMiddleware(included=[Tool, ChatModel])],
		    )
		
		    prompt = "Who is the owner of my IBM account?"
		    print(f"User: {prompt}")
		    response = await agent.run(prompt)
		    print(f"Agent: {response.last_message.text}")
		
		
		if __name__ == "__main__":
		    asyncio.run(main())
		
		```
	</Accordion>

</AccordionGroup>


<Tip>
	Check out the [MCP Slack integration tutorial](/guides/mcp-slackbot)
</Tip>

<Note>
	Currently, the client does not support Roots, Sampling, or Elicitation. Feel free to open a GitHub issue if needed.
</Note>

<Heading level={3} id="openapi">OpenAPI Tool</Heading>

Many APIs are described using the OpenAPI Specification. The framework can read such a specification and generate a set of tools that let an agent communicate with the API.

<CodeGroup>
	{/* <!-- embedme python/examples/tools/openapi.py --> */}
	```py Python [expandable]
	import asyncio
	import json
	import os
	import sys
	import traceback
	
	from aiofiles import open
	
	from beeai_framework.agents.requirement import RequirementAgent
	from beeai_framework.errors import FrameworkError
	from beeai_framework.tools.openapi import OpenAPITool
	
	
	async def main() -> None:
	    # Retrieve the schema
	    current_dir = os.path.dirname(__file__)
	    async with open(f"{current_dir}/assets/github_openapi.json") as file:
	        content = await file.read()
	        open_api_schema = json.loads(content)
	
	        # Create a tool for each operation in the schema
	        tools = OpenAPITool.from_schema(open_api_schema)
	        print(f"Retrieved {len(tools)} tools")
	        print("\n".join([t.name for t in tools]))
	
	        # Create an agent
	        agent = RequirementAgent(llm="ollama:granite4:micro", tools=tools)
	
	        # Run the agent
	        prompt = "How many repositories are in 'i-am-bee' org?"
	        print("User:", prompt)
	        response = await agent.run(prompt)
	        print("Agent 🤖 : ", response.last_message.text)
	
	
	if __name__ == "__main__":
	    try:
	        asyncio.run(main())
	    except FrameworkError as e:
	        traceback.print_exc()
	        sys.exit(e.explain())
	
	```

	{/* <!-- embedme typescript/examples/tools/openapi.ts --> */}
	```ts TypeScript [expandable]
	import "dotenv/config.js";
	import { ReActAgent } from "beeai-framework/agents/react/agent";
	import { TokenMemory } from "beeai-framework/memory/tokenMemory";
	import { OpenAPITool } from "beeai-framework/tools/openapi";
	import * as fs from "fs";
	import { dirname } from "node:path";
	import { fileURLToPath } from "node:url";
	import { ChatModel } from "beeai-framework/backend/chat";
	
	const llm = await ChatModel.fromName("ollama:llama3.1");
	
	const __dirname = dirname(fileURLToPath(import.meta.url));
	const openApiSchema = await fs.promises.readFile(
	  `${__dirname}/assets/github_openapi.json`,
	  "utf-8",
	);
	
	const agent = new ReActAgent({
	  llm,
	  memory: new TokenMemory(),
	  tools: [new OpenAPITool({ openApiSchema })],
	});
	
	const response = await agent
	  .run({ prompt: 'How many repositories are in "i-am-bee" org?' })
	  .observe((emitter) => {
	    emitter.on("update", async ({ data, update, meta }) => {
	      console.log(`Agent (${update.key}) 🤖 : `, update.value);
	    });
	  });
	
	console.log(`Agent 🤖 : `, response.result.text);
	
	```

</CodeGroup>


<Heading level={3} id="python">Python Tool</Heading>

The Python tool allows AI agents to execute Python code within a secure, sandboxed environment. This tool enables access to files that are either provided by the user or created during execution.

This enables agents to:
- Perform calculations and data analysis
- Create and modify files
- Process and transform user data
- Generate visualizations and reports
- And more

<Note>
This tool requires a running [beeai-code-interpreter](https://github.com/i-am-bee/bee-code-interpreter) instance.
Get started quickly with the BeeAI Framework starter template for [Python](https://github.com/i-am-bee/beeai-framework-py-starter)
or [TypeScript](https://github.com/i-am-bee/beeai-framework-ts-starter).
</Note>

Key components:
- `LocalPythonStorage` – Handles where Python code is stored and run.
  - `local_working_dir` – A temporary folder where the code is saved before running.
  - `interpreter_working_dir` – The folder where the code actually runs, set by the `CODE_INTERPRETER_TMPDIR` environment variable.
- `PythonTool` – Connects to an external Python interpreter to run code.
  - `code_interpreter_url` – The URL where the code gets executed (default: `http://127.0.0.1:50081`).
  - `storage` – Controls where the code is stored. By default, files are saved locally using `LocalPythonStorage`. You can plug in a different storage backend, such as cloud storage, if needed.

<CodeGroup>

{/* <!-- embedme python/examples/tools/python_tool.py --> */}
```py Python [expandable]
import asyncio
import os
import sys
import tempfile
import traceback

from dotenv import load_dotenv

from beeai_framework.adapters.ollama import OllamaChatModel
from beeai_framework.agents.react import ReActAgent
from beeai_framework.errors import FrameworkError
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.code import LocalPythonStorage, PythonTool

# Load environment variables
load_dotenv()


async def main() -> None:
    llm = OllamaChatModel("llama3.1")
    storage = LocalPythonStorage(
        local_working_dir=tempfile.mkdtemp("code_interpreter_source"),
        # CODE_INTERPRETER_TMPDIR should point to where the code interpreter stores its files
        interpreter_working_dir=os.getenv("CODE_INTERPRETER_TMPDIR", "./tmp/code_interpreter_target"),
    )
    python_tool = PythonTool(
        code_interpreter_url=os.getenv("CODE_INTERPRETER_URL", "http://127.0.0.1:50081"),
        storage=storage,
    )
    agent = ReActAgent(llm=llm, tools=[python_tool], memory=UnconstrainedMemory())
    result = await agent.run("Calculate 5036 * 12856 and save the result to answer.txt").on(
        "update", lambda data, event: print(f"Agent 🤖 ({data.update.key}) : ", data.update.parsed_value)
    )
    print(result.last_message.text)

    result = await agent.run("Read the content of answer.txt.").on(
        "update", lambda data, event: print(f"Agent 🤖 ({data.update.key}) : ", data.update.parsed_value)
    )
    print(result.last_message.text)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        traceback.print_exc()
        sys.exit(e.explain())

```

{/* <!-- embedme typescript/examples/tools/custom/python.ts --> */}
```ts TypeScript [expandable]
import "dotenv/config";
import { CustomTool } from "beeai-framework/tools/custom";

const customTool = await CustomTool.fromSourceCode(
  {
    // Ensure the env exists
    url: process.env.CODE_INTERPRETER_URL!,
    env: { API_URL: "https://riddles-api.vercel.app/random" },
  },
  `import requests
import os
from typing import Optional, Union, Dict

def get_riddle() -> Optional[Dict[str, str]]:
  """
  Fetches a random riddle from the Riddles API.

  This function retrieves a random riddle and its answer. It does not accept any input parameters.

  Returns:
      Optional[Dict[str, str]]: A dictionary containing:
          - 'riddle' (str): The riddle question.
          - 'answer' (str): The answer to the riddle.
      Returns None if the request fails.
  """
  url = os.environ.get('API_URL')
  
  try:
      response = requests.get(url)
      response.raise_for_status() 
      return response.json() 
  except Exception as e:
      return None`,
);

```

</CodeGroup>

<Heading level={3} id="sandbox">Sandbox tool</Heading>

The Sandbox tool provides a way to define and run custom Python functions in a secure, sandboxed environment. It's ideal when you need to encapsulate specific functionality that can be called by the agent.

<Note>
This tool requires [beeai-code-interpreter](https://github.com/i-am-bee/bee-code-interpreter) to use.
Get started quickly with [beeai-framework-py-starter](https://github.com/i-am-bee/beeai-framework-py-starter).
</Note>

<CodeGroup>

{/* <!-- embedme python/examples/tools/custom/sandbox.py --> */}
```py Python [expandable]
import asyncio
import os
import sys
import traceback

from dotenv import load_dotenv

from beeai_framework.errors import FrameworkError
from beeai_framework.tools.code import SandboxTool

load_dotenv()


async def main() -> None:
    sandbox_tool = await SandboxTool.from_source_code(
        url=os.getenv("CODE_INTERPRETER_URL", "http://127.0.0.1:50081"),
        env={"API_URL": "https://riddles-api.vercel.app/random"},
        source_code="""
import requests
import os
from typing import Optional, Union, Dict

def get_riddle() -> Optional[Dict[str, str]]:
    '''
    Fetches a random riddle from the Riddles API.

    This function retrieves a random riddle and its answer. It does not accept any input parameters.

    Returns:
        Optional[Dict[str, str]]: A dictionary containing:
            - 'riddle' (str): The riddle question.
            - 'answer' (str): The answer to the riddle.
        Returns None if the request fails.
    '''
    url = os.environ.get('API_URL')

    try:
        response = requests.get(url)
        response.raise_for_status()
        return response.json()
    except Exception as e:
        return None
""",
    )

    result = await sandbox_tool.run({})

    print(result.get_text_content())


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        traceback.print_exc()
        sys.exit(e.explain())

```

{/* <!-- embedme typescript/examples/tools/custom/python.ts --> */}
```ts TypeScript [expandable]
import "dotenv/config";
import { CustomTool } from "beeai-framework/tools/custom";

const customTool = await CustomTool.fromSourceCode(
  {
    // Ensure the env exists
    url: process.env.CODE_INTERPRETER_URL!,
    env: { API_URL: "https://riddles-api.vercel.app/random" },
  },
  `import requests
import os
from typing import Optional, Union, Dict

def get_riddle() -> Optional[Dict[str, str]]:
  """
  Fetches a random riddle from the Riddles API.

  This function retrieves a random riddle and its answer. It does not accept any input parameters.

  Returns:
      Optional[Dict[str, str]]: A dictionary containing:
          - 'riddle' (str): The riddle question.
          - 'answer' (str): The answer to the riddle.
      Returns None if the request fails.
  """
  url = os.environ.get('API_URL')
  
  try:
      response = requests.get(url)
      response.raise_for_status() 
      return response.json() 
  except Exception as e:
      return None`,
);

```
</CodeGroup>

<Tip>
Environmental variables can be overridden (or defined) in the following ways:

1. During the creation of a `SandboxTool`, either via the constructor or the factory function (`SandboxTool.from_source_code`).
2. By passing them directly as part of the options when invoking: `my_tool.run(..., env={ "MY_ENV": "MY_VALUE" })`.
</Tip>

<Note>
Only `PythonTool` can access files.
</Note>

---

## Creating custom tools

Custom tools allow you to build your own specialized tools to extend agent capabilities.

To create a new tool, implement the base `Tool` class. The framework provides flexible options for tool creation, from simple to complex implementations.

<Note>
Instantiate the `Tool` by passing your own handler (function) along with a `name`, a `description`, and an `input schema`.
</Note>

### Basic custom tool

Here's an example of a simple custom tool that provides riddles:

<CodeGroup>

{/* <!-- embedme python/examples/tools/custom/base.py --> */}
```py Python [expandable]
import asyncio
import random
import sys
from typing import Any

from pydantic import BaseModel, Field

from beeai_framework.context import RunContext
from beeai_framework.emitter import Emitter
from beeai_framework.errors import FrameworkError
from beeai_framework.tools import StringToolOutput, Tool, ToolRunOptions


class RiddleToolInput(BaseModel):
    riddle_number: int = Field(description="Index of riddle to retrieve.")


class RiddleTool(Tool[RiddleToolInput, ToolRunOptions, StringToolOutput]):
    name = "Riddle"
    description = "It selects a riddle to test your knowledge."
    input_schema = RiddleToolInput

    data = (
        "What has hands but can't clap?",
        "What has a face and two hands but no arms or legs?",
        "What gets wetter the more it dries?",
        "What has to be broken before you can use it?",
        "What has a head, a tail, but no body?",
        "The more you take, the more you leave behind. What am I?",
        "What goes up but never comes down?",
    )

    def __init__(self, options: dict[str, Any] | None = None) -> None:
        super().__init__(options)

    def _create_emitter(self) -> Emitter:
        return Emitter.root().child(
            namespace=["tool", "example", "riddle"],
            creator=self,
        )

    async def _run(
        self, input: RiddleToolInput, options: ToolRunOptions | None, context: RunContext
    ) -> StringToolOutput:
        index = input.riddle_number % (len(self.data))
        riddle = self.data[index]
        return StringToolOutput(result=riddle)


async def main() -> None:
    tool = RiddleTool()
    tool_input = RiddleToolInput(riddle_number=random.randint(0, len(RiddleTool.data)))
    result = await tool.run(tool_input)
    print(result)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        sys.exit(e.explain())

```


</CodeGroup>

<Tip>
The input schema (`inputSchema`) processing can be asynchronous when needed for more complex validation or preprocessing.
</Tip>

<Tip>
For structured data responses, use `JSONToolOutput` or implement your own custom output type.
</Tip>

### Advanced custom tool

For more complex scenarios, you can implement tools with robust input validation, error handling, and structured outputs:

<CodeGroup>
{/* <!-- embedme python/examples/tools/custom/openlibrary.py --> */}
```py Python [expandable]
import asyncio
import sys
from typing import Any

import httpx
from pydantic import BaseModel, Field

from beeai_framework.context import RunContext
from beeai_framework.emitter import Emitter
from beeai_framework.errors import FrameworkError
from beeai_framework.tools import JSONToolOutput, Tool, ToolError, ToolInputValidationError, ToolRunOptions


class OpenLibraryToolInput(BaseModel):
    title: str | None = Field(description="Title of book to retrieve.", default=None)
    olid: str | None = Field(description="Open Library number of book to retrieve.", default=None)
    subjects: str | None = Field(description="Subject of a book to retrieve.", default=None)


class OpenLibraryToolResult(BaseModel):
    preview_url: str
    info_url: str
    bib_key: str


class OpenLibraryToolOutput(JSONToolOutput[OpenLibraryToolResult]):
    pass


class OpenLibraryTool(Tool[OpenLibraryToolInput, ToolRunOptions, OpenLibraryToolOutput]):
    name = "OpenLibrary"
    description = """Provides access to a library of books with information about book titles,
        authors, contributors, publication dates, publisher and isbn."""
    input_schema = OpenLibraryToolInput

    def __init__(self, options: dict[str, Any] | None = None) -> None:
        super().__init__(options)

    def _create_emitter(self) -> Emitter:
        return Emitter.root().child(
            namespace=["tool", "example", "openlibrary"],
            creator=self,
        )

    async def _run(
        self, tool_input: OpenLibraryToolInput, options: ToolRunOptions | None, context: RunContext
    ) -> OpenLibraryToolOutput:
        key = ""
        value = ""
        input_vars = vars(tool_input)
        for val in input_vars:
            if input_vars[val] is not None:
                key = val
                value = input_vars[val]
                break
        else:
            raise ToolInputValidationError("All input values in OpenLibraryToolInput were empty.") from None

        async with httpx.AsyncClient() as client:
            response = await client.get(
                f"https://openlibrary.org/api/books?bibkeys={key}:{value}&jsmcd=data&format=json",
                headers={"Content-Type": "application/json", "Accept": "application/json"},
            )
            response.raise_for_status()

            result = response.json().get(f"{key}:{value}")
            if not result:
                raise ToolError(f"No book found with {key}={value}.")

            return OpenLibraryToolOutput(OpenLibraryToolResult.model_validate(result))


async def main() -> None:
    tool = OpenLibraryTool()
    tool_input = OpenLibraryToolInput(title="It")
    result = await tool.run(tool_input)
    print(result)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        sys.exit(e.explain())

```
{/* <!-- embedme typescript/examples/tools/custom/openLibrary.ts --> */}
```ts TypeScript [expandable]
import {
  BaseToolOptions,
  BaseToolRunOptions,
  Tool,
  ToolInput,
  JSONToolOutput,
  ToolError,
  ToolEmitter,
} from "beeai-framework/tools/base";
import { z } from "zod";
import { createURLParams } from "beeai-framework/internals/fetcher";
import { GetRunContext } from "beeai-framework/context";
import { Callback, Emitter } from "beeai-framework/emitter/emitter";

type ToolOptions = BaseToolOptions & { maxResults?: number };
type ToolRunOptions = BaseToolRunOptions;

export interface OpenLibraryResponse {
  numFound: number;
  start: number;
  numFoundExact: boolean;
  q: string;
  offset: number;
  docs: Record<string, any>[];
}

export class OpenLibraryToolOutput extends JSONToolOutput<OpenLibraryResponse> {
  isEmpty(): boolean {
    return !this.result || this.result.numFound === 0 || this.result.docs.length === 0;
  }
}

export class OpenLibraryTool extends Tool<OpenLibraryToolOutput, ToolOptions, ToolRunOptions> {
  name = "OpenLibrary";
  description =
    "Provides access to a library of books with information about book titles, authors, contributors, publication dates, publisher and isbn.";

  inputSchema() {
    return z
      .object({
        title: z.string(),
        author: z.string(),
        isbn: z.string(),
        subject: z.string(),
        place: z.string(),
        person: z.string(),
        publisher: z.string(),
      })
      .partial();
  }

  public readonly emitter: ToolEmitter<
    ToolInput<this>,
    OpenLibraryToolOutput,
    {
      beforeFetch: Callback<{ request: { url: string; options: RequestInit } }>;
      afterFetch: Callback<{ data: OpenLibraryResponse }>;
    }
  > = Emitter.root.child({
    namespace: ["tool", "search", "openLibrary"],
    creator: this,
  });

  static {
    this.register();
  }

  protected async _run(
    input: ToolInput<this>,
    _options: Partial<ToolRunOptions>,
    run: GetRunContext<this>,
  ) {
    const request = {
      url: `https://openlibrary.org?${createURLParams({
        searchon: input,
      })}`,
      options: { signal: run.signal } as RequestInit,
    };

    await run.emitter.emit("beforeFetch", { request });
    const response = await fetch(request.url, request.options);

    if (!response.ok) {
      throw new ToolError(
        "Request to Open Library API has failed!",
        [new Error(await response.text())],
        {
          context: { input },
        },
      );
    }

    const json: OpenLibraryResponse = await response.json();
    if (this.options.maxResults) {
      json.docs.length = this.options.maxResults;
    }

    await run.emitter.emit("afterFetch", { data: json });
    return new OpenLibraryToolOutput(json);
  }
}

```
</CodeGroup>

### Implementation guidelines

When creating custom tools, follow these key requirements:

**1. Implement the `Tool` class**

To create a custom tool, you need to extend the base `Tool` class and implement several required components. The output must be an implementation of the `ToolOutput` interface, such as `StringToolOutput` for text responses or `JSONToolOutput` for structured data.

**2. Create a descriptive name**

Your tool needs a clear, descriptive name that follows naming conventions:

```py Python [expandable]
name = "MyNewTool"
```

The name may contain only the characters a-z, A-Z, 0-9, hyphen (`-`), and underscore (`_`).
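That constraint can be checked with a simple regular expression (illustrative only, not a framework API):

```py Python
import re

# Tool names may contain only a-z, A-Z, 0-9, hyphen, or underscore
NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")


def is_valid_tool_name(name: str) -> bool:
    return bool(NAME_PATTERN.match(name))


print(is_valid_tool_name("MyNewTool"))  # True
print(is_valid_tool_name("my tool!"))   # False (space and '!' are not allowed)
```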

**3. Write an effective description**

The description is crucial as it determines when the agent uses your tool:

```py Python [expandable]
description = "Takes X action when given Y input resulting in Z output"
```

You should experiment with different natural language descriptions to ensure the tool is used in the correct circumstances. You can also include usage tips and guidance for the agent in the description, but it's advisable to keep it succinct to reduce the chance of conflicting with other tools or adversely affecting agent behavior.

**4. Define a clear input schema**

Create a Pydantic model that defines the expected inputs with helpful descriptions:

```py Python [expandable]
class OpenMeteoToolInput(BaseModel):
    location_name: str = Field(description="The name of the location to retrieve weather information.")
    country: str | None = Field(description="Country name.", default=None)
    start_date: str | None = Field(
        description="Start date for the weather forecast in the format YYYY-MM-DD (UTC)", default=None
    )
    end_date: str | None = Field(
        description="End date for the weather forecast in the format YYYY-MM-DD (UTC)", default=None
    )
    temperature_unit: Literal["celsius", "fahrenheit"] = Field(
        description="The unit to express temperature", default="celsius"
    )
```
_Source: [/python/beeai_framework/tools/weather/openmeteo.py](https://github.com/i-am-bee/beeai-framework/blob/main/python/beeai_framework/tools/weather/openmeteo.py)_

The input schema is a required field that defines the format of your tool's input. The agent takes the natural language input it has received and structures it into the fields described by the schema. Following the naming convention from step 2, you might call this class `MyNewToolInput`. Keep your tool input schema simple and provide clear field descriptions to help the agent interpret each field.
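To see how such a schema constrains the agent's structured output, you can validate a payload directly with Pydantic. The class below repeats a trimmed-down version of the schema above purely for illustration:

```py Python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError


class OpenMeteoToolInput(BaseModel):
    location_name: str = Field(description="The name of the location to retrieve weather information.")
    temperature_unit: Literal["celsius", "fahrenheit"] = Field(
        description="The unit to express temperature", default="celsius"
    )


# A well-formed payload parses cleanly, with defaults filled in
ok = OpenMeteoToolInput.model_validate({"location_name": "Prague"})
print(ok.temperature_unit)  # "celsius"

# A malformed payload is rejected before the tool ever runs
try:
    OpenMeteoToolInput.model_validate({"temperature_unit": "kelvin"})
except ValidationError as e:
    print(f"rejected with {len(e.errors())} validation error(s)")
```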

**5. Implement the `_run()` method**

This method contains the core functionality of your tool, processing the input and returning the appropriate output.

```py Python [expandable]
def _run(self, input: OpenMeteoToolInput, options: Any = None) -> StringToolOutput:
    params = urlencode(self.get_params(input), doseq=True)
    logger.debug(f"Using OpenMeteo URL: https://api.open-meteo.com/v1/forecast?{params}")
    response = requests.get(
        f"https://api.open-meteo.com/v1/forecast?{params}",
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )
    response.raise_for_status()
    return StringToolOutput(json.dumps(response.json()))
```

_Source: [/python/beeai_framework/tools/weather/openmeteo.py](https://github.com/i-am-bee/beeai-framework/blob/main/python/beeai_framework/tools/weather/openmeteo.py)_

---

## Best practices

### 1. Data minimization

If your tool provides data to the agent, ensure the data is relevant and free of extraneous metadata. Preprocessing data to improve relevance and strip unnecessary content conserves agent memory and improves overall performance.
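For example, rather than handing the agent a raw API response, project out only the fields the agent can act on (a generic sketch, independent of the framework):

```py Python
def minimize(record: dict, keep: tuple[str, ...]) -> dict:
    """Keep only the fields the agent actually needs,
    dropping metadata that would waste context window and memory."""
    return {k: record[k] for k in keep if k in record}


raw = {
    "title": "It",
    "preview_url": "https://openlibrary.org/books/OL7353617M",
    "etag": 'W/"5e3b"',            # transport metadata: irrelevant to the agent
    "server_timing": "db;dur=12",  # likewise
}

print(minimize(raw, keep=("title", "preview_url")))
```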

### 2. Provide hints

If your tool encounters an error that is fixable, you can return a hint to the agent; the agent will try to reuse the tool in the context of the hint. This can improve the agent's ability
to recover from errors.
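One way to surface such a hint (an illustrative pattern, not a framework-specific API) is to catch the recoverable error and return a message that tells the agent exactly how to retry:

```py Python
from datetime import datetime


def run_tool(date: str) -> str:
    """Hypothetical tool body that expects YYYY-MM-DD dates."""
    try:
        datetime.strptime(date, "%Y-%m-%d")
    except ValueError:
        # Recoverable error: tell the agent *how* to fix its input so it can retry
        return "Error: invalid date. Hint: use the format YYYY-MM-DD, e.g. 2024-05-01."
    return f"Forecast for {date}: sunny"


print(run_tool("01/05/2024"))  # returns the hint; the agent can retry
print(run_tool("2024-05-01"))  # succeeds
```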

### 3. Security and stability

When building tools, consider that the tool is being invoked by a somewhat unpredictable third party (the agent). You should ensure that sufficient guardrails are in place to prevent
adverse outcomes.
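A concrete guardrail is to validate agent-supplied values against an allowlist before they reach anything dangerous. A minimal sketch (the hosts and helper are hypothetical; adapt the checks to your own tool):

```py Python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"openlibrary.org", "api.open-meteo.com"}


def safe_url(url: str) -> str:
    """Reject URLs whose host is not explicitly allowlisted,
    so the agent cannot steer the tool at arbitrary endpoints."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"Host {host!r} is not on the allowlist")
    return url


print(safe_url("https://openlibrary.org/api/books"))
# safe_url("https://evil.example.com/") would raise ValueError
```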

---

## Examples

<CardGroup cols={2}>
  <Card title="Python" icon="python" href="https://github.com/i-am-bee/beeai-framework/tree/main/python/examples/tools">
    Explore reference tool implementations in Python
  </Card>
  <Card title="TypeScript" icon="js" href="https://github.com/i-am-bee/beeai-framework/tree/main/typescript/examples/tools">
    Explore reference tool implementations in TypeScript
  </Card>
</CardGroup>
