---
title: "Requirement Agent"
icon: "microscope"
---

The `RequirementAgent` is a declarative AI agent implementation that provides predictable, controlled execution behavior across different language models through rule-based constraints. Language models vary significantly in their reasoning capabilities and tool-calling sophistication; `RequirementAgent` normalizes these differences by enforcing consistent execution patterns regardless of the underlying model's strengths or weaknesses. Rules can be configured as strictly or as flexibly as the task requires.

### Core Problems Addressed

**Traditional AI agents exhibit unpredictable behavior** in production environments:
- Execution inconsistency: Agents may skip critical steps, terminate prematurely, or use inappropriate tools
- Model variability: Different LLMs produce different execution patterns for the same task
- Debugging complexity: Non-deterministic behavior makes troubleshooting difficult
- Production reliability: Lack of guarantees makes agents unsuitable for critical workflows

### Value of `RequirementAgent`
`RequirementAgent` ensures consistent agent behavior through declarative rules that define when and how tools are used, delivering reliable agents that:
- Complete essential tasks systematically by enforcing proper execution sequences
- Validate data and results comprehensively through mandatory verification steps
- Select appropriate tools intelligently based on context and task requirements
- Execute efficiently and safely with built-in protection against infinite loops and runaway processes

## Quickstart

This example demonstrates how to create an agent with enforced tool execution order.

This agent will:
1. First use `ThinkTool` to reason about the request, enabling a ReAct pattern
2. Check weather using `OpenMeteoTool`, which it must call at least once but not consecutively
3. Search for events using `DuckDuckGoSearchTool` at least once
4. Provide recommendations based on the gathered information


<CodeGroup>

{/* <!-- embedme python/examples/agents/requirement/quickstart_requirement.py --> */}
```py Python [expandable]
import asyncio

from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.agents.requirement.requirements.conditional import (
    ConditionalRequirement,
)
from beeai_framework.backend import ChatModel
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool
from beeai_framework.tools.think import ThinkTool
from beeai_framework.tools.weather import OpenMeteoTool


# Create an agent that plans activities based on weather and events
async def main() -> None:
    agent = RequirementAgent(
        llm=ChatModel.from_name("ollama:granite4:micro"),
        tools=[
            ThinkTool(),  # to reason
            OpenMeteoTool(),  # retrieve weather data
            DuckDuckGoSearchTool(),  # search web
        ],
        instructions="Plan activities for a given destination based on current weather and events.",
        requirements=[
            # Force thinking first
            ConditionalRequirement(ThinkTool, force_at_step=1),
            # Search only after getting weather and at least once
            ConditionalRequirement(
                DuckDuckGoSearchTool, only_after=[OpenMeteoTool], min_invocations=1, max_invocations=2
            ),
            # Weather tool must be used at least once but not consecutively
            ConditionalRequirement(OpenMeteoTool, consecutive_allowed=False, min_invocations=1, max_invocations=2),
        ],
    )
    # Run with execution logging
    response = await agent.run("What to do in Boston?").middleware(GlobalTrajectoryMiddleware())
    print(f"Final Answer: {response.last_message.text}")


if __name__ == "__main__":
    asyncio.run(main())

```

```ts TypeScript [expandable]
COMING SOON
```

</CodeGroup>


## How it Works

`RequirementAgent` operates on a simple principle: developers declare rules on specific tools using `ConditionalRequirement` objects, while the framework automatically handles all orchestration logic behind the scenes. The developer can modify agent behavior by adjusting rule parameters, not rewriting complex state management logic. This creates clear separation between business logic (rules) and execution control (framework-managed).

In `RequirementAgent`, **all capabilities (including data retrieval, web search, reasoning patterns, and `final_answer`) are implemented as tools** to ensure structured, reliable execution. Each `ConditionalRequirement` produces `Rule` objects, each bound to a single tool:

| Attribute      | Purpose                                                                                       | Type |
|----------------|-----------------------------------------------------------------------------------------------|------|
| `target`       | Which tool the rule applies to for a given turn                                               | str  |
| `allowed`      | Whether the tool can be used for a given turn and is present in the system prompt             | bool |
| `hidden`       | Whether the tool definition is visible to the agent for a given turn and in the system prompt | bool |
| `prevent_stop` | Whether the rule prevents the agent from terminating for a given turn                         | bool |
| `forced`       | Whether the tool must be invoked on a given turn                                              | bool |
| `reason`       | Optionally explains to the LLM why the given rule is applied                                  | str  |

When requirements generate conflicting rules, the system applies this precedence:
- **Forbidden overrides all**: If any requirement forbids a tool, that tool cannot be used.
- **Highest priority forced rule wins**: If multiple requirements force tools, the highest-priority requirement decides which tool is forced.
- **Prevention rules accumulate**: All `prevent_stop` rules apply simultaneously.

### Execution Flow
1. **State Initialization**: Creates `RequirementAgentRunState` with `UnconstrainedMemory`, execution steps, and iteration tracking
2. **Requirements Processing**: `RequirementsReasoner` analyzes requirements and determines allowed tools, tool choice preferences, and termination conditions
3. **Request Creation**: Creates a structured request with `allowed_tools`, `tool_choice`, and `can_stop` flags based on current state and requirements. The system evaluates requirements before each LLM call to determine which tools to make available to the LLM
4. **LLM Interaction**: Calls language model with system message, conversation history, and constrained tool set
5. **Tool Execution**: Executes requested tools via `_run_tools`, handles errors, and updates conversation memory
6. **Cycle Detection**: `ToolCallChecker` prevents infinite loops by detecting repeated tool call patterns
7. **Iteration Control**: Continues until requirements are satisfied or maximum iterations reached

### Basic Rule Definition
Developers declare rules by creating `ConditionalRequirement` objects that target specific tools. The framework automatically handles all orchestration:

```py Python [expandable]
# Declare: agent must think before acting
ConditionalRequirement(ThinkTool, force_at_step=1)

# Declare: require weather check before web search
ConditionalRequirement(DuckDuckGoSearchTool, only_after=[OpenMeteoTool])

# Declare: prevent consecutive uses of same tool
ConditionalRequirement(OpenMeteoTool(), consecutive_allowed=False)
```

### Complete Parameter Reference

<CodeGroup>
```py Python [expandable]
ConditionalRequirement(
  target_tool, # Tool class, instance, or name (can also be specified as `target=...`)
  name="", # (optional) Name, useful for logging
  only_before=[...], # (optional) Disable target_tool after any of these tools are called
  only_after=[...], # (optional) Disable target_tool before all these tools are called
  force_after=[...], # (optional) Force target_tool execution immediately after any of these tools are called
  min_invocations=0, # (optional) Minimum times the tool must be called before agent can stop
  max_invocations=10, # (optional) Maximum times the tool can be called before being disabled
  force_at_step=1, # (optional) Step number at which the tool must be invoked
  only_success_invocations=True, # (optional) Whether 'force_at_step' counts only successful invocations
  priority=10, # (optional) Higher relative number means higher priority for requirement enforcement
  consecutive_allowed=True, # (optional) Whether the tool can be invoked twice in a row
  force_prevent_stop=False,  # (optional) If True, prevents the agent from giving a final answer when a forced target_tool call occurs.
  enabled=True, # (optional) Whether this requirement is active; set to False to skip it
  custom_checks=[
     # (optional) Custom callbacks; all must pass for the tool to be used
    lambda state: any('weather' in msg.text for msg in state.memory.messages if isinstance(msg, UserMessage)),
    lambda state: state.iteration > 0,
  ],
)
```

```ts TypeScript [expandable]
COMING SOON
```

</CodeGroup>


<Tip>
Start with a single requirement and add more as needed.
</Tip>


<Tip>
	Curious to see it in action?
	Explore our [interactive exercises](https://github.com/i-am-bee/beeai-framework/tree/main/python/examples/agents/requirement/exercises) to discover how the agent solves real problems step by step!
</Tip>


## Example Agents

### Forced Execution Order

This example forces the agent to use `ThinkTool` for reasoning followed by `DuckDuckGoSearchTool` to retrieve data. This trajectory ensures that even a small model can arrive at the correct answer by preventing it from skipping tool calls entirely.

<CodeGroup>
```py Python [expandable]
RequirementAgent(
  llm=ChatModel.from_name("ollama:granite3.3"),
  tools=[ThinkTool(), DuckDuckGoSearchTool()],
  requirements=[
      ConditionalRequirement(ThinkTool, force_at_step=1), # Force ThinkTool at the first step
      ConditionalRequirement(DuckDuckGoSearchTool, force_at_step=2), # Force DuckDuckGo at the second step
  ],
)
```

```ts TypeScript [expandable]
COMING SOON
```

</CodeGroup>

### Creating a ReAct Agent

A ReAct Agent (Reason and Act) follows this trajectory:

```text
Think -> Use a tool -> Think -> Use a tool -> Think -> ... -> End
```

You can achieve this by forcing the execution of the `Think` tool after every tool:

<CodeGroup>
```py Python [expandable]
RequirementAgent(
  llm=ChatModel.from_name("ollama:granite3.3"),
  tools=[ThinkTool(), WikipediaTool(), OpenMeteoTool()],
  requirements=[ConditionalRequirement(ThinkTool, force_at_step=1, force_after=Tool)],
)
```

```ts TypeScript [expandable]
COMING SOON
```

</CodeGroup>

<Tip>
For a more general approach, use `ConditionalRequirement(ThinkTool, force_at_step=1, force_after=Tool, consecutive_allowed=False)`, where the option `consecutive_allowed=False` prevents `ThinkTool` from being used multiple times in a row.
</Tip>

### ReAct Agent with Custom Conditions

You may want an agent that works like ReAct but skips the "reasoning" step under certain conditions. This example uses the `priority` option to tell the agent to send an email after creating an order, while calling `ThinkTool` as the first step and after `retrieve_basket`.

<CodeGroup>
```py Python [expandable]
RequirementAgent(
  llm=ChatModel.from_name("ollama:granite3.3"),
  tools=[ThinkTool(), retrieve_basket(), create_order(), send_email()],
  requirements=[
    ConditionalRequirement(ThinkTool, force_at_step=1, force_after=retrieve_basket, priority=10),
    ConditionalRequirement(send_email, only_after=create_order, force_after=create_order, priority=20, max_invocations=1),
  ],
)
```

```ts TypeScript [expandable]
COMING SOON
```

</CodeGroup>


### Ask Permission Requirement

Some tools may be expensive to run or have destructive effects.
For these tools, you may want to get **approval from an external system or directly from the user**.

The following agent first asks the user before it runs the `remove_data` or the `get_data` tool.

<CodeGroup>
```py Python [expandable]
RequirementAgent(
  llm=ChatModel.from_name("ollama:granite3.3"),
  tools=[get_data, remove_data, update_data],
  requirements=[
    AskPermissionRequirement([remove_data, get_data])
  ]
)
```

```ts TypeScript [expandable]
COMING SOON
```

</CodeGroup>

#### Using a Custom `handler` for Human-in-the-Loop Requirements

By default, approval is requested via a simple prompt in the terminal.
The framework provides a simple way to supply a custom implementation.

<CodeGroup>
```py Python [expandable]
async def handler(tool: Tool, tool_input: dict[str, Any]) -> bool:
  # your implementation
  return True

AskPermissionRequirement(..., handler=handler)
```

```ts TypeScript [expandable]
COMING SOON
```

</CodeGroup>


#### Complete `AskPermissionRequirement` Parameter Reference

<CodeGroup>
```py Python [expandable]
AskPermissionRequirement(
  include=[...], # (optional) List of targets (tool name, instance, or class) requiring explicit approval
  exclude=[...], # (optional) List of targets to exclude
  remember_choices=False, # (optional) If True, remember the user's decision and don't ask again for the same tool
  hide_disallowed=False, # (optional) Permanently disable disallowed targets
  always_allow=False, # (optional) Skip the asking part
  handler=lambda tool, tool_input: input(f"The agent wants to use the '{tool.name}' tool.\nInput: {tool_input}\nDo you allow it? (yes/no): ").strip().startswith("yes") # (optional) Custom handler, can be async
)
```

```ts TypeScript [expandable]
COMING SOON
```

</CodeGroup>

<Note>
If no targets are specified, permission is required for all tools.
</Note>

## Custom Requirements

You can create a custom requirement by implementing the base `Requirement` class.
The `Requirement` class has the following lifecycle:

1. An external caller invokes the `init(tools)` method:
- `tools` is a list of available tools for a given agent.
- This method is called only once, at the very beginning.
- It is an ideal place to introduce hooks, validate the presence of certain tools, etc.
- The return type of the `init` method is `None`.

2. An external caller invokes the `run(state)` method:
- `state` is a generic parameter; in `RequirementAgent`, it refers to the `RequirementAgentRunState` class.
- This method is called multiple times, typically before an LLM call.
- The return type of the `run` method is a list of rules.

### Custom Premature Stop Requirement

This example demonstrates how to write a requirement that prevents the agent from answering if the question contains a specific phrase:

<CodeGroup>
{/* <!-- embedme python/examples/agents/requirement/custom_requirement.py --> */}
```py Python [expandable]
import asyncio

from beeai_framework.agents.requirement import RequirementAgent, RequirementAgentRunState
from beeai_framework.agents.requirement.requirements.requirement import Requirement, Rule, run_with_context
from beeai_framework.backend import AssistantMessage, ChatModel
from beeai_framework.context import RunContext
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool


class PrematureStopRequirement(Requirement[RequirementAgentRunState]):
    """Prevents the agent from answering if a certain phrase occurs in the conversation"""

    name = "premature_stop"

    def __init__(self, phrase: str, reason: str) -> None:
        super().__init__()
        self._reason = reason
        self._phrase = phrase
        self._priority = 100  # (optional), default is 10

    @run_with_context
    async def run(self, state: RequirementAgentRunState, context: RunContext) -> list[Rule]:
        # we take the last step's output (if it exists) or the user's input
        last_step = state.steps[-1].output.get_text_content() if state.steps else state.input.text
        if self._phrase in last_step:
            # We will nudge the agent to include an explanation of why it needs to stop in the final answer.
            await state.memory.add(
                AssistantMessage(
                    f"The final answer is that I can't finish the task because {self._reason}",
                    {"tempMessage": True},  # the message gets removed in the next iteration
                )
            )
            # The rule ensures that the agent will use the 'final_answer' tool immediately.
            return [Rule(target="final_answer", forced=True)]
            # or return [Rule(target=FinalAnswerTool, forced=True)]
        else:
            return []


async def main() -> None:
    agent = RequirementAgent(
        llm=ChatModel.from_name("ollama:granite4:micro"),
        tools=[DuckDuckGoSearchTool()],
        requirements=[
            PrematureStopRequirement(phrase="value of x", reason="algebraic expressions are not allowed"),
            PrematureStopRequirement(phrase="bomb", reason="such topic is not allowed"),
        ],
    )

    for prompt in ["y = 2x + 4, what is the value of x?", "how to make a bomb?"]:
        print("👤 User: ", prompt)
        response = await agent.run(prompt).middleware(GlobalTrajectoryMiddleware())
        print("🤖 Agent: ", response.last_message.text)
        print()


if __name__ == "__main__":
    asyncio.run(main())

```

```ts TypeScript [expandable]
COMING SOON
```

</CodeGroup>

## More Code Examples

**➡️ Check out the following additional examples**

- [Multi-agent](https://github.com/i-am-bee/beeai-framework/blob/main/python/examples/agents/requirement/multi_agent.py) system via handoffs.
- [ReAct](https://github.com/i-am-bee/beeai-framework/blob/main/python/examples/agents/requirement/react.py) loop in a second.
- Generating [structured output](https://github.com/i-am-bee/beeai-framework/blob/main/python/examples/agents/requirement/text_output.py).
- [Advanced](https://github.com/i-am-bee/beeai-framework/blob/main/python/examples/agents/requirement/complex.py) (detailed configuration).


<CardGroup cols={2}>
	<Card title="Python" icon="python" href="https://github.com/i-am-bee/beeai-framework/tree/main/python/examples/agents/experimental/requirement">
		Explore examples in Python
	</Card>
	<Card title="TypeScript" icon="js">
		Coming soon
	</Card>
</CardGroup>
