---
title: Building Effective Agents
sidebarTitle: Overview
description: Based on Anthropic's guide to building effective AI agents, focusing on simplicity, composability, and practical patterns.
---

## Introduction

Building effective AI agents requires understanding when and how to add complexity to your LLM applications. According to Anthropic's experience working with dozens of teams across industries, the most successful agent implementations use simple, composable patterns rather than complex frameworks.

<Note>
  We enjoyed reading [Building Effective Agents](https://www.anthropic.com/engineering/building-effective-agents) by Anthropic's engineering team, so we adapted its key points to Latitude projects.
</Note>

## Core Principles

<Expandable title="Start Simple">
  Find the simplest solution possible and only increase complexity when needed. This might mean not building agentic systems at all. Often, optimizing single LLM calls with retrieval and in-context examples is sufficient.
</Expandable>
<Expandable title="Consider Trade-offs">
  Agentic systems often trade latency and cost for better task performance. Consider when this trade-off makes sense for your use case.
</Expandable>
<Expandable title="Choose the Right Pattern">
- **Workflows** offer predictability and consistency for well-defined tasks
- **Agents** are better when flexibility and model-driven decision-making are needed at scale
</Expandable>

## The Augmented LLM (Foundation)

The basic building block is an LLM enhanced with:
- **Retrieval**: Access to external information, for example with [retrieval-augmented generation](/examples/techniques/retrieval-augmented-generation)
- **Tools**: Ability to perform actions via [LLM tool calling](/guides/prompt-manager/tools), [Latitude tools](/guides/prompt-manager/latitude-tools), and third-party [MCP integrations](/guides/prompt-manager/mcp-integrations)
- **Memory**: Context retention across interactions
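
Putting those three augmentations together can be sketched as a small class. This is a minimal illustration, not Latitude's API: the `generate` function, the keyword retriever, and the tool registry are all hypothetical stand-ins for real prompt runs and integrations.

```python
def generate(prompt: str) -> str:
    """Hypothetical model call: echoes the context it was given."""
    return f"Answer based on: {prompt}"

class AugmentedLLM:
    def __init__(self, documents: dict[str, str], tools: dict):
        self.documents = documents      # retrieval source
        self.tools = tools              # callable actions
        self.memory: list[str] = []     # context retained across interactions

    def retrieve(self, query: str) -> str:
        # Naive keyword retrieval over the document store.
        hits = [text for key, text in self.documents.items() if key in query.lower()]
        return " ".join(hits)

    def run(self, user_input: str) -> str:
        context = self.retrieve(user_input)
        prompt = f"{' '.join(self.memory)} {context} {user_input}".strip()
        answer = generate(prompt)
        self.memory.append(user_input)  # remember the interaction
        return answer

llm = AugmentedLLM(
    documents={"refund": "Refunds are processed within 5 days."},
    tools={"lookup_order": lambda order_id: {"status": "shipped"}},
)
print(llm.run("What is your refund policy?"))
```

In a real project, retrieval would hit a vector store or MCP server and `generate` would be a prompt run, but the shape stays the same: gather context, call the model, persist what matters.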

## What are Agents?

Anthropic categorizes agentic systems into two main types:

<Columns cols={2}>
  <Card title="Workflows">
    Systems where LLMs and tools are orchestrated through predefined code paths
  </Card>
  <Card title="Agents">
    Systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks
  </Card>
</Columns>

## Workflow Patterns

<Card
  arrow
  title="Chaining"
  href="/examples/cases/building-effective-agents/prompt-chaining"
>
  A workflow pattern that decomposes a task into sequential steps, each building on the previous one's output.
</Card>
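
The chaining pattern can be sketched as a loop where each step's output becomes the next step's input. Here `run_step` is a hypothetical stand-in for a real prompt run; it tags the text so the chain is visible.

```python
def run_step(instruction: str, text: str) -> str:
    # Hypothetical model call; here we just tag the text with the step name.
    return f"[{instruction}] {text}"

def chain(steps: list[str], user_input: str) -> str:
    result = user_input
    for instruction in steps:
        result = run_step(instruction, result)   # output becomes next input
        # A programmatic gate could validate `result` here and abort on failure.
    return result

output = chain(["outline", "draft", "polish"], "topic: agents")
print(output)  # each step wraps the previous result
```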

<Card
  arrow
  title="Routing"
  href="/examples/cases/building-effective-agents/prompt-routing"
>
  Classifies input and directs it to specialized follow-up tasks.
</Card>
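
Routing amounts to a classification step followed by a dispatch table. In this sketch the keyword classifier stands in for an LLM classification call; the handler names are made up for illustration.

```python
def classify(query: str) -> str:
    # Hypothetical classifier: a real system would use an LLM call here.
    q = query.lower()
    if "refund" in q:
        return "billing"
    if "password" in q:
        return "account"
    return "general"

# Each category maps to a specialized follow-up prompt or handler.
handlers = {
    "billing": lambda q: "billing team: " + q,
    "account": lambda q: "account team: " + q,
    "general": lambda q: "support team: " + q,
}

def route(query: str) -> str:
    return handlers[classify(query)](query)

print(route("I need a refund"))  # → "billing team: I need a refund"
```

Separating classification from handling lets each specialized prompt stay simple instead of one prompt covering every case.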

<Card
  arrow
  title="Parallelization"
  href="/examples/cases/building-effective-agents/prompt-parallelization"
>
  Multiple LLMs work simultaneously on independent subtasks, and their outputs are aggregated programmatically.
</Card>
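
A common parallelization variant is voting: the same input is checked concurrently from several angles and the results are combined in code. The `check` function below is a hypothetical stand-in for a model call.

```python
from concurrent.futures import ThreadPoolExecutor

def check(aspect: str, text: str) -> bool:
    # Hypothetical model call: each "vote" checks one aspect of the text.
    return aspect in text

def parallel_votes(aspects: list[str], text: str) -> bool:
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(lambda a: check(a, text), aspects))
    # Programmatic aggregation: accept only if every check passes.
    return all(votes)

print(parallel_votes(["polite", "accurate"], "a polite and accurate reply"))
```

The aggregation rule is plain code (`all`, majority vote, weighted score), which keeps the combining logic predictable even when the individual calls are not.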

<Card
  arrow
  title="Orchestrator-Workers"
  href="/examples/cases/building-effective-agents/orchestrator-workers"
>
  A central LLM dynamically breaks down tasks, delegates to worker LLMs, and synthesizes results.
</Card>
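
The key difference from plain parallelization is that the subtasks are not known in advance: the orchestrator decides them at runtime. In this sketch both `plan` and `worker` are hypothetical stand-ins for model calls.

```python
def plan(task: str) -> list[str]:
    # Hypothetical orchestrator call: decides the subtasks dynamically.
    return [f"{task} - part {i}" for i in (1, 2)]

def worker(subtask: str) -> str:
    # Hypothetical worker call: handles one delegated subtask.
    return f"done({subtask})"

def orchestrate(task: str) -> str:
    subtasks = plan(task)                        # central LLM breaks down the task
    results = [worker(s) for s in subtasks]      # delegate to worker LLMs
    return "; ".join(results)                    # synthesize the results

print(orchestrate("write report"))
```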

<Card
  arrow
  title="Evaluator-Optimizer"
  href="/examples/cases/building-effective-agents/evaluator-optimizer"
>
  One LLM generates responses while another provides evaluation and feedback in a loop.
</Card>

## Autonomous Agents

<Card
  arrow
  title="Autonomous Agents"
  href="/examples/cases/building-effective-agents/autonomous-agents"
>
  Agents operate independently using tools based on environmental feedback in loops. They're ideal for open-ended problems where you can't predict the required number of steps or hardcode a fixed path.
</Card>
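
The agent loop itself is simple: pick an action, execute a tool, observe the result, repeat until done. In this sketch `decide` is a hypothetical stand-in for the model's action choice, and the step cap acts as a guardrail.

```python
def decide(observations: list[str]) -> str:
    # Hypothetical policy: search first, finish once results exist.
    return "finish" if observations else "search"

# Tool registry: the environment the agent acts on.
tools = {"search": lambda: "found 3 documents"}

def run_agent(max_steps: int = 5) -> list[str]:
    observations: list[str] = []
    for _ in range(max_steps):        # guardrail: bounded number of steps
        action = decide(observations)
        if action == "finish":
            break
        observations.append(tools[action]())   # environmental feedback
    return observations

print(run_agent())  # → ["found 3 documents"]
```

Because the model drives the loop, clear tool definitions and a hard step limit are what keep the agent's behavior transparent and its cost bounded.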

## Key Takeaways

Success in the LLM space isn't about building the most sophisticated system; it's about building the right system for your needs. Start with simple prompts, optimize them with comprehensive evaluation, and add multi-step agentic systems only when simpler solutions fall short.

The most effective approach is to:
1. Begin with the simplest possible solution
2. Measure performance rigorously
3. Add complexity only when it demonstrably improves outcomes
4. Focus on clear tool design and transparent agent behavior
5. Test extensively in sandboxed environments with appropriate guardrails

