import { Callout } from "nextra/components";

# Guarantees & Tradeoffs

Hatchet is designed as a modern task orchestration platform that bridges the gap between simple job queues and complex workflow engines. Understanding where it excels—and where it doesn't—will help you determine if it's the right fit for your needs.

### Good Fit

<table>
  <tbody>
    <tr>
      <td>✅</td>
      <td>
        <strong>Real-time Requests</strong> - Sub-25ms task dispatch for hot
        workers with thousands of concurrent tasks
      </td>
    </tr>
    <tr>
      <td>✅</td>
      <td>
        <strong>Workflow Orchestration</strong> with dependencies and error
        handling
      </td>
    </tr>
    <tr>
      <td>✅</td>
      <td>
        <strong>Reliable Task Processing</strong> where durability matters
      </td>
    </tr>
    <tr>
      <td>✅</td>
      <td>
        <strong>Moderate Throughput</strong> (hundreds to low 10,000s of
        tasks/second)
      </td>
    </tr>
    <tr>
      <td>✅</td>
      <td>
        <strong>Multi-Language Workers</strong> or polyglot teams
      </td>
    </tr>
    <tr>
      <td>✅</td>
      <td>
        <strong>Operational Simplicity</strong> if your team is already using
        PostgreSQL
      </td>
    </tr>
    <tr>
      <td>✅</td>
      <td>
        <strong>Cloud or Air-Gapped Environments</strong> for flexible
        deployment options (
        <a href="https://cloud.onhatchet.run">Hatchet Cloud</a> and{" "}
        <a href="../self-hosting">self-hosting</a>)
      </td>
    </tr>
  </tbody>
</table>

### Not a Good Fit

<table>
  <tbody>
    <tr>
      <td>❌</td>
      <td>
        <strong>Extremely High Throughput</strong> (consistently 10,000+
        tasks/second)
      </td>
    </tr>
    <tr>
      <td>❌</td>
      <td>
        <strong>Sub-Millisecond Latency</strong> requirements
      </td>
    </tr>
    <tr>
      <td>❌</td>
      <td>
        <strong>Memory-Only Queuing</strong> where persistence or durability
        isn't needed
      </td>
    </tr>
    <tr>
      <td>❌</td>
      <td>
        <strong>Serverless Environments</strong> such as AWS Lambda, Google
        Cloud Functions, or Azure Functions
      </td>
    </tr>
  </tbody>
</table>

## Core Reliability Guarantees

Hatchet is designed with the following core reliability guarantees:

**Every task will execute at least once.** Hatchet ensures that no task gets lost, even during system failures, network outages, or deployments. Failed tasks automatically retry according to your configuration, and all tasks persist through restarts and network issues.
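
At-least-once semantics mean a task may run more than once (for example, after a crash mid-execution), so handlers should be idempotent. The retry loop below is a minimal conceptual sketch of this behavior, not the Hatchet SDK API; the names `run_with_retries` and `max_retries` are illustrative:

```python
import time

def run_with_retries(task, max_retries=3, backoff_s=0.0):
    """Illustrative at-least-once execution: retry the task until it
    succeeds or the retry budget is exhausted (not Hatchet SDK code)."""
    attempts = 0
    while True:
        attempts += 1
        try:
            return task(), attempts
        except Exception:
            if attempts > max_retries:
                raise  # retry budget exhausted; surface the failure
            time.sleep(backoff_s)  # back off before re-dispatching

# A flaky task that fails twice before succeeding: under at-least-once
# semantics it is re-dispatched until it completes.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

result, attempts = run_with_retries(flaky)
```

Because the same task body can run multiple times, side effects (payments, emails, writes) should be guarded with idempotency keys or upserts.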

**Consistent state management.** All workflow state changes happen within PostgreSQL transactions, ensuring that your workflow dependencies resolve consistently and no tasks are lost during failures or deployments.
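
The sketch below illustrates the transactional idea (using SQLite in place of PostgreSQL, and a hypothetical two-row `tasks` table): marking a parent task complete and enqueueing its dependent happen in one transaction, so a crash between the two statements can never leave the workflow graph half-updated.

```python
import sqlite3

# Conceptual sketch only: SQLite stands in for PostgreSQL, and the schema
# is hypothetical, not Hatchet's actual tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id TEXT PRIMARY KEY, state TEXT)")
conn.execute("INSERT INTO tasks VALUES ('parent', 'RUNNING')")

with conn:  # one transaction: commits on success, rolls back on error
    conn.execute("UPDATE tasks SET state = 'SUCCEEDED' WHERE id = 'parent'")
    conn.execute("INSERT INTO tasks VALUES ('child', 'QUEUED')")

states = dict(conn.execute("SELECT id, state FROM tasks"))
```

If either statement fails, the rollback leaves the parent still `RUNNING` and no orphaned child, which is the consistency property the paragraph above describes.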

**Predictable execution order.** The default task assignment strategy is First In, First Out (FIFO), which can be modified with [concurrency policies](./concurrency.mdx), [rate limits](./rate-limits.mdx), and [priorities](./priority.mdx).
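
A minimal sketch of this ordering model, using Python's `heapq` (illustrative only, not Hatchet internals): tasks dequeue in FIFO order by default, and a higher priority jumps the queue while FIFO order is preserved among tasks of equal priority.

```python
import heapq
import itertools

counter = itertools.count()  # monotonic arrival order for FIFO tie-breaking
queue = []

def enqueue(task, priority=1):
    # heapq pops the smallest tuple: higher priority -> smaller sort key,
    # then arrival order breaks ties among equal priorities (FIFO).
    heapq.heappush(queue, (-priority, next(counter), task))

def dequeue():
    _, _, task = heapq.heappop(queue)
    return task

enqueue("a")           # default priority, arrives first
enqueue("b")           # default priority, arrives second
enqueue("urgent", 3)   # higher priority jumps ahead

order = [dequeue() for _ in range(3)]
```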

**Operational resilience.** The engine and API servers are stateless, allowing them to restart without losing state and enabling horizontal scaling by simply adding more instances. Workers automatically reconnect after network issues and can be deployed anywhere—containers, VMs, or local development environments.

## Performance Expectations

Understanding Hatchet's performance characteristics helps you plan your implementation and set realistic expectations.

**Typical time-to-start latency** for task dispatch is sub-50ms with PostgreSQL storage, and tuned setups with hot workers can reach ~25ms at P95. Network latency between your workers and the Hatchet engine will directly impact dispatch times, so consider deployment topology when latency matters.

**Throughput capacity** varies significantly based on your setup. A single engine instance with PostgreSQL-only storage typically handles hundreds of tasks per second. When you need higher throughput, adding RabbitMQ as a message queue can substantially increase capacity, though your database will eventually become the bottleneck at very high scales. Through tuning and sharding, we can support throughputs of tens of thousands of tasks per second.

**Concurrent processing** scales well: Hatchet supports thousands of concurrent workers, with worker-level concurrency controlled through slot configuration. The depth of your queues is limited by your database storage capacity rather than memory constraints.
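
Slot-based concurrency can be pictured as a semaphore: a worker with N slots runs at most N tasks at once, and further assignments wait for a slot to free up. The sketch below is a conceptual illustration of that model, not the Hatchet worker implementation:

```python
import threading
import time

SLOTS = 2  # illustrative worker slot count
slots = threading.BoundedSemaphore(SLOTS)
lock = threading.Lock()
running = 0
peak = 0  # highest number of tasks observed running at once

def run_task():
    global running, peak
    with slots:  # occupy one slot for the duration of the task
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.01)  # simulate task work
        with lock:
            running -= 1

# Six tasks assigned to a two-slot worker: at most two run concurrently.
threads = [threading.Thread(target=run_task) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```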

**Performance optimization** comes through several strategies: RabbitMQ for high-throughput workloads, read replicas for analytics queries, connection pooling with tools like PgBouncer, and shorter retention periods for execution history. Conversely, performance can be limited by database connection limits, large task payloads (over 1MB), complex dependency graphs, and cross-region network latency.

<Callout type="warning">

**Not seeing expected performance?**

If you're not seeing the performance you expect, please [reach out to us](https://hatchet.run/office-hours) or [join our community](https://hatchet.run/discord) to explore tuning options.

</Callout>

## Ready to Get Started?

Now that you understand Hatchet's capabilities and limitations, explore the technical details:

**[Quick Start](../setup.mdx)** - Set up your first Hatchet worker.

**[Self-Hosting](../self-hosting)** - Learn how to deploy Hatchet on your own infrastructure with appropriate sizing for your needs.
