---
title: Overview
description: The TensorZero Gateway is a high-performance model gateway that provides a unified interface for all your LLM applications.
---

The TensorZero Gateway is a high-performance model gateway that provides a unified interface for all your LLM applications.

- **One API for All LLMs.**
  The gateway provides a unified interface for all major LLM providers, allowing for seamless cross-provider integration and fallbacks.
  TensorZero natively supports
  [Anthropic](/integrations/model-providers/anthropic/),
  [AWS Bedrock](/integrations/model-providers/aws-bedrock/),
  [AWS SageMaker](/integrations/model-providers/aws-sagemaker/),
  [Azure OpenAI Service](/integrations/model-providers/azure/),
  [Fireworks](/integrations/model-providers/fireworks/),
  [GCP Vertex AI Anthropic](/integrations/model-providers/gcp-vertex-ai-anthropic/),
  [GCP Vertex AI Gemini](/integrations/model-providers/gcp-vertex-ai-gemini/),
  [Google AI Studio (Gemini API)](/integrations/model-providers/google-ai-studio-gemini/),
  [Groq](/integrations/model-providers/groq/),
  [Hyperbolic](/integrations/model-providers/hyperbolic/),
  [Mistral](/integrations/model-providers/mistral/),
  [OpenAI](/integrations/model-providers/openai/),
  [OpenRouter](/integrations/model-providers/openrouter/),
  [Together](/integrations/model-providers/together/),
  [vLLM](/integrations/model-providers/vllm/), and
  [xAI](/integrations/model-providers/xai/).
  Need something else?
  Your provider is most likely supported because TensorZero integrates with [any OpenAI-compatible API (e.g. Ollama)](/integrations/model-providers/openai-compatible/).
  Still not supported?
  Open an issue on [GitHub](https://github.com/tensorzero/tensorzero/issues) and we'll integrate it!

  <Tip>

  Learn more in our [How to call any LLM](/gateway/call-any-llm) guide.

  </Tip>
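
  For example, a self-hosted Ollama model can sit alongside hosted providers by using the OpenAI-compatible provider type. The sketch below is illustrative, not a definitive configuration: the model name, endpoint, and field values are assumptions — see the configuration reference for the exact syntax.

  ```toml
  # tensorzero.toml (sketch): expose a local Ollama model through the
  # OpenAI-compatible provider; model and endpoint names are illustrative
  [models.llama3]
  routing = ["ollama"]

  [models.llama3.providers.ollama]
  type = "openai"
  model_name = "llama3"
  api_base = "http://localhost:11434/v1"
  ```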

- **Blazing Fast.**
  The gateway (written in Rust 🦀) achieves &lt;1ms P99 latency overhead under extreme load.
  In [benchmarks](/gateway/benchmarks/), LiteLLM @ 100 QPS adds 25-100x+ more latency than our gateway @ 10,000 QPS.

- **Structured Inferences.**
  The gateway enforces schemas for inputs and outputs, ensuring robustness for your application.
  Structured inference data is later used for powerful optimization recipes (e.g. counterfactually swapping prompts in historical data before fine-tuning).
  Learn more about [creating prompt templates](/gateway/create-a-prompt-template).
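
  As a sketch, a function can pair a JSON Schema for its inputs with a prompt template in a variant. The function name, file paths, and model are illustrative assumptions; consult the configuration reference for the full syntax.

  ```toml
  # tensorzero.toml (sketch): a chat function with a schema-validated
  # system input and a templated prompt; names and paths are illustrative
  [functions.draft_email]
  type = "chat"
  system_schema = "functions/draft_email/system_schema.json"

  [functions.draft_email.variants.baseline]
  type = "chat_completion"
  model = "openai::gpt-4o-mini"
  system_template = "functions/draft_email/system_template.minijinja"
  ```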

- **Multi-Step LLM Workflows.**
  The gateway provides first-class support for complex multi-step LLM workflows by associating multiple inferences with an episode.
  Feedback can be assigned at the inference or episode level, allowing for end-to-end optimization of compound LLM systems.
  Learn more about [episodes](/gateway/guides/episodes/).
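
  The episode mechanism shows up directly in the inference API: the first request in a workflow omits an episode ID, the response returns one, and later requests pass it back. A minimal sketch of the request bodies, with illustrative function names and an illustrative UUID (see the API reference for the exact fields):

  ```python
# Sketch of inference request bodies that share an episode.
# Function names and the UUID are illustrative, not real identifiers.

# First inference: omit episode_id and let the gateway create an episode.
first_request = {
    "function_name": "extract_entities",
    "input": {"messages": [{"role": "user", "content": "..."}]},
}

# Suppose the response to the first request includes the new episode's ID:
episode_id = "0191d7d0-0000-7000-8000-000000000000"  # illustrative UUID

# Later inferences in the same workflow pass it back so the gateway can
# associate them with the same episode for end-to-end feedback.
second_request = {
    "function_name": "summarize",
    "input": {"messages": [{"role": "user", "content": "..."}]},
    "episode_id": episode_id,
}
  ```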

- **Built-in Observability.**
  The gateway collects structured inference traces along with associated downstream metrics and natural-language feedback.
  Everything is stored in a ClickHouse database for real-time, scalable, and developer-friendly analytics.
  [TensorZero Recipes](/recipes/) leverage this dataset to optimize your LLMs.
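
  Feedback signals are declared as metrics in the configuration, and each value is stored alongside the inference traces in ClickHouse. A sketch with an illustrative metric name (see the configuration reference for the available types and levels):

  ```toml
  # tensorzero.toml (sketch): a boolean metric assigned per inference;
  # the metric name is illustrative
  [metrics.email_accepted]
  type = "boolean"
  optimize = "max"
  level = "inference"
  ```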

- **Built-in Experimentation.**
  The gateway automatically routes traffic between variants to enable A/B tests.
  In multi-step workflows, it ensures that the same variant is used consistently throughout an episode.
  Learn more about [adaptive A/B tests](/experimentation/run-adaptive-ab-tests).
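
  For instance, traffic can be split between two variants of a function by weight. The function, variant, and model names and the weights below are illustrative assumptions, not a definitive configuration:

  ```toml
  # tensorzero.toml (sketch): weighted traffic split between two variants;
  # names, models, and weights are illustrative
  [functions.draft_email.variants.gpt_4o_mini]
  type = "chat_completion"
  model = "openai::gpt-4o-mini"
  weight = 0.9

  [functions.draft_email.variants.claude_haiku]
  type = "chat_completion"
  model = "anthropic::claude-3-5-haiku-20241022"
  weight = 0.1
  ```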

- **Built-in Fallbacks.**
  The gateway automatically retries failed inferences with different inference providers, or even completely different variants.
  Ensure that misconfigurations, provider downtime, and other edge cases don't affect your availability.
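
  At the model level, fallbacks can be expressed as a routing order over providers: the gateway tries each in turn until one succeeds. A sketch, assuming illustrative provider and deployment names (see the configuration reference for the exact provider fields):

  ```toml
  # tensorzero.toml (sketch): try OpenAI first, fall back to Azure;
  # the endpoint and deployment names are illustrative
  [models.gpt-4o]
  routing = ["openai", "azure"]

  [models.gpt-4o.providers.openai]
  type = "openai"
  model_name = "gpt-4o"

  [models.gpt-4o.providers.azure]
  type = "azure"
  deployment_id = "gpt-4o"
  endpoint = "https://your-resource.openai.azure.com"
  ```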

- **Access Controls.**
  The gateway supports TensorZero API key authentication, allowing you to control access to your TensorZero deployment.
  Create and manage custom API keys for different clients or services.
  Learn more about [setting up auth for TensorZero](/operations/set-up-auth-for-tensorzero).

- **GitOps Orchestration.**
  Orchestrate prompts, models, parameters, tools, experiments, and more with GitOps-friendly configuration.
  Manage a few LLMs manually with human-readable configuration files, or thousands of prompts and LLMs entirely programmatically.

## Next Steps

<Columns cols={2}>
  <Card title="Quickstart" href="/quickstart/">
    Make your first TensorZero API call with built-in observability and
    fine-tuning in under 5 minutes.
  </Card>
  <Card title="Deployment" href="/deployment/tensorzero-gateway/">
    Quickly deploy locally, or set up high-availability services for production
    environments.
  </Card>
  <Card title="Integrations" href="/integrations/model-providers/">
    The TensorZero Gateway integrates with the major LLM providers.
  </Card>
  <Card title="Benchmarks" href="/gateway/benchmarks/">
    The TensorZero Gateway achieves sub-millisecond latency overhead under
    extreme load.
  </Card>
  <Card title="API Reference" href="/gateway/api-reference/inference/">
    The TensorZero Gateway provides a unified interface for making inference
    and feedback API calls.
  </Card>
  <Card
    title="Configuration Reference"
    href="/gateway/configuration-reference/"
  >
    Easily manage your LLM applications with GitOps orchestration — even complex
    multi-step systems.
  </Card>
</Columns>
