---
title: "HuggingFaceAPIGenerator"
id: huggingfaceapigenerator
slug: "/huggingfaceapigenerator"
description: "This generator enables text generation using various Hugging Face APIs."
---

# HuggingFaceAPIGenerator

This generator enables text generation using various Hugging Face APIs.

<div className="key-value-table">

|  |  |
| --- | --- |
| **Most common position in a pipeline** | After a [`PromptBuilder`](../builders/promptbuilder.mdx) |
| **Mandatory init variables** | `api_type`: The type of Hugging Face API to use  <br /> <br />`api_params`: A dictionary with one of the following keys:  <br /> <br />- `model`: Hugging Face model ID. Required when `api_type` is `SERVERLESS_INFERENCE_API`.  <br />- `url`: URL of the inference endpoint. Required when `api_type` is `INFERENCE_ENDPOINTS` or `TEXT_GENERATION_INFERENCE`.  <br /> <br />`token`: The Hugging Face API token. Can be set with the `HF_API_TOKEN` or `HF_TOKEN` env var |
| **Mandatory run variables** | `prompt`: A string containing the prompt for the LLM |
| **Output variables** | `replies`: A list of strings with all the replies generated by the LLM  <br /> <br />`meta`: A list of dictionaries with the metadata associated with each reply, such as token count, finish reason, and others |
| **API reference** | [Generators](/reference/generators-api) |
| **GitHub link** | https://github.com/deepset-ai/haystack/blob/main/haystack/components/generators/hugging_face_api.py |

</div>

## Overview

`HuggingFaceAPIGenerator` can be used to generate text using different Hugging Face APIs:

- [Paid Inference Endpoints](https://huggingface.co/inference-endpoints)
- [Self-hosted Text Generation Inference](https://github.com/huggingface/text-generation-inference)

:::note Important Note

As of July 2025, the Hugging Face Inference API no longer offers generative models through the `text_generation` endpoint. Generative models are now only available through providers supporting the `chat_completion` endpoint. As a result, this component might no longer work with the Hugging Face Inference API.

Use the [`HuggingFaceAPIChatGenerator`](huggingfaceapichatgenerator.mdx) component instead, which supports the `chat_completion` endpoint and works with the free Serverless Inference API.
:::

:::info
This component is designed for text generation, not for chat. If you want to use these LLMs for chat, use [`HuggingFaceAPIChatGenerator`](huggingfaceapichatgenerator.mdx) instead.
:::

The component reads the `HF_API_TOKEN` environment variable by default. Alternatively, you can pass a Hugging Face API token at initialization with `token` – see the code examples below.
The token is required when you use Inference Endpoints or the Serverless Inference API.

### Streaming

This Generator supports [streaming](guides-to-generators/choosing-the-right-generator.mdx#streaming-support) the tokens from the LLM directly in output. To do so, pass a function to the `streaming_callback` init parameter.
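As a minimal sketch, a streaming callback is just a function that receives each chunk as it arrives; the only assumption here is that the chunk object exposes the newly generated text via a `content` attribute, as Haystack's built-in `print_streaming_chunk` helper relies on:

```python
# Minimal streaming-callback sketch: the generator invokes the callback
# once per chunk as text arrives from the LLM.
def make_streaming_callback(buffer):
    """Return a callback that echoes each chunk and collects it in `buffer`."""
    def on_chunk(chunk):
        buffer.append(chunk.content)               # keep the fragment for later use
        print(chunk.content, end="", flush=True)   # echo tokens as they arrive
    return on_chunk

# Pass it at init time, for example:
# tokens = []
# generator = HuggingFaceAPIGenerator(..., streaming_callback=make_streaming_callback(tokens))
```

Collecting the fragments in a list, rather than only printing them, lets you reuse the full text after streaming completes.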

## Usage

### On its own

#### Using Paid Inference Endpoints

In this case, a private instance of the model is deployed by Hugging Face, and you typically pay per hour.

To understand how to spin up an Inference Endpoint, visit [Hugging Face documentation](https://huggingface.co/inference-endpoints/dedicated).

Additionally, in this case, you need to provide your Hugging Face token.
The Generator expects the `url` of your endpoint in `api_params`.

```python
from haystack.components.generators import HuggingFaceAPIGenerator
from haystack.utils import Secret

generator = HuggingFaceAPIGenerator(api_type="inference_endpoints",
                                    api_params={"url": "<your-inference-endpoint-url>"},
                                    token=Secret.from_token("<your-api-key>"))

result = generator.run(prompt="What's Natural Language Processing?")
print(result)
```
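The printed `result` is a dictionary with `replies` and `meta` lists of equal length, aligned by index. As an illustrative sketch (the reply text and the exact `meta` keys below are hypothetical examples), you might consume it like this:

```python
# Illustrative shape of a HuggingFaceAPIGenerator.run() result
# (reply text and meta keys are hypothetical examples):
result = {
    "replies": ["Natural Language Processing (NLP) is a field of AI..."],
    "meta": [{"finish_reason": "length"}],
}

# Each reply has a matching metadata dictionary at the same index:
for reply, meta in zip(result["replies"], result["meta"]):
    print(f"{reply} (finish_reason: {meta['finish_reason']})")
```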

#### Using Self-Hosted Text Generation Inference (TGI)

[Hugging Face Text Generation Inference](https://github.com/huggingface/text-generation-inference) is a toolkit for efficiently deploying and serving LLMs.

While it powers the most recent versions of the Serverless Inference API and Inference Endpoints, it can also be easily deployed on-premise through Docker.

For example, you can run a TGI container as follows:

```shell
model=mistralai/Mistral-7B-v0.1
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.4 --model-id $model
```

For more information, refer to the [official TGI repository](https://github.com/huggingface/text-generation-inference).

The Generator expects the `url` of your TGI instance in `api_params`.

```python
from haystack.components.generators import HuggingFaceAPIGenerator

generator = HuggingFaceAPIGenerator(api_type="text_generation_inference",
                                    api_params={"url": "http://localhost:8080"})

result = generator.run(prompt="What's Natural Language Processing?")
print(result)
```

#### Using the Free Serverless Inference API (Not Recommended)

:::warning
This example might not work as the Hugging Face Inference API no longer offers models that support the `text_generation` endpoint. Use the [`HuggingFaceAPIChatGenerator`](huggingfaceapichatgenerator.mdx) for generative models through the `chat_completion` endpoint.

:::

Formerly known as (free) Hugging Face Inference API, this API allows you to quickly experiment with many models hosted on the Hugging Face Hub, offloading the inference to Hugging Face servers. It's rate-limited and not meant for production.

To use this API, you need a [free Hugging Face token](https://huggingface.co/settings/tokens).
The Generator expects the `model` in `api_params`.

```python
from haystack.components.generators import HuggingFaceAPIGenerator
from haystack.utils import Secret

generator = HuggingFaceAPIGenerator(api_type="serverless_inference_api",
                                    api_params={"model": "HuggingFaceH4/zephyr-7b-beta"},
                                    token=Secret.from_token("<your-api-key>"))

result = generator.run(prompt="What's Natural Language Processing?")
print(result)
```

### In a pipeline

```python
from haystack import Pipeline
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.generators import HuggingFaceAPIGenerator
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack import Document
from haystack.utils import Secret

docstore = InMemoryDocumentStore()
docstore.write_documents([Document(content="Rome is the capital of Italy"), Document(content="Paris is the capital of France")])

query = "What is the capital of France?"

template = """
Given the following information, answer the question.

Context:
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{ query }}
"""

generator = HuggingFaceAPIGenerator(api_type="inference_endpoints",
                                    api_params={"url": "<your-inference-endpoint-url>"},
                                    token=Secret.from_token("<your-api-key>"))

pipe = Pipeline()

pipe.add_component("retriever", InMemoryBM25Retriever(document_store=docstore))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.add_component("llm", generator)
pipe.connect("retriever", "prompt_builder.documents")
pipe.connect("prompt_builder", "llm")

res = pipe.run({
    "prompt_builder": {
        "query": query
    },
    "retriever": {
        "query": query
    }
})

print(res)
```

## Additional References

🧑‍🍳 Cookbooks:

- [Multilingual RAG from a podcast with Whisper, Qdrant and Mistral](https://haystack.deepset.ai/cookbook/multilingual_rag_podcast)
- [Information Extraction with Raven](https://haystack.deepset.ai/cookbook/information_extraction_raven)
- [Web QA with Mixtral-8x7B-Instruct-v0.1](https://haystack.deepset.ai/cookbook/mixtral-8x7b-for-web-qa)
