---
title: HuggingFace
description: Learn how to use HuggingFace models in Agno.
---

Hugging Face provides a wide range of state-of-the-art language models tailored to diverse NLP tasks,
including text generation, summarization, translation, and question answering.
These models are available through the Hugging Face Transformers library and are widely
adopted due to their ease of use, flexibility, and comprehensive documentation.

Explore HuggingFace’s language models [here](https://huggingface.co/docs/text-generation-inference/en/supported_models).

## Authentication

Set your `HF_TOKEN` environment variable. You can get one [from HuggingFace here](https://huggingface.co/settings/tokens).

<CodeGroup>

```bash Mac
export HF_TOKEN=***
```

```bash Windows
setx HF_TOKEN ***
```

</CodeGroup>

## Example

Use `HuggingFace` with your `Agent`:

<CodeGroup>

```python agent.py
from agno.agent import Agent
from agno.models.huggingface import HuggingFace

agent = Agent(
    model=HuggingFace(
        id="meta-llama/Meta-Llama-3-8B-Instruct",
        max_tokens=4096,
    ),
    markdown=True
)

# Print the response on the terminal
agent.print_response("Share a 2 sentence horror story.")
```

</CodeGroup>

<Note> View more examples [here](/examples/models/huggingface/basic). </Note>

## Parameters

| Parameter         | Type               | Default                           | Description                                                           |
| ----------------- | ------------------ | --------------------------------- | --------------------------------------------------------------------- |
| `id`              | `str`              | `"microsoft/DialoGPT-medium"`     | The id of the Hugging Face model to use                              |
| `name`            | `str`              | `"HuggingFace"`                   | The name of the model                                                 |
| `provider`        | `str`              | `"HuggingFace"`                   | The provider of the model                                             |
| `api_key`         | `Optional[str]`    | `None`                            | The API key for Hugging Face (defaults to HF_TOKEN env var)          |
| `base_url`        | `str`              | `"https://api-inference.huggingface.co/models"` | The base URL for Hugging Face Inference API       |
| `wait_for_model`  | `bool`             | `True`                            | Whether to wait for the model to load if it's cold                   |
| `use_cache`       | `bool`             | `True`                            | Whether to use caching for faster inference                           |
| `max_tokens`      | `Optional[int]`    | `None`                            | Maximum number of tokens to generate                                  |
| `temperature`     | `Optional[float]`  | `None`                            | Controls randomness in the model's output                             |
| `top_p`           | `Optional[float]`  | `None`                            | Controls diversity via nucleus sampling                               |
| `repetition_penalty` | `Optional[float]` | `None`                           | Penalty for repeating tokens (higher values reduce repetition)       |

`HuggingFace` is a subclass of the [Model](/reference/models/model) class and has access to the same params.
