---
title: vLLM
---

The vLLM Embedder provides high-performance embedding inference with support for both local and remote deployment modes. It can load models directly for local inference, or connect to a remote vLLM server via an OpenAI-compatible API; passing `base_url` switches the embedder to remote mode.

## Usage

```python
from agno.knowledge.embedder.vllm import VLLMEmbedder
from agno.knowledge.knowledge import Knowledge
from agno.vectordb.pgvector import PgVector

# Local mode
embedder = VLLMEmbedder(
    id="intfloat/e5-mistral-7b-instruct",
    dimensions=4096,
    enforce_eager=True,
    vllm_kwargs={
        "disable_sliding_window": True,
        "max_model_len": 4096,
    },
)

# Use with Knowledge
knowledge = Knowledge(
    vector_db=PgVector(
        db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",
        table_name="vllm_embeddings",
        embedder=embedder,
    ),
)
```
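Remote mode is enabled by setting `base_url`, as noted in the parameter table below. A minimal sketch, assuming a vLLM server is already running and serving this model (the `http://localhost:8000/v1` address is an assumption; substitute your server's URL):

```python
from os import getenv

from agno.knowledge.embedder.vllm import VLLMEmbedder

# Remote mode: no model is loaded locally; requests go to the
# vLLM server's OpenAI-compatible endpoint.
# Assumes a server started with something like:
#   vllm serve intfloat/e5-mistral-7b-instruct
remote_embedder = VLLMEmbedder(
    id="intfloat/e5-mistral-7b-instruct",
    dimensions=4096,
    base_url="http://localhost:8000/v1",  # assumed server address
    api_key=getenv("VLLM_API_KEY"),  # falls back to this env var by default
)
```

In remote mode the heavyweight local-engine options (`enforce_eager`, `vllm_kwargs`) are not used; request behavior is tuned through `request_params` and `client_params` instead.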

## Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `id` | `str` | `"intfloat/e5-mistral-7b-instruct"` | Model identifier (HuggingFace model name) |
| `dimensions` | `int` | `4096` | Embedding vector dimensions |
| `base_url` | `Optional[str]` | `None` | Remote vLLM server URL (enables remote mode) |
| `api_key` | `Optional[str]` | `getenv("VLLM_API_KEY")` | API key for remote server authentication |
| `enable_batch` | `bool` | `False` | Enable batch processing for multiple texts |
| `batch_size` | `int` | `10` | Number of texts to process per batch |
| `enforce_eager` | `bool` | `True` | Use eager (PyTorch) execution instead of CUDA graph capture (local mode) |
| `vllm_kwargs` | `Optional[Dict[str, Any]]` | `None` | Additional vLLM engine parameters (local mode) |
| `request_params` | `Optional[Dict[str, Any]]` | `None` | Additional request parameters (remote mode) |
| `client_params` | `Optional[Dict[str, Any]]` | `None` | OpenAI client configuration (remote mode) |

## Developer Resources
- View [Cookbook](https://github.com/agno-agi/agno/tree/main/cookbook/knowledge/embedders/vllm_embedder.py)
