The easiest way to get started with TEI is to use one of the official Docker containers (see Supported models and hardware to choose the right container).
After making sure that your hardware is supported, install the NVIDIA Container Toolkit if you plan on using GPUs. We also recommend using NVIDIA drivers with CUDA version 12.2 or higher.
Next, install Docker following their installation instructions.
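With Docker and the NVIDIA Container Toolkit installed, you can optionally verify that everything is wired up before deploying. The commands below are a quick sanity check; the `ubuntu` image in the container test is just an example workload:

```shell
# Check the installed NVIDIA driver and the CUDA version it supports (12.2+ recommended)
nvidia-smi

# Verify that the NVIDIA Container Toolkit exposes the GPU inside a container
docker run --rm --gpus all ubuntu nvidia-smi
```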
Finally, deploy your model. Let's say you want to use `BAAI/bge-large-en-v1.5`. Here's how you can do this:
```shell
model=BAAI/bge-large-en-v1.5
revision=refs/pr/5
volume=$PWD/data

docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.2.2 --model-id $model --revision $revision
```
Here we pass `revision=refs/pr/5`, because the `safetensors` variant of this model is currently in a pull request.
We also recommend sharing a volume with the Docker container (`volume=$PWD/data`) to avoid downloading weights every run.
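Once the container is running, you can confirm the server is ready before sending requests. This is a minimal sketch: it follows the container logs until the model finishes loading, and assumes the server exposes a `/health` route on the mapped port:

```shell
# Follow the container logs until the model weights are downloaded and loaded
# (get the container ID or name from `docker ps`)
docker logs -f <container_id>

# Assuming a /health route is available, a 200 response means the server is ready
curl -i 127.0.0.1:8080/health
```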
Once you have deployed a model, you can use the `embed` endpoint by sending requests:
```shell
curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```
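The `inputs` field can also take a list of strings if you want to embed several texts in one request. The sketch below assumes the same running server and returns one embedding per input:

```shell
# Batched request: pass a list of strings to embed several texts at once
curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs":["What is Deep Learning?", "What is Machine Learning?"]}' \
    -H 'Content-Type: application/json'
```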