The easiest way to get started with TEI is to use one of the official Docker containers (see Supported models and hardware to choose the right container).
After making sure that your hardware is supported, install the NVIDIA Container Toolkit if you plan to use GPUs. We also recommend NVIDIA drivers with CUDA version 12.2 or higher.
Next, install Docker following their installation instructions.
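With Docker and the NVIDIA Container Toolkit installed, you can quickly verify that containers can access your GPU. The CUDA image tag below is only an example; any recent `nvidia/cuda` base image works:

```shell
# Smoke test: if the toolkit is configured correctly, this prints the same
# device table as running nvidia-smi directly on the host
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```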
Finally, deploy your model. Let’s say you want to use `BAAI/bge-large-en-v1.5`. Here’s how you can do this:
```shell
model=BAAI/bge-large-en-v1.5
revision=refs/pr/5
volume=$PWD/data

docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.4.0 --model-id $model --revision $revision
```
Here we pass `revision=refs/pr/5` because the `safetensors` variant of this model is currently in a pull request.
We also recommend sharing a volume with the Docker container (`volume=$PWD/data`) to avoid downloading weights on every run.
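Once the container is up, you can check that the model has finished loading by polling TEI’s `health` route before sending real traffic:

```shell
# Returns a success status once the model is loaded and the server is ready
curl 127.0.0.1:8080/health
```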
Once you have deployed a model, you can use the `embed` endpoint by sending requests:
```shell
curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```
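The `embed` endpoint also accepts a batch of inputs, which is typically much faster than sending one request per sentence:

```shell
# Batched request: returns one embedding per input string
curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs":["What is Deep Learning?", "What is Machine Learning?"]}' \
    -H 'Content-Type: application/json'
```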
TEI can also be used to deploy Sequence Classification models. See this blog post by the LlamaIndex team to understand how you can use them in your RAG pipeline to improve downstream performance.
Let’s say you want to use `BAAI/bge-reranker-large`:
```shell
model=BAAI/bge-reranker-large
revision=refs/pr/4
volume=$PWD/data

docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.4.0 --model-id $model --revision $revision
```
Once you have deployed a model, you can use the `predict` endpoint to rank the similarity between a pair of inputs:
```shell
curl 127.0.0.1:8080/predict \
    -X POST \
    -d '{"inputs":["What is Deep Learning?", "Deep learning is..."], "raw_scores": true}' \
    -H 'Content-Type: application/json'
```
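To score several candidate texts against the same query in one request, `predict` also accepts a batch of pairs. This is a sketch based on the batched input format; double-check it against the API reference for your TEI version:

```shell
# Batch of (query, passage) pairs; one relevance score is returned per pair
curl 127.0.0.1:8080/predict \
    -X POST \
    -d '{"inputs":[["What is Deep Learning?", "Deep learning is..."], ["What is Deep Learning?", "cheese is made of milk"]], "raw_scores": true}' \
    -H 'Content-Type: application/json'
```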
You can also use classic Sequence Classification models like `SamLowe/roberta-base-go_emotions`:
```shell
model=SamLowe/roberta-base-go_emotions
volume=$PWD/data

docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.4.0 --model-id $model
```
Once you have deployed the model, you can use the `predict` endpoint to get the emotions most associated with an input:
```shell
curl 127.0.0.1:8080/predict \
    -X POST \
    -d '{"inputs":"I like you."}' \
    -H 'Content-Type: application/json'
```
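Batching works here as well: wrap each text in its own list so the server treats them as separate single-sequence inputs (again a sketch following the batched `predict` format above):

```shell
# Returns one set of emotion scores per input
curl 127.0.0.1:8080/predict \
    -X POST \
    -d '{"inputs":[["I like you."], ["I hate pineapples."]]}' \
    -H 'Content-Type: application/json'
```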