Text Embeddings Inference currently supports BERT, CamemBERT, and XLM-RoBERTa models with absolute positions. We are continually expanding our support for other model types and plan to include them in future updates.
Below are some examples of the currently supported models:
| MTEB Rank | Model Type | Model ID |
|---|---|---|
| 1 | Bert | BAAI/bge-large-en-v1.5 |
| 2 | Bert | BAAI/bge-base-en-v1.5 |
| 3 | Bert | llmrails/ember-v1 |
| 4 | Bert | thenlper/gte-large |
| 5 | Bert | thenlper/gte-base |
| 6 | Bert | intfloat/e5-large-v2 |
| 7 | Bert | BAAI/bge-small-en-v1.5 |
| 10 | Bert | intfloat/e5-base-v2 |
| 11 | XLM-RoBERTa | intfloat/multilingual-e5-large |
To explore the list of best performing text embeddings models, visit the Massive Text Embedding Benchmark (MTEB) Leaderboard.
Text Embeddings Inference can be used on CPU, Turing (T4, RTX 2000 series, …), Ampere 80 (A100, A30), Ampere 86 (A10, A40, …), Ada Lovelace (RTX 4000 series, …), and Hopper (H100) architectures.
The library does not support CUDA compute capabilities < 7.5, which means V100, Titan V, GTX 1000 series, etc. are not supported. To leverage your GPUs, make sure to install the NVIDIA Container Toolkit, and use NVIDIA drivers with CUDA version 12.2 or higher.
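As a minimal sketch of checking the driver requirement above, the following compares a reported CUDA version against the 12.2 minimum. The `version_ge` helper and the hard-coded `cuda_version` are illustrative; on a real machine you would read the version from `nvidia-smi` output.

```shell
# version_ge A B: succeeds when version A >= version B (version-aware sort).
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Illustrative value; in practice, obtain this from `nvidia-smi`.
cuda_version="12.2"
if version_ge "$cuda_version" "12.2"; then
  echo "CUDA $cuda_version meets the minimum"
else
  echo "CUDA $cuda_version is too old; need >= 12.2" >&2
fi
```

`sort -V` is used so that, for example, `12.10` compares as newer than `12.2`, which a plain string comparison would get wrong.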
Find the appropriate Docker image for your hardware in the following table:
| Architecture | Image |
|---|---|
| CPU | ghcr.io/huggingface/text-embeddings-inference:cpu-0.2.2 |
| Volta | NOT SUPPORTED |
| Turing (T4, RTX 2000 series, …) | ghcr.io/huggingface/text-embeddings-inference:turing-0.2.2 |
| Ampere 80 (A100, A30) | ghcr.io/huggingface/text-embeddings-inference:0.2.2 |
| Ampere 86 (A10, A40, …) | ghcr.io/huggingface/text-embeddings-inference:86-0.2.2 |
| Ada Lovelace (RTX 4000 series, …) | ghcr.io/huggingface/text-embeddings-inference:89-0.2.2 |
| Hopper (H100) | ghcr.io/huggingface/text-embeddings-inference:hopper-0.2.2 |
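The table above can be encoded as a small helper that maps an architecture name to its image tag. This is a sketch, not part of the project: the `tei_image` function and its architecture keys (`cpu`, `turing`, `ampere80`, …) are assumptions made for illustration, and the tags assume release 0.2.2.

```shell
# Sketch: select the TEI Docker image for a given architecture,
# following the image table above (release 0.2.2 tags).
tei_image() {
  case "$1" in
    cpu)      echo "ghcr.io/huggingface/text-embeddings-inference:cpu-0.2.2" ;;
    turing)   echo "ghcr.io/huggingface/text-embeddings-inference:turing-0.2.2" ;;
    ampere80) echo "ghcr.io/huggingface/text-embeddings-inference:0.2.2" ;;
    ampere86) echo "ghcr.io/huggingface/text-embeddings-inference:86-0.2.2" ;;
    ada)      echo "ghcr.io/huggingface/text-embeddings-inference:89-0.2.2" ;;
    hopper)   echo "ghcr.io/huggingface/text-embeddings-inference:hopper-0.2.2" ;;
    *)        echo "unsupported architecture: $1 (Volta and older are not supported)" >&2
              return 1 ;;
  esac
}

# Example launch on an A100 (Ampere 80), not executed here:
#   docker run --gpus all -p 8080:80 --pull always \
#     "$(tei_image ampere80)" --model-id BAAI/bge-large-en-v1.5
```

Note that `--gpus all` requires the NVIDIA Container Toolkit mentioned above; the CPU image is run without it.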