juliuslipp committed
Commit fcf8910 · Parent(s): d41dac6
Update README.md
README.md CHANGED
@@ -2764,7 +2764,7 @@ similarities = cosine_similarity([embeddings[0]], [embeddings[1]])
 print(similarities)
 ```
 
-The API comes with native INT8 and binary quantization support!
+The API comes with native INT8 and binary quantization support! Check out the [docs](https://mixedbread.ai/docs) for more information.
 
 ## Evaluation
 As of March 2024, our model achieves SOTA performance for BERT-large sized models on the [MTEB](https://huggingface.co/spaces/mteb/leaderboard). It outperforms commercial models like OpenAI's text-embedding-3-large and matches the performance of models 20x its size, such as [echo-mistral-7b](https://huggingface.co/jspringer/echo-mistral-7b-instruct-lasttoken). Our model was trained with no overlap with the MTEB data, which indicates that it generalizes well across domains, tasks, and text lengths. We know there are some limitations with this model, which will be fixed in v2.
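For readers skimming the diff: the `+` line above advertises INT8 and binary quantization of embeddings. The sketch below is only an illustration of what those two schemes do to an embedding vector, written in plain NumPy; it is not the mixedbread API, and the vector shape and the symmetric scaling are assumptions made for the example.

```python
# Illustrative sketch only -- not the mixedbread API.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((2, 1024)).astype(np.float32)  # stand-in vectors

# Binary quantization: keep only the sign of each dimension and pack
# 8 dimensions per byte -> 1024 float32 values (4096 bytes) become 128 bytes.
binary = np.packbits(embeddings > 0, axis=-1)

# INT8 quantization: symmetric linear mapping into [-127, 127]
# (a real service may calibrate the range differently -- an assumption here).
scale = 127.0 / np.abs(embeddings).max()
int8 = np.clip(np.round(embeddings * scale), -127, 127).astype(np.int8)

print(binary.shape, int8.dtype)  # (2, 128) int8
```

Binary codes can then be compared with Hamming distance and INT8 vectors with integer dot products, which is where the storage and latency savings come from.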