The crispy sentence embedding family from Mixedbread.
mixedbread-ai/mxbai-embed-xsmall-v1
This model is an open-source English embedding model developed by Mixedbread. It's built upon sentence-transformers/all-MiniLM-L6-v2 and trained with the AnglE loss and Espresso. Read more details in our blog post.
In a bread loaf:
- State-of-the-art performance
- Support for both binary quantization and Matryoshka Representation Learning (MRL)
- Optimized for retrieval tasks
Performance
Binary Quantization and Matryoshka
Our model supports both binary quantization and Matryoshka Representation Learning (MRL), allowing for significant efficiency gains:
- Binary quantization: retains 93.9% of performance while shrinking vector storage by a factor of 32 (each float32 dimension becomes a single bit)
- MRL: truncating vectors by 33% still preserves 96.2% of model performance
These optimizations can lead to substantial reductions in infrastructure costs for cloud computing and vector databases. Read more here.
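To see what these savings look like in practice, sentence-transformers ships a quantize_embeddings helper for binary quantization, and its truncate_dim argument performs MRL-style truncation. A minimal sketch (the 256-dimension cutoff is an illustrative choice, not a recommendation from this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

# MRL: keep only the first 256 of the 384 dimensions (~33% smaller vectors)
model = SentenceTransformer("mixedbread-ai/mxbai-embed-xsmall-v1", truncate_dim=256)

embeddings = model.encode([
    "A man is eating a piece of bread",
    "A man is eating food.",
])

# Binary quantization: each float32 dimension becomes one bit, packed into
# int8 values, so storage shrinks by a factor of 32
binary_embeddings = quantize_embeddings(embeddings, precision="binary")

print(embeddings.shape, embeddings.dtype)               # (2, 256) float32
print(binary_embeddings.shape, binary_embeddings.dtype) # (2, 32) int8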
Quickstart
Here are several ways to produce English sentence embeddings using our model.
angle-emb
pip install -U angle-emb
from angle_emb import AnglE
from angle_emb.utils import cosine_similarity
# 1. Specify preferred dimensions
dimensions = 384
# 2. Load model and set pooling strategy to avg
model = AnglE.from_pretrained(
    "mixedbread-ai/mxbai-embed-xsmall-v1",
    pooling_strategy='avg'
).cuda()
query = 'A man is eating a piece of bread'
docs = [
    query,
    "A man is eating food.",
    "A man is eating pasta.",
    "The girl is carrying a baby.",
    "A man is riding a horse.",
]
# 3. Encode
embeddings = model.encode(docs, embedding_size=dimensions)
for doc, emb in zip(docs[1:], embeddings[1:]):
    print(f'{query} ||| {doc}', cosine_similarity(embeddings[0], emb))
Sentence Transformers
python -m pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
# 1. Specify preferred dimensions
dimensions = 384
# 2. Load model
model = SentenceTransformer("mixedbread-ai/mxbai-embed-xsmall-v1", truncate_dim=dimensions)
query = 'A man is eating a piece of bread'
docs = [
    query,
    "A man is eating food.",
    "A man is eating pasta.",
    "The girl is carrying a baby.",
    "A man is riding a horse.",
]
# 3. Encode
embeddings = model.encode(docs)
similarities = cos_sim(embeddings[0], embeddings[1:])
print('similarities:', similarities)
transformers
pip install -U transformers sentence-transformers
from typing import Dict
import torch
import numpy as np
from transformers import AutoModel, AutoTokenizer
from sentence_transformers.util import cos_sim
def pooling(outputs: torch.Tensor, inputs: Dict) -> np.ndarray:
    # Mean pooling: zero out padding tokens, then divide each sequence's
    # summed token embeddings by its own non-padding token count
    outputs = torch.sum(
        outputs * inputs["attention_mask"][:, :, None], dim=1
    ) / torch.sum(inputs["attention_mask"], dim=1, keepdim=True)
    return outputs.detach().cpu().numpy()
# 1. Load model
model_id = 'mixedbread-ai/mxbai-embed-xsmall-v1'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).cuda()
query = 'A man is eating a piece of bread'
docs = [
    query,
    "A man is eating food.",
    "A man is eating pasta.",
    "The girl is carrying a baby.",
    "A man is riding a horse.",
]
# 2. Encode
inputs = tokenizer(docs, padding=True, return_tensors='pt')
for k, v in inputs.items():
    inputs[k] = v.cuda()
outputs = model(**inputs).last_hidden_state
embeddings = pooling(outputs, inputs)
# 3. Compute similarity scores
similarities = cos_sim(embeddings[0], embeddings[1:])
print('similarities:', similarities)
Batched API
python -m pip install batched fastapi uvicorn orjson sentence-transformers
import uvicorn
import batched
from fastapi import FastAPI
from fastapi.responses import ORJSONResponse
from sentence_transformers import SentenceTransformer
from pydantic import BaseModel
app = FastAPI()
model = SentenceTransformer('mixedbread-ai/mxbai-embed-xsmall-v1')
model.encode = batched.aio.dynamically(model.encode)
class EmbeddingsRequest(BaseModel):
    input: str | list[str]

@app.post("/embeddings")
async def embeddings(request: EmbeddingsRequest):
    return ORJSONResponse({"embeddings": await model.encode(request.input)})
if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
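With the server running, any HTTP client can request embeddings. A minimal sketch using requests (the URL assumes the default host and port from the snippet above):

import requests

response = requests.post(
    "http://localhost:8000/embeddings",
    json={"input": ["A man is eating a piece of bread", "A man is eating food."]},
)
print(response.json()["embeddings"])

Because model.encode is wrapped with batched, concurrent requests like this one are dynamically grouped into batches before they hit the model.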
Community
Join our Discord community to share your feedback and thoughts. We're here to help and always happy to discuss the exciting field of machine learning!
License
Apache 2.0
Citation
@online{xsmall2024mxbai,
  title={Every Byte Matters: Introducing mxbai-embed-xsmall-v1},
  author={Sean Lee and Julius Lipp and Rui Huang and Darius Koenig},
  year={2024},
  url={https://www.mixedbread.ai/blog/mxbai-embed-xsmall-v1},
}
Evaluation results
Self-reported scores on the MTEB ArguAna test set:

| Metric      | Score  |
|-------------|--------|
| ndcg_at_1   | 25.180 |
| ndcg_at_3   | 39.220 |
| ndcg_at_5   | 43.930 |
| ndcg_at_10  | 49.580 |
| ndcg_at_30  | 53.410 |
| ndcg_at_100 | 54.110 |
| map_at_1    | 25.180 |
| map_at_3    | 35.660 |
| map_at_5    | 38.250 |
| map_at_10   | 40.580 |