ONNX Conversion of sentence-transformers/paraphrase-MiniLM-L3-v2

  • ONNX model for CPU with O3 graph optimisation (a conversion sketch follows this list)
  • This is a sentence-transformers model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
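An export like the one in this repository can be produced with Hugging Face Optimum. The snippet below is a minimal sketch; the source checkpoint is the sentence-transformers model named above, but the exact settings used for this repository, and the output directory name, are assumptions.

from optimum.onnxruntime import AutoOptimizationConfig, ORTModelForFeatureExtraction, ORTOptimizer

# Export the PyTorch checkpoint to ONNX
model = ORTModelForFeatureExtraction.from_pretrained(
    "sentence-transformers/paraphrase-MiniLM-L3-v2", export=True
)

# Apply O3 graph optimisations and save the optimised model
optimizer = ORTOptimizer.from_pretrained(model)
optimizer.optimize(
    save_dir="paraphrase-MiniLM-L3-v2-onnx-o3-cpu",  # hypothetical output directory
    optimization_config=AutoOptimizationConfig.O3(),
)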

Usage

import torch
import torch.nn.functional as F
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

sentences = [
    "The llama (/ˈlɑːmə/) (Lama glama) is a domesticated South American camelid.",
    "The alpaca (Lama pacos) is a species of South American camelid mammal.",
    "The vicuña (Lama vicugna) (/vɪˈkuːnjə/) is one of the two wild South American camelids.",
]

model_name = "EmbeddedLLM/paraphrase-MiniLM-L3-v2-onnx-o3-cpu"
device = "cpu"
provider = "CPUExecutionProvider"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# IO binding mainly benefits GPU inference, so the defaults are kept for CPU
model = ORTModelForFeatureExtraction.from_pretrained(model_name, provider=provider)
inputs = tokenizer(
    sentences,
    padding=True,
    truncation=True,
    return_tensors="pt",
    max_length=model.config.max_position_embeddings,
)
inputs = inputs.to(device)
token_embeddings = model(**inputs).last_hidden_state
# Mean pooling: average the token embeddings, weighted by the attention mask so padding is ignored
att_mask = inputs["attention_mask"].unsqueeze(-1).expand(token_embeddings.size()).float()
embeddings = torch.sum(token_embeddings * att_mask, 1) / torch.clamp(att_mask.sum(1), min=1e-9)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.cpu().numpy().shape)  # (3, 384)
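Because the embeddings are L2-normalised above, cosine similarity between sentences reduces to a dot product, which is all a basic semantic-search ranking needs. A short follow-on sketch, continuing from the script above:

# Pairwise cosine similarities between the three sentences (embeddings are unit-length)
similarities = embeddings @ embeddings.T
print(similarities)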