
onnx-models/paraphrase-MiniLM-L12-v2-onnx

This is the ONNX port of sentence-transformers/paraphrase-MiniLM-L12-v2, a model for generating text embeddings.

Model details

  • Embedding dimension: 384
  • Max sequence length: 256
  • File size on disk: 0.12 GB
  • Modules incorporated in the ONNX model: Transformer, Pooling
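The Pooling module listed above condenses the Transformer's per-token vectors into a single sentence vector. As a rough illustration of what mean pooling with an attention mask does (toy shapes and values, not the actual ONNX graph):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (seq_len, dim); attention_mask: (seq_len,) of 0/1.
    """
    mask = attention_mask[:, None].astype(token_embeddings.dtype)
    # Sum only the real tokens, then divide by the number of real tokens
    return (token_embeddings * mask).sum(axis=0) / np.clip(mask.sum(axis=0), 1e-9, None)

# Toy example: 3 tokens (last one is padding), dim 2
tokens = np.array([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]])
mask = np.array([1, 1, 0])
print(mean_pool(tokens, mask))  # → [2. 3.]
```

In the real model the output of this step has the embedding dimension given above (384) regardless of input length.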

Usage

Using this model is straightforward once light-embed is installed:

pip install -U light-embed

Then you can use the model by specifying the original model name like this:

from light_embed import TextEmbedding

sentences = [
    "This is an example sentence",
    "Each sentence is converted"
]

# Load by the original sentence-transformers model name
model = TextEmbedding('sentence-transformers/paraphrase-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)

or by specifying the ONNX model name directly:

from light_embed import TextEmbedding
sentences = [
    "This is an example sentence",
    "Each sentence is converted"
]

model = TextEmbedding('onnx-models/paraphrase-MiniLM-L12-v2-onnx')
embeddings = model.encode(sentences)
print(embeddings)
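The embeddings returned by encode are numpy arrays, so semantic similarity between two sentences can be measured with ordinary cosine similarity. A minimal sketch (the helper name cosine_sim is mine, not part of light-embed; the vectors below are toy values, not real model output):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# With real embeddings: cosine_sim(embeddings[0], embeddings[1])
# Toy vectors for illustration:
a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 1.0, 0.0])
print(round(cosine_sim(a, b), 2))  # → 0.5
```

Values close to 1.0 indicate near-paraphrases; values near 0.0 indicate unrelated sentences.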

Citing & Authors

Binh Nguyen / binhcode25@gmail.com
