---
license: mit
language:
  - en
---

# bge-micro-v2-quant

This is the quantized (INT8) ONNX variant of the bge-micro-v2 embeddings model, created with the DeepSparse Optimum integration for ONNX export/inference and Neural Magic's Sparsify for one-shot quantization.
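
The exact export and quantization recipe is not documented in this card. As a rough sketch under assumptions, the FP32 base model could be exported to ONNX with the standard Hugging Face Optimum API (the `TaylorAI/bge-micro-v2` model id and the output directory are assumptions, and the Sparsify one-shot quantization step is not shown):

```python
# Illustrative sketch only: export the FP32 base model to ONNX with
# Hugging Face Optimum. Model id and output path are assumptions.
from optimum.onnxruntime import ORTModelForFeatureExtraction

onnx_model = ORTModelForFeatureExtraction.from_pretrained(
    "TaylorAI/bge-micro-v2",  # assumed upstream bge-micro-v2 repository
    export=True,
)
onnx_model.save_pretrained("bge-micro-v2-onnx")
```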

```bash
pip install -U deepsparse-nightly[sentence_transformers]
```

```python
from deepsparse.sentence_transformers import SentenceTransformer

model = SentenceTransformer('zeroshot/bge-micro-v2-quant', export=False)

# The sentences we want to encode
sentences = ['This framework generates embeddings for each input sentence',
    'Sentences are passed as a list of strings.',
    'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print each sentence with the shape of its embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding shape:", embedding.shape)
    print("")
```

For further details on the DeepSparse & Sentence Transformers integration, refer to the DeepSparse README.

For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.

;)