bge-large-en-v1.5-quant

This is the quantized (INT8) ONNX variant of the bge-large-en-v1.5 embeddings model, quantized with Sparsify and served with DeepSparseSentenceTransformers for inference. DeepSparse improves latency by 4.8X on a 10-core laptop and by up to 3.5X on a 16-core AWS instance.

Usage
```
pip install -U "deepsparse-nightly[sentence_transformers]"
```

```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

model = DeepSparseSentenceTransformer('neuralmagic/bge-large-en-v1.5-quant', export=False)

# The sentences to encode
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of strings.',
             'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print each sentence with the shape of its embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding shape:", embedding.shape)
    print("")
```
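The embeddings returned by `model.encode` are typically compared with cosine similarity (the metric reported as `cos_sim` in the evaluation results below). A minimal sketch in plain Python, with small placeholder vectors standing in for the model's real 1024-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors standing in for model.encode(...) output
query_emb = [0.1, 0.3, 0.5]
doc_embs = [[0.1, 0.3, 0.5],
            [0.9, -0.2, 0.0]]

# Rank candidate documents by similarity to the query
scores = [cosine_similarity(query_emb, d) for d in doc_embs]
best = max(range(len(scores)), key=scores.__getitem__)
```

The same ranking pattern works unchanged on the arrays produced by `model.encode`, since they support iteration like the placeholder lists above.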
For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.
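As background, INT8 quantization stores weights and activations as 8-bit integers together with a floating-point scale. A minimal sketch of symmetric per-tensor quantization, for illustration only (Sparsify's actual recipe is more involved, e.g. calibration and per-channel scales):

```python
def quantize_int8(values):
    # Symmetric per-tensor quantization: map floats into [-127, 127]
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    # Recover approximate float values from the INT8 representation
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.02]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

The round trip loses at most half a quantization step per value, which is why INT8 models trade a small accuracy drop for the latency gains reported above.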
Evaluation results
All values are self-reported MTEB scores on the respective test sets.

MTEB AmazonCounterfactualClassification (en):

| Metric | Value |
|---|---|
| accuracy | 75.537 |
| ap | 38.306 |
| f1 | 69.428 |

Semantic textual similarity (Pearson / Spearman correlations):

| Dataset | cos_sim Pearson | cos_sim Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman |
|---|---|---|---|---|---|---|
| MTEB BIOSSES | 89.273 | 88.365 | 86.831 | 87.562 | 86.593 | 87.707 |
| MTEB SICK-R | 86.190 | 82.061 | 83.660 | 81.917 | 83.691 | 81.918 |
| MTEB STS12 | 86.934 | 78.830 | 83.397 | 78.870 | 83.394 | 78.854 |
| MTEB STS13 | 87.259 | 87.994 | 86.990 | 87.727 | 86.897 | 87.652 |
| MTEB STS14 | 85.414 | 83.503 | 84.678 | 83.437 | 84.660 | 83.435 |
| MTEB STS15 | 88.025 | 89.004 | 88.163 | 88.665 | 88.153 | 88.662 |
| MTEB STS16 | 85.102 | 86.454 | 85.453 | 86.066 | 85.411 | 86.044 |
| MTEB STS17 (en-en) | 89.870 | 89.562 | 89.018 | 88.387 | 89.076 | 88.517 |
| MTEB STS22 (en) | 68.385 | 68.152 | 68.992 | 68.013 | 68.850 | 67.854 |