# bge-small-en-v1.5-quant

This is the quantized (INT8) ONNX variant of the bge-small-en-v1.5 embeddings model, quantized with Sparsify and served with DeepSparseSentenceTransformer for inference. DeepSparse improves latency by 3x on a 10-core laptop and by up to 5x on a 16-core AWS instance.

## Usage

```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

model = DeepSparseSentenceTransformer('neuralmagic/bge-small-en-v1.5-quant', export=False)

# The sentences we want to encode
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of string.',
             'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print each sentence alongside the shape of its embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.
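To check the latency speedup quoted at the top of this card on your own hardware, a rough wall-clock comparison against the dense PyTorch baseline might look like the sketch below. It assumes the upstream BAAI/bge-small-en-v1.5 checkpoint and the `sentence-transformers` package for the baseline; the batch size and run count are arbitrary:

```python
import time
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer
from sentence_transformers import SentenceTransformer  # dense baseline, assumed installed

batch = ['The quick brown fox jumps over the lazy dog.'] * 64

def time_encode(model, batch, runs=10):
    model.encode(batch)  # warmup pass, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        model.encode(batch)
    return (time.perf_counter() - start) / runs

sparse = DeepSparseSentenceTransformer('neuralmagic/bge-small-en-v1.5-quant', export=False)
dense = SentenceTransformer('BAAI/bge-small-en-v1.5')

print(f"DeepSparse: {time_encode(sparse, batch):.3f}s per {len(batch)} sentences")
print(f"PyTorch:    {time_encode(dense, batch):.3f}s per {len(batch)} sentences")
```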
## Evaluation results

All results are self-reported on the MTEB test sets.

| Dataset | Metric | Value |
|---|---|---|
| MTEB AmazonCounterfactualClassification (en) | accuracy | 74.194 |
| MTEB AmazonCounterfactualClassification (en) | ap | 37.562 |
| MTEB AmazonCounterfactualClassification (en) | f1 | 68.470 |
| MTEB AmazonPolarityClassification | accuracy | 91.894 |
| MTEB AmazonPolarityClassification | ap | 88.646 |
| MTEB AmazonPolarityClassification | f1 | 91.872 |
| MTEB AmazonReviewsClassification (en) | accuracy | 46.718 |
| MTEB AmazonReviewsClassification (en) | f1 | 46.258 |
| MTEB ArguAna | map_at_1 | 34.424 |
| MTEB ArguAna | map_at_3 | 45.389 |
| MTEB ArguAna | map_at_5 | 47.889 |
| MTEB ArguAna | map_at_10 | 49.630 |
| MTEB ArguAna | map_at_100 | 50.477 |
| MTEB ArguAna | map_at_1000 | 50.483 |
| MTEB ArguAna | mrr_at_1 | 34.780 |
| MTEB ArguAna | mrr_at_3 | 45.531 |
| MTEB ArguAna | mrr_at_5 | 48.010 |
| MTEB ArguAna | mrr_at_10 | 49.793 |
| MTEB ArguAna | mrr_at_100 | 50.633 |
| MTEB ArguAna | mrr_at_1000 | 50.638 |
| MTEB ArguAna | ndcg_at_1 | 34.424 |
| MTEB ArguAna | ndcg_at_3 | 49.067 |
| MTEB ArguAna | ndcg_at_5 | 53.561 |
| MTEB ArguAna | ndcg_at_10 | 57.774 |
| MTEB ArguAna | ndcg_at_100 | 61.248 |
| MTEB ArguAna | ndcg_at_1000 | 61.378 |
| MTEB ArguAna | precision_at_1 | 34.424 |
| MTEB ArguAna | precision_at_3 | 19.915 |
| MTEB ArguAna | precision_at_5 | 14.125 |
| MTEB ArguAna | precision_at_10 | 8.364 |
| MTEB ArguAna | precision_at_100 | 0.985 |
| MTEB ArguAna | precision_at_1000 | 0.100 |
| MTEB ArguAna | recall_at_1 | 34.424 |
| MTEB ArguAna | recall_at_3 | 59.744 |
| MTEB ArguAna | recall_at_5 | 70.626 |
| MTEB ArguAna | recall_at_10 | 83.642 |
| MTEB ArguAna | recall_at_100 | 98.506 |
| MTEB ArguAna | recall_at_1000 | 99.502 |
| MTEB ArxivClusteringP2P | v_measure | 46.919 |
| MTEB ArxivClusteringS2S | v_measure | 39.120 |
| MTEB AskUbuntuDupQuestions | map | 62.403 |
| MTEB AskUbuntuDupQuestions | mrr | 75.332 |
| MTEB BIOSSES | cos_sim_pearson | 88.004 |
| MTEB BIOSSES | cos_sim_spearman | 86.656 |
| MTEB BIOSSES | euclidean_pearson | 85.989 |
| MTEB BIOSSES | euclidean_spearman | 86.091 |
| MTEB BIOSSES | manhattan_pearson | 86.027 |
| MTEB BIOSSES | manhattan_spearman | 85.893 |
| MTEB Banking77Classification | accuracy | 85.104 |
| MTEB Banking77Classification | f1 | 85.069 |
| MTEB BiorxivClusteringP2P | v_measure | 37.426 |