# bge-small-en-v1.5-sparse

## Usage
This is the sparse ONNX variant of the bge-small-en-v1.5 embedding model, accelerated with Sparsify for quantization/pruning and served with DeepSparseSentenceTransformer for inference.
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

model = DeepSparseSentenceTransformer('neuralmagic/bge-small-en-v1.5-sparse', export=False)

# The sentences to encode
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of strings.',
             'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print each sentence with the shape of its embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.
## Evaluation results

All values are self-reported on the test set.

| Metric | Dataset | Value |
|---|---|---|
| accuracy | MTEB AmazonCounterfactualClassification (en) | 70.716 |
| ap | MTEB AmazonCounterfactualClassification (en) | 32.851 |
| f1 | MTEB AmazonCounterfactualClassification (en) | 64.481 |
| accuracy | MTEB AmazonPolarityClassification | 83.340 |
| ap | MTEB AmazonPolarityClassification | 78.287 |
| f1 | MTEB AmazonPolarityClassification | 83.274 |
| accuracy | MTEB AmazonReviewsClassification (en) | 40.988 |
| f1 | MTEB AmazonReviewsClassification (en) | 40.777 |
| map_at_1 | MTEB ArguAna | 26.102 |
| map_at_10 | MTEB ArguAna | 40.754 |
| map_at_100 | MTEB ArguAna | 41.830 |
| map_at_1000 | MTEB ArguAna | 41.845 |
| map_at_3 | MTEB ArguAna | 36.178 |
| map_at_5 | MTEB ArguAna | 38.646 |
| mrr_at_1 | MTEB ArguAna | 26.600 |
| mrr_at_10 | MTEB ArguAna | 40.934 |
| mrr_at_100 | MTEB ArguAna | 42.015 |
| mrr_at_1000 | MTEB ArguAna | 42.030 |
| mrr_at_3 | MTEB ArguAna | 36.344 |
| mrr_at_5 | MTEB ArguAna | 38.848 |
| ndcg_at_1 | MTEB ArguAna | 26.102 |
| ndcg_at_10 | MTEB ArguAna | 49.127 |
| ndcg_at_100 | MTEB ArguAna | 53.816 |
| ndcg_at_1000 | MTEB ArguAna | 54.178 |
| ndcg_at_3 | MTEB ArguAna | 39.607 |
| ndcg_at_5 | MTEB ArguAna | 44.087 |
| precision_at_1 | MTEB ArguAna | 26.102 |
| precision_at_10 | MTEB ArguAna | 7.596 |
| precision_at_100 | MTEB ArguAna | 0.967 |
| precision_at_1000 | MTEB ArguAna | 0.099 |
| precision_at_3 | MTEB ArguAna | 16.524 |
| precision_at_5 | MTEB ArguAna | 12.105 |
| recall_at_1 | MTEB ArguAna | 26.102 |
| recall_at_10 | MTEB ArguAna | 75.960 |
| recall_at_100 | MTEB ArguAna | 96.657 |
| recall_at_1000 | MTEB ArguAna | 99.431 |
| recall_at_3 | MTEB ArguAna | 49.573 |
| recall_at_5 | MTEB ArguAna | 60.526 |
| v_measure | MTEB ArxivClusteringP2P | 43.107 |
| v_measure | MTEB ArxivClusteringS2S | 34.411 |
| map | MTEB AskUbuntuDupQuestions | 56.966 |
| mrr | MTEB AskUbuntuDupQuestions | 69.925 |
| cos_sim_pearson | MTEB BIOSSES | 79.649 |
| cos_sim_spearman | MTEB BIOSSES | 78.953 |
| euclidean_pearson | MTEB BIOSSES | 78.925 |
| euclidean_spearman | MTEB BIOSSES | 78.565 |
| manhattan_pearson | MTEB BIOSSES | 79.214 |
| manhattan_spearman | MTEB BIOSSES | 78.663 |
| accuracy | MTEB Banking77Classification | 81.250 |
| f1 | MTEB Banking77Classification | 81.208 |
| v_measure | MTEB BiorxivClusteringP2P | 34.695 |