# gte-large-sparse
This is the sparse ONNX variant of the gte-large embeddings model, created with DeepSparse Optimum for ONNX export/inference and Neural Magic's Sparsify for one-shot INT8 quantization and 50% unstructured pruning.
Current list of sparse and quantized gte ONNX models:
| Model | Sparsification Method |
| --- | --- |
| zeroshot/gte-large-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/gte-large-quant | Quantization (INT8) |
| zeroshot/gte-base-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/gte-base-quant | Quantization (INT8) |
| zeroshot/gte-small-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/gte-small-quant | Quantization (INT8) |
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import SentenceTransformer

model = SentenceTransformer('zeroshot/gte-large-sparse', export=False)

# The sentences we would like to encode
sentences = [
    'This framework generates embeddings for each input sentence',
    'Sentences are passed as a list of string.',
    'The quick brown fox jumps over the lazy dog.',
]

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print each sentence alongside the shape of its embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
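Once encoded, sentence embeddings are typically compared with cosine similarity, e.g. to rank documents against a query. A minimal, framework-agnostic sketch of that step (the plain-list vectors and the `cosine_similarity` helper below are illustrative stand-ins, not part of DeepSparse; in practice the vectors come from `model.encode(...)`):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors:
    # dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-in vectors; real embeddings are much higher-dimensional
query_emb = [0.1, 0.3, 0.5]
doc_embs = [
    [0.1, 0.29, 0.51],  # nearly parallel to the query
    [-0.5, 0.2, -0.1],  # points in a different direction
]

# Score every document against the query and pick the best match
scores = [cosine_similarity(query_emb, d) for d in doc_embs]
best = max(range(len(scores)), key=lambda i: scores[i])
```

Higher scores mean more similar embeddings; with an embedding model such as gte-large-sparse, that similarity tracks semantic relatedness of the input sentences.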
For further details on the DeepSparse & Sentence Transformers integration, refer to the DeepSparse README.
For general questions about these models or sparsification methods, reach out to the engineering team on our community Slack.
## Evaluation results

All results are self-reported on MTEB test sets.

| Dataset | cos_sim Pearson | cos_sim Spearman | euclidean Pearson | euclidean Spearman | manhattan Pearson | manhattan Spearman |
| --- | --- | --- | --- | --- | --- | --- |
| BIOSSES | 88.643 | 85.834 | 86.861 | 85.616 | 86.690 | 85.603 |
| SICK-R | 85.233 | 79.001 | 83.480 | 78.954 | 83.469 | 78.924 |
| STS12 | 81.775 | 73.485 | 78.045 | 73.014 | 78.088 | 73.051 |
| STS13 | 84.578 | 86.139 | 85.127 | 85.525 | 85.068 | 85.450 |
| STS14 | 83.305 | 80.365 | 82.920 | 80.170 | 82.882 | 80.143 |
| STS15 | 86.999 | 88.531 | 87.968 | 88.448 | 87.949 | 88.455 |
| STS16 | 82.464 | 84.081 | 83.706 | 84.359 | 83.703 | 84.355 |
| STS17 (en-en) | 88.762 | 89.419 | 89.473 | 89.492 | 89.500 | 89.531 |
| STS22 (en) | 64.572 | 66.751 | 66.455 | 66.146 | 66.474 | 66.211 |
| STSBenchmark | 85.056 | 85.454 | 86.317 | 85.661 | 86.281 | 85.637 |