# gte-large-quant
This is the quantized (INT8) ONNX variant of the gte-large embeddings model, created with DeepSparse Optimum for ONNX export/inference and Neural Magic's Sparsify for one-shot quantization.
Current list of sparse and quantized gte ONNX models:

| Links | Sparsification Method |
| --- | --- |
| [zeroshot/gte-large-sparse](https://huggingface.co/zeroshot/gte-large-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-large-quant](https://huggingface.co/zeroshot/gte-large-quant) | Quantization (INT8) |
| [zeroshot/gte-base-sparse](https://huggingface.co/zeroshot/gte-base-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-base-quant](https://huggingface.co/zeroshot/gte-base-quant) | Quantization (INT8) |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import SentenceTransformer

model = SentenceTransformer('zeroshot/gte-large-quant', export=False)

# The sentences we would like to encode
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of strings.',
             'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print each sentence with the shape of its embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
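A common next step is to compare the embeddings with cosine similarity, e.g. for semantic search or clustering. A minimal dependency-free sketch is below; the `cosine_similarity` helper is our own illustration (not part of the DeepSparse API), shown here on placeholder vectors rather than real model output:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors:
    # dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# With real embeddings from the snippet above, you would call:
#   score = cosine_similarity(embeddings[0], embeddings[1])
# Placeholder vectors for illustration:
print(cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # identical vectors -> 1.0
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # orthogonal vectors -> 0.0
```

Sentence pairs with similar meaning should score closer to 1.0 than unrelated pairs.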
For further details on the DeepSparse & Sentence Transformers integration, refer to the DeepSparse README.
For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.
## Evaluation results

All results are self-reported on MTEB test sets.

| Dataset | cos_sim_pearson | cos_sim_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman |
| --- | --- | --- | --- | --- | --- | --- |
| BIOSSES | 90.273 | 87.978 | 88.428 | 87.972 | 88.138 | 87.434 |
| SICK-R | 85.142 | 79.134 | 83.080 | 79.316 | 83.104 | 79.308 |
| STS12 | 84.930 | 75.981 | 81.208 | 75.746 | 81.232 | 75.731 |
| STS13 | 85.669 | 87.550 | 86.556 | 87.479 | 86.520 | 87.439 |
| STS14 | 84.374 | 81.987 | 84.220 | 81.987 | 84.219 | 81.979 |
| STS15 | 87.345 | 88.927 | 88.201 | 88.915 | 88.244 | 88.975 |
| STS16 | 82.118 | 83.594 | 82.973 | 83.745 | 82.974 | 83.723 |
| STS17 (en-en) | 88.297 | 88.508 | 89.015 | 88.728 | 89.143 | 88.984 |
| STS22 (en) | 70.115 | 69.722 | 70.037 | 68.960 | 69.837 | 68.718 |
| STSBenchmark | 84.867 | 85.396 | 85.685 | 85.510 | 85.669 | 85.465 |