---
tags:
- sparse
- sparsity
- quantized
- onnx
- embeddings
- int8
- mteb
model-index:
- name: gte-large-sparse
  results:
  - task:
      type: STS
    dataset:
      type: mteb/biosses-sts
      name: MTEB BIOSSES
      config: default
      split: test
      revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
    metrics:
    - type: cos_sim_pearson
      value: 88.64253410928214
    - type: cos_sim_spearman
      value: 85.83388349410652
    - type: euclidean_pearson
      value: 86.86126159318735
    - type: euclidean_spearman
      value: 85.61580623591163
    - type: manhattan_pearson
      value: 86.6901132883383
    - type: manhattan_spearman
      value: 85.60255292187769
  - task:
      type: STS
    dataset:
      type: mteb/sickr-sts
      name: MTEB SICK-R
      config: default
      split: test
      revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
    metrics:
    - type: cos_sim_pearson
      value: 85.23314640591607
    - type: cos_sim_spearman
      value: 79.00078545104338
    - type: euclidean_pearson
      value: 83.48009254500714
    - type: euclidean_spearman
      value: 78.95413001389939
    - type: manhattan_pearson
      value: 83.46945566025941
    - type: manhattan_spearman
      value: 78.9241707208135
  - task:
      type: STS
    dataset:
      type: mteb/sts12-sts
      name: MTEB STS12
      config: default
      split: test
      revision: a0d554a64d88156834ff5ae9920b964011b16384
    metrics:
    - type: cos_sim_pearson
      value: 81.77526666043804
    - type: cos_sim_spearman
      value: 73.4849063285867
    - type: euclidean_pearson
      value: 78.04477932740524
    - type: euclidean_spearman
      value: 73.01394205771743
    - type: manhattan_pearson
      value: 78.08836684503294
    - type: manhattan_spearman
      value: 73.05074711098149
  - task:
      type: STS
    dataset:
      type: mteb/sts13-sts
      name: MTEB STS13
      config: default
      split: test
      revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
    metrics:
    - type: cos_sim_pearson
      value: 84.57839215661352
    - type: cos_sim_spearman
      value: 86.13854767345153
    - type: euclidean_pearson
      value: 85.12712609946449
    - type: euclidean_spearman
      value: 85.52497994789026
    - type: manhattan_pearson
      value: 85.06833141611173
    - type: manhattan_spearman
      value: 85.45003068636466
  - task:
      type: STS
    dataset:
      type: mteb/sts14-sts
      name: MTEB STS14
      config: default
      split: test
      revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
    metrics:
    - type: cos_sim_pearson
      value: 83.30485126978374
    - type: cos_sim_spearman
      value: 80.36497172462357
    - type: euclidean_pearson
      value: 82.91977909424605
    - type: euclidean_spearman
      value: 80.16995106297438
    - type: manhattan_pearson
      value: 82.88200991402184
    - type: manhattan_spearman
      value: 80.14259757215227
  - task:
      type: STS
    dataset:
      type: mteb/sts15-sts
      name: MTEB STS15
      config: default
      split: test
      revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
    metrics:
    - type: cos_sim_pearson
      value: 86.99883111314007
    - type: cos_sim_spearman
      value: 88.531352572377
    - type: euclidean_pearson
      value: 87.96834578059067
    - type: euclidean_spearman
      value: 88.44800718542935
    - type: manhattan_pearson
      value: 87.94889391725033
    - type: manhattan_spearman
      value: 88.45467695837115
  - task:
      type: STS
    dataset:
      type: mteb/sts16-sts
      name: MTEB STS16
      config: default
      split: test
      revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
    metrics:
    - type: cos_sim_pearson
      value: 82.4636984892402
    - type: cos_sim_spearman
      value: 84.0808920789148
    - type: euclidean_pearson
      value: 83.70613486028309
    - type: euclidean_spearman
      value: 84.35941626905009
    - type: manhattan_pearson
      value: 83.70259457073782
    - type: manhattan_spearman
      value: 84.35496521501604
  - task:
      type: STS
    dataset:
      type: mteb/sts17-crosslingual-sts
      name: MTEB STS17 (en-en)
      config: en-en
      split: test
      revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
    metrics:
    - type: cos_sim_pearson
      value: 88.76172944971023
    - type: cos_sim_spearman
      value: 89.4190945039165
    - type: euclidean_pearson
      value: 89.47263005347381
    - type: euclidean_spearman
      value: 89.49228360724095
    - type: manhattan_pearson
      value: 89.49959868816694
    - type: manhattan_spearman
      value: 89.5314536157954
  - task:
      type: STS
    dataset:
      type: mteb/sts22-crosslingual-sts
      name: MTEB STS22 (en)
      config: en
      split: test
      revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
    metrics:
    - type: cos_sim_pearson
      value: 64.57158223787549
    - type: cos_sim_spearman
      value: 66.75053533168037
    - type: euclidean_pearson
      value: 66.45526604831747
    - type: euclidean_spearman
      value: 66.14567667353113
    - type: manhattan_pearson
      value: 66.47352000151176
    - type: manhattan_spearman
      value: 66.21099856852885
  - task:
      type: STS
    dataset:
      type: mteb/stsbenchmark-sts
      name: MTEB STSBenchmark
      config: default
      split: test
      revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
    metrics:
    - type: cos_sim_pearson
      value: 85.055653571006
    - type: cos_sim_spearman
      value: 85.45387832634702
    - type: euclidean_pearson
      value: 86.31667154906651
    - type: euclidean_spearman
      value: 85.66079590537946
    - type: manhattan_pearson
      value: 86.2806853257308
    - type: manhattan_spearman
      value: 85.63700636713952
  - task:
      type: PairClassification
    dataset:
      type: mteb/sprintduplicatequestions-pairclassification
      name: MTEB SprintDuplicateQuestions
      config: default
      split: test
      revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
    metrics:
    - type: cos_sim_accuracy
      value: 99.78811881188119
    - type: cos_sim_ap
      value: 94.67027715905307
    - type: cos_sim_f1
      value: 89.33074684772066
    - type: cos_sim_precision
      value: 86.7231638418079
    - type: cos_sim_recall
      value: 92.10000000000001
    - type: dot_accuracy
      value: 99.47128712871287
    - type: dot_ap
      value: 78.41478815918727
    - type: dot_f1
      value: 73.30049261083744
    - type: dot_precision
      value: 72.23300970873787
    - type: dot_recall
      value: 74.4
    - type: euclidean_accuracy
      value: 99.78415841584159
    - type: euclidean_ap
      value: 94.60075930867181
    - type: euclidean_f1
      value: 89.12175648702593
    - type: euclidean_precision
      value: 88.94422310756973
    - type: euclidean_recall
      value: 89.3
    - type: manhattan_accuracy
      value: 99.78415841584159
    - type: manhattan_ap
      value: 94.62867439278095
    - type: manhattan_f1
      value: 89.2337536372454
    - type: manhattan_precision
      value: 86.62900188323917
    - type: manhattan_recall
      value: 92.0
    - type: max_accuracy
      value: 99.78811881188119
    - type: max_ap
      value: 94.67027715905307
    - type: max_f1
      value: 89.33074684772066
  - task:
      type: PairClassification
    dataset:
      type: mteb/twittersemeval2015-pairclassification
      name: MTEB TwitterSemEval2015
      config: default
      split: test
      revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
    metrics:
    - type: cos_sim_accuracy
      value: 85.09864695714371
    - type: cos_sim_ap
      value: 70.33704198164713
    - type: cos_sim_f1
      value: 66.22893954410307
    - type: cos_sim_precision
      value: 62.42410088743577
    - type: cos_sim_recall
      value: 70.52770448548813
    - type: dot_accuracy
      value: 79.11426357513263
    - type: dot_ap
      value: 49.15484584572233
    - type: dot_f1
      value: 51.12580243364951
    - type: dot_precision
      value: 40.13840830449827
    - type: dot_recall
      value: 70.3957783641161
    - type: euclidean_accuracy
      value: 85.15825236931514
    - type: euclidean_ap
      value: 70.51017350854076
    - type: euclidean_f1
      value: 66.45416294785159
    - type: euclidean_precision
      value: 64.29805082654823
    - type: euclidean_recall
      value: 68.7598944591029
    - type: manhattan_accuracy
      value: 85.1403707456637
    - type: manhattan_ap
      value: 70.47587863399994
    - type: manhattan_f1
      value: 66.4576802507837
    - type: manhattan_precision
      value: 63.32138590203107
    - type: manhattan_recall
      value: 69.92084432717678
    - type: max_accuracy
      value: 85.15825236931514
    - type: max_ap
      value: 70.51017350854076
    - type: max_f1
      value: 66.4576802507837
  - task:
      type: PairClassification
    dataset:
      type: mteb/twitterurlcorpus-pairclassification
      name: MTEB TwitterURLCorpus
      config: default
      split: test
      revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
    metrics:
    - type: cos_sim_accuracy
      value: 88.8539604921023
    - type: cos_sim_ap
      value: 85.71869912577101
    - type: cos_sim_f1
      value: 78.00535626720983
    - type: cos_sim_precision
      value: 76.46232344893885
    - type: cos_sim_recall
      value: 79.61194949183862
    - type: dot_accuracy
      value: 84.57717235223348
    - type: dot_ap
      value: 74.89496650237145
    - type: dot_f1
      value: 69.05327823892932
    - type: dot_precision
      value: 65.75666829166377
    - type: dot_recall
      value: 72.69787496150293
    - type: euclidean_accuracy
      value: 88.89471028835332
    - type: euclidean_ap
      value: 85.75169460500409
    - type: euclidean_f1
      value: 78.17055393586006
    - type: euclidean_precision
      value: 74.21118184334348
    - type: euclidean_recall
      value: 82.57622420696026
    - type: manhattan_accuracy
      value: 88.92187681918733
    - type: manhattan_ap
      value: 85.7496679471825
    - type: manhattan_f1
      value: 78.11088295687884
    - type: manhattan_precision
      value: 75.82083061535117
    - type: manhattan_recall
      value: 80.5435786880197
    - type: max_accuracy
      value: 88.92187681918733
    - type: max_ap
      value: 85.75169460500409
    - type: max_f1
      value: 78.17055393586006
license: mit
language:
- en
---

# gte-large-sparse

This is the sparse ONNX variant of the [gte-large](https://huggingface.co/thenlper/gte-large) embeddings model, created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot INT8 quantization and 50% unstructured pruning.
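
To give a rough sense of what the 50% unstructured pruning step does (a conceptual sketch only, not the actual Sparsify pipeline), magnitude pruning zeroes out the half of each weight matrix with the smallest absolute values:

```python
import numpy as np

# Hypothetical weight matrix standing in for one transformer layer's weights.
weights = np.random.randn(4, 8).astype(np.float32)

# 50% unstructured (magnitude) pruning: zero the smallest-magnitude half of the entries.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

print(f"Sparsity: {np.mean(pruned == 0):.0%}")  # roughly 50% of entries are zero
```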

Current list of sparse and quantized gte ONNX models:

| Links | Sparsification Method |
| ----- | --------------------- |
| [zeroshot/gte-large-sparse](https://huggingface.co/zeroshot/gte-large-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-large-quant](https://huggingface.co/zeroshot/gte-large-quant) | Quantization (INT8) |
| [zeroshot/gte-base-sparse](https://huggingface.co/zeroshot/gte-base-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-base-quant](https://huggingface.co/zeroshot/gte-base-quant) | Quantization (INT8) |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |

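Install DeepSparse Nightly with the Sentence Transformers extra:
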
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```

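Then load the model through DeepSparse's `SentenceTransformer` wrapper and encode sentences as you would with the standard `sentence-transformers` API:
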
```python
from deepsparse.sentence_transformers import SentenceTransformer
model = SentenceTransformer('zeroshot/gte-large-sparse', export=False)

# Sentences we would like to encode
sentences = ['This framework generates embeddings for each input sentence',
    'Sentences are passed as a list of strings.',
    'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print each sentence with the shape of its embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
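
As a follow-up sketch (assuming, as in the example above, that `encode` returns one embedding vector per sentence), the embeddings can be compared with cosine similarity using plain NumPy:

```python
import numpy as np
from deepsparse.sentence_transformers import SentenceTransformer

model = SentenceTransformer('zeroshot/gte-large-sparse', export=False)

sentences = ['DeepSparse runs sparse transformer models on CPUs.',
    'Sparse models can be executed efficiently on CPU hardware.',
    'The quick brown fox jumps over the lazy dog.']
embeddings = np.asarray(model.encode(sentences))

# Cosine similarity of every sentence against the first one.
normalized = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
for sentence, score in zip(sentences, normalized @ normalized[0]):
    print(f"{score:.4f}  {sentence}")
```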

For further details regarding the DeepSparse and Sentence Transformers integration, refer to the [DeepSparse README](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers).

For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).

![;)](https://media.giphy.com/media/bYg33GbNbNIVzSrr84/giphy-downsized-large.gif)