morriszms committed · verified
Commit 7a19a67 · 1 Parent(s): e7d216f

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ e5-R-mistral-7b-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,1150 @@
---
library_name: transformers
license: apache-2.0
datasets:
- BeastyZ/E5-R
language:
- en
tags:
- mteb
- TensorBlock
- GGUF
base_model: BeastyZ/e5-R-mistral-7b
model-index:
- name: e5-R-mistral-7b
  results:
  - task:
      type: Retrieval
    dataset:
      name: MTEB ArguAna
      type: mteb/arguana
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 33.57
    - type: map_at_10
      value: 49.952000000000005
    - type: map_at_100
      value: 50.673
    - type: map_at_1000
      value: 50.674
    - type: map_at_3
      value: 44.915
    - type: map_at_5
      value: 47.876999999999995
    - type: mrr_at_1
      value: 34.211000000000006
    - type: mrr_at_10
      value: 50.19
    - type: mrr_at_100
      value: 50.905
    - type: mrr_at_1000
      value: 50.906
    - type: mrr_at_3
      value: 45.128
    - type: mrr_at_5
      value: 48.097
    - type: ndcg_at_1
      value: 33.57
    - type: ndcg_at_10
      value: 58.994
    - type: ndcg_at_100
      value: 61.806000000000004
    - type: ndcg_at_1000
      value: 61.824999999999996
    - type: ndcg_at_3
      value: 48.681000000000004
    - type: ndcg_at_5
      value: 54.001
    - type: precision_at_1
      value: 33.57
    - type: precision_at_10
      value: 8.784
    - type: precision_at_100
      value: 0.9950000000000001
    - type: precision_at_1000
      value: 0.1
    - type: precision_at_3
      value: 19.867
    - type: precision_at_5
      value: 14.495
    - type: recall_at_1
      value: 33.57
    - type: recall_at_10
      value: 87.83800000000001
    - type: recall_at_100
      value: 99.502
    - type: recall_at_1000
      value: 99.644
    - type: recall_at_3
      value: 59.602
    - type: recall_at_5
      value: 72.475
    - type: main_score
      value: 58.994
  - task:
      type: Retrieval
    dataset:
      name: MTEB CQADupstackRetrieval
      type: mteb/cqadupstack
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 24.75
    - type: map_at_10
      value: 34.025
    - type: map_at_100
      value: 35.126000000000005
    - type: map_at_1000
      value: 35.219
    - type: map_at_3
      value: 31.607000000000003
    - type: map_at_5
      value: 32.962
    - type: mrr_at_1
      value: 27.357
    - type: mrr_at_10
      value: 36.370999999999995
    - type: mrr_at_100
      value: 37.364000000000004
    - type: mrr_at_1000
      value: 37.423
    - type: mrr_at_3
      value: 34.288000000000004
    - type: mrr_at_5
      value: 35.434
    - type: ndcg_at_1
      value: 27.357
    - type: ndcg_at_10
      value: 46.593999999999994
    - type: ndcg_at_100
      value: 44.317
    - type: ndcg_at_1000
      value: 46.475
    - type: ndcg_at_3
      value: 34.473
    - type: ndcg_at_5
      value: 36.561
    - type: precision_at_1
      value: 27.357
    - type: precision_at_10
      value: 6.081
    - type: precision_at_100
      value: 0.9299999999999999
    - type: precision_at_1000
      value: 0.124
    - type: precision_at_3
      value: 14.911
    - type: precision_at_5
      value: 10.24
    - type: recall_at_1
      value: 24.75
    - type: recall_at_10
      value: 51.856
    - type: recall_at_100
      value: 76.44300000000001
    - type: recall_at_1000
      value: 92.078
    - type: recall_at_3
      value: 39.427
    - type: recall_at_5
      value: 44.639
    - type: main_score
      value: 46.593999999999994
  - task:
      type: Retrieval
    dataset:
      name: MTEB ClimateFEVER
      type: mteb/climate-fever
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 16.436
    - type: map_at_10
      value: 29.693
    - type: map_at_100
      value: 32.179
    - type: map_at_1000
      value: 32.353
    - type: map_at_3
      value: 24.556
    - type: map_at_5
      value: 27.105
    - type: mrr_at_1
      value: 37.524
    - type: mrr_at_10
      value: 51.475
    - type: mrr_at_100
      value: 52.107000000000006
    - type: mrr_at_1000
      value: 52.123
    - type: mrr_at_3
      value: 48.35
    - type: mrr_at_5
      value: 50.249
    - type: ndcg_at_1
      value: 37.524
    - type: ndcg_at_10
      value: 40.258
    - type: ndcg_at_100
      value: 48.364000000000004
    - type: ndcg_at_1000
      value: 51.031000000000006
    - type: ndcg_at_3
      value: 33.359
    - type: ndcg_at_5
      value: 35.573
    - type: precision_at_1
      value: 37.524
    - type: precision_at_10
      value: 12.886000000000001
    - type: precision_at_100
      value: 2.169
    - type: precision_at_1000
      value: 0.268
    - type: precision_at_3
      value: 25.624000000000002
    - type: precision_at_5
      value: 19.453
    - type: recall_at_1
      value: 16.436
    - type: recall_at_10
      value: 47.77
    - type: recall_at_100
      value: 74.762
    - type: recall_at_1000
      value: 89.316
    - type: recall_at_3
      value: 30.508000000000003
    - type: recall_at_5
      value: 37.346000000000004
    - type: main_score
      value: 40.258
  - task:
      type: Retrieval
    dataset:
      name: MTEB DBPedia
      type: mteb/dbpedia
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 10.147
    - type: map_at_10
      value: 24.631
    - type: map_at_100
      value: 35.657
    - type: map_at_1000
      value: 37.824999999999996
    - type: map_at_3
      value: 16.423
    - type: map_at_5
      value: 19.666
    - type: mrr_at_1
      value: 76.5
    - type: mrr_at_10
      value: 82.793
    - type: mrr_at_100
      value: 83.015
    - type: mrr_at_1000
      value: 83.021
    - type: mrr_at_3
      value: 81.75
    - type: mrr_at_5
      value: 82.375
    - type: ndcg_at_1
      value: 64.75
    - type: ndcg_at_10
      value: 51.031000000000006
    - type: ndcg_at_100
      value: 56.005
    - type: ndcg_at_1000
      value: 63.068000000000005
    - type: ndcg_at_3
      value: 54.571999999999996
    - type: ndcg_at_5
      value: 52.66499999999999
    - type: precision_at_1
      value: 76.5
    - type: precision_at_10
      value: 42.15
    - type: precision_at_100
      value: 13.22
    - type: precision_at_1000
      value: 2.5989999999999998
    - type: precision_at_3
      value: 58.416999999999994
    - type: precision_at_5
      value: 52.2
    - type: recall_at_1
      value: 10.147
    - type: recall_at_10
      value: 30.786
    - type: recall_at_100
      value: 62.873000000000005
    - type: recall_at_1000
      value: 85.358
    - type: recall_at_3
      value: 17.665
    - type: recall_at_5
      value: 22.088
    - type: main_score
      value: 51.031000000000006
  - task:
      type: Retrieval
    dataset:
      name: MTEB FEVER
      type: mteb/fever
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 78.52900000000001
    - type: map_at_10
      value: 87.24199999999999
    - type: map_at_100
      value: 87.446
    - type: map_at_1000
      value: 87.457
    - type: map_at_3
      value: 86.193
    - type: map_at_5
      value: 86.898
    - type: mrr_at_1
      value: 84.518
    - type: mrr_at_10
      value: 90.686
    - type: mrr_at_100
      value: 90.73
    - type: mrr_at_1000
      value: 90.731
    - type: mrr_at_3
      value: 90.227
    - type: mrr_at_5
      value: 90.575
    - type: ndcg_at_1
      value: 84.518
    - type: ndcg_at_10
      value: 90.324
    - type: ndcg_at_100
      value: 90.96300000000001
    - type: ndcg_at_1000
      value: 91.134
    - type: ndcg_at_3
      value: 88.937
    - type: ndcg_at_5
      value: 89.788
    - type: precision_at_1
      value: 84.518
    - type: precision_at_10
      value: 10.872
    - type: precision_at_100
      value: 1.1440000000000001
    - type: precision_at_1000
      value: 0.117
    - type: precision_at_3
      value: 34.108
    - type: precision_at_5
      value: 21.154999999999998
    - type: recall_at_1
      value: 78.52900000000001
    - type: recall_at_10
      value: 96.123
    - type: recall_at_100
      value: 98.503
    - type: recall_at_1000
      value: 99.518
    - type: recall_at_3
      value: 92.444
    - type: recall_at_5
      value: 94.609
    - type: main_score
      value: 90.324
  - task:
      type: Retrieval
    dataset:
      name: MTEB FiQA2018
      type: mteb/fiqa
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 29.38
    - type: map_at_10
      value: 50.28
    - type: map_at_100
      value: 52.532999999999994
    - type: map_at_1000
      value: 52.641000000000005
    - type: map_at_3
      value: 43.556
    - type: map_at_5
      value: 47.617
    - type: mrr_at_1
      value: 56.79
    - type: mrr_at_10
      value: 65.666
    - type: mrr_at_100
      value: 66.211
    - type: mrr_at_1000
      value: 66.226
    - type: mrr_at_3
      value: 63.452
    - type: mrr_at_5
      value: 64.895
    - type: ndcg_at_1
      value: 56.79
    - type: ndcg_at_10
      value: 58.68
    - type: ndcg_at_100
      value: 65.22
    - type: ndcg_at_1000
      value: 66.645
    - type: ndcg_at_3
      value: 53.981
    - type: ndcg_at_5
      value: 55.95
    - type: precision_at_1
      value: 56.79
    - type: precision_at_10
      value: 16.311999999999998
    - type: precision_at_100
      value: 2.316
    - type: precision_at_1000
      value: 0.258
    - type: precision_at_3
      value: 36.214
    - type: precision_at_5
      value: 27.067999999999998
    - type: recall_at_1
      value: 29.38
    - type: recall_at_10
      value: 66.503
    - type: recall_at_100
      value: 89.885
    - type: recall_at_1000
      value: 97.954
    - type: recall_at_3
      value: 48.866
    - type: recall_at_5
      value: 57.60999999999999
    - type: main_score
      value: 58.68
  - task:
      type: Retrieval
    dataset:
      name: MTEB HotpotQA
      type: mteb/hotpotqa
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 42.134
    - type: map_at_10
      value: 73.412
    - type: map_at_100
      value: 74.144
    - type: map_at_1000
      value: 74.181
    - type: map_at_3
      value: 70.016
    - type: map_at_5
      value: 72.174
    - type: mrr_at_1
      value: 84.267
    - type: mrr_at_10
      value: 89.18599999999999
    - type: mrr_at_100
      value: 89.29599999999999
    - type: mrr_at_1000
      value: 89.298
    - type: mrr_at_3
      value: 88.616
    - type: mrr_at_5
      value: 88.957
    - type: ndcg_at_1
      value: 84.267
    - type: ndcg_at_10
      value: 80.164
    - type: ndcg_at_100
      value: 82.52199999999999
    - type: ndcg_at_1000
      value: 83.176
    - type: ndcg_at_3
      value: 75.616
    - type: ndcg_at_5
      value: 78.184
    - type: precision_at_1
      value: 84.267
    - type: precision_at_10
      value: 16.916
    - type: precision_at_100
      value: 1.872
    - type: precision_at_1000
      value: 0.196
    - type: precision_at_3
      value: 49.71
    - type: precision_at_5
      value: 31.854
    - type: recall_at_1
      value: 42.134
    - type: recall_at_10
      value: 84.578
    - type: recall_at_100
      value: 93.606
    - type: recall_at_1000
      value: 97.86
    - type: recall_at_3
      value: 74.564
    - type: recall_at_5
      value: 79.635
    - type: main_score
      value: 80.164
  - task:
      type: Retrieval
    dataset:
      name: MTEB MSMARCO
      type: mteb/msmarco
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 22.276
    - type: map_at_10
      value: 35.493
    - type: map_at_100
      value: 36.656
    - type: map_at_1000
      value: 36.699
    - type: map_at_3
      value: 31.320999999999998
    - type: map_at_5
      value: 33.772999999999996
    - type: mrr_at_1
      value: 22.966
    - type: mrr_at_10
      value: 36.074
    - type: mrr_at_100
      value: 37.183
    - type: mrr_at_1000
      value: 37.219
    - type: mrr_at_3
      value: 31.984
    - type: mrr_at_5
      value: 34.419
    - type: ndcg_at_1
      value: 22.966
    - type: ndcg_at_10
      value: 42.895
    - type: ndcg_at_100
      value: 48.453
    - type: ndcg_at_1000
      value: 49.464999999999996
    - type: ndcg_at_3
      value: 34.410000000000004
    - type: ndcg_at_5
      value: 38.78
    - type: precision_at_1
      value: 22.966
    - type: precision_at_10
      value: 6.88
    - type: precision_at_100
      value: 0.966
    - type: precision_at_1000
      value: 0.105
    - type: precision_at_3
      value: 14.785
    - type: precision_at_5
      value: 11.074
    - type: recall_at_1
      value: 22.276
    - type: recall_at_10
      value: 65.756
    - type: recall_at_100
      value: 91.34100000000001
    - type: recall_at_1000
      value: 98.957
    - type: recall_at_3
      value: 42.67
    - type: recall_at_5
      value: 53.161
    - type: main_score
      value: 42.895
  - task:
      type: Retrieval
    dataset:
      name: MTEB NFCorpus
      type: mteb/nfcorpus
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 7.188999999999999
    - type: map_at_10
      value: 16.176
    - type: map_at_100
      value: 20.504
    - type: map_at_1000
      value: 22.203999999999997
    - type: map_at_3
      value: 11.766
    - type: map_at_5
      value: 13.655999999999999
    - type: mrr_at_1
      value: 55.418
    - type: mrr_at_10
      value: 62.791
    - type: mrr_at_100
      value: 63.339
    - type: mrr_at_1000
      value: 63.369
    - type: mrr_at_3
      value: 60.99099999999999
    - type: mrr_at_5
      value: 62.059
    - type: ndcg_at_1
      value: 53.715
    - type: ndcg_at_10
      value: 41.377
    - type: ndcg_at_100
      value: 37.999
    - type: ndcg_at_1000
      value: 46.726
    - type: ndcg_at_3
      value: 47.262
    - type: ndcg_at_5
      value: 44.708999999999996
    - type: precision_at_1
      value: 55.108000000000004
    - type: precision_at_10
      value: 30.154999999999998
    - type: precision_at_100
      value: 9.582
    - type: precision_at_1000
      value: 2.2720000000000002
    - type: precision_at_3
      value: 43.55
    - type: precision_at_5
      value: 38.204
    - type: recall_at_1
      value: 7.188999999999999
    - type: recall_at_10
      value: 20.655
    - type: recall_at_100
      value: 38.068000000000005
    - type: recall_at_1000
      value: 70.208
    - type: recall_at_3
      value: 12.601
    - type: recall_at_5
      value: 15.573999999999998
    - type: main_score
      value: 41.377
  - task:
      type: Retrieval
    dataset:
      name: MTEB NQ
      type: mteb/nq
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 46.017
    - type: map_at_10
      value: 62.910999999999994
    - type: map_at_100
      value: 63.526
    - type: map_at_1000
      value: 63.536
    - type: map_at_3
      value: 59.077999999999996
    - type: map_at_5
      value: 61.521
    - type: mrr_at_1
      value: 51.68000000000001
    - type: mrr_at_10
      value: 65.149
    - type: mrr_at_100
      value: 65.542
    - type: mrr_at_1000
      value: 65.55
    - type: mrr_at_3
      value: 62.49
    - type: mrr_at_5
      value: 64.178
    - type: ndcg_at_1
      value: 51.651
    - type: ndcg_at_10
      value: 69.83500000000001
    - type: ndcg_at_100
      value: 72.18
    - type: ndcg_at_1000
      value: 72.393
    - type: ndcg_at_3
      value: 63.168
    - type: ndcg_at_5
      value: 66.958
    - type: precision_at_1
      value: 51.651
    - type: precision_at_10
      value: 10.626
    - type: precision_at_100
      value: 1.195
    - type: precision_at_1000
      value: 0.121
    - type: precision_at_3
      value: 28.012999999999998
    - type: precision_at_5
      value: 19.09
    - type: recall_at_1
      value: 46.017
    - type: recall_at_10
      value: 88.345
    - type: recall_at_100
      value: 98.129
    - type: recall_at_1000
      value: 99.696
    - type: recall_at_3
      value: 71.531
    - type: recall_at_5
      value: 80.108
    - type: main_score
      value: 69.83500000000001
  - task:
      type: Retrieval
    dataset:
      name: MTEB QuoraRetrieval
      type: mteb/quora
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 72.473
    - type: map_at_10
      value: 86.72800000000001
    - type: map_at_100
      value: 87.323
    - type: map_at_1000
      value: 87.332
    - type: map_at_3
      value: 83.753
    - type: map_at_5
      value: 85.627
    - type: mrr_at_1
      value: 83.39
    - type: mrr_at_10
      value: 89.149
    - type: mrr_at_100
      value: 89.228
    - type: mrr_at_1000
      value: 89.229
    - type: mrr_at_3
      value: 88.335
    - type: mrr_at_5
      value: 88.895
    - type: ndcg_at_1
      value: 83.39
    - type: ndcg_at_10
      value: 90.109
    - type: ndcg_at_100
      value: 91.09
    - type: ndcg_at_1000
      value: 91.13900000000001
    - type: ndcg_at_3
      value: 87.483
    - type: ndcg_at_5
      value: 88.942
    - type: precision_at_1
      value: 83.39
    - type: precision_at_10
      value: 13.711
    - type: precision_at_100
      value: 1.549
    - type: precision_at_1000
      value: 0.157
    - type: precision_at_3
      value: 38.342999999999996
    - type: precision_at_5
      value: 25.188
    - type: recall_at_1
      value: 72.473
    - type: recall_at_10
      value: 96.57
    - type: recall_at_100
      value: 99.792
    - type: recall_at_1000
      value: 99.99900000000001
    - type: recall_at_3
      value: 88.979
    - type: recall_at_5
      value: 93.163
    - type: main_score
      value: 90.109
  - task:
      type: Retrieval
    dataset:
      name: MTEB SCIDOCS
      type: mteb/scidocs
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 4.598
    - type: map_at_10
      value: 11.405999999999999
    - type: map_at_100
      value: 13.447999999999999
    - type: map_at_1000
      value: 13.758999999999999
    - type: map_at_3
      value: 8.332
    - type: map_at_5
      value: 9.709
    - type: mrr_at_1
      value: 22.6
    - type: mrr_at_10
      value: 32.978
    - type: mrr_at_100
      value: 34.149
    - type: mrr_at_1000
      value: 34.213
    - type: mrr_at_3
      value: 29.7
    - type: mrr_at_5
      value: 31.485000000000003
    - type: ndcg_at_1
      value: 22.6
    - type: ndcg_at_10
      value: 19.259999999999998
    - type: ndcg_at_100
      value: 27.21
    - type: ndcg_at_1000
      value: 32.7
    - type: ndcg_at_3
      value: 18.445
    - type: ndcg_at_5
      value: 15.812000000000001
    - type: precision_at_1
      value: 22.6
    - type: precision_at_10
      value: 9.959999999999999
    - type: precision_at_100
      value: 2.139
    - type: precision_at_1000
      value: 0.345
    - type: precision_at_3
      value: 17.299999999999997
    - type: precision_at_5
      value: 13.719999999999999
    - type: recall_at_1
      value: 4.598
    - type: recall_at_10
      value: 20.186999999999998
    - type: recall_at_100
      value: 43.362
    - type: recall_at_1000
      value: 70.11800000000001
    - type: recall_at_3
      value: 10.543
    - type: recall_at_5
      value: 13.923
    - type: main_score
      value: 19.259999999999998
  - task:
      type: Retrieval
    dataset:
      name: MTEB SciFact
      type: mteb/scifact
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 65.467
    - type: map_at_10
      value: 74.935
    - type: map_at_100
      value: 75.395
    - type: map_at_1000
      value: 75.412
    - type: map_at_3
      value: 72.436
    - type: map_at_5
      value: 73.978
    - type: mrr_at_1
      value: 68.667
    - type: mrr_at_10
      value: 76.236
    - type: mrr_at_100
      value: 76.537
    - type: mrr_at_1000
      value: 76.55499999999999
    - type: mrr_at_3
      value: 74.722
    - type: mrr_at_5
      value: 75.639
    - type: ndcg_at_1
      value: 68.667
    - type: ndcg_at_10
      value: 78.92099999999999
    - type: ndcg_at_100
      value: 80.645
    - type: ndcg_at_1000
      value: 81.045
    - type: ndcg_at_3
      value: 75.19500000000001
    - type: ndcg_at_5
      value: 77.114
    - type: precision_at_1
      value: 68.667
    - type: precision_at_10
      value: 10.133000000000001
    - type: precision_at_100
      value: 1.0999999999999999
    - type: precision_at_1000
      value: 0.11299999999999999
    - type: precision_at_3
      value: 28.889
    - type: precision_at_5
      value: 18.8
    - type: recall_at_1
      value: 65.467
    - type: recall_at_10
      value: 89.517
    - type: recall_at_100
      value: 97
    - type: recall_at_1000
      value: 100
    - type: recall_at_3
      value: 79.72200000000001
    - type: recall_at_5
      value: 84.511
    - type: main_score
      value: 78.92099999999999
  - task:
      type: Retrieval
    dataset:
      name: MTEB TRECCOVID
      type: mteb/trec-covid
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 0.244
    - type: map_at_10
      value: 2.183
    - type: map_at_100
      value: 13.712
    - type: map_at_1000
      value: 33.147
    - type: map_at_3
      value: 0.7270000000000001
    - type: map_at_5
      value: 1.199
    - type: mrr_at_1
      value: 94
    - type: mrr_at_10
      value: 97
    - type: mrr_at_100
      value: 97
    - type: mrr_at_1000
      value: 97
    - type: mrr_at_3
      value: 97
    - type: mrr_at_5
      value: 97
    - type: ndcg_at_1
      value: 92
    - type: ndcg_at_10
      value: 84.399
    - type: ndcg_at_100
      value: 66.771
    - type: ndcg_at_1000
      value: 59.092
    - type: ndcg_at_3
      value: 89.173
    - type: ndcg_at_5
      value: 88.52600000000001
    - type: precision_at_1
      value: 94
    - type: precision_at_10
      value: 86.8
    - type: precision_at_100
      value: 68.24
    - type: precision_at_1000
      value: 26.003999999999998
    - type: precision_at_3
      value: 92.667
    - type: precision_at_5
      value: 92.4
    - type: recall_at_1
      value: 0.244
    - type: recall_at_10
      value: 2.302
    - type: recall_at_100
      value: 16.622
    - type: recall_at_1000
      value: 55.175
    - type: recall_at_3
      value: 0.748
    - type: recall_at_5
      value: 1.247
    - type: main_score
      value: 84.399
  - task:
      type: Retrieval
    dataset:
      name: MTEB Touche2020
      type: mteb/touche2020
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 2.707
    - type: map_at_10
      value: 10.917
    - type: map_at_100
      value: 16.308
    - type: map_at_1000
      value: 17.953
    - type: map_at_3
      value: 5.65
    - type: map_at_5
      value: 7.379
    - type: mrr_at_1
      value: 34.694
    - type: mrr_at_10
      value: 49.745
    - type: mrr_at_100
      value: 50.309000000000005
    - type: mrr_at_1000
      value: 50.32
    - type: mrr_at_3
      value: 44.897999999999996
    - type: mrr_at_5
      value: 48.061
    - type: ndcg_at_1
      value: 33.672999999999995
    - type: ndcg_at_10
      value: 26.894000000000002
    - type: ndcg_at_100
      value: 37.423
    - type: ndcg_at_1000
      value: 49.376999999999995
    - type: ndcg_at_3
      value: 30.456
    - type: ndcg_at_5
      value: 27.772000000000002
    - type: precision_at_1
      value: 34.694
    - type: precision_at_10
      value: 23.878
    - type: precision_at_100
      value: 7.489999999999999
    - type: precision_at_1000
      value: 1.555
    - type: precision_at_3
      value: 31.293
    - type: precision_at_5
      value: 26.939
    - type: recall_at_1
      value: 2.707
    - type: recall_at_10
      value: 18.104
    - type: recall_at_100
      value: 46.93
    - type: recall_at_1000
      value: 83.512
    - type: recall_at_3
      value: 6.622999999999999
    - type: recall_at_5
      value: 10.051
    - type: main_score
      value: 26.894000000000002
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
    <p style="margin-top: 0.5em; margin-bottom: 0em;">
    Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
    </p>
  </div>
</div>

## BeastyZ/e5-R-mistral-7b - GGUF

This repo contains GGUF format model files for [BeastyZ/e5-R-mistral-7b](https://huggingface.co/BeastyZ/e5-R-mistral-7b).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
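
Once downloaded, the files can be loaded by any llama.cpp-based runtime at or after the commit above. As a minimal sketch, here is how embeddings might be computed with the `llama-cpp-python` bindings (an assumption; this repo only guarantees llama.cpp compatibility, and the model path is a placeholder):

```python
def embed_texts(model_path: str, texts: list[str]) -> list[list[float]]:
    """Compute embeddings for a batch of texts with llama-cpp-python."""
    # Deferred import: requires `pip install llama-cpp-python` and a local GGUF file.
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, embedding=True, verbose=False)
    # create_embedding returns an OpenAI-style payload: {"data": [{"embedding": [...]}, ...]}
    return [item["embedding"] for item in llm.create_embedding(texts)["data"]]
```

For example, `embed_texts("e5-R-mistral-7b-Q4_K_M.gguf", ["query: how to bake bread"])` would return one embedding vector per input text.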

<div style="text-align: left; margin: 20px 0;">
    <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
    Run them on the TensorBlock client using your local machine ↗
    </a>
</div>

## Prompt template

```

```

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [e5-R-mistral-7b-Q2_K.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [e5-R-mistral-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [e5-R-mistral-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [e5-R-mistral-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [e5-R-mistral-7b-Q4_0.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [e5-R-mistral-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [e5-R-mistral-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [e5-R-mistral-7b-Q5_0.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [e5-R-mistral-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [e5-R-mistral-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [e5-R-mistral-7b-Q6_K.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [e5-R-mistral-7b-Q8_0.gguf](https://huggingface.co/tensorblock/e5-R-mistral-7b-GGUF/blob/main/e5-R-mistral-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
1128
+
1129
+
1130
+ ## Downloading instruction
1131
+
1132
+ ### Command line
1133
+
1134
+ Firstly, install Huggingface Client
1135
+
1136
+ ```shell
1137
+ pip install -U "huggingface_hub[cli]"
1138
+ ```
1139
+
1140
+ Then, downoad the individual model file the a local directory
1141
+
1142
+ ```shell
1143
+ huggingface-cli download tensorblock/e5-R-mistral-7b-GGUF --include "e5-R-mistral-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
1144
+ ```
1145
+
1146
+ If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
1147
+
1148
+ ```shell
1149
+ huggingface-cli download tensorblock/e5-R-mistral-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
1150
+ ```
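The `--include` flag filters repository files with shell-style glob patterns. As a minimal illustration of how a pattern like `*Q4_K*gguf` selects files from the table above, here is a Python sketch using the standard-library `fnmatch` module (the filenames come from this repo; the matching logic is a simplified approximation of the CLI's filtering, not its actual implementation):

```python
from fnmatch import fnmatch

# A few filenames from the model file specification table above.
filenames = [
    "e5-R-mistral-7b-Q2_K.gguf",
    "e5-R-mistral-7b-Q3_K_M.gguf",
    "e5-R-mistral-7b-Q4_0.gguf",
    "e5-R-mistral-7b-Q4_K_S.gguf",
    "e5-R-mistral-7b-Q4_K_M.gguf",
    "e5-R-mistral-7b-Q8_0.gguf",
]

pattern = "*Q4_K*gguf"

# Keep only the files whose names match the glob pattern.
matches = [name for name in filenames if fnmatch(name, pattern)]
print(matches)  # only the Q4_K_S and Q4_K_M variants match; Q4_0 does not
```

Note that `*Q4_K*gguf` deliberately excludes `Q4_0`, so the pattern download fetches only the two K-quant variants at 4-bit.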
e5-R-mistral-7b-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eac0e158364b30d727adc0a952a3f25ccd31a768d8ba10956412535c26aad48d
+ size 2719242368
e5-R-mistral-7b-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7789f26a2159691fa601e53c4c85dadda832d5f2eb69ee890a7af39f8e27e8c7
+ size 3822024832
e5-R-mistral-7b-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0ffcd0e8a9c7ad5213e5f8de429084a8577e981cf1ca766aa081631f51e5c35b
+ size 3518986368
e5-R-mistral-7b-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:910f5e142e9ef5064f4e2f501aeef419e4e3da445a0d27b197511b4f29e48e0d
+ size 3164567680
e5-R-mistral-7b-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a632f11f5368618c239af7b5bd1fdd24d73f7744ed99d343413e25ec00ffc430
+ size 4108916864
e5-R-mistral-7b-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff63b603ce97157cd8ba677f530cf3248f83b7314c3689e0ab5da1710975d886
+ size 4368439424
e5-R-mistral-7b-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dcd82fad6944736fdf7d8925ce800a00d7297403290f152b6291d5439af02e58
+ size 4140374144
e5-R-mistral-7b-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:766e97ccf2af0dbad86645308acb6c38b35ceb8d87d6cc32cbbdc24e6e23510b
+ size 4997716096
e5-R-mistral-7b-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:619ebbbd2d3eba24ca17391edc12c583dac47f53613ab3cef5431ac84710efd8
+ size 5131409536
e5-R-mistral-7b-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a673d8d74b889d1d2bc4404d496bc431c740bd5c8d71417e0ce3e76b992fe537
+ size 4997716096
e5-R-mistral-7b-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5aeb64b7935689e0f18a8154a05bd387faae13bc80592b4ef9bae63767f05c74
+ size 5942065280
e5-R-mistral-7b-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4387681cf7711241d90622d3016d2d0c9983f025c9267a23b5641eb8057d91a
+ size 7695857792