consciousAI committed on
Commit dfdd97d
1 Parent(s): 1a34f8c

Upload 2 files

Files changed (2)
  1. README.md +73 -0
  2. mteb_metadata.md +1390 -0
README.md CHANGED
@@ -1,3 +1,76 @@
  ---
  license: apache-2.0
  ---
+
+ ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - transformers
+
+ ---
+
+ # {MODEL_NAME}
+
+ This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+ <!--- Describe your model here -->
+
+ ## Usage (Sentence-Transformers)
+
+ Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ sentences = ["This is an example sentence", "Each sentence is converted"]
+
+ model = SentenceTransformer('{MODEL_NAME}')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
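The embeddings returned above can be compared directly for semantic search or clustering. As a minimal sketch of that use (not part of the original card; `'{MODEL_NAME}'` is the card's placeholder), `sentence_transformers.util.cos_sim` scores a query against a small corpus:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # placeholder model name from this card

# Encode a query and a tiny corpus into dense vectors
query_embedding = model.encode("This is an example sentence", convert_to_tensor=True)
corpus_embeddings = model.encode(
    ["Each sentence is converted", "A completely unrelated sentence"],
    convert_to_tensor=True,
)

# Cosine-similarity scores between the query and each corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)  # shape (1, 2); higher means more similar
```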
+
+
+ ## Usage (HuggingFace Transformers)
+
+ Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModel
+
+
+ # Mean pooling - take the attention mask into account for correct averaging
+ def mean_pooling(model_output, attention_mask):
+     token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
+     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+
+ # Sentences we want sentence embeddings for
+ sentences = ['This is an example sentence', 'Each sentence is converted']
+
+ # Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
+ model = AutoModel.from_pretrained('{MODEL_NAME}')
+
+ # Tokenize sentences
+ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+ # Compute token embeddings
+ with torch.no_grad():
+     model_output = model(**encoded_input)
+
+ # Perform pooling. In this case, mean pooling.
+ sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+ print("Sentence embeddings:")
+ print(sentence_embeddings)
+ ```
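Continuing the snippet above (a hedged addition, not part of the original card), the mean-pooled embeddings can be L2-normalized so that a plain dot product equals cosine similarity:

```python
import torch.nn.functional as F

# Normalize the embeddings computed above; after this, dot product == cosine similarity
normalized_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
similarity_matrix = normalized_embeddings @ normalized_embeddings.T
print(similarity_matrix)  # 2x2; diagonal entries are 1.0
```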
mteb_metadata.md ADDED
@@ -0,0 +1,1390 @@
+ ---
+ tags:
+ - mteb
+ model-index:
+ - name: cai-lunaris-text-embeddings
+   results:
+   - task:
+       type: Retrieval
+     dataset:
+       type: arguana
+       name: MTEB ArguAna
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 17.07
+     - type: map_at_10
+       value: 29.372999999999998
+     - type: map_at_100
+       value: 30.79
+     - type: map_at_1000
+       value: 30.819999999999997
+     - type: map_at_3
+       value: 24.395
+     - type: map_at_5
+       value: 27.137
+     - type: mrr_at_1
+       value: 17.923000000000002
+     - type: mrr_at_10
+       value: 29.695
+     - type: mrr_at_100
+       value: 31.098
+     - type: mrr_at_1000
+       value: 31.128
+     - type: mrr_at_3
+       value: 24.704
+     - type: mrr_at_5
+       value: 27.449
+     - type: ndcg_at_1
+       value: 17.07
+     - type: ndcg_at_10
+       value: 37.269000000000005
+     - type: ndcg_at_100
+       value: 43.716
+     - type: ndcg_at_1000
+       value: 44.531
+     - type: ndcg_at_3
+       value: 26.839000000000002
+     - type: ndcg_at_5
+       value: 31.845000000000002
+     - type: precision_at_1
+       value: 17.07
+     - type: precision_at_10
+       value: 6.3020000000000005
+     - type: precision_at_100
+       value: 0.922
+     - type: precision_at_1000
+       value: 0.099
+     - type: precision_at_3
+       value: 11.309
+     - type: precision_at_5
+       value: 9.246
+     - type: recall_at_1
+       value: 17.07
+     - type: recall_at_10
+       value: 63.016000000000005
+     - type: recall_at_100
+       value: 92.24799999999999
+     - type: recall_at_1000
+       value: 98.72
+     - type: recall_at_3
+       value: 33.926
+     - type: recall_at_5
+       value: 46.23
+   - task:
+       type: Reranking
+     dataset:
+       type: mteb/askubuntudupquestions-reranking
+       name: MTEB AskUbuntuDupQuestions
+       config: default
+       split: test
+       revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
+     metrics:
+     - type: map
+       value: 53.44266265900711
+     - type: mrr
+       value: 66.54695950402322
+   - task:
+       type: STS
+     dataset:
+       type: mteb/biosses-sts
+       name: MTEB BIOSSES
+       config: default
+       split: test
+       revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
+     metrics:
+     - type: cos_sim_pearson
+       value: 75.9652953730204
+     - type: cos_sim_spearman
+       value: 73.96554077670989
+     - type: euclidean_pearson
+       value: 75.68477255792381
+     - type: euclidean_spearman
+       value: 74.59447076995703
+     - type: manhattan_pearson
+       value: 75.94984623881341
+     - type: manhattan_spearman
+       value: 74.72218452337502
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackAndroidRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 14.119000000000002
+     - type: map_at_10
+       value: 19.661
+     - type: map_at_100
+       value: 20.706
+     - type: map_at_1000
+       value: 20.848
+     - type: map_at_3
+       value: 17.759
+     - type: map_at_5
+       value: 18.645
+     - type: mrr_at_1
+       value: 17.166999999999998
+     - type: mrr_at_10
+       value: 23.313
+     - type: mrr_at_100
+       value: 24.263
+     - type: mrr_at_1000
+       value: 24.352999999999998
+     - type: mrr_at_3
+       value: 21.412
+     - type: mrr_at_5
+       value: 22.313
+     - type: ndcg_at_1
+       value: 17.166999999999998
+     - type: ndcg_at_10
+       value: 23.631
+     - type: ndcg_at_100
+       value: 28.427000000000003
+     - type: ndcg_at_1000
+       value: 31.862000000000002
+     - type: ndcg_at_3
+       value: 20.175
+     - type: ndcg_at_5
+       value: 21.397
+     - type: precision_at_1
+       value: 17.166999999999998
+     - type: precision_at_10
+       value: 4.549
+     - type: precision_at_100
+       value: 0.8370000000000001
+     - type: precision_at_1000
+       value: 0.136
+     - type: precision_at_3
+       value: 9.68
+     - type: precision_at_5
+       value: 6.981
+     - type: recall_at_1
+       value: 14.119000000000002
+     - type: recall_at_10
+       value: 32.147999999999996
+     - type: recall_at_100
+       value: 52.739999999999995
+     - type: recall_at_1000
+       value: 76.67
+     - type: recall_at_3
+       value: 22.019
+     - type: recall_at_5
+       value: 25.361
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackEnglishRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 16.576
+     - type: map_at_10
+       value: 22.281000000000002
+     - type: map_at_100
+       value: 23.066
+     - type: map_at_1000
+       value: 23.166
+     - type: map_at_3
+       value: 20.385
+     - type: map_at_5
+       value: 21.557000000000002
+     - type: mrr_at_1
+       value: 20.892
+     - type: mrr_at_10
+       value: 26.605
+     - type: mrr_at_100
+       value: 27.229
+     - type: mrr_at_1000
+       value: 27.296
+     - type: mrr_at_3
+       value: 24.809
+     - type: mrr_at_5
+       value: 25.927
+     - type: ndcg_at_1
+       value: 20.892
+     - type: ndcg_at_10
+       value: 26.092
+     - type: ndcg_at_100
+       value: 29.398999999999997
+     - type: ndcg_at_1000
+       value: 31.884
+     - type: ndcg_at_3
+       value: 23.032
+     - type: ndcg_at_5
+       value: 24.634
+     - type: precision_at_1
+       value: 20.892
+     - type: precision_at_10
+       value: 4.885
+     - type: precision_at_100
+       value: 0.818
+     - type: precision_at_1000
+       value: 0.126
+     - type: precision_at_3
+       value: 10.977
+     - type: precision_at_5
+       value: 8.013
+     - type: recall_at_1
+       value: 16.576
+     - type: recall_at_10
+       value: 32.945
+     - type: recall_at_100
+       value: 47.337
+     - type: recall_at_1000
+       value: 64.592
+     - type: recall_at_3
+       value: 24.053
+     - type: recall_at_5
+       value: 28.465
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackGamingRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 20.604
+     - type: map_at_10
+       value: 28.754999999999995
+     - type: map_at_100
+       value: 29.767
+     - type: map_at_1000
+       value: 29.852
+     - type: map_at_3
+       value: 26.268
+     - type: map_at_5
+       value: 27.559
+     - type: mrr_at_1
+       value: 24.326
+     - type: mrr_at_10
+       value: 31.602000000000004
+     - type: mrr_at_100
+       value: 32.46
+     - type: mrr_at_1000
+       value: 32.521
+     - type: mrr_at_3
+       value: 29.415000000000003
+     - type: mrr_at_5
+       value: 30.581000000000003
+     - type: ndcg_at_1
+       value: 24.326
+     - type: ndcg_at_10
+       value: 33.335
+     - type: ndcg_at_100
+       value: 38.086
+     - type: ndcg_at_1000
+       value: 40.319
+     - type: ndcg_at_3
+       value: 28.796
+     - type: ndcg_at_5
+       value: 30.758999999999997
+     - type: precision_at_1
+       value: 24.326
+     - type: precision_at_10
+       value: 5.712
+     - type: precision_at_100
+       value: 0.893
+     - type: precision_at_1000
+       value: 0.11499999999999999
+     - type: precision_at_3
+       value: 13.208
+     - type: precision_at_5
+       value: 9.329
+     - type: recall_at_1
+       value: 20.604
+     - type: recall_at_10
+       value: 44.505
+     - type: recall_at_100
+       value: 65.866
+     - type: recall_at_1000
+       value: 82.61800000000001
+     - type: recall_at_3
+       value: 31.794
+     - type: recall_at_5
+       value: 36.831
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackGisRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 8.280999999999999
+     - type: map_at_10
+       value: 11.636000000000001
+     - type: map_at_100
+       value: 12.363
+     - type: map_at_1000
+       value: 12.469
+     - type: map_at_3
+       value: 10.415000000000001
+     - type: map_at_5
+       value: 11.144
+     - type: mrr_at_1
+       value: 9.266
+     - type: mrr_at_10
+       value: 12.838
+     - type: mrr_at_100
+       value: 13.608999999999998
+     - type: mrr_at_1000
+       value: 13.700999999999999
+     - type: mrr_at_3
+       value: 11.507000000000001
+     - type: mrr_at_5
+       value: 12.343
+     - type: ndcg_at_1
+       value: 9.266
+     - type: ndcg_at_10
+       value: 13.877
+     - type: ndcg_at_100
+       value: 18.119
+     - type: ndcg_at_1000
+       value: 21.247
+     - type: ndcg_at_3
+       value: 11.376999999999999
+     - type: ndcg_at_5
+       value: 12.675
+     - type: precision_at_1
+       value: 9.266
+     - type: precision_at_10
+       value: 2.226
+     - type: precision_at_100
+       value: 0.47200000000000003
+     - type: precision_at_1000
+       value: 0.077
+     - type: precision_at_3
+       value: 4.859
+     - type: precision_at_5
+       value: 3.6380000000000003
+     - type: recall_at_1
+       value: 8.280999999999999
+     - type: recall_at_10
+       value: 19.872999999999998
+     - type: recall_at_100
+       value: 40.585
+     - type: recall_at_1000
+       value: 65.225
+     - type: recall_at_3
+       value: 13.014000000000001
+     - type: recall_at_5
+       value: 16.147
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackMathematicaRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 4.1209999999999996
+     - type: map_at_10
+       value: 7.272
+     - type: map_at_100
+       value: 8.079
+     - type: map_at_1000
+       value: 8.199
+     - type: map_at_3
+       value: 6.212
+     - type: map_at_5
+       value: 6.736000000000001
+     - type: mrr_at_1
+       value: 5.721
+     - type: mrr_at_10
+       value: 9.418
+     - type: mrr_at_100
+       value: 10.281
+     - type: mrr_at_1000
+       value: 10.385
+     - type: mrr_at_3
+       value: 8.126
+     - type: mrr_at_5
+       value: 8.779
+     - type: ndcg_at_1
+       value: 5.721
+     - type: ndcg_at_10
+       value: 9.673
+     - type: ndcg_at_100
+       value: 13.852999999999998
+     - type: ndcg_at_1000
+       value: 17.546999999999997
+     - type: ndcg_at_3
+       value: 7.509
+     - type: ndcg_at_5
+       value: 8.373
+     - type: precision_at_1
+       value: 5.721
+     - type: precision_at_10
+       value: 2.04
+     - type: precision_at_100
+       value: 0.48
+     - type: precision_at_1000
+       value: 0.093
+     - type: precision_at_3
+       value: 4.022
+     - type: precision_at_5
+       value: 3.06
+     - type: recall_at_1
+       value: 4.1209999999999996
+     - type: recall_at_10
+       value: 15.201
+     - type: recall_at_100
+       value: 33.922999999999995
+     - type: recall_at_1000
+       value: 61.529999999999994
+     - type: recall_at_3
+       value: 8.869
+     - type: recall_at_5
+       value: 11.257
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackPhysicsRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 14.09
+     - type: map_at_10
+       value: 19.573999999999998
+     - type: map_at_100
+       value: 20.580000000000002
+     - type: map_at_1000
+       value: 20.704
+     - type: map_at_3
+       value: 17.68
+     - type: map_at_5
+       value: 18.64
+     - type: mrr_at_1
+       value: 17.227999999999998
+     - type: mrr_at_10
+       value: 23.152
+     - type: mrr_at_100
+       value: 24.056
+     - type: mrr_at_1000
+       value: 24.141000000000002
+     - type: mrr_at_3
+       value: 21.142
+     - type: mrr_at_5
+       value: 22.201
+     - type: ndcg_at_1
+       value: 17.227999999999998
+     - type: ndcg_at_10
+       value: 23.39
+     - type: ndcg_at_100
+       value: 28.483999999999998
+     - type: ndcg_at_1000
+       value: 31.709
+     - type: ndcg_at_3
+       value: 19.883
+     - type: ndcg_at_5
+       value: 21.34
+     - type: precision_at_1
+       value: 17.227999999999998
+     - type: precision_at_10
+       value: 4.3790000000000004
+     - type: precision_at_100
+       value: 0.826
+     - type: precision_at_1000
+       value: 0.128
+     - type: precision_at_3
+       value: 9.496
+     - type: precision_at_5
+       value: 6.872
+     - type: recall_at_1
+       value: 14.09
+     - type: recall_at_10
+       value: 31.580000000000002
+     - type: recall_at_100
+       value: 54.074
+     - type: recall_at_1000
+       value: 77.092
+     - type: recall_at_3
+       value: 21.601
+     - type: recall_at_5
+       value: 25.333
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackProgrammersRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 10.538
+     - type: map_at_10
+       value: 15.75
+     - type: map_at_100
+       value: 16.71
+     - type: map_at_1000
+       value: 16.838
+     - type: map_at_3
+       value: 13.488
+     - type: map_at_5
+       value: 14.712
+     - type: mrr_at_1
+       value: 13.813
+     - type: mrr_at_10
+       value: 19.08
+     - type: mrr_at_100
+       value: 19.946
+     - type: mrr_at_1000
+       value: 20.044
+     - type: mrr_at_3
+       value: 16.838
+     - type: mrr_at_5
+       value: 17.951
+     - type: ndcg_at_1
+       value: 13.813
+     - type: ndcg_at_10
+       value: 19.669
+     - type: ndcg_at_100
+       value: 24.488
+     - type: ndcg_at_1000
+       value: 27.87
+     - type: ndcg_at_3
+       value: 15.479000000000001
+     - type: ndcg_at_5
+       value: 17.229
+     - type: precision_at_1
+       value: 13.813
+     - type: precision_at_10
+       value: 3.916
+     - type: precision_at_100
+       value: 0.743
+     - type: precision_at_1000
+       value: 0.122
+     - type: precision_at_3
+       value: 7.534000000000001
+     - type: precision_at_5
+       value: 5.822
+     - type: recall_at_1
+       value: 10.538
+     - type: recall_at_10
+       value: 28.693
+     - type: recall_at_100
+       value: 50.308
+     - type: recall_at_1000
+       value: 74.44
+     - type: recall_at_3
+       value: 16.866999999999997
+     - type: recall_at_5
+       value: 21.404999999999998
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 11.044583333333332
+     - type: map_at_10
+       value: 15.682833333333335
+     - type: map_at_100
+       value: 16.506500000000003
+     - type: map_at_1000
+       value: 16.623833333333334
+     - type: map_at_3
+       value: 14.130833333333333
+     - type: map_at_5
+       value: 14.963583333333332
+     - type: mrr_at_1
+       value: 13.482833333333332
+     - type: mrr_at_10
+       value: 18.328500000000002
+     - type: mrr_at_100
+       value: 19.095416666666665
+     - type: mrr_at_1000
+       value: 19.18241666666666
+     - type: mrr_at_3
+       value: 16.754749999999998
+     - type: mrr_at_5
+       value: 17.614749999999997
+     - type: ndcg_at_1
+       value: 13.482833333333332
+     - type: ndcg_at_10
+       value: 18.81491666666667
+     - type: ndcg_at_100
+       value: 22.946833333333334
+     - type: ndcg_at_1000
+       value: 26.061083333333336
+     - type: ndcg_at_3
+       value: 15.949333333333332
+     - type: ndcg_at_5
+       value: 17.218333333333334
+     - type: precision_at_1
+       value: 13.482833333333332
+     - type: precision_at_10
+       value: 3.456583333333333
+     - type: precision_at_100
+       value: 0.6599166666666666
+     - type: precision_at_1000
+       value: 0.109
+     - type: precision_at_3
+       value: 7.498833333333332
+     - type: precision_at_5
+       value: 5.477166666666667
+     - type: recall_at_1
+       value: 11.044583333333332
+     - type: recall_at_10
+       value: 25.737750000000005
+     - type: recall_at_100
+       value: 44.617916666666666
+     - type: recall_at_1000
+       value: 67.56524999999999
+     - type: recall_at_3
+       value: 17.598249999999997
+     - type: recall_at_5
+       value: 20.9035
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackStatsRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 9.362
+     - type: map_at_10
+       value: 13.414000000000001
+     - type: map_at_100
+       value: 14.083000000000002
+     - type: map_at_1000
+       value: 14.168
+     - type: map_at_3
+       value: 12.098
+     - type: map_at_5
+       value: 12.803999999999998
+     - type: mrr_at_1
+       value: 11.043
+     - type: mrr_at_10
+       value: 15.158
+     - type: mrr_at_100
+       value: 15.845999999999998
+     - type: mrr_at_1000
+       value: 15.916
+     - type: mrr_at_3
+       value: 13.88
+     - type: mrr_at_5
+       value: 14.601
+     - type: ndcg_at_1
+       value: 11.043
+     - type: ndcg_at_10
+       value: 16.034000000000002
+     - type: ndcg_at_100
+       value: 19.686
+     - type: ndcg_at_1000
+       value: 22.188
+     - type: ndcg_at_3
+       value: 13.530000000000001
+     - type: ndcg_at_5
+       value: 14.704
+     - type: precision_at_1
+       value: 11.043
+     - type: precision_at_10
+       value: 2.791
+     - type: precision_at_100
+       value: 0.5
+     - type: precision_at_1000
+       value: 0.077
+     - type: precision_at_3
+       value: 6.237
+     - type: precision_at_5
+       value: 4.5089999999999995
+     - type: recall_at_1
+       value: 9.362
+     - type: recall_at_10
+       value: 22.396
+     - type: recall_at_100
+       value: 39.528999999999996
+     - type: recall_at_1000
+       value: 58.809
+     - type: recall_at_3
+       value: 15.553
+     - type: recall_at_5
+       value: 18.512
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackTexRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 5.657
+     - type: map_at_10
+       value: 8.273
+     - type: map_at_100
+       value: 8.875
+     - type: map_at_1000
+       value: 8.977
+     - type: map_at_3
+       value: 7.32
+     - type: map_at_5
+       value: 7.792000000000001
+     - type: mrr_at_1
+       value: 7.02
+     - type: mrr_at_10
+       value: 9.966999999999999
+     - type: mrr_at_100
+       value: 10.636
+     - type: mrr_at_1000
+       value: 10.724
+     - type: mrr_at_3
+       value: 8.872
+     - type: mrr_at_5
+       value: 9.461
+     - type: ndcg_at_1
+       value: 7.02
+     - type: ndcg_at_10
+       value: 10.199
+     - type: ndcg_at_100
+       value: 13.642000000000001
+     - type: ndcg_at_1000
+       value: 16.643
+     - type: ndcg_at_3
+       value: 8.333
+     - type: ndcg_at_5
+       value: 9.103
+     - type: precision_at_1
+       value: 7.02
+     - type: precision_at_10
+       value: 1.8929999999999998
+     - type: precision_at_100
+       value: 0.43
+     - type: precision_at_1000
+       value: 0.08099999999999999
+     - type: precision_at_3
+       value: 3.843
+     - type: precision_at_5
+       value: 2.884
+     - type: recall_at_1
+       value: 5.657
+     - type: recall_at_10
+       value: 14.563
+     - type: recall_at_100
+       value: 30.807000000000002
+     - type: recall_at_1000
+       value: 53.251000000000005
+     - type: recall_at_3
+       value: 9.272
+     - type: recall_at_5
+       value: 11.202
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackUnixRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 10.671999999999999
+     - type: map_at_10
+       value: 14.651
+     - type: map_at_100
+       value: 15.406
+     - type: map_at_1000
+       value: 15.525
+     - type: map_at_3
+       value: 13.461
+     - type: map_at_5
+       value: 14.163
+     - type: mrr_at_1
+       value: 12.407
+     - type: mrr_at_10
+       value: 16.782
+     - type: mrr_at_100
+       value: 17.562
+     - type: mrr_at_1000
+       value: 17.653
+     - type: mrr_at_3
+       value: 15.47
+     - type: mrr_at_5
+       value: 16.262
+     - type: ndcg_at_1
+       value: 12.407
+     - type: ndcg_at_10
+       value: 17.251
+     - type: ndcg_at_100
+       value: 21.378
+     - type: ndcg_at_1000
+       value: 24.689
+     - type: ndcg_at_3
+       value: 14.915000000000001
+     - type: ndcg_at_5
+       value: 16.1
+     - type: precision_at_1
+       value: 12.407
+     - type: precision_at_10
+       value: 2.91
+     - type: precision_at_100
+       value: 0.573
+     - type: precision_at_1000
+       value: 0.096
+     - type: precision_at_3
+       value: 6.779
+     - type: precision_at_5
+       value: 4.888
+     - type: recall_at_1
+       value: 10.671999999999999
+     - type: recall_at_10
+       value: 23.099
+     - type: recall_at_100
+       value: 41.937999999999995
+     - type: recall_at_1000
+       value: 66.495
+     - type: recall_at_3
+       value: 16.901
+     - type: recall_at_5
+       value: 19.807
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackWebmastersRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 13.364
+     - type: map_at_10
+       value: 17.772
+     - type: map_at_100
+       value: 18.659
+     - type: map_at_1000
+       value: 18.861
+     - type: map_at_3
+       value: 16.659
+     - type: map_at_5
+       value: 17.174
+     - type: mrr_at_1
+       value: 16.996
+     - type: mrr_at_10
+       value: 21.687
+     - type: mrr_at_100
+       value: 22.313
+     - type: mrr_at_1000
+       value: 22.422
+     - type: mrr_at_3
+       value: 20.652
+     - type: mrr_at_5
+       value: 21.146
+     - type: ndcg_at_1
+       value: 16.996
+     - type: ndcg_at_10
+       value: 21.067
+     - type: ndcg_at_100
+       value: 24.829
+     - type: ndcg_at_1000
+       value: 28.866999999999997
+     - type: ndcg_at_3
+       value: 19.466
+     - type: ndcg_at_5
+       value: 19.993
+     - type: precision_at_1
+       value: 16.996
+     - type: precision_at_10
+       value: 4.071000000000001
+     - type: precision_at_100
+       value: 0.9329999999999999
+     - type: precision_at_1000
+       value: 0.183
+     - type: precision_at_3
+       value: 9.223
+     - type: precision_at_5
+       value: 6.4030000000000005
+     - type: recall_at_1
+       value: 13.364
+     - type: recall_at_10
+       value: 25.976
+     - type: recall_at_100
+       value: 44.134
+     - type: recall_at_1000
+       value: 73.181
+     - type: recall_at_3
+       value: 20.503
+     - type: recall_at_5
+       value: 22.409000000000002
+   - task:
+       type: Retrieval
+     dataset:
+       type: BeIR/cqadupstack
+       name: MTEB CQADupstackWordpressRetrieval
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 5.151
+     - type: map_at_10
+       value: 9.155000000000001
+     - type: map_at_100
+       value: 9.783999999999999
+     - type: map_at_1000
+       value: 9.879
+     - type: map_at_3
+       value: 7.825
+     - type: map_at_5
+       value: 8.637
+     - type: mrr_at_1
+       value: 5.915
+     - type: mrr_at_10
+       value: 10.34
+     - type: mrr_at_100
+       value: 10.943999999999999
+     - type: mrr_at_1000
+       value: 11.033
+     - type: mrr_at_3
+       value: 8.934000000000001
+     - type: mrr_at_5
+       value: 9.812
+     - type: ndcg_at_1
+       value: 5.915
+     - type: ndcg_at_10
+       value: 11.561
+     - type: ndcg_at_100
+       value: 14.971
+     - type: ndcg_at_1000
+       value: 17.907999999999998
+     - type: ndcg_at_3
+       value: 8.896999999999998
+     - type: ndcg_at_5
+       value: 10.313
+     - type: precision_at_1
+       value: 5.915
+     - type: precision_at_10
+       value: 2.1069999999999998
+     - type: precision_at_100
+       value: 0.414
+     - type: precision_at_1000
+       value: 0.074
+     - type: precision_at_3
+       value: 4.128
+     - type: precision_at_5
+       value: 3.327
+     - type: recall_at_1
+       value: 5.151
+     - type: recall_at_10
+       value: 17.874000000000002
+     - type: recall_at_100
+       value: 34.174
+     - type: recall_at_1000
+       value: 56.879999999999995
+     - type: recall_at_3
+       value: 10.732999999999999
+     - type: recall_at_5
+       value: 14.113000000000001
+   - task:
+       type: Retrieval
+     dataset:
+       type: climate-fever
+       name: MTEB ClimateFEVER
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 3.101
+     - type: map_at_10
+       value: 5.434
+     - type: map_at_100
+       value: 6.267
+     - type: map_at_1000
+       value: 6.418
+     - type: map_at_3
+       value: 4.377000000000001
+     - type: map_at_5
+       value: 4.841
+     - type: mrr_at_1
+       value: 7.166
+     - type: mrr_at_10
+       value: 12.012
+     - type: mrr_at_100
+       value: 13.144
+     - type: mrr_at_1000
+       value: 13.229
+     - type: mrr_at_3
+       value: 9.826
+     - type: mrr_at_5
+       value: 10.921
+     - type: ndcg_at_1
+       value: 7.166
+     - type: ndcg_at_10
+       value: 8.687000000000001
+     - type: ndcg_at_100
+       value: 13.345
+     - type: ndcg_at_1000
+       value: 16.915
+     - type: ndcg_at_3
+       value: 6.276
+     - type: ndcg_at_5
+       value: 7.013
+     - type: precision_at_1
+       value: 7.166
+     - type: precision_at_10
+       value: 2.9250000000000003
+     - type: precision_at_100
+       value: 0.771
+     - type: precision_at_1000
+       value: 0.13999999999999999
+     - type: precision_at_3
+       value: 4.734
+     - type: precision_at_5
+       value: 3.8830000000000005
+     - type: recall_at_1
+       value: 3.101
+     - type: recall_at_10
+       value: 11.774999999999999
+     - type: recall_at_100
+       value: 28.819
+     - type: recall_at_1000
+       value: 49.886
+     - type: recall_at_3
+       value: 5.783
+     - type: recall_at_5
+       value: 7.692
+   - task:
+       type: Retrieval
+     dataset:
+       type: dbpedia-entity
+       name: MTEB DBPedia
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 2.758
+     - type: map_at_10
+       value: 5.507
+     - type: map_at_100
+       value: 7.1819999999999995
+     - type: map_at_1000
+       value: 7.652
+     - type: map_at_3
+       value: 4.131
+     - type: map_at_5
+       value: 4.702
+     - type: mrr_at_1
+       value: 28.499999999999996
+     - type: mrr_at_10
+       value: 37.693
+     - type: mrr_at_100
+       value: 38.657000000000004
+     - type: mrr_at_1000
+       value: 38.704
+     - type: mrr_at_3
+       value: 34.792
+     - type: mrr_at_5
+       value: 36.417
+     - type: ndcg_at_1
+       value: 20.625
+     - type: ndcg_at_10
+       value: 14.771999999999998
+     - type: ndcg_at_100
+       value: 16.821
+     - type: ndcg_at_1000
+       value: 21.546000000000003
+     - type: ndcg_at_3
+       value: 16.528000000000002
+     - type: ndcg_at_5
+       value: 15.573
+     - type: precision_at_1
+       value: 28.499999999999996
+     - type: precision_at_10
+       value: 12.25
+     - type: precision_at_100
+       value: 3.7600000000000002
+     - type: precision_at_1000
+       value: 0.86
+     - type: precision_at_3
+       value: 19.167
+     - type: precision_at_5
+       value: 16.25
+     - type: recall_at_1
+       value: 2.758
+     - type: recall_at_10
+       value: 9.164
+     - type: recall_at_100
+       value: 21.022
+     - type: recall_at_1000
+       value: 37.053999999999995
+     - type: recall_at_3
+       value: 5.112
+     - type: recall_at_5
+       value: 6.413
+   - task:
+       type: Reranking
+     dataset:
+       type: mteb/mind_small
+       name: MTEB MindSmallReranking
+       config: default
+       split: test
+       revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
+     metrics:
+     - type: map
+       value: 28.53554681148413
+     - type: mrr
+       value: 29.290078704990325
+   - task:
+       type: STS
+     dataset:
+       type: mteb/sickr-sts
+       name: MTEB SICK-R
+       config: default
+       split: test
+       revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
+     metrics:
+     - type: cos_sim_pearson
+       value: 76.52926207453477
+     - type: cos_sim_spearman
+       value: 68.98528351149498
+     - type: euclidean_pearson
+       value: 73.7744559091218
+     - type: euclidean_spearman
+       value: 69.03481995814735
+     - type: manhattan_pearson
+       value: 73.72818267270651
+     - type: manhattan_spearman
+       value: 69.00576442086793
+   - task:
+       type: STS
+     dataset:
+       type: mteb/sts12-sts
+       name: MTEB STS12
+       config: default
+       split: test
+       revision: a0d554a64d88156834ff5ae9920b964011b16384
+     metrics:
+     - type: cos_sim_pearson
+       value: 61.71540153163407
+     - type: cos_sim_spearman
+       value: 58.502746406116614
+     - type: euclidean_pearson
+       value: 60.82817999438477
+     - type: euclidean_spearman
+       value: 58.988494433752756
+     - type: manhattan_pearson
+       value: 60.87147859170236
+     - type: manhattan_spearman
+       value: 59.03527382025516
+   - task:
+       type: STS
+     dataset:
+       type: mteb/sts13-sts
+       name: MTEB STS13
+       config: default
+       split: test
+       revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
+     metrics:
+     - type: cos_sim_pearson
+       value: 72.89990498692094
+     - type: cos_sim_spearman
+       value: 74.03028513377879
+     - type: euclidean_pearson
+       value: 73.8252088833803
+     - type: euclidean_spearman
+       value: 74.15554246478399
+     - type: manhattan_pearson
+       value: 73.80947397334666
+     - type: manhattan_spearman
+       value: 74.13117958176566
+   - task:
+       type: STS
+     dataset:
+       type: mteb/sts14-sts
+       name: MTEB STS14
+       config: default
+       split: test
+       revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
+     metrics:
+     - type: cos_sim_pearson
+       value: 70.67974206005906
+     - type: cos_sim_spearman
+       value: 66.18263558486296
+     - type: euclidean_pearson
+       value: 69.5048876024341
+     - type: euclidean_spearman
+       value: 66.36380457878391
+     - type: manhattan_pearson
+       value: 69.4895372451589
+     - type: manhattan_spearman
+       value: 66.36941569935124
+   - task:
+       type: STS
+     dataset:
+       type: mteb/sts15-sts
+       name: MTEB STS15
+       config: default
+       split: test
+       revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
+     metrics:
+     - type: cos_sim_pearson
+       value: 73.99856913569187
+     - type: cos_sim_spearman
+       value: 75.54712054246464
+     - type: euclidean_pearson
+       value: 74.55692573876115
+     - type: euclidean_spearman
+       value: 75.34499056740096
+     - type: manhattan_pearson
+       value: 74.59342318869683
+     - type: manhattan_spearman
+       value: 75.35708317926819
+   - task:
+       type: STS
+     dataset:
+       type: mteb/sts16-sts
+       name: MTEB STS16
+       config: default
+       split: test
+       revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
+     metrics:
+     - type: cos_sim_pearson
+       value: 72.3343670787494
+     - type: cos_sim_spearman
+       value: 73.7136650302399
+     - type: euclidean_pearson
+       value: 73.86004257913046
+     - type: euclidean_spearman
+       value: 73.9557418048638
+     - type: manhattan_pearson
+       value: 73.78919091538661
+     - type: manhattan_spearman
+       value: 73.86316425954108
+   - task:
+       type: STS
+     dataset:
+       type: mteb/sts17-crosslingual-sts
+       name: MTEB STS17 (en-en)
+       config: en-en
+       split: test
+       revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
+     metrics:
+     - type: cos_sim_pearson
+       value: 79.08159601556619
+     - type: cos_sim_spearman
+       value: 80.13910828685532
+     - type: euclidean_pearson
+       value: 79.39197806617453
+     - type: euclidean_spearman
+       value: 79.85692277871196
+     - type: manhattan_pearson
+       value: 79.32452246324705
+     - type: manhattan_spearman
+       value: 79.70120373587193
+   - task:
+       type: STS
+     dataset:
+       type: mteb/sts22-crosslingual-sts
+       name: MTEB STS22 (en)
+       config: en
+       split: test
+       revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
+     metrics:
+     - type: cos_sim_pearson
+       value: 62.29720207747786
+     - type: cos_sim_spearman
+       value: 65.65260681394685
+     - type: euclidean_pearson
+       value: 64.49002165983158
+     - type: euclidean_spearman
+       value: 65.25917651158736
+     - type: manhattan_pearson
+       value: 64.49981108236335
+     - type: manhattan_spearman
+       value: 65.20426825202405
+   - task:
+       type: STS
+     dataset:
+       type: mteb/stsbenchmark-sts
+       name: MTEB STSBenchmark
+       config: default
+       split: test
+       revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
+     metrics:
+     - type: cos_sim_pearson
+       value: 71.1871068550574
+     - type: cos_sim_spearman
+       value: 71.40167034949341
+     - type: euclidean_pearson
+       value: 72.2373684855404
+     - type: euclidean_spearman
+       value: 71.90255429812984
+     - type: manhattan_pearson
+       value: 72.23173532049509
+     - type: manhattan_spearman
+       value: 71.87843489689064
+   - task:
+       type: Reranking
+     dataset:
+       type: mteb/scidocs-reranking
+       name: MTEB SciDocsRR
+       config: default
+       split: test
+       revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
+     metrics:
+     - type: map
+       value: 68.65000574464773
+     - type: mrr
+       value: 88.29363084265044
+   - task:
+       type: Reranking
+     dataset:
+       type: mteb/stackoverflowdupquestions-reranking
+       name: MTEB StackOverflowDupQuestions
+       config: default
+       split: test
+       revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
+     metrics:
+     - type: map
+       value: 40.76107749144358
+     - type: mrr
+       value: 41.03689202953908
+   - task:
+       type: Summarization
+     dataset:
+       type: mteb/summeval
+       name: MTEB SummEval
+       config: default
+       split: test
+       revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
+     metrics:
+     - type: cos_sim_pearson
+       value: 28.68520527813894
+     - type: cos_sim_spearman
+       value: 29.017620841627433
+     - type: dot_pearson
+       value: 29.25380949876322
+     - type: dot_spearman
+       value: 29.33885250837327
+ ---
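The model-index block above is in the format produced by MTEB's metadata tooling. As a hedged sketch of how such results are generated (the task subset and output folder below are illustrative; `'{MODEL_NAME}'` is the card's placeholder), the `mteb` package can evaluate a sentence-transformers model and write per-task result files:

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('{MODEL_NAME}')  # placeholder model name from this card

# Run a few of the tasks reported above; each task writes a JSON results file
# under output_folder, which MTEB's scripts can convert into model-index YAML.
evaluation = MTEB(tasks=["ArguAna", "BIOSSES", "SICK-R"])
evaluation.run(model, output_folder="results")
```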