vprelovac committed on
Commit
fad2357
1 Parent(s): b1480d9

Update README.md

Files changed (1)
1. README.md +684 -1
README.md CHANGED
@@ -1,4 +1,687 @@
-
+---
+tags:
+- mteb
+model-index:
+- name: universal-sentence-encoder-large-5
+  results:
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/amazon_counterfactual
+      name: MTEB AmazonCounterfactualClassification (en)
+      config: en
+      split: test
+      revision: e8379541af4e31359cca9fbcf4b00f2671dba205
+    metrics:
+    - type: accuracy
+      value: 76.19402985074628
+    - type: ap
+      value: 39.249966888759666
+    - type: f1
+      value: 70.17510532980124
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/amazon_polarity
+      name: MTEB AmazonPolarityClassification
+      config: default
+      split: test
+      revision: e2d317d38cd51312af73b3d32a06d1a08b442046
+    metrics:
+    - type: accuracy
+      value: 69.6285
+    - type: ap
+      value: 63.97317997322299
+    - type: f1
+      value: 69.48624121982243
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/amazon_reviews_multi
+      name: MTEB AmazonReviewsClassification (en)
+      config: en
+      split: test
+      revision: 1399c76144fd37290681b995c656ef9b2e06e26d
+    metrics:
+    - type: accuracy
+      value: 35.534
+    - type: f1
+      value: 34.974303844745194
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/arxiv-clustering-p2p
+      name: MTEB ArxivClusteringP2P
+      config: default
+      split: test
+      revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
+    metrics:
+    - type: v_measure
+      value: 34.718110225806626
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/arxiv-clustering-s2s
+      name: MTEB ArxivClusteringS2S
+      config: default
+      split: test
+      revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
+    metrics:
+    - type: v_measure
+      value: 25.267234486849127
+  - task:
+      type: STS
+    dataset:
+      type: mteb/biosses-sts
+      name: MTEB BIOSSES
+      config: default
+      split: test
+      revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
+    metrics:
+    - type: cos_sim_pearson
+      value: 69.65040443392367
+    - type: cos_sim_spearman
+      value: 69.35579718635816
+    - type: euclidean_pearson
+      value: 68.74078260783044
+    - type: euclidean_spearman
+      value: 69.35579718635816
+    - type: manhattan_pearson
+      value: 68.97023207188357
+    - type: manhattan_spearman
+      value: 69.2063961917937
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/banking77
+      name: MTEB Banking77Classification
+      config: default
+      split: test
+      revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
+    metrics:
+    - type: accuracy
+      value: 78.12987012987013
+    - type: f1
+      value: 77.40193921057201
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/biorxiv-clustering-p2p
+      name: MTEB BiorxivClusteringP2P
+      config: default
+      split: test
+      revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
+    metrics:
+    - type: v_measure
+      value: 28.39184796722482
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/biorxiv-clustering-s2s
+      name: MTEB BiorxivClusteringS2S
+      config: default
+      split: test
+      revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
+    metrics:
+    - type: v_measure
+      value: 20.5151608432177
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/emotion
+      name: MTEB EmotionClassification
+      config: default
+      split: test
+      revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
+    metrics:
+    - type: accuracy
+      value: 45.48
+    - type: f1
+      value: 41.2632839288363
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/imdb
+      name: MTEB ImdbClassification
+      config: default
+      split: test
+      revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
+    metrics:
+    - type: accuracy
+      value: 64.0552
+    - type: ap
+      value: 59.25851636836455
+    - type: f1
+      value: 63.90501571634165
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/mtop_domain
+      name: MTEB MTOPDomainClassification (en)
+      config: en
+      split: test
+      revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
+    metrics:
+    - type: accuracy
+      value: 92.94117647058823
+    - type: f1
+      value: 92.7110107115347
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/mtop_intent
+      name: MTEB MTOPIntentClassification (en)
+      config: en
+      split: test
+      revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
+    metrics:
+    - type: accuracy
+      value: 74.43456452348381
+    - type: f1
+      value: 52.53178214778298
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/amazon_massive_intent
+      name: MTEB MassiveIntentClassification (en)
+      config: en
+      split: test
+      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
+    metrics:
+    - type: accuracy
+      value: 71.68796234028245
+    - type: f1
+      value: 68.47828954699564
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/amazon_massive_scenario
+      name: MTEB MassiveScenarioClassification (en)
+      config: en
+      split: test
+      revision: 7d571f92784cd94a019292a1f45445077d0ef634
+    metrics:
+    - type: accuracy
+      value: 77.20242098184264
+    - type: f1
+      value: 76.27977367157321
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/medrxiv-clustering-p2p
+      name: MTEB MedrxivClusteringP2P
+      config: default
+      split: test
+      revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
+    metrics:
+    - type: v_measure
+      value: 30.266855488757034
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/medrxiv-clustering-s2s
+      name: MTEB MedrxivClusteringS2S
+      config: default
+      split: test
+      revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
+    metrics:
+    - type: v_measure
+      value: 24.580327378539057
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/reddit-clustering
+      name: MTEB RedditClustering
+      config: default
+      split: test
+      revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
+    metrics:
+    - type: v_measure
+      value: 56.928616405043684
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/reddit-clustering-p2p
+      name: MTEB RedditClusteringP2P
+      config: default
+      split: test
+      revision: 282350215ef01743dc01b456c7f5241fa8937f16
+    metrics:
+    - type: v_measure
+      value: 58.94536303256525
+  - task:
+      type: STS
+    dataset:
+      type: mteb/sickr-sts
+      name: MTEB SICK-R
+      config: default
+      split: test
+      revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
+    metrics:
+    - type: cos_sim_pearson
+      value: 82.43899708996477
+    - type: cos_sim_spearman
+      value: 76.84011555220044
+    - type: euclidean_pearson
+      value: 79.6116260676631
+    - type: euclidean_spearman
+      value: 76.84012073472658
+    - type: manhattan_pearson
+      value: 78.49980966442152
+    - type: manhattan_spearman
+      value: 75.49233078465171
+  - task:
+      type: STS
+    dataset:
+      type: mteb/sts12-sts
+      name: MTEB STS12
+      config: default
+      split: test
+      revision: a0d554a64d88156834ff5ae9920b964011b16384
+    metrics:
+    - type: cos_sim_pearson
+      value: 79.8291506264289
+    - type: cos_sim_spearman
+      value: 72.49093632759003
+    - type: euclidean_pearson
+      value: 75.42130137819414
+    - type: euclidean_spearman
+      value: 72.49048089395136
+    - type: manhattan_pearson
+      value: 74.17957476459091
+    - type: manhattan_spearman
+      value: 71.6143674273714
+  - task:
+      type: STS
+    dataset:
+      type: mteb/sts13-sts
+      name: MTEB STS13
+      config: default
+      split: test
+      revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
+    metrics:
+    - type: cos_sim_pearson
+      value: 70.91903439531401
+    - type: cos_sim_spearman
+      value: 73.65106317244273
+    - type: euclidean_pearson
+      value: 73.22383725261588
+    - type: euclidean_spearman
+      value: 73.65106317244273
+    - type: manhattan_pearson
+      value: 72.98314057093636
+    - type: manhattan_spearman
+      value: 73.52101907069579
+  - task:
+      type: STS
+    dataset:
+      type: mteb/sts14-sts
+      name: MTEB STS14
+      config: default
+      split: test
+      revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
+    metrics:
+    - type: cos_sim_pearson
+      value: 75.19632733755482
+    - type: cos_sim_spearman
+      value: 71.88328402076041
+    - type: euclidean_pearson
+      value: 74.02395011081532
+    - type: euclidean_spearman
+      value: 71.88328903479953
+    - type: manhattan_pearson
+      value: 73.52941749980135
+    - type: manhattan_spearman
+      value: 71.32905921324534
+  - task:
+      type: STS
+    dataset:
+      type: mteb/sts15-sts
+      name: MTEB STS15
+      config: default
+      split: test
+      revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
+    metrics:
+    - type: cos_sim_pearson
+      value: 82.42736501667461
+    - type: cos_sim_spearman
+      value: 82.89997148218205
+    - type: euclidean_pearson
+      value: 82.3189209945513
+    - type: euclidean_spearman
+      value: 82.89997089267106
+    - type: manhattan_pearson
+      value: 81.78597437071429
+    - type: manhattan_spearman
+      value: 82.21582873302081
+  - task:
+      type: STS
+    dataset:
+      type: mteb/sts16-sts
+      name: MTEB STS16
+      config: default
+      split: test
+      revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
+    metrics:
+    - type: cos_sim_pearson
+      value: 78.44968010602165
+    - type: cos_sim_spearman
+      value: 79.82626284236876
+    - type: euclidean_pearson
+      value: 79.4157474030238
+    - type: euclidean_spearman
+      value: 79.82626269881543
+    - type: manhattan_pearson
+      value: 79.13275737559012
+    - type: manhattan_spearman
+      value: 79.4847570398719
+  - task:
+      type: STS
+    dataset:
+      type: mteb/sts17-crosslingual-sts
+      name: MTEB STS17 (en-en)
+      config: en-en
+      split: test
+      revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
+    metrics:
+    - type: cos_sim_pearson
+      value: 84.51882547098218
+    - type: cos_sim_spearman
+      value: 85.19309361840223
+    - type: euclidean_pearson
+      value: 84.78417242196153
+    - type: euclidean_spearman
+      value: 85.19307726106497
+    - type: manhattan_pearson
+      value: 84.09108278425708
+    - type: manhattan_spearman
+      value: 84.13590986630149
+  - task:
+      type: STS
+    dataset:
+      type: mteb/sts22-crosslingual-sts
+      name: MTEB STS22 (en)
+      config: en
+      split: test
+      revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
+    metrics:
+    - type: cos_sim_pearson
+      value: 44.814384769251085
+    - type: cos_sim_spearman
+      value: 48.43949857027059
+    - type: euclidean_pearson
+      value: 47.479132435178855
+    - type: euclidean_spearman
+      value: 48.43949857027059
+    - type: manhattan_pearson
+      value: 47.16203934707649
+    - type: manhattan_spearman
+      value: 48.289920897667095
+  - task:
+      type: STS
+    dataset:
+      type: mteb/stsbenchmark-sts
+      name: MTEB STSBenchmark
+      config: default
+      split: test
+      revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
+    metrics:
+    - type: cos_sim_pearson
+      value: 81.25646447054616
+    - type: cos_sim_spearman
+      value: 79.93231051166357
+    - type: euclidean_pearson
+      value: 80.65225742476945
+    - type: euclidean_spearman
+      value: 79.93231051166357
+    - type: manhattan_pearson
+      value: 79.84341819764376
+    - type: manhattan_spearman
+      value: 79.07650150491334
+  - task:
+      type: PairClassification
+    dataset:
+      type: mteb/sprintduplicatequestions-pairclassification
+      name: MTEB SprintDuplicateQuestions
+      config: default
+      split: test
+      revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
+    metrics:
+    - type: cos_sim_accuracy
+      value: 99.5910891089109
+    - type: cos_sim_ap
+      value: 84.37184771930944
+    - type: cos_sim_f1
+      value: 78.78787878787878
+    - type: cos_sim_precision
+      value: 80.99260823653644
+    - type: cos_sim_recall
+      value: 76.7
+    - type: dot_accuracy
+      value: 99.5910891089109
+    - type: dot_ap
+      value: 84.37184771930944
+    - type: dot_f1
+      value: 78.78787878787878
+    - type: dot_precision
+      value: 80.99260823653644
+    - type: dot_recall
+      value: 76.7
+    - type: euclidean_accuracy
+      value: 99.5910891089109
+    - type: euclidean_ap
+      value: 84.37185436709098
+    - type: euclidean_f1
+      value: 78.78787878787878
+    - type: euclidean_precision
+      value: 80.99260823653644
+    - type: euclidean_recall
+      value: 76.7
+    - type: manhattan_accuracy
+      value: 99.6108910891089
+    - type: manhattan_ap
+      value: 85.13355467581354
+    - type: manhattan_f1
+      value: 80.2788844621514
+    - type: manhattan_precision
+      value: 79.96031746031747
+    - type: manhattan_recall
+      value: 80.60000000000001
+    - type: max_accuracy
+      value: 99.6108910891089
+    - type: max_ap
+      value: 85.13355467581354
+    - type: max_f1
+      value: 80.2788844621514
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/stackexchange-clustering
+      name: MTEB StackExchangeClustering
+      config: default
+      split: test
+      revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
+    metrics:
+    - type: v_measure
+      value: 60.8469558550317
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/stackexchange-clustering-p2p
+      name: MTEB StackExchangeClusteringP2P
+      config: default
+      split: test
+      revision: 815ca46b2622cec33ccafc3735d572c266efdb44
+    metrics:
+    - type: v_measure
+      value: 33.14392913702168
+  - task:
+      type: Summarization
+    dataset:
+      type: mteb/summeval
+      name: MTEB SummEval
+      config: default
+      split: test
+      revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
+    metrics:
+    - type: cos_sim_pearson
+      value: 29.566148619704457
+    - type: cos_sim_spearman
+      value: 29.01201818902588
+    - type: dot_pearson
+      value: 29.566149876183374
+    - type: dot_spearman
+      value: 29.014046950422795
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/toxic_conversations_50k
+      name: MTEB ToxicConversationsClassification
+      config: default
+      split: test
+      revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
+    metrics:
+    - type: accuracy
+      value: 70.17420000000001
+    - type: ap
+      value: 13.49623412034604
+    - type: f1
+      value: 53.7079366494688
+  - task:
+      type: Classification
+    dataset:
+      type: mteb/tweet_sentiment_extraction
+      name: MTEB TweetSentimentExtractionClassification
+      config: default
+      split: test
+      revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
+    metrics:
+    - type: accuracy
+      value: 59.309564233163556
+    - type: f1
+      value: 59.33623172630094
+  - task:
+      type: Clustering
+    dataset:
+      type: mteb/twentynewsgroups-clustering
+      name: MTEB TwentyNewsgroupsClustering
+      config: default
+      split: test
+      revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
+    metrics:
+    - type: v_measure
+      value: 42.42960819361032
+  - task:
+      type: PairClassification
+    dataset:
+      type: mteb/twittersemeval2015-pairclassification
+      name: MTEB TwitterSemEval2015
+      config: default
+      split: test
+      revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
+    metrics:
+    - type: cos_sim_accuracy
+      value: 85.04500208618943
+    - type: cos_sim_ap
+      value: 70.12785509302904
+    - type: cos_sim_f1
+      value: 65.36573392243496
+    - type: cos_sim_precision
+      value: 61.10601193207894
+    - type: cos_sim_recall
+      value: 70.26385224274406
+    - type: dot_accuracy
+      value: 85.04500208618943
+    - type: dot_ap
+      value: 70.12785837450095
+    - type: dot_f1
+      value: 65.36573392243496
+    - type: dot_precision
+      value: 61.10601193207894
+    - type: dot_recall
+      value: 70.26385224274406
+    - type: euclidean_accuracy
+      value: 85.04500208618943
+    - type: euclidean_ap
+      value: 70.1278575285826
+    - type: euclidean_f1
+      value: 65.36573392243496
+    - type: euclidean_precision
+      value: 61.10601193207894
+    - type: euclidean_recall
+      value: 70.26385224274406
+    - type: manhattan_accuracy
+      value: 85.03308100375514
+    - type: manhattan_ap
+      value: 69.67192372362932
+    - type: manhattan_f1
+      value: 64.95726495726495
+    - type: manhattan_precision
+      value: 61.218771888862946
+    - type: manhattan_recall
+      value: 69.1820580474934
+    - type: max_accuracy
+      value: 85.04500208618943
+    - type: max_ap
+      value: 70.12785837450095
+    - type: max_f1
+      value: 65.36573392243496
+  - task:
+      type: PairClassification
+    dataset:
+      type: mteb/twitterurlcorpus-pairclassification
+      name: MTEB TwitterURLCorpus
+      config: default
+      split: test
+      revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
+    metrics:
+    - type: cos_sim_accuracy
+      value: 88.18644002018085
+    - type: cos_sim_ap
+      value: 84.09120337117118
+    - type: cos_sim_f1
+      value: 76.33478718604302
+    - type: cos_sim_precision
+      value: 74.59582598471486
+    - type: cos_sim_recall
+      value: 78.15676008623345
+    - type: dot_accuracy
+      value: 88.18644002018085
+    - type: dot_ap
+      value: 84.09120289232122
+    - type: dot_f1
+      value: 76.33478718604302
+    - type: dot_precision
+      value: 74.59582598471486
+    - type: dot_recall
+      value: 78.15676008623345
+    - type: euclidean_accuracy
+      value: 88.18644002018085
+    - type: euclidean_ap
+      value: 84.091202102378
+    - type: euclidean_f1
+      value: 76.33478718604302
+    - type: euclidean_precision
+      value: 74.59582598471486
+    - type: euclidean_recall
+      value: 78.15676008623345
+    - type: manhattan_accuracy
+      value: 88.19032095315714
+    - type: manhattan_ap
+      value: 84.0865561436236
+    - type: manhattan_f1
+      value: 76.16665422235496
+    - type: manhattan_precision
+      value: 73.93100449340484
+    - type: manhattan_recall
+      value: 78.54173082845703
+    - type: max_accuracy
+      value: 88.19032095315714
+    - type: max_ap
+      value: 84.09120337117118
+    - type: max_f1
+      value: 76.33478718604302
+---
 This is a part of the [MTEB test](https://huggingface.co/spaces/mteb/leaderboard).
 
  ```
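
For context, here is a minimal sketch of how scores like those in the model-index above could be produced with the `mteb` evaluation harness that feeds the linked leaderboard. It assumes the `mteb`, `tensorflow`, and `tensorflow_hub` packages are installed and uses the public TF Hub handle for Universal Sentence Encoder Large v5; the wrapper class name, chosen task, and output folder are illustrative, not part of this repository.

```python
# Minimal sketch (not part of this repository): evaluate the TF Hub
# universal-sentence-encoder-large/5 model on one MTEB task.
# Assumed prerequisites: pip install mteb tensorflow tensorflow_hub
import numpy as np
import tensorflow_hub as hub
from mteb import MTEB


class USELargeV5:
    """Thin wrapper exposing the encode() interface that mteb expects."""

    def __init__(self):
        # Public TF Hub handle for Universal Sentence Encoder Large v5.
        self.model = hub.load(
            "https://tfhub.dev/google/universal-sentence-encoder-large/5"
        )

    def encode(self, sentences, batch_size=32, **kwargs):
        # Encode in batches and return a (n_sentences, 512) numpy array.
        embeddings = []
        for i in range(0, len(sentences), batch_size):
            embeddings.append(self.model(sentences[i : i + batch_size]).numpy())
        return np.concatenate(embeddings, axis=0)


if __name__ == "__main__":
    # Run a single task; the model-index above covers many more.
    evaluation = MTEB(tasks=["Banking77Classification"])
    evaluation.run(
        USELargeV5(),
        output_folder="results/universal-sentence-encoder-large-5",
    )
```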