---
tags:
- mteb
model-index:
- name: piccolo-large-zh-v2
  results:
  - task:
      type: STS
    dataset:
      type: C-MTEB/AFQMC
      name: MTEB AFQMC
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 56.76055988260572
    - type: cos_sim_spearman
      value: 61.49271876861677
    - type: euclidean_pearson
      value: 59.14524585320711
    - type: euclidean_spearman
      value: 60.63579339225774
    - type: manhattan_pearson
      value: 59.14662752965445
    - type: manhattan_spearman
      value: 60.635190265737904
  - task:
      type: STS
    dataset:
      type: C-MTEB/ATEC
      name: MTEB ATEC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 56.21706298831197
    - type: cos_sim_spearman
      value: 59.19831457688953
    - type: euclidean_pearson
      value: 62.37752017633299
    - type: euclidean_spearman
      value: 58.79400967473204
    - type: manhattan_pearson
      value: 62.37015943212308
    - type: manhattan_spearman
      value: 58.79232537600814
  - task:
      type: Classification
    dataset:
      type: mteb/amazon_reviews_multi
      name: MTEB AmazonReviewsClassification (zh)
      config: zh
      split: test
      revision: 1399c76144fd37290681b995c656ef9b2e06e26d
    metrics:
    - type: accuracy
      value: 49.440000000000005
    - type: f1
      value: 46.67381446305019
  - task:
      type: STS
    dataset:
      type: C-MTEB/BQ
      name: MTEB BQ
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 70.99026329599994
    - type: cos_sim_spearman
      value: 72.87565357908989
    - type: euclidean_pearson
      value: 71.17690439270028
    - type: euclidean_spearman
      value: 72.50428109969029
    - type: manhattan_pearson
      value: 71.17262321033088
    - type: manhattan_spearman
      value: 72.49845447987437
  - task:
      type: Clustering
    dataset:
      type: C-MTEB/CLSClusteringP2P
      name: MTEB CLSClusteringP2P
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 57.92713421071616
  - task:
      type: Clustering
    dataset:
      type: C-MTEB/CLSClusteringS2S
      name: MTEB CLSClusteringS2S
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 48.096546680932235
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/CMedQAv1-reranking
      name: MTEB CMedQAv1
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 89.31003741715936
    - type: mrr
      value: 91.38075396825397
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/CMedQAv2-reranking
      name: MTEB CMedQAv2
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 90.13769781784876
    - type: mrr
      value: 92.14329365079365
  - task:
      type: Retrieval
    dataset:
      type: C-MTEB/CmedqaRetrieval
      name: MTEB CmedqaRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 26.931
    - type: map_at_10
      value: 40.647
    - type: map_at_100
      value: 42.519
    - type: map_at_1000
      value: 42.616
    - type: map_at_3
      value: 36.144999999999996
    - type: map_at_5
      value: 38.717
    - type: mrr_at_1
      value: 40.935
    - type: mrr_at_10
      value: 49.684
    - type: mrr_at_100
      value: 50.598
    - type: mrr_at_1000
      value: 50.632999999999996
    - type: mrr_at_3
      value: 47.07
    - type: mrr_at_5
      value: 48.49
    - type: ndcg_at_1
      value: 40.935
    - type: ndcg_at_10
      value: 47.583999999999996
    - type: ndcg_at_100
      value: 54.69199999999999
    - type: ndcg_at_1000
      value: 56.314
    - type: ndcg_at_3
      value: 41.973
    - type: ndcg_at_5
      value: 44.334
    - type: precision_at_1
      value: 40.935
    - type: precision_at_10
      value: 10.585
    - type: precision_at_100
      value: 1.637
    - type: precision_at_1000
      value: 0.184
    - type: precision_at_3
      value: 23.881
    - type: precision_at_5
      value: 17.399
    - type: recall_at_1
      value: 26.931
    - type: recall_at_10
      value: 59.006
    - type: recall_at_100
      value: 88.247
    - type: recall_at_1000
      value: 99.045
    - type: recall_at_3
      value: 42.064
    - type: recall_at_5
      value: 49.266
  - task:
      type: PairClassification
    dataset:
      type: C-MTEB/CMNLI
      name: MTEB Cmnli
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_accuracy
      value: 86.08538785327721
    - type: cos_sim_ap
      value: 92.64373114205229
    - type: cos_sim_f1
      value: 86.89951395953432
    - type: cos_sim_precision
      value: 84.11378555798687
    - type: cos_sim_recall
      value: 89.87608136544307
    - type: dot_accuracy
      value: 72.66386049308478
    - type: dot_ap
      value: 81.053422935767
    - type: dot_f1
      value: 75.19933726830277
    - type: dot_precision
      value: 67.4907063197026
    - type: dot_recall
      value: 84.89595510872107
    - type: euclidean_accuracy
      value: 85.52014431749849
    - type: euclidean_ap
      value: 91.90647782899615
    - type: euclidean_f1
      value: 86.26361413647477
    - type: euclidean_precision
      value: 82.2071595001059
    - type: euclidean_recall
      value: 90.74117371989713
    - type: manhattan_accuracy
      value: 85.48406494287433
    - type: manhattan_ap
      value: 91.89657919524385
    - type: manhattan_f1
      value: 86.20413761572752
    - type: manhattan_precision
      value: 84.324686940966
    - type: manhattan_recall
      value: 88.16927753097966
    - type: max_accuracy
      value: 86.08538785327721
    - type: max_ap
      value: 92.64373114205229
    - type: max_f1
      value: 86.89951395953432
  - task:
      type: Retrieval
    dataset:
      type: C-MTEB/CovidRetrieval
      name: MTEB CovidRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 75.50099999999999
    - type: map_at_10
      value: 83.43
    - type: map_at_100
      value: 83.577
    - type: map_at_1000
      value: 83.57900000000001
    - type: map_at_3
      value: 82.06400000000001
    - type: map_at_5
      value: 82.88600000000001
    - type: mrr_at_1
      value: 75.869
    - type: mrr_at_10
      value: 83.536
    - type: mrr_at_100
      value: 83.682
    - type: mrr_at_1000
      value: 83.68299999999999
    - type: mrr_at_3
      value: 82.244
    - type: mrr_at_5
      value: 82.998
    - type: ndcg_at_1
      value: 75.764
    - type: ndcg_at_10
      value: 86.777
    - type: ndcg_at_100
      value: 87.36
    - type: ndcg_at_1000
      value: 87.424
    - type: ndcg_at_3
      value: 84.10300000000001
    - type: ndcg_at_5
      value: 85.532
    - type: precision_at_1
      value: 75.764
    - type: precision_at_10
      value: 9.8
    - type: precision_at_100
      value: 1.005
    - type: precision_at_1000
      value: 0.101
    - type: precision_at_3
      value: 30.207
    - type: precision_at_5
      value: 18.82
    - type: recall_at_1
      value: 75.50099999999999
    - type: recall_at_10
      value: 96.997
    - type: recall_at_100
      value: 99.473
    - type: recall_at_1000
      value: 100.0
    - type: recall_at_3
      value: 89.831
    - type: recall_at_5
      value: 93.256
  - task:
      type: Retrieval
    dataset:
      type: C-MTEB/DuRetrieval
      name: MTEB DuRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 27.094
    - type: map_at_10
      value: 82.418
    - type: map_at_100
      value: 85.05
    - type: map_at_1000
      value: 85.083
    - type: map_at_3
      value: 57.68600000000001
    - type: map_at_5
      value: 72.476
    - type: mrr_at_1
      value: 92.25
    - type: mrr_at_10
      value: 94.621
    - type: mrr_at_100
      value: 94.675
    - type: mrr_at_1000
      value: 94.677
    - type: mrr_at_3
      value: 94.375
    - type: mrr_at_5
      value: 94.52199999999999
    - type: ndcg_at_1
      value: 92.25
    - type: ndcg_at_10
      value: 89.13600000000001
    - type: ndcg_at_100
      value: 91.532
    - type: ndcg_at_1000
      value: 91.836
    - type: ndcg_at_3
      value: 88.50099999999999
    - type: ndcg_at_5
      value: 87.251
    - type: precision_at_1
      value: 92.25
    - type: precision_at_10
      value: 42.295
    - type: precision_at_100
      value: 4.812
    - type: precision_at_1000
      value: 0.48900000000000005
    - type: precision_at_3
      value: 79.167
    - type: precision_at_5
      value: 66.56
    - type: recall_at_1
      value: 27.094
    - type: recall_at_10
      value: 89.816
    - type: recall_at_100
      value: 97.855
    - type: recall_at_1000
      value: 99.384
    - type: recall_at_3
      value: 59.557
    - type: recall_at_5
      value: 76.395
  - task:
      type: Retrieval
    dataset:
      type: C-MTEB/EcomRetrieval
      name: MTEB EcomRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 53.6
    - type: map_at_10
      value: 62.985
    - type: map_at_100
      value: 63.532999999999994
    - type: map_at_1000
      value: 63.546
    - type: map_at_3
      value: 60.617
    - type: map_at_5
      value: 62.017
    - type: mrr_at_1
      value: 53.6
    - type: mrr_at_10
      value: 62.985
    - type: mrr_at_100
      value: 63.532999999999994
    - type: mrr_at_1000
      value: 63.546
    - type: mrr_at_3
      value: 60.617
    - type: mrr_at_5
      value: 62.017
    - type: ndcg_at_1
      value: 53.6
    - type: ndcg_at_10
      value: 67.755
    - type: ndcg_at_100
      value: 70.366
    - type: ndcg_at_1000
      value: 70.696
    - type: ndcg_at_3
      value: 62.89900000000001
    - type: ndcg_at_5
      value: 65.437
    - type: precision_at_1
      value: 53.6
    - type: precision_at_10
      value: 8.28
    - type: precision_at_100
      value: 0.9490000000000001
    - type: precision_at_1000
      value: 0.098
    - type: precision_at_3
      value: 23.166999999999998
    - type: precision_at_5
      value: 15.14
    - type: recall_at_1
      value: 53.6
    - type: recall_at_10
      value: 82.8
    - type: recall_at_100
      value: 94.89999999999999
    - type: recall_at_1000
      value: 97.5
    - type: recall_at_3
      value: 69.5
    - type: recall_at_5
      value: 75.7
  - task:
      type: Classification
    dataset:
      type: C-MTEB/IFlyTek-classification
      name: MTEB IFlyTek
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 52.104655636783384
    - type: f1
      value: 41.025743582860514
  - task:
      type: Classification
    dataset:
      type: C-MTEB/JDReview-classification
      name: MTEB JDReview
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 88.57410881801127
    - type: ap
      value: 59.49612312498937
    - type: f1
      value: 83.70595013666741
  - task:
      type: STS
    dataset:
      type: C-MTEB/LCQMC
      name: MTEB LCQMC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 74.00327736048256
    - type: cos_sim_spearman
      value: 79.5459672237356
    - type: euclidean_pearson
      value: 79.18300205389669
    - type: euclidean_spearman
      value: 79.21872988987533
    - type: manhattan_pearson
      value: 79.1715470733081
    - type: manhattan_spearman
      value: 79.20756273498812
  - task:
      type: Retrieval
    dataset:
      type: C-MTEB/MMarcoRetrieval
      name: MTEB MMarcoRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 66.94600000000001
    - type: map_at_10
      value: 75.947
    - type: map_at_100
      value: 76.268
    - type: map_at_1000
      value: 76.28
    - type: map_at_3
      value: 74.13300000000001
    - type: map_at_5
      value: 75.28399999999999
    - type: mrr_at_1
      value: 69.241
    - type: mrr_at_10
      value: 76.532
    - type: mrr_at_100
      value: 76.816
    - type: mrr_at_1000
      value: 76.827
    - type: mrr_at_3
      value: 74.95
    - type: mrr_at_5
      value: 75.957
    - type: ndcg_at_1
      value: 69.241
    - type: ndcg_at_10
      value: 79.54299999999999
    - type: ndcg_at_100
      value: 80.95
    - type: ndcg_at_1000
      value: 81.252
    - type: ndcg_at_3
      value: 76.119
    - type: ndcg_at_5
      value: 78.069
    - type: precision_at_1
      value: 69.241
    - type: precision_at_10
      value: 9.576
    - type: precision_at_100
      value: 1.026
    - type: precision_at_1000
      value: 0.105
    - type: precision_at_3
      value: 28.571999999999996
    - type: precision_at_5
      value: 18.181
    - type: recall_at_1
      value: 66.94600000000001
    - type: recall_at_10
      value: 90.024
    - type: recall_at_100
      value: 96.3
    - type: recall_at_1000
      value: 98.656
    - type: recall_at_3
      value: 81.026
    - type: recall_at_5
      value: 85.658
  - task:
      type: Classification
    dataset:
      type: mteb/amazon_massive_intent
      name: MTEB MassiveIntentClassification (zh-CN)
      config: zh-CN
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 77.71015467383997
    - type: f1
      value: 74.32345894845358
  - task:
      type: Classification
    dataset:
      type: mteb/amazon_massive_scenario
      name: MTEB MassiveScenarioClassification (zh-CN)
      config: zh-CN
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 85.63214525891055
    - type: f1
      value: 84.65303466003252
  - task:
      type: Retrieval
    dataset:
      type: C-MTEB/MedicalRetrieval
      name: MTEB MedicalRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 55.50000000000001
    - type: map_at_10
      value: 61.66199999999999
    - type: map_at_100
      value: 62.13999999999999
    - type: map_at_1000
      value: 62.187000000000005
    - type: map_at_3
      value: 59.967000000000006
    - type: map_at_5
      value: 60.927
    - type: mrr_at_1
      value: 55.7
    - type: mrr_at_10
      value: 61.76199999999999
    - type: mrr_at_100
      value: 62.241
    - type: mrr_at_1000
      value: 62.287000000000006
    - type: mrr_at_3
      value: 60.06700000000001
    - type: mrr_at_5
      value: 61.027
    - type: ndcg_at_1
      value: 55.50000000000001
    - type: ndcg_at_10
      value: 64.878
    - type: ndcg_at_100
      value: 67.464
    - type: ndcg_at_1000
      value: 68.745
    - type: ndcg_at_3
      value: 61.367000000000004
    - type: ndcg_at_5
      value: 63.117999999999995
    - type: precision_at_1
      value: 55.50000000000001
    - type: precision_at_10
      value: 7.51
    - type: precision_at_100
      value: 0.878
    - type: precision_at_1000
      value: 0.098
    - type: precision_at_3
      value: 21.8
    - type: precision_at_5
      value: 13.94
    - type: recall_at_1
      value: 55.50000000000001
    - type: recall_at_10
      value: 75.1
    - type: recall_at_100
      value: 87.8
    - type: recall_at_1000
      value: 97.89999999999999
    - type: recall_at_3
      value: 65.4
    - type: recall_at_5
      value: 69.69999999999999
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/Mmarco-reranking
      name: MTEB MMarcoReranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 33.386980266936106
    - type: mrr
      value: 32.11904761904762
  - task:
      type: Classification
    dataset:
      type: C-MTEB/MultilingualSentiment-classification
      name: MTEB MultilingualSentiment
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 79.08666666666666
    - type: f1
      value: 78.93142205976953
  - task:
      type: PairClassification
    dataset:
      type: C-MTEB/OCNLI
      name: MTEB Ocnli
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_accuracy
      value: 84.35300487276665
    - type: cos_sim_ap
      value: 87.83572265803564
    - type: cos_sim_f1
      value: 85.42713567839195
    - type: cos_sim_precision
      value: 81.49568552253116
    - type: cos_sim_recall
      value: 89.7571277719113
    - type: dot_accuracy
      value: 72.87493232268544
    - type: dot_ap
      value: 80.29032993894747
    - type: dot_f1
      value: 76.5938475256353
    - type: dot_precision
      value: 66.28086419753086
    - type: dot_recall
      value: 90.70749736008447
    - type: euclidean_accuracy
      value: 82.34975636166757
    - type: euclidean_ap
      value: 85.73873757468064
    - type: euclidean_f1
      value: 83.56713426853707
    - type: euclidean_precision
      value: 79.50428979980934
    - type: euclidean_recall
      value: 88.0675818373812
    - type: manhattan_accuracy
      value: 82.45804006497022
    - type: manhattan_ap
      value: 85.7176464290469
    - type: manhattan_f1
      value: 83.65095285857572
    - type: manhattan_precision
      value: 79.65616045845272
    - type: manhattan_recall
      value: 88.0675818373812
    - type: max_accuracy
      value: 84.35300487276665
    - type: max_ap
      value: 87.83572265803564
    - type: max_f1
      value: 85.42713567839195
  - task:
      type: Classification
    dataset:
      type: C-MTEB/OnlineShopping-classification
      name: MTEB OnlineShopping
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 94.61999999999999
    - type: ap
      value: 92.74140430219491
    - type: f1
      value: 94.60775857122515
  - task:
      type: STS
    dataset:
      type: C-MTEB/PAWSX
      name: MTEB PAWSX
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 39.75749234575995
    - type: cos_sim_spearman
      value: 46.48035295363829
    - type: euclidean_pearson
      value: 45.38711981599582
    - type: euclidean_spearman
      value: 46.13915356562481
    - type: manhattan_pearson
      value: 45.420770530489065
    - type: manhattan_spearman
      value: 46.179913441143775
  - task:
      type: STS
    dataset:
      type: C-MTEB/QBQTC
      name: MTEB QBQTC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 44.02008249965321
    - type: cos_sim_spearman
      value: 45.906917552219156
    - type: euclidean_pearson
      value: 36.600317631983316
    - type: euclidean_spearman
      value: 41.97740958824762
    - type: manhattan_pearson
      value: 36.54329048509785
    - type: manhattan_spearman
      value: 41.91222171040451
  - task:
      type: STS
    dataset:
      type: mteb/sts22-crosslingual-sts
      name: MTEB STS22 (zh)
      config: zh
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 60.97044608578288
    - type: cos_sim_spearman
      value: 63.76187490245927
    - type: euclidean_pearson
      value: 60.74245987426317
    - type: euclidean_spearman
      value: 63.32990713078846
    - type: manhattan_pearson
      value: 60.62422616577702
    - type: manhattan_spearman
      value: 63.256612476686826
  - task:
      type: STS
    dataset:
      type: C-MTEB/STSB
      name: MTEB STSB
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 76.28185867362305
    - type: cos_sim_spearman
      value: 78.71478656159289
    - type: euclidean_pearson
      value: 79.80734359535234
    - type: euclidean_spearman
      value: 79.85403491297063
    - type: manhattan_pearson
      value: 79.79454037962215
    - type: manhattan_spearman
      value: 79.82796402623201
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/T2Reranking
      name: MTEB T2Reranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 67.14759526113295
    - type: mrr
      value: 77.36422096484723
  - task:
      type: Retrieval
    dataset:
      type: C-MTEB/T2Retrieval
      name: MTEB T2Retrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 28.177999999999997
    - type: map_at_10
      value: 78.77199999999999
    - type: map_at_100
      value: 82.365
    - type: map_at_1000
      value: 82.422
    - type: map_at_3
      value: 55.452999999999996
    - type: map_at_5
      value: 68.12700000000001
    - type: mrr_at_1
      value: 91.097
    - type: mrr_at_10
      value: 93.52000000000001
    - type: mrr_at_100
      value: 93.587
    - type: mrr_at_1000
      value: 93.589
    - type: mrr_at_3
      value: 93.136
    - type: mrr_at_5
      value: 93.381
    - type: ndcg_at_1
      value: 91.097
    - type: ndcg_at_10
      value: 86.136
    - type: ndcg_at_100
      value: 89.515
    - type: ndcg_at_1000
      value: 90.049
    - type: ndcg_at_3
      value: 87.41600000000001
    - type: ndcg_at_5
      value: 86.115
    - type: precision_at_1
      value: 91.097
    - type: precision_at_10
      value: 42.597
    - type: precision_at_100
      value: 5.043
    - type: precision_at_1000
      value: 0.517
    - type: precision_at_3
      value: 76.239
    - type: precision_at_5
      value: 63.93
    - type: recall_at_1
      value: 28.177999999999997
    - type: recall_at_10
      value: 85.182
    - type: recall_at_100
      value: 96.174
    - type: recall_at_1000
      value: 98.848
    - type: recall_at_3
      value: 57.150999999999996
    - type: recall_at_5
      value: 71.50999999999999
  - task:
      type: Classification
    dataset:
      type: C-MTEB/TNews-classification
      name: MTEB TNews
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 54.521
    - type: f1
      value: 52.53528052282081
  - task:
      type: Clustering
    dataset:
      type: C-MTEB/ThuNewsClusteringP2P
      name: MTEB ThuNewsClusteringP2P
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 74.2003249023509
  - task:
      type: Clustering
    dataset:
      type: C-MTEB/ThuNewsClusteringS2S
      name: MTEB ThuNewsClusteringS2S
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 68.4277378629746
  - task:
      type: Retrieval
    dataset:
      type: C-MTEB/VideoRetrieval
      name: MTEB VideoRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 58.599999999999994
    - type: map_at_10
      value: 68.671
    - type: map_at_100
      value: 69.148
    - type: map_at_1000
      value: 69.157
    - type: map_at_3
      value: 66.9
    - type: map_at_5
      value: 68.045
    - type: mrr_at_1
      value: 58.599999999999994
    - type: mrr_at_10
      value: 68.671
    - type: mrr_at_100
      value: 69.148
    - type: mrr_at_1000
      value: 69.157
    - type: mrr_at_3
      value: 66.9
    - type: mrr_at_5
      value: 68.045
    - type: ndcg_at_1
      value: 58.599999999999994
    - type: ndcg_at_10
      value: 73.099
    - type: ndcg_at_100
      value: 75.33
    - type: ndcg_at_1000
      value: 75.58500000000001
    - type: ndcg_at_3
      value: 69.502
    - type: ndcg_at_5
      value: 71.542
    - type: precision_at_1
      value: 58.599999999999994
    - type: precision_at_10
      value: 8.68
    - type: precision_at_100
      value: 0.97
    - type: precision_at_1000
      value: 0.099
    - type: precision_at_3
      value: 25.667
    - type: precision_at_5
      value: 16.38
    - type: recall_at_1
      value: 58.599999999999994
    - type: recall_at_10
      value: 86.8
    - type: recall_at_100
      value: 97.0
    - type: recall_at_1000
      value: 99.1
    - type: recall_at_3
      value: 77.0
    - type: recall_at_5
      value: 81.89999999999999
  - task:
      type: Classification
    dataset:
      type: C-MTEB/waimai-classification
      name: MTEB Waimai
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 89.58999999999999
    - type: ap
      value: 75.69899834265364
    - type: f1
      value: 88.2026184757175
---

## piccolo-large-zh-v2