nanmoon committed on
Commit
99e557e
1 Parent(s): 06b6ddd
.gitattributes CHANGED
@@ -25,7 +25,6 @@
 *.safetensors filter=lfs diff=lfs merge=lfs -text
 saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tar filter=lfs diff=lfs merge=lfs -text
 *.tflite filter=lfs diff=lfs merge=lfs -text
 *.tgz filter=lfs diff=lfs merge=lfs -text
 *.wasm filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+{
+    "word_embedding_dimension": 768,
+    "pooling_mode_cls_token": false,
+    "pooling_mode_mean_tokens": true,
+    "pooling_mode_max_tokens": false,
+    "pooling_mode_mean_sqrt_len_tokens": false
+}
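
The pooling configuration above enables mean-token pooling over 768-dimensional token embeddings (CLS and max pooling disabled). Below is a minimal sketch of how these settings are typically applied; it assumes the weights behind this card are the public `intfloat/e5-base` checkpoint (the card only names the model "e5-base", so the repository id is an assumption, not something this commit confirms):

```python
# Minimal sketch of the mean pooling described by 1_Pooling/config.json.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "intfloat/e5-base"  # assumption; substitute the actual repository id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

def mean_pool(last_hidden: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # pooling_mode_mean_tokens = true: average token vectors, masking out padding.
    mask = attention_mask.unsqueeze(-1).to(last_hidden.dtype)
    return (last_hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

batch = tokenizer(["query: how are sentence embeddings pooled?"],
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = model(**batch)
embeddings = mean_pool(out.last_hidden_state, batch["attention_mask"])
print(embeddings.shape)  # torch.Size([1, 768]) == word_embedding_dimension
```

Loading the repository with `sentence_transformers.SentenceTransformer` reads this `1_Pooling/config.json` automatically and applies the same mean pooling, so the manual version above is only illustrative.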
README.md CHANGED
@@ -1,3 +1,2725 @@
 ---
 license: mit
 ---
2
+ tags:
3
+ - mteb
4
+ - Sentence Transformers
5
+ - sentence-similarity
6
+ - sentence-transformers
7
+ model-index:
8
+ - name: e5-base
9
+ results:
10
+ - task:
11
+ type: Classification
12
+ dataset:
13
+ type: mteb/amazon_counterfactual
14
+ name: MTEB AmazonCounterfactualClassification (en)
15
+ config: en
16
+ split: test
17
+ revision: e8379541af4e31359cca9fbcf4b00f2671dba205
18
+ metrics:
19
+ - type: accuracy
20
+ value: 79.71641791044777
21
+ - type: ap
22
+ value: 44.15426065428253
23
+ - type: f1
24
+ value: 73.89474407693241
25
+ - task:
26
+ type: Classification
27
+ dataset:
28
+ type: mteb/amazon_polarity
29
+ name: MTEB AmazonPolarityClassification
30
+ config: default
31
+ split: test
32
+ revision: e2d317d38cd51312af73b3d32a06d1a08b442046
33
+ metrics:
34
+ - type: accuracy
35
+ value: 87.9649
36
+ - type: ap
37
+ value: 84.10171551915973
38
+ - type: f1
39
+ value: 87.94148377827356
40
+ - task:
41
+ type: Classification
42
+ dataset:
43
+ type: mteb/amazon_reviews_multi
44
+ name: MTEB AmazonReviewsClassification (en)
45
+ config: en
46
+ split: test
47
+ revision: 1399c76144fd37290681b995c656ef9b2e06e26d
48
+ metrics:
49
+ - type: accuracy
50
+ value: 42.645999999999994
51
+ - type: f1
52
+ value: 42.230574673549
53
+ - task:
54
+ type: Retrieval
55
+ dataset:
56
+ type: arguana
57
+ name: MTEB ArguAna
58
+ config: default
59
+ split: test
60
+ revision: None
61
+ metrics:
62
+ - type: map_at_1
63
+ value: 26.814
64
+ - type: map_at_10
65
+ value: 42.681999999999995
66
+ - type: map_at_100
67
+ value: 43.714
68
+ - type: map_at_1000
69
+ value: 43.724000000000004
70
+ - type: map_at_3
71
+ value: 38.11
72
+ - type: map_at_5
73
+ value: 40.666999999999994
74
+ - type: mrr_at_1
75
+ value: 27.168999999999997
76
+ - type: mrr_at_10
77
+ value: 42.84
78
+ - type: mrr_at_100
79
+ value: 43.864
80
+ - type: mrr_at_1000
81
+ value: 43.875
82
+ - type: mrr_at_3
83
+ value: 38.193
84
+ - type: mrr_at_5
85
+ value: 40.793
86
+ - type: ndcg_at_1
87
+ value: 26.814
88
+ - type: ndcg_at_10
89
+ value: 51.410999999999994
90
+ - type: ndcg_at_100
91
+ value: 55.713
92
+ - type: ndcg_at_1000
93
+ value: 55.957
94
+ - type: ndcg_at_3
95
+ value: 41.955
96
+ - type: ndcg_at_5
97
+ value: 46.558
98
+ - type: precision_at_1
99
+ value: 26.814
100
+ - type: precision_at_10
101
+ value: 7.922999999999999
102
+ - type: precision_at_100
103
+ value: 0.9780000000000001
104
+ - type: precision_at_1000
105
+ value: 0.1
106
+ - type: precision_at_3
107
+ value: 17.71
108
+ - type: precision_at_5
109
+ value: 12.859000000000002
110
+ - type: recall_at_1
111
+ value: 26.814
112
+ - type: recall_at_10
113
+ value: 79.232
114
+ - type: recall_at_100
115
+ value: 97.795
116
+ - type: recall_at_1000
117
+ value: 99.644
118
+ - type: recall_at_3
119
+ value: 53.129000000000005
120
+ - type: recall_at_5
121
+ value: 64.29599999999999
122
+ - task:
123
+ type: Clustering
124
+ dataset:
125
+ type: mteb/arxiv-clustering-p2p
126
+ name: MTEB ArxivClusteringP2P
127
+ config: default
128
+ split: test
129
+ revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
130
+ metrics:
131
+ - type: v_measure
132
+ value: 44.56933066536439
133
+ - task:
134
+ type: Clustering
135
+ dataset:
136
+ type: mteb/arxiv-clustering-s2s
137
+ name: MTEB ArxivClusteringS2S
138
+ config: default
139
+ split: test
140
+ revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
141
+ metrics:
142
+ - type: v_measure
143
+ value: 40.47647746165173
144
+ - task:
145
+ type: Reranking
146
+ dataset:
147
+ type: mteb/askubuntudupquestions-reranking
148
+ name: MTEB AskUbuntuDupQuestions
149
+ config: default
150
+ split: test
151
+ revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
152
+ metrics:
153
+ - type: map
154
+ value: 59.65675531567043
155
+ - type: mrr
156
+ value: 72.95255683067317
157
+ - task:
158
+ type: STS
159
+ dataset:
160
+ type: mteb/biosses-sts
161
+ name: MTEB BIOSSES
162
+ config: default
163
+ split: test
164
+ revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
165
+ metrics:
166
+ - type: cos_sim_pearson
167
+ value: 85.83147014162338
168
+ - type: cos_sim_spearman
169
+ value: 85.1031439521441
170
+ - type: euclidean_pearson
171
+ value: 83.53609085510973
172
+ - type: euclidean_spearman
173
+ value: 84.59650590202833
174
+ - type: manhattan_pearson
175
+ value: 83.14611947586386
176
+ - type: manhattan_spearman
177
+ value: 84.13384475757064
178
+ - task:
179
+ type: Classification
180
+ dataset:
181
+ type: mteb/banking77
182
+ name: MTEB Banking77Classification
183
+ config: default
184
+ split: test
185
+ revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
186
+ metrics:
187
+ - type: accuracy
188
+ value: 83.32792207792208
189
+ - type: f1
190
+ value: 83.32037485050513
191
+ - task:
192
+ type: Clustering
193
+ dataset:
194
+ type: mteb/biorxiv-clustering-p2p
195
+ name: MTEB BiorxivClusteringP2P
196
+ config: default
197
+ split: test
198
+ revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
199
+ metrics:
200
+ - type: v_measure
201
+ value: 36.18605446588703
202
+ - task:
203
+ type: Clustering
204
+ dataset:
205
+ type: mteb/biorxiv-clustering-s2s
206
+ name: MTEB BiorxivClusteringS2S
207
+ config: default
208
+ split: test
209
+ revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
210
+ metrics:
211
+ - type: v_measure
212
+ value: 32.72379130181917
213
+ - task:
214
+ type: Retrieval
215
+ dataset:
216
+ type: BeIR/cqadupstack
217
+ name: MTEB CQADupstackAndroidRetrieval
218
+ config: default
219
+ split: test
220
+ revision: None
221
+ metrics:
222
+ - type: map_at_1
223
+ value: 30.659
224
+ - type: map_at_10
225
+ value: 40.333999999999996
226
+ - type: map_at_100
227
+ value: 41.763
228
+ - type: map_at_1000
229
+ value: 41.894
230
+ - type: map_at_3
231
+ value: 37.561
232
+ - type: map_at_5
233
+ value: 39.084
234
+ - type: mrr_at_1
235
+ value: 37.482
236
+ - type: mrr_at_10
237
+ value: 45.736
238
+ - type: mrr_at_100
239
+ value: 46.591
240
+ - type: mrr_at_1000
241
+ value: 46.644999999999996
242
+ - type: mrr_at_3
243
+ value: 43.491
244
+ - type: mrr_at_5
245
+ value: 44.75
246
+ - type: ndcg_at_1
247
+ value: 37.482
248
+ - type: ndcg_at_10
249
+ value: 45.606
250
+ - type: ndcg_at_100
251
+ value: 51.172
252
+ - type: ndcg_at_1000
253
+ value: 53.407000000000004
254
+ - type: ndcg_at_3
255
+ value: 41.808
256
+ - type: ndcg_at_5
257
+ value: 43.449
258
+ - type: precision_at_1
259
+ value: 37.482
260
+ - type: precision_at_10
261
+ value: 8.254999999999999
262
+ - type: precision_at_100
263
+ value: 1.3719999999999999
264
+ - type: precision_at_1000
265
+ value: 0.186
266
+ - type: precision_at_3
267
+ value: 19.695
268
+ - type: precision_at_5
269
+ value: 13.847999999999999
270
+ - type: recall_at_1
271
+ value: 30.659
272
+ - type: recall_at_10
273
+ value: 55.409
274
+ - type: recall_at_100
275
+ value: 78.687
276
+ - type: recall_at_1000
277
+ value: 93.068
278
+ - type: recall_at_3
279
+ value: 43.891999999999996
280
+ - type: recall_at_5
281
+ value: 48.678
282
+ - task:
283
+ type: Retrieval
284
+ dataset:
285
+ type: BeIR/cqadupstack
286
+ name: MTEB CQADupstackEnglishRetrieval
287
+ config: default
288
+ split: test
289
+ revision: None
290
+ metrics:
291
+ - type: map_at_1
292
+ value: 30.977
293
+ - type: map_at_10
294
+ value: 40.296
295
+ - type: map_at_100
296
+ value: 41.453
297
+ - type: map_at_1000
298
+ value: 41.581
299
+ - type: map_at_3
300
+ value: 37.619
301
+ - type: map_at_5
302
+ value: 39.181
303
+ - type: mrr_at_1
304
+ value: 39.108
305
+ - type: mrr_at_10
306
+ value: 46.894000000000005
307
+ - type: mrr_at_100
308
+ value: 47.55
309
+ - type: mrr_at_1000
310
+ value: 47.598
311
+ - type: mrr_at_3
312
+ value: 44.766
313
+ - type: mrr_at_5
314
+ value: 46.062999999999995
315
+ - type: ndcg_at_1
316
+ value: 39.108
317
+ - type: ndcg_at_10
318
+ value: 45.717
319
+ - type: ndcg_at_100
320
+ value: 49.941
321
+ - type: ndcg_at_1000
322
+ value: 52.138
323
+ - type: ndcg_at_3
324
+ value: 42.05
325
+ - type: ndcg_at_5
326
+ value: 43.893
327
+ - type: precision_at_1
328
+ value: 39.108
329
+ - type: precision_at_10
330
+ value: 8.306
331
+ - type: precision_at_100
332
+ value: 1.3419999999999999
333
+ - type: precision_at_1000
334
+ value: 0.184
335
+ - type: precision_at_3
336
+ value: 19.979
337
+ - type: precision_at_5
338
+ value: 14.038
339
+ - type: recall_at_1
340
+ value: 30.977
341
+ - type: recall_at_10
342
+ value: 54.688
343
+ - type: recall_at_100
344
+ value: 72.556
345
+ - type: recall_at_1000
346
+ value: 86.53800000000001
347
+ - type: recall_at_3
348
+ value: 43.388
349
+ - type: recall_at_5
350
+ value: 48.717
351
+ - task:
352
+ type: Retrieval
353
+ dataset:
354
+ type: BeIR/cqadupstack
355
+ name: MTEB CQADupstackGamingRetrieval
356
+ config: default
357
+ split: test
358
+ revision: None
359
+ metrics:
360
+ - type: map_at_1
361
+ value: 39.812
362
+ - type: map_at_10
363
+ value: 50.1
364
+ - type: map_at_100
365
+ value: 51.193999999999996
366
+ - type: map_at_1000
367
+ value: 51.258
368
+ - type: map_at_3
369
+ value: 47.510999999999996
370
+ - type: map_at_5
371
+ value: 48.891
372
+ - type: mrr_at_1
373
+ value: 45.266
374
+ - type: mrr_at_10
375
+ value: 53.459999999999994
376
+ - type: mrr_at_100
377
+ value: 54.19199999999999
378
+ - type: mrr_at_1000
379
+ value: 54.228
380
+ - type: mrr_at_3
381
+ value: 51.296
382
+ - type: mrr_at_5
383
+ value: 52.495999999999995
384
+ - type: ndcg_at_1
385
+ value: 45.266
386
+ - type: ndcg_at_10
387
+ value: 55.034000000000006
388
+ - type: ndcg_at_100
389
+ value: 59.458
390
+ - type: ndcg_at_1000
391
+ value: 60.862
392
+ - type: ndcg_at_3
393
+ value: 50.52799999999999
394
+ - type: ndcg_at_5
395
+ value: 52.564
396
+ - type: precision_at_1
397
+ value: 45.266
398
+ - type: precision_at_10
399
+ value: 8.483
400
+ - type: precision_at_100
401
+ value: 1.162
402
+ - type: precision_at_1000
403
+ value: 0.133
404
+ - type: precision_at_3
405
+ value: 21.944
406
+ - type: precision_at_5
407
+ value: 14.721
408
+ - type: recall_at_1
409
+ value: 39.812
410
+ - type: recall_at_10
411
+ value: 66.36
412
+ - type: recall_at_100
413
+ value: 85.392
414
+ - type: recall_at_1000
415
+ value: 95.523
416
+ - type: recall_at_3
417
+ value: 54.127
418
+ - type: recall_at_5
419
+ value: 59.245000000000005
420
+ - task:
421
+ type: Retrieval
422
+ dataset:
423
+ type: BeIR/cqadupstack
424
+ name: MTEB CQADupstackGisRetrieval
425
+ config: default
426
+ split: test
427
+ revision: None
428
+ metrics:
429
+ - type: map_at_1
430
+ value: 26.186
431
+ - type: map_at_10
432
+ value: 33.18
433
+ - type: map_at_100
434
+ value: 34.052
435
+ - type: map_at_1000
436
+ value: 34.149
437
+ - type: map_at_3
438
+ value: 31.029
439
+ - type: map_at_5
440
+ value: 32.321
441
+ - type: mrr_at_1
442
+ value: 28.136
443
+ - type: mrr_at_10
444
+ value: 35.195
445
+ - type: mrr_at_100
446
+ value: 35.996
447
+ - type: mrr_at_1000
448
+ value: 36.076
449
+ - type: mrr_at_3
450
+ value: 33.051
451
+ - type: mrr_at_5
452
+ value: 34.407
453
+ - type: ndcg_at_1
454
+ value: 28.136
455
+ - type: ndcg_at_10
456
+ value: 37.275999999999996
457
+ - type: ndcg_at_100
458
+ value: 41.935
459
+ - type: ndcg_at_1000
460
+ value: 44.389
461
+ - type: ndcg_at_3
462
+ value: 33.059
463
+ - type: ndcg_at_5
464
+ value: 35.313
465
+ - type: precision_at_1
466
+ value: 28.136
467
+ - type: precision_at_10
468
+ value: 5.457999999999999
469
+ - type: precision_at_100
470
+ value: 0.826
471
+ - type: precision_at_1000
472
+ value: 0.107
473
+ - type: precision_at_3
474
+ value: 13.522
475
+ - type: precision_at_5
476
+ value: 9.424000000000001
477
+ - type: recall_at_1
478
+ value: 26.186
479
+ - type: recall_at_10
480
+ value: 47.961999999999996
481
+ - type: recall_at_100
482
+ value: 70.072
483
+ - type: recall_at_1000
484
+ value: 88.505
485
+ - type: recall_at_3
486
+ value: 36.752
487
+ - type: recall_at_5
488
+ value: 42.168
489
+ - task:
490
+ type: Retrieval
491
+ dataset:
492
+ type: BeIR/cqadupstack
493
+ name: MTEB CQADupstackMathematicaRetrieval
494
+ config: default
495
+ split: test
496
+ revision: None
497
+ metrics:
498
+ - type: map_at_1
499
+ value: 16.586000000000002
500
+ - type: map_at_10
501
+ value: 23.637
502
+ - type: map_at_100
503
+ value: 24.82
504
+ - type: map_at_1000
505
+ value: 24.95
506
+ - type: map_at_3
507
+ value: 21.428
508
+ - type: map_at_5
509
+ value: 22.555
510
+ - type: mrr_at_1
511
+ value: 20.771
512
+ - type: mrr_at_10
513
+ value: 27.839999999999996
514
+ - type: mrr_at_100
515
+ value: 28.887
516
+ - type: mrr_at_1000
517
+ value: 28.967
518
+ - type: mrr_at_3
519
+ value: 25.56
520
+ - type: mrr_at_5
521
+ value: 26.723000000000003
522
+ - type: ndcg_at_1
523
+ value: 20.771
524
+ - type: ndcg_at_10
525
+ value: 28.255000000000003
526
+ - type: ndcg_at_100
527
+ value: 33.886
528
+ - type: ndcg_at_1000
529
+ value: 36.963
530
+ - type: ndcg_at_3
531
+ value: 24.056
532
+ - type: ndcg_at_5
533
+ value: 25.818
534
+ - type: precision_at_1
535
+ value: 20.771
536
+ - type: precision_at_10
537
+ value: 5.1
538
+ - type: precision_at_100
539
+ value: 0.9119999999999999
540
+ - type: precision_at_1000
541
+ value: 0.132
542
+ - type: precision_at_3
543
+ value: 11.526
544
+ - type: precision_at_5
545
+ value: 8.158999999999999
546
+ - type: recall_at_1
547
+ value: 16.586000000000002
548
+ - type: recall_at_10
549
+ value: 38.456
550
+ - type: recall_at_100
551
+ value: 62.666
552
+ - type: recall_at_1000
553
+ value: 84.47
554
+ - type: recall_at_3
555
+ value: 26.765
556
+ - type: recall_at_5
557
+ value: 31.297000000000004
558
+ - task:
559
+ type: Retrieval
560
+ dataset:
561
+ type: BeIR/cqadupstack
562
+ name: MTEB CQADupstackPhysicsRetrieval
563
+ config: default
564
+ split: test
565
+ revision: None
566
+ metrics:
567
+ - type: map_at_1
568
+ value: 28.831
569
+ - type: map_at_10
570
+ value: 37.545
571
+ - type: map_at_100
572
+ value: 38.934999999999995
573
+ - type: map_at_1000
574
+ value: 39.044000000000004
575
+ - type: map_at_3
576
+ value: 34.601
577
+ - type: map_at_5
578
+ value: 36.302
579
+ - type: mrr_at_1
580
+ value: 34.264
581
+ - type: mrr_at_10
582
+ value: 42.569
583
+ - type: mrr_at_100
584
+ value: 43.514
585
+ - type: mrr_at_1000
586
+ value: 43.561
587
+ - type: mrr_at_3
588
+ value: 40.167
589
+ - type: mrr_at_5
590
+ value: 41.678
591
+ - type: ndcg_at_1
592
+ value: 34.264
593
+ - type: ndcg_at_10
594
+ value: 42.914
595
+ - type: ndcg_at_100
596
+ value: 48.931999999999995
597
+ - type: ndcg_at_1000
598
+ value: 51.004000000000005
599
+ - type: ndcg_at_3
600
+ value: 38.096999999999994
601
+ - type: ndcg_at_5
602
+ value: 40.509
603
+ - type: precision_at_1
604
+ value: 34.264
605
+ - type: precision_at_10
606
+ value: 7.642
607
+ - type: precision_at_100
608
+ value: 1.258
609
+ - type: precision_at_1000
610
+ value: 0.161
611
+ - type: precision_at_3
612
+ value: 17.453
613
+ - type: precision_at_5
614
+ value: 12.608
615
+ - type: recall_at_1
616
+ value: 28.831
617
+ - type: recall_at_10
618
+ value: 53.56999999999999
619
+ - type: recall_at_100
620
+ value: 79.26100000000001
621
+ - type: recall_at_1000
622
+ value: 92.862
623
+ - type: recall_at_3
624
+ value: 40.681
625
+ - type: recall_at_5
626
+ value: 46.597
627
+ - task:
628
+ type: Retrieval
629
+ dataset:
630
+ type: BeIR/cqadupstack
631
+ name: MTEB CQADupstackProgrammersRetrieval
632
+ config: default
633
+ split: test
634
+ revision: None
635
+ metrics:
636
+ - type: map_at_1
637
+ value: 27.461000000000002
638
+ - type: map_at_10
639
+ value: 35.885
640
+ - type: map_at_100
641
+ value: 37.039
642
+ - type: map_at_1000
643
+ value: 37.16
644
+ - type: map_at_3
645
+ value: 33.451
646
+ - type: map_at_5
647
+ value: 34.807
648
+ - type: mrr_at_1
649
+ value: 34.018
650
+ - type: mrr_at_10
651
+ value: 41.32
652
+ - type: mrr_at_100
653
+ value: 42.157
654
+ - type: mrr_at_1000
655
+ value: 42.223
656
+ - type: mrr_at_3
657
+ value: 39.288000000000004
658
+ - type: mrr_at_5
659
+ value: 40.481
660
+ - type: ndcg_at_1
661
+ value: 34.018
662
+ - type: ndcg_at_10
663
+ value: 40.821000000000005
664
+ - type: ndcg_at_100
665
+ value: 46.053
666
+ - type: ndcg_at_1000
667
+ value: 48.673
668
+ - type: ndcg_at_3
669
+ value: 36.839
670
+ - type: ndcg_at_5
671
+ value: 38.683
672
+ - type: precision_at_1
673
+ value: 34.018
674
+ - type: precision_at_10
675
+ value: 7.009
676
+ - type: precision_at_100
677
+ value: 1.123
678
+ - type: precision_at_1000
679
+ value: 0.153
680
+ - type: precision_at_3
681
+ value: 16.933
682
+ - type: precision_at_5
683
+ value: 11.826
684
+ - type: recall_at_1
685
+ value: 27.461000000000002
686
+ - type: recall_at_10
687
+ value: 50.285000000000004
688
+ - type: recall_at_100
689
+ value: 73.25500000000001
690
+ - type: recall_at_1000
691
+ value: 91.17699999999999
692
+ - type: recall_at_3
693
+ value: 39.104
694
+ - type: recall_at_5
695
+ value: 43.968
696
+ - task:
697
+ type: Retrieval
698
+ dataset:
699
+ type: BeIR/cqadupstack
700
+ name: MTEB CQADupstackRetrieval
701
+ config: default
702
+ split: test
703
+ revision: None
704
+ metrics:
705
+ - type: map_at_1
706
+ value: 26.980083333333337
707
+ - type: map_at_10
708
+ value: 34.47208333333333
709
+ - type: map_at_100
710
+ value: 35.609249999999996
711
+ - type: map_at_1000
712
+ value: 35.72833333333333
713
+ - type: map_at_3
714
+ value: 32.189416666666666
715
+ - type: map_at_5
716
+ value: 33.44683333333334
717
+ - type: mrr_at_1
718
+ value: 31.731666666666662
719
+ - type: mrr_at_10
720
+ value: 38.518
721
+ - type: mrr_at_100
722
+ value: 39.38166666666667
723
+ - type: mrr_at_1000
724
+ value: 39.446999999999996
725
+ - type: mrr_at_3
726
+ value: 36.49966666666668
727
+ - type: mrr_at_5
728
+ value: 37.639916666666664
729
+ - type: ndcg_at_1
730
+ value: 31.731666666666662
731
+ - type: ndcg_at_10
732
+ value: 38.92033333333333
733
+ - type: ndcg_at_100
734
+ value: 44.01675
735
+ - type: ndcg_at_1000
736
+ value: 46.51075
737
+ - type: ndcg_at_3
738
+ value: 35.09766666666667
739
+ - type: ndcg_at_5
740
+ value: 36.842999999999996
741
+ - type: precision_at_1
742
+ value: 31.731666666666662
743
+ - type: precision_at_10
744
+ value: 6.472583333333332
745
+ - type: precision_at_100
746
+ value: 1.0665
747
+ - type: precision_at_1000
748
+ value: 0.14725000000000002
749
+ - type: precision_at_3
750
+ value: 15.659083333333331
751
+ - type: precision_at_5
752
+ value: 10.878833333333333
753
+ - type: recall_at_1
754
+ value: 26.980083333333337
755
+ - type: recall_at_10
756
+ value: 48.13925
757
+ - type: recall_at_100
758
+ value: 70.70149999999998
759
+ - type: recall_at_1000
760
+ value: 88.10775000000001
761
+ - type: recall_at_3
762
+ value: 37.30091666666667
763
+ - type: recall_at_5
764
+ value: 41.90358333333333
765
+ - task:
766
+ type: Retrieval
767
+ dataset:
768
+ type: BeIR/cqadupstack
769
+ name: MTEB CQADupstackStatsRetrieval
770
+ config: default
771
+ split: test
772
+ revision: None
773
+ metrics:
774
+ - type: map_at_1
775
+ value: 25.607999999999997
776
+ - type: map_at_10
777
+ value: 30.523
778
+ - type: map_at_100
779
+ value: 31.409
780
+ - type: map_at_1000
781
+ value: 31.507
782
+ - type: map_at_3
783
+ value: 28.915000000000003
784
+ - type: map_at_5
785
+ value: 29.756
786
+ - type: mrr_at_1
787
+ value: 28.681
788
+ - type: mrr_at_10
789
+ value: 33.409
790
+ - type: mrr_at_100
791
+ value: 34.241
792
+ - type: mrr_at_1000
793
+ value: 34.313
794
+ - type: mrr_at_3
795
+ value: 32.029999999999994
796
+ - type: mrr_at_5
797
+ value: 32.712
798
+ - type: ndcg_at_1
799
+ value: 28.681
800
+ - type: ndcg_at_10
801
+ value: 33.733000000000004
802
+ - type: ndcg_at_100
803
+ value: 38.32
804
+ - type: ndcg_at_1000
805
+ value: 40.937
806
+ - type: ndcg_at_3
807
+ value: 30.898999999999997
808
+ - type: ndcg_at_5
809
+ value: 32.088
810
+ - type: precision_at_1
811
+ value: 28.681
812
+ - type: precision_at_10
813
+ value: 4.968999999999999
814
+ - type: precision_at_100
815
+ value: 0.79
816
+ - type: precision_at_1000
817
+ value: 0.11
818
+ - type: precision_at_3
819
+ value: 12.73
820
+ - type: precision_at_5
821
+ value: 8.558
822
+ - type: recall_at_1
823
+ value: 25.607999999999997
824
+ - type: recall_at_10
825
+ value: 40.722
826
+ - type: recall_at_100
827
+ value: 61.956999999999994
828
+ - type: recall_at_1000
829
+ value: 81.43
830
+ - type: recall_at_3
831
+ value: 32.785
832
+ - type: recall_at_5
833
+ value: 35.855
834
+ - task:
835
+ type: Retrieval
836
+ dataset:
837
+ type: BeIR/cqadupstack
838
+ name: MTEB CQADupstackTexRetrieval
839
+ config: default
840
+ split: test
841
+ revision: None
842
+ metrics:
843
+ - type: map_at_1
844
+ value: 20.399
845
+ - type: map_at_10
846
+ value: 25.968000000000004
847
+ - type: map_at_100
848
+ value: 26.985999999999997
849
+ - type: map_at_1000
850
+ value: 27.105
851
+ - type: map_at_3
852
+ value: 24.215
853
+ - type: map_at_5
854
+ value: 25.157
855
+ - type: mrr_at_1
856
+ value: 24.708
857
+ - type: mrr_at_10
858
+ value: 29.971999999999998
859
+ - type: mrr_at_100
860
+ value: 30.858
861
+ - type: mrr_at_1000
862
+ value: 30.934
863
+ - type: mrr_at_3
864
+ value: 28.304000000000002
865
+ - type: mrr_at_5
866
+ value: 29.183999999999997
867
+ - type: ndcg_at_1
868
+ value: 24.708
869
+ - type: ndcg_at_10
870
+ value: 29.676000000000002
871
+ - type: ndcg_at_100
872
+ value: 34.656
873
+ - type: ndcg_at_1000
874
+ value: 37.588
875
+ - type: ndcg_at_3
876
+ value: 26.613
877
+ - type: ndcg_at_5
878
+ value: 27.919
879
+ - type: precision_at_1
880
+ value: 24.708
881
+ - type: precision_at_10
882
+ value: 5.01
883
+ - type: precision_at_100
884
+ value: 0.876
885
+ - type: precision_at_1000
886
+ value: 0.13
887
+ - type: precision_at_3
888
+ value: 11.975
889
+ - type: precision_at_5
890
+ value: 8.279
891
+ - type: recall_at_1
892
+ value: 20.399
893
+ - type: recall_at_10
894
+ value: 36.935
895
+ - type: recall_at_100
896
+ value: 59.532
897
+ - type: recall_at_1000
898
+ value: 80.58
899
+ - type: recall_at_3
900
+ value: 27.979
901
+ - type: recall_at_5
902
+ value: 31.636999999999997
903
+ - task:
904
+ type: Retrieval
905
+ dataset:
906
+ type: BeIR/cqadupstack
907
+ name: MTEB CQADupstackUnixRetrieval
908
+ config: default
909
+ split: test
910
+ revision: None
911
+ metrics:
912
+ - type: map_at_1
913
+ value: 27.606
914
+ - type: map_at_10
915
+ value: 34.213
916
+ - type: map_at_100
917
+ value: 35.339999999999996
918
+ - type: map_at_1000
919
+ value: 35.458
920
+ - type: map_at_3
921
+ value: 31.987
922
+ - type: map_at_5
923
+ value: 33.322
924
+ - type: mrr_at_1
925
+ value: 31.53
926
+ - type: mrr_at_10
927
+ value: 37.911
928
+ - type: mrr_at_100
929
+ value: 38.879000000000005
930
+ - type: mrr_at_1000
931
+ value: 38.956
932
+ - type: mrr_at_3
933
+ value: 35.868
934
+ - type: mrr_at_5
935
+ value: 37.047999999999995
936
+ - type: ndcg_at_1
937
+ value: 31.53
938
+ - type: ndcg_at_10
939
+ value: 38.312000000000005
940
+ - type: ndcg_at_100
941
+ value: 43.812
942
+ - type: ndcg_at_1000
943
+ value: 46.414
944
+ - type: ndcg_at_3
945
+ value: 34.319
946
+ - type: ndcg_at_5
947
+ value: 36.312
948
+ - type: precision_at_1
949
+ value: 31.53
950
+ - type: precision_at_10
951
+ value: 5.970000000000001
952
+ - type: precision_at_100
953
+ value: 0.9939999999999999
954
+ - type: precision_at_1000
955
+ value: 0.133
956
+ - type: precision_at_3
957
+ value: 14.738999999999999
958
+ - type: precision_at_5
959
+ value: 10.242999999999999
960
+ - type: recall_at_1
961
+ value: 27.606
962
+ - type: recall_at_10
963
+ value: 47.136
964
+ - type: recall_at_100
965
+ value: 71.253
966
+ - type: recall_at_1000
967
+ value: 89.39399999999999
968
+ - type: recall_at_3
969
+ value: 36.342
970
+ - type: recall_at_5
971
+ value: 41.388999999999996
972
+ - task:
973
+ type: Retrieval
974
+ dataset:
975
+ type: BeIR/cqadupstack
976
+ name: MTEB CQADupstackWebmastersRetrieval
977
+ config: default
978
+ split: test
979
+ revision: None
980
+ metrics:
981
+ - type: map_at_1
982
+ value: 24.855
983
+ - type: map_at_10
984
+ value: 31.963
985
+ - type: map_at_100
986
+ value: 33.371
987
+ - type: map_at_1000
988
+ value: 33.584
989
+ - type: map_at_3
990
+ value: 29.543999999999997
991
+ - type: map_at_5
992
+ value: 30.793
993
+ - type: mrr_at_1
994
+ value: 29.644
995
+ - type: mrr_at_10
996
+ value: 35.601
997
+ - type: mrr_at_100
998
+ value: 36.551
999
+ - type: mrr_at_1000
1000
+ value: 36.623
1001
+ - type: mrr_at_3
1002
+ value: 33.399
1003
+ - type: mrr_at_5
1004
+ value: 34.575
1005
+ - type: ndcg_at_1
1006
+ value: 29.644
1007
+ - type: ndcg_at_10
1008
+ value: 36.521
1009
+ - type: ndcg_at_100
1010
+ value: 42.087
1011
+ - type: ndcg_at_1000
1012
+ value: 45.119
1013
+ - type: ndcg_at_3
1014
+ value: 32.797
1015
+ - type: ndcg_at_5
1016
+ value: 34.208
1017
+ - type: precision_at_1
1018
+ value: 29.644
1019
+ - type: precision_at_10
1020
+ value: 6.7
1021
+ - type: precision_at_100
1022
+ value: 1.374
1023
+ - type: precision_at_1000
1024
+ value: 0.22899999999999998
1025
+ - type: precision_at_3
1026
+ value: 15.152
1027
+ - type: precision_at_5
1028
+ value: 10.671999999999999
1029
+ - type: recall_at_1
1030
+ value: 24.855
1031
+ - type: recall_at_10
1032
+ value: 45.449
1033
+ - type: recall_at_100
1034
+ value: 70.921
1035
+ - type: recall_at_1000
1036
+ value: 90.629
1037
+ - type: recall_at_3
1038
+ value: 33.526
1039
+ - type: recall_at_5
1040
+ value: 37.848
1041
+ - task:
1042
+ type: Retrieval
1043
+ dataset:
1044
+ type: BeIR/cqadupstack
1045
+ name: MTEB CQADupstackWordpressRetrieval
1046
+ config: default
1047
+ split: test
1048
+ revision: None
1049
+ metrics:
1050
+ - type: map_at_1
1051
+ value: 24.781
1052
+ - type: map_at_10
1053
+ value: 30.020999999999997
1054
+ - type: map_at_100
1055
+ value: 30.948999999999998
1056
+ - type: map_at_1000
1057
+ value: 31.05
1058
+ - type: map_at_3
1059
+ value: 28.412
1060
+ - type: map_at_5
1061
+ value: 29.193
1062
+ - type: mrr_at_1
1063
+ value: 27.172
1064
+ - type: mrr_at_10
1065
+ value: 32.309
1066
+ - type: mrr_at_100
1067
+ value: 33.164
1068
+ - type: mrr_at_1000
1069
+ value: 33.239999999999995
1070
+ - type: mrr_at_3
1071
+ value: 30.775999999999996
1072
+ - type: mrr_at_5
1073
+ value: 31.562
1074
+ - type: ndcg_at_1
1075
+ value: 27.172
1076
+ - type: ndcg_at_10
1077
+ value: 33.178999999999995
1078
+ - type: ndcg_at_100
1079
+ value: 37.949
1080
+ - type: ndcg_at_1000
1081
+ value: 40.635
1082
+ - type: ndcg_at_3
1083
+ value: 30.107
1084
+ - type: ndcg_at_5
1085
+ value: 31.36
1086
+ - type: precision_at_1
1087
+ value: 27.172
1088
+ - type: precision_at_10
1089
+ value: 4.769
1090
+ - type: precision_at_100
1091
+ value: 0.769
1092
+ - type: precision_at_1000
1093
+ value: 0.109
1094
+ - type: precision_at_3
1095
+ value: 12.261
1096
+ - type: precision_at_5
1097
+ value: 8.17
1098
+ - type: recall_at_1
1099
+ value: 24.781
1100
+ - type: recall_at_10
1101
+ value: 40.699000000000005
1102
+ - type: recall_at_100
1103
+ value: 62.866
1104
+ - type: recall_at_1000
1105
+ value: 83.11699999999999
1106
+ - type: recall_at_3
1107
+ value: 32.269999999999996
1108
+ - type: recall_at_5
1109
+ value: 35.443999999999996
1110
+ - task:
1111
+ type: Retrieval
1112
+ dataset:
1113
+ type: climate-fever
1114
+ name: MTEB ClimateFEVER
1115
+ config: default
1116
+ split: test
1117
+ revision: None
1118
+ metrics:
1119
+ - type: map_at_1
1120
+ value: 5.2139999999999995
1121
+ - type: map_at_10
1122
+ value: 9.986
1123
+ - type: map_at_100
1124
+ value: 11.343
1125
+ - type: map_at_1000
1126
+ value: 11.55
1127
+ - type: map_at_3
1128
+ value: 7.961
1129
+ - type: map_at_5
1130
+ value: 8.967
1131
+ - type: mrr_at_1
1132
+ value: 12.052
1133
+ - type: mrr_at_10
1134
+ value: 20.165
1135
+ - type: mrr_at_100
1136
+ value: 21.317
1137
+ - type: mrr_at_1000
1138
+ value: 21.399
1139
+ - type: mrr_at_3
1140
+ value: 17.079
1141
+ - type: mrr_at_5
1142
+ value: 18.695
1143
+ - type: ndcg_at_1
1144
+ value: 12.052
1145
+ - type: ndcg_at_10
1146
+ value: 15.375
1147
+ - type: ndcg_at_100
1148
+ value: 21.858
1149
+ - type: ndcg_at_1000
1150
+ value: 26.145000000000003
1151
+ - type: ndcg_at_3
1152
+ value: 11.334
1153
+ - type: ndcg_at_5
1154
+ value: 12.798000000000002
1155
+ - type: precision_at_1
1156
+ value: 12.052
1157
+ - type: precision_at_10
1158
+ value: 5.16
1159
+ - type: precision_at_100
1160
+ value: 1.206
1161
+ - type: precision_at_1000
1162
+ value: 0.198
1163
+ - type: precision_at_3
1164
+ value: 8.73
1165
+ - type: precision_at_5
1166
+ value: 7.114
1167
+ - type: recall_at_1
1168
+ value: 5.2139999999999995
1169
+ - type: recall_at_10
1170
+ value: 20.669999999999998
1171
+ - type: recall_at_100
1172
+ value: 43.901
1173
+ - type: recall_at_1000
1174
+ value: 68.447
1175
+ - type: recall_at_3
1176
+ value: 11.049000000000001
1177
+ - type: recall_at_5
1178
+ value: 14.652999999999999
1179
+ - task:
1180
+ type: Retrieval
1181
+ dataset:
1182
+ type: dbpedia-entity
1183
+ name: MTEB DBPedia
1184
+ config: default
1185
+ split: test
1186
+ revision: None
1187
+ metrics:
1188
+ - type: map_at_1
1189
+ value: 8.511000000000001
1190
+ - type: map_at_10
1191
+ value: 19.503
1192
+ - type: map_at_100
1193
+ value: 27.46
1194
+ - type: map_at_1000
1195
+ value: 29.187
1196
+ - type: map_at_3
1197
+ value: 14.030999999999999
1198
+ - type: map_at_5
1199
+ value: 16.329
1200
+ - type: mrr_at_1
1201
+ value: 63.74999999999999
1202
+ - type: mrr_at_10
1203
+ value: 73.419
1204
+ - type: mrr_at_100
1205
+ value: 73.691
1206
+ - type: mrr_at_1000
1207
+ value: 73.697
1208
+ - type: mrr_at_3
1209
+ value: 71.792
1210
+ - type: mrr_at_5
1211
+ value: 72.979
1212
+ - type: ndcg_at_1
1213
+ value: 53.125
1214
+ - type: ndcg_at_10
1215
+ value: 41.02
1216
+ - type: ndcg_at_100
1217
+ value: 45.407
1218
+ - type: ndcg_at_1000
1219
+ value: 52.68000000000001
1220
+ - type: ndcg_at_3
1221
+ value: 46.088
1222
+ - type: ndcg_at_5
1223
+ value: 43.236000000000004
1224
+ - type: precision_at_1
1225
+ value: 63.74999999999999
1226
+ - type: precision_at_10
1227
+ value: 32.35
1228
+ - type: precision_at_100
1229
+ value: 10.363
1230
+ - type: precision_at_1000
1231
+ value: 2.18
1232
+ - type: precision_at_3
1233
+ value: 49.667
1234
+ - type: precision_at_5
1235
+ value: 41.5
1236
+ - type: recall_at_1
1237
+ value: 8.511000000000001
1238
+ - type: recall_at_10
1239
+ value: 24.851
1240
+ - type: recall_at_100
1241
+ value: 50.745
1242
+ - type: recall_at_1000
1243
+ value: 73.265
1244
+ - type: recall_at_3
1245
+ value: 15.716
1246
+ - type: recall_at_5
1247
+ value: 19.256
1248
+ - task:
1249
+ type: Classification
1250
+ dataset:
1251
+ type: mteb/emotion
1252
+ name: MTEB EmotionClassification
1253
+ config: default
1254
+ split: test
1255
+ revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
1256
+ metrics:
1257
+ - type: accuracy
1258
+ value: 49.43500000000001
1259
+ - type: f1
1260
+ value: 44.56288273966374
1261
+ - task:
1262
+ type: Retrieval
1263
+ dataset:
1264
+ type: fever
1265
+ name: MTEB FEVER
1266
+ config: default
1267
+ split: test
1268
+ revision: None
1269
+ metrics:
1270
+ - type: map_at_1
1271
+ value: 40.858
1272
+ - type: map_at_10
1273
+ value: 52.276
1274
+ - type: map_at_100
1275
+ value: 52.928
1276
+ - type: map_at_1000
1277
+ value: 52.966
1278
+ - type: map_at_3
1279
+ value: 49.729
1280
+ - type: map_at_5
1281
+ value: 51.27
1282
+ - type: mrr_at_1
1283
+ value: 43.624
1284
+ - type: mrr_at_10
1285
+ value: 55.22899999999999
1286
+ - type: mrr_at_100
1287
+ value: 55.823
1288
+ - type: mrr_at_1000
1289
+ value: 55.85
1290
+ - type: mrr_at_3
1291
+ value: 52.739999999999995
1292
+ - type: mrr_at_5
1293
+ value: 54.251000000000005
1294
+ - type: ndcg_at_1
1295
+ value: 43.624
1296
+ - type: ndcg_at_10
1297
+ value: 58.23500000000001
1298
+ - type: ndcg_at_100
1299
+ value: 61.315
1300
+ - type: ndcg_at_1000
1301
+ value: 62.20099999999999
1302
+ - type: ndcg_at_3
1303
+ value: 53.22
1304
+ - type: ndcg_at_5
1305
+ value: 55.88999999999999
1306
+ - type: precision_at_1
1307
+ value: 43.624
1308
+ - type: precision_at_10
1309
+ value: 8.068999999999999
1310
+ - type: precision_at_100
1311
+ value: 0.975
1312
+ - type: precision_at_1000
1313
+ value: 0.107
1314
+ - type: precision_at_3
1315
+ value: 21.752
1316
+ - type: precision_at_5
1317
+ value: 14.515
1318
+ - type: recall_at_1
1319
+ value: 40.858
1320
+ - type: recall_at_10
1321
+ value: 73.744
1322
+ - type: recall_at_100
1323
+ value: 87.667
1324
+ - type: recall_at_1000
1325
+ value: 94.15599999999999
1326
+ - type: recall_at_3
1327
+ value: 60.287
1328
+ - type: recall_at_5
1329
+ value: 66.703
1330
+ - task:
1331
+ type: Retrieval
1332
+ dataset:
1333
+ type: fiqa
1334
+ name: MTEB FiQA2018
1335
+ config: default
1336
+ split: test
1337
+ revision: None
1338
+ metrics:
1339
+ - type: map_at_1
1340
+ value: 17.864
1341
+ - type: map_at_10
1342
+ value: 28.592000000000002
1343
+ - type: map_at_100
1344
+ value: 30.165
1345
+ - type: map_at_1000
1346
+ value: 30.364
1347
+ - type: map_at_3
1348
+ value: 24.586
1349
+ - type: map_at_5
1350
+ value: 26.717000000000002
1351
+ - type: mrr_at_1
1352
+ value: 35.031
1353
+ - type: mrr_at_10
1354
+ value: 43.876
1355
+ - type: mrr_at_100
1356
+ value: 44.683
1357
+ - type: mrr_at_1000
1358
+ value: 44.736
1359
+ - type: mrr_at_3
1360
+ value: 40.998000000000005
1361
+ - type: mrr_at_5
1362
+ value: 42.595
1363
+ - type: ndcg_at_1
1364
+ value: 35.031
1365
+ - type: ndcg_at_10
1366
+ value: 36.368
1367
+ - type: ndcg_at_100
1368
+ value: 42.472
1369
+ - type: ndcg_at_1000
1370
+ value: 45.973000000000006
1371
+ - type: ndcg_at_3
1372
+ value: 31.915
1373
+ - type: ndcg_at_5
1374
+ value: 33.394
1375
+ - type: precision_at_1
1376
+ value: 35.031
1377
+ - type: precision_at_10
1378
+ value: 10.139
1379
+ - type: precision_at_100
1380
+ value: 1.6420000000000001
1381
+ - type: precision_at_1000
1382
+ value: 0.22699999999999998
1383
+ - type: precision_at_3
1384
+ value: 21.142
1385
+ - type: precision_at_5
1386
+ value: 15.772
1387
+ - type: recall_at_1
1388
+ value: 17.864
1389
+ - type: recall_at_10
1390
+ value: 43.991
1391
+ - type: recall_at_100
1392
+ value: 66.796
1393
+ - type: recall_at_1000
1394
+ value: 87.64
1395
+ - type: recall_at_3
1396
+ value: 28.915999999999997
1397
+ - type: recall_at_5
1398
+ value: 35.185
1399
+ - task:
1400
+ type: Retrieval
1401
+ dataset:
1402
+ type: hotpotqa
1403
+ name: MTEB HotpotQA
1404
+ config: default
1405
+ split: test
1406
+ revision: None
1407
+ metrics:
1408
+ - type: map_at_1
1409
+ value: 36.556
1410
+ - type: map_at_10
1411
+ value: 53.056000000000004
1412
+ - type: map_at_100
1413
+ value: 53.909
1414
+ - type: map_at_1000
1415
+ value: 53.98
1416
+ - type: map_at_3
1417
+ value: 49.982
1418
+ - type: map_at_5
1419
+ value: 51.9
1420
+ - type: mrr_at_1
1421
+ value: 73.113
1422
+ - type: mrr_at_10
1423
+ value: 79.381
1424
+ - type: mrr_at_100
1425
+ value: 79.60300000000001
1426
+ - type: mrr_at_1000
1427
+ value: 79.617
1428
+ - type: mrr_at_3
1429
+ value: 78.298
1430
+ - type: mrr_at_5
1431
+ value: 78.995
1432
+ - type: ndcg_at_1
1433
+ value: 73.113
1434
+ - type: ndcg_at_10
1435
+ value: 62.21
1436
+ - type: ndcg_at_100
1437
+ value: 65.242
1438
+ - type: ndcg_at_1000
1439
+ value: 66.667
1440
+ - type: ndcg_at_3
1441
+ value: 57.717
1442
+ - type: ndcg_at_5
1443
+ value: 60.224
1444
+ - type: precision_at_1
1445
+ value: 73.113
1446
+ - type: precision_at_10
1447
+ value: 12.842999999999998
1448
+ - type: precision_at_100
1449
+ value: 1.522
1450
+ - type: precision_at_1000
1451
+ value: 0.17099999999999999
1452
+ - type: precision_at_3
1453
+ value: 36.178
1454
+ - type: precision_at_5
1455
+ value: 23.695
1456
+ - type: recall_at_1
1457
+ value: 36.556
1458
+ - type: recall_at_10
1459
+ value: 64.213
1460
+ - type: recall_at_100
1461
+ value: 76.077
1462
+ - type: recall_at_1000
1463
+ value: 85.53699999999999
1464
+ - type: recall_at_3
1465
+ value: 54.266999999999996
1466
+ - type: recall_at_5
1467
+ value: 59.236999999999995
1468
+ - task:
1469
+ type: Classification
1470
+ dataset:
1471
+ type: mteb/imdb
1472
+ name: MTEB ImdbClassification
1473
+ config: default
1474
+ split: test
1475
+ revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
1476
+ metrics:
1477
+ - type: accuracy
1478
+ value: 75.958
1479
+ - type: ap
1480
+ value: 69.82869527654348
1481
+ - type: f1
1482
+ value: 75.89120903005633
1483
+ - task:
1484
+ type: Retrieval
1485
+ dataset:
1486
+ type: msmarco
1487
+ name: MTEB MSMARCO
1488
+ config: default
1489
+ split: dev
1490
+ revision: None
1491
+ metrics:
1492
+ - type: map_at_1
1493
+ value: 23.608
1494
+ - type: map_at_10
1495
+ value: 36.144
1496
+ - type: map_at_100
1497
+ value: 37.244
1498
+ - type: map_at_1000
1499
+ value: 37.291999999999994
1500
+ - type: map_at_3
1501
+ value: 32.287
1502
+ - type: map_at_5
1503
+ value: 34.473
1504
+ - type: mrr_at_1
1505
+ value: 24.226
1506
+ - type: mrr_at_10
1507
+ value: 36.711
1508
+ - type: mrr_at_100
1509
+ value: 37.758
1510
+ - type: mrr_at_1000
1511
+ value: 37.8
1512
+ - type: mrr_at_3
1513
+ value: 32.92
1514
+ - type: mrr_at_5
1515
+ value: 35.104
1516
+ - type: ndcg_at_1
1517
+ value: 24.269
1518
+ - type: ndcg_at_10
1519
+ value: 43.138
1520
+ - type: ndcg_at_100
1521
+ value: 48.421
1522
+ - type: ndcg_at_1000
1523
+ value: 49.592000000000006
1524
+ - type: ndcg_at_3
1525
+ value: 35.269
1526
+ - type: ndcg_at_5
1527
+ value: 39.175
1528
+ - type: precision_at_1
1529
+ value: 24.269
1530
+ - type: precision_at_10
1531
+ value: 6.755999999999999
1532
+ - type: precision_at_100
1533
+ value: 0.941
1534
+ - type: precision_at_1000
1535
+ value: 0.104
1536
+ - type: precision_at_3
1537
+ value: 14.938
1538
+ - type: precision_at_5
1539
+ value: 10.934000000000001
1540
+ - type: recall_at_1
1541
+ value: 23.608
1542
+ - type: recall_at_10
1543
+ value: 64.679
1544
+ - type: recall_at_100
1545
+ value: 89.027
1546
+ - type: recall_at_1000
1547
+ value: 97.91
1548
+ - type: recall_at_3
1549
+ value: 43.25
1550
+ - type: recall_at_5
1551
+ value: 52.617000000000004
1552
+ - task:
1553
+ type: Classification
1554
+ dataset:
1555
+ type: mteb/mtop_domain
1556
+ name: MTEB MTOPDomainClassification (en)
1557
+ config: en
1558
+ split: test
1559
+ revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
1560
+ metrics:
1561
+ - type: accuracy
1562
+ value: 93.21477428180576
1563
+ - type: f1
1564
+ value: 92.92502305092152
1565
+ - task:
1566
+ type: Classification
1567
+ dataset:
1568
+ type: mteb/mtop_intent
1569
+ name: MTEB MTOPIntentClassification (en)
1570
+ config: en
1571
+ split: test
1572
+ revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
1573
+ metrics:
1574
+ - type: accuracy
1575
+ value: 74.76744186046511
1576
+ - type: f1
1577
+ value: 59.19855520057899
1578
+ - task:
1579
+ type: Classification
1580
+ dataset:
1581
+ type: mteb/amazon_massive_intent
1582
+ name: MTEB MassiveIntentClassification (en)
1583
+ config: en
1584
+ split: test
1585
+ revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
1586
+ metrics:
1587
+ - type: accuracy
1588
+ value: 72.24613315400134
1589
+ - type: f1
1590
+ value: 70.19950395651232
1591
+ - task:
1592
+ type: Classification
1593
+ dataset:
1594
+ type: mteb/amazon_massive_scenario
1595
+ name: MTEB MassiveScenarioClassification (en)
1596
+ config: en
1597
+ split: test
1598
+ revision: 7d571f92784cd94a019292a1f45445077d0ef634
1599
+ metrics:
1600
+ - type: accuracy
1601
+ value: 76.75857431069268
1602
+ - type: f1
1603
+ value: 76.5433450230191
1604
+ - task:
1605
+ type: Clustering
1606
+ dataset:
1607
+ type: mteb/medrxiv-clustering-p2p
1608
+ name: MTEB MedrxivClusteringP2P
1609
+ config: default
1610
+ split: test
1611
+ revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
1612
+ metrics:
1613
+ - type: v_measure
1614
+ value: 31.525463791623604
1615
+ - task:
1616
+ type: Clustering
1617
+ dataset:
1618
+ type: mteb/medrxiv-clustering-s2s
1619
+ name: MTEB MedrxivClusteringS2S
1620
+ config: default
1621
+ split: test
1622
+ revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
1623
+ metrics:
1624
+ - type: v_measure
1625
+ value: 28.28695907385136
1626
+ - task:
1627
+ type: Reranking
1628
+ dataset:
1629
+ type: mteb/mind_small
1630
+ name: MTEB MindSmallReranking
1631
+ config: default
1632
+ split: test
1633
+ revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
1634
+ metrics:
1635
+ - type: map
1636
+ value: 30.068174046665224
1637
+ - type: mrr
1638
+ value: 30.827586642840803
1639
+ - task:
1640
+ type: Retrieval
1641
+ dataset:
1642
+ type: nfcorpus
1643
+ name: MTEB NFCorpus
1644
+ config: default
1645
+ split: test
1646
+ revision: None
1647
+ metrics:
1648
+ - type: map_at_1
1649
+ value: 6.322
1650
+ - type: map_at_10
1651
+ value: 13.919999999999998
1652
+ - type: map_at_100
1653
+ value: 17.416
1654
+ - type: map_at_1000
1655
+ value: 18.836
1656
+ - type: map_at_3
1657
+ value: 10.111
1658
+ - type: map_at_5
1659
+ value: 11.991999999999999
1660
+ - type: mrr_at_1
1661
+ value: 48.297000000000004
1662
+ - type: mrr_at_10
1663
+ value: 57.114
1664
+ - type: mrr_at_100
1665
+ value: 57.713
1666
+ - type: mrr_at_1000
1667
+ value: 57.751
1668
+ - type: mrr_at_3
1669
+ value: 55.108000000000004
1670
+ - type: mrr_at_5
1671
+ value: 56.533
1672
+ - type: ndcg_at_1
1673
+ value: 46.44
1674
+ - type: ndcg_at_10
1675
+ value: 36.589
1676
+ - type: ndcg_at_100
1677
+ value: 33.202
1678
+ - type: ndcg_at_1000
1679
+ value: 41.668
1680
+ - type: ndcg_at_3
1681
+ value: 41.302
1682
+ - type: ndcg_at_5
1683
+ value: 39.829
1684
+ - type: precision_at_1
1685
+ value: 47.988
1686
+ - type: precision_at_10
1687
+ value: 27.059
1688
+ - type: precision_at_100
1689
+ value: 8.235000000000001
1690
+ - type: precision_at_1000
1691
+ value: 2.091
1692
+ - type: precision_at_3
1693
+ value: 38.184000000000005
1694
+ - type: precision_at_5
1695
+ value: 34.365
1696
+ - type: recall_at_1
1697
+ value: 6.322
1698
+ - type: recall_at_10
1699
+ value: 18.288
1700
+ - type: recall_at_100
1701
+ value: 32.580999999999996
1702
+ - type: recall_at_1000
1703
+ value: 63.605999999999995
1704
+ - type: recall_at_3
1705
+ value: 11.266
1706
+ - type: recall_at_5
1707
+ value: 14.69
1708
+ - task:
1709
+ type: Retrieval
1710
+ dataset:
1711
+ type: nq
1712
+ name: MTEB NQ
1713
+ config: default
1714
+ split: test
1715
+ revision: None
1716
+ metrics:
1717
+ - type: map_at_1
1718
+ value: 36.586999999999996
1719
+ - type: map_at_10
1720
+ value: 52.464
1721
+ - type: map_at_100
1722
+ value: 53.384
1723
+ - type: map_at_1000
1724
+ value: 53.405
1725
+ - type: map_at_3
1726
+ value: 48.408
1727
+ - type: map_at_5
1728
+ value: 50.788999999999994
1729
+ - type: mrr_at_1
1730
+ value: 40.904
1731
+ - type: mrr_at_10
1732
+ value: 54.974000000000004
1733
+ - type: mrr_at_100
1734
+ value: 55.60699999999999
1735
+ - type: mrr_at_1000
1736
+ value: 55.623
1737
+ - type: mrr_at_3
1738
+ value: 51.73799999999999
1739
+ - type: mrr_at_5
1740
+ value: 53.638
1741
+ - type: ndcg_at_1
1742
+ value: 40.904
1743
+ - type: ndcg_at_10
1744
+ value: 59.965999999999994
1745
+ - type: ndcg_at_100
1746
+ value: 63.613
1747
+ - type: ndcg_at_1000
1748
+ value: 64.064
1749
+ - type: ndcg_at_3
1750
+ value: 52.486
1751
+ - type: ndcg_at_5
1752
+ value: 56.377
1753
+ - type: precision_at_1
1754
+ value: 40.904
1755
+ - type: precision_at_10
1756
+ value: 9.551
1757
+ - type: precision_at_100
1758
+ value: 1.162
1759
+ - type: precision_at_1000
1760
+ value: 0.12
1761
+ - type: precision_at_3
1762
+ value: 23.552
1763
+ - type: precision_at_5
1764
+ value: 16.436999999999998
1765
+ - type: recall_at_1
1766
+ value: 36.586999999999996
1767
+ - type: recall_at_10
1768
+ value: 80.094
1769
+ - type: recall_at_100
1770
+ value: 95.515
1771
+ - type: recall_at_1000
1772
+ value: 98.803
1773
+ - type: recall_at_3
1774
+ value: 60.907
1775
+ - type: recall_at_5
1776
+ value: 69.817
1777
+ - task:
1778
+ type: Retrieval
1779
+ dataset:
1780
+ type: quora
1781
+ name: MTEB QuoraRetrieval
1782
+ config: default
1783
+ split: test
1784
+ revision: None
1785
+ metrics:
1786
+ - type: map_at_1
1787
+ value: 70.422
1788
+ - type: map_at_10
1789
+ value: 84.113
1790
+ - type: map_at_100
1791
+ value: 84.744
1792
+ - type: map_at_1000
1793
+ value: 84.762
1794
+ - type: map_at_3
1795
+ value: 81.171
1796
+ - type: map_at_5
1797
+ value: 83.039
1798
+ - type: mrr_at_1
1799
+ value: 81.12
1800
+ - type: mrr_at_10
1801
+ value: 87.277
1802
+ - type: mrr_at_100
1803
+ value: 87.384
1804
+ - type: mrr_at_1000
1805
+ value: 87.385
1806
+ - type: mrr_at_3
1807
+ value: 86.315
1808
+ - type: mrr_at_5
1809
+ value: 86.981
1810
+ - type: ndcg_at_1
1811
+ value: 81.12
1812
+ - type: ndcg_at_10
1813
+ value: 87.92
1814
+ - type: ndcg_at_100
1815
+ value: 89.178
1816
+ - type: ndcg_at_1000
1817
+ value: 89.29899999999999
1818
+ - type: ndcg_at_3
1819
+ value: 85.076
1820
+ - type: ndcg_at_5
1821
+ value: 86.67099999999999
1822
+ - type: precision_at_1
1823
+ value: 81.12
1824
+ - type: precision_at_10
1825
+ value: 13.325999999999999
1826
+ - type: precision_at_100
1827
+ value: 1.524
1828
+ - type: precision_at_1000
1829
+ value: 0.157
1830
+ - type: precision_at_3
1831
+ value: 37.16
1832
+ - type: precision_at_5
1833
+ value: 24.456
1834
+ - type: recall_at_1
1835
+ value: 70.422
1836
+ - type: recall_at_10
1837
+ value: 95.00800000000001
1838
+ - type: recall_at_100
1839
+ value: 99.38
1840
+ - type: recall_at_1000
1841
+ value: 99.94800000000001
1842
+ - type: recall_at_3
1843
+ value: 86.809
1844
+ - type: recall_at_5
1845
+ value: 91.334
1846
+ - task:
1847
+ type: Clustering
1848
+ dataset:
1849
+ type: mteb/reddit-clustering
1850
+ name: MTEB RedditClustering
1851
+ config: default
1852
+ split: test
1853
+ revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
1854
+ metrics:
1855
+ - type: v_measure
1856
+ value: 48.18491891699636
1857
+ - task:
1858
+ type: Clustering
1859
+ dataset:
1860
+ type: mteb/reddit-clustering-p2p
1861
+ name: MTEB RedditClusteringP2P
1862
+ config: default
1863
+ split: test
1864
+ revision: 282350215ef01743dc01b456c7f5241fa8937f16
1865
+ metrics:
1866
+ - type: v_measure
1867
+ value: 62.190639679711914
1868
+ - task:
1869
+ type: Retrieval
1870
+ dataset:
1871
+ type: scidocs
1872
+ name: MTEB SCIDOCS
1873
+ config: default
1874
+ split: test
1875
+ revision: None
1876
+ metrics:
1877
+ - type: map_at_1
1878
+ value: 4.478
1879
+ - type: map_at_10
1880
+ value: 11.268
1881
+ - type: map_at_100
1882
+ value: 13.129
1883
+ - type: map_at_1000
1884
+ value: 13.41
1885
+ - type: map_at_3
1886
+ value: 8.103
1887
+ - type: map_at_5
1888
+ value: 9.609
1889
+ - type: mrr_at_1
1890
+ value: 22
1891
+ - type: mrr_at_10
1892
+ value: 32.248
1893
+ - type: mrr_at_100
1894
+ value: 33.355000000000004
1895
+ - type: mrr_at_1000
1896
+ value: 33.42
1897
+ - type: mrr_at_3
1898
+ value: 29.15
1899
+ - type: mrr_at_5
1900
+ value: 30.785
1901
+ - type: ndcg_at_1
1902
+ value: 22
1903
+ - type: ndcg_at_10
1904
+ value: 18.990000000000002
1905
+ - type: ndcg_at_100
1906
+ value: 26.302999999999997
1907
+ - type: ndcg_at_1000
1908
+ value: 31.537
1909
+ - type: ndcg_at_3
1910
+ value: 18.034
1911
+ - type: ndcg_at_5
1912
+ value: 15.655
1913
+ - type: precision_at_1
1914
+ value: 22
1915
+ - type: precision_at_10
1916
+ value: 9.91
1917
+ - type: precision_at_100
1918
+ value: 2.0420000000000003
1919
+ - type: precision_at_1000
1920
+ value: 0.33
1921
+ - type: precision_at_3
1922
+ value: 16.933
1923
+ - type: precision_at_5
1924
+ value: 13.719999999999999
1925
+ - type: recall_at_1
1926
+ value: 4.478
1927
+ - type: recall_at_10
1928
+ value: 20.087
1929
+ - type: recall_at_100
1930
+ value: 41.457
1931
+ - type: recall_at_1000
1932
+ value: 67.10199999999999
1933
+ - type: recall_at_3
1934
+ value: 10.313
1935
+ - type: recall_at_5
1936
+ value: 13.927999999999999
1937
+ - task:
1938
+ type: STS
1939
+ dataset:
1940
+ type: mteb/sickr-sts
1941
+ name: MTEB SICK-R
1942
+ config: default
1943
+ split: test
1944
+ revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
1945
+ metrics:
1946
+ - type: cos_sim_pearson
1947
+ value: 84.27341574565806
1948
+ - type: cos_sim_spearman
1949
+ value: 79.66419880841734
1950
+ - type: euclidean_pearson
1951
+ value: 81.32473321838208
1952
+ - type: euclidean_spearman
1953
+ value: 79.29828832085133
1954
+ - type: manhattan_pearson
1955
+ value: 81.25554065883132
1956
+ - type: manhattan_spearman
1957
+ value: 79.23275543279853
1958
+ - task:
1959
+ type: STS
1960
+ dataset:
1961
+ type: mteb/sts12-sts
1962
+ name: MTEB STS12
1963
+ config: default
1964
+ split: test
1965
+ revision: a0d554a64d88156834ff5ae9920b964011b16384
1966
+ metrics:
1967
+ - type: cos_sim_pearson
1968
+ value: 83.40468875905418
1969
+ - type: cos_sim_spearman
1970
+ value: 74.2189990321174
1971
+ - type: euclidean_pearson
1972
+ value: 80.74376966290956
1973
+ - type: euclidean_spearman
1974
+ value: 74.97663839079335
1975
+ - type: manhattan_pearson
1976
+ value: 80.69779331646207
1977
+ - type: manhattan_spearman
1978
+ value: 75.00225252917613
1979
+ - task:
1980
+ type: STS
1981
+ dataset:
1982
+ type: mteb/sts13-sts
1983
+ name: MTEB STS13
1984
+ config: default
1985
+ split: test
1986
+ revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
1987
+ metrics:
1988
+ - type: cos_sim_pearson
1989
+ value: 82.5745290053095
1990
+ - type: cos_sim_spearman
1991
+ value: 83.31401180333397
1992
+ - type: euclidean_pearson
1993
+ value: 82.96500607325534
1994
+ - type: euclidean_spearman
1995
+ value: 83.8534967935793
1996
+ - type: manhattan_pearson
1997
+ value: 82.83112050632508
1998
+ - type: manhattan_spearman
1999
+ value: 83.70877296557838
2000
+ - task:
2001
+ type: STS
2002
+ dataset:
2003
+ type: mteb/sts14-sts
2004
+ name: MTEB STS14
2005
+ config: default
2006
+ split: test
2007
+ revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
2008
+ metrics:
2009
+ - type: cos_sim_pearson
2010
+ value: 80.67833656607704
2011
+ - type: cos_sim_spearman
2012
+ value: 78.52252410630707
2013
+ - type: euclidean_pearson
2014
+ value: 80.071189514343
2015
+ - type: euclidean_spearman
2016
+ value: 78.95143545742796
2017
+ - type: manhattan_pearson
2018
+ value: 80.0128926165121
2019
+ - type: manhattan_spearman
2020
+ value: 78.91236678732628
2021
+ - task:
2022
+ type: STS
2023
+ dataset:
2024
+ type: mteb/sts15-sts
2025
+ name: MTEB STS15
2026
+ config: default
2027
+ split: test
2028
+ revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
2029
+ metrics:
2030
+ - type: cos_sim_pearson
2031
+ value: 87.48437639980746
2032
+ - type: cos_sim_spearman
2033
+ value: 88.34876527774259
2034
+ - type: euclidean_pearson
2035
+ value: 87.64898081823888
2036
+ - type: euclidean_spearman
2037
+ value: 88.58937180804213
2038
+ - type: manhattan_pearson
2039
+ value: 87.5942417815288
2040
+ - type: manhattan_spearman
2041
+ value: 88.53013922267687
2042
+ - task:
2043
+ type: STS
2044
+ dataset:
2045
+ type: mteb/sts16-sts
2046
+ name: MTEB STS16
2047
+ config: default
2048
+ split: test
2049
+ revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
2050
+ metrics:
2051
+ - type: cos_sim_pearson
2052
+ value: 82.69189187164781
2053
+ - type: cos_sim_spearman
2054
+ value: 84.15327883572112
2055
+ - type: euclidean_pearson
2056
+ value: 83.64202266685898
2057
+ - type: euclidean_spearman
2058
+ value: 84.6219602318862
2059
+ - type: manhattan_pearson
2060
+ value: 83.53256698709998
2061
+ - type: manhattan_spearman
2062
+ value: 84.49260712904946
2063
+ - task:
2064
+ type: STS
2065
+ dataset:
2066
+ type: mteb/sts17-crosslingual-sts
2067
+ name: MTEB STS17 (en-en)
2068
+ config: en-en
2069
+ split: test
2070
+ revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
2071
+ metrics:
2072
+ - type: cos_sim_pearson
2073
+ value: 87.09508017611589
2074
+ - type: cos_sim_spearman
2075
+ value: 87.23010990417097
2076
+ - type: euclidean_pearson
2077
+ value: 87.62545569077133
2078
+ - type: euclidean_spearman
2079
+ value: 86.71152051711714
2080
+ - type: manhattan_pearson
2081
+ value: 87.5057154278377
2082
+ - type: manhattan_spearman
2083
+ value: 86.60611898281267
2084
+ - task:
2085
+ type: STS
2086
+ dataset:
2087
+ type: mteb/sts22-crosslingual-sts
2088
+ name: MTEB STS22 (en)
2089
+ config: en
2090
+ split: test
2091
+ revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
2092
+ metrics:
2093
+ - type: cos_sim_pearson
2094
+ value: 61.72129893941176
2095
+ - type: cos_sim_spearman
2096
+ value: 62.87871412069194
2097
+ - type: euclidean_pearson
2098
+ value: 63.21077648290454
2099
+ - type: euclidean_spearman
2100
+ value: 63.03263080805978
2101
+ - type: manhattan_pearson
2102
+ value: 63.20740860135976
2103
+ - type: manhattan_spearman
2104
+ value: 62.89930471802817
2105
+ - task:
2106
+ type: STS
2107
+ dataset:
2108
+ type: mteb/stsbenchmark-sts
2109
+ name: MTEB STSBenchmark
2110
+ config: default
2111
+ split: test
2112
+ revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
2113
+ metrics:
2114
+ - type: cos_sim_pearson
2115
+ value: 85.039118236799
2116
+ - type: cos_sim_spearman
2117
+ value: 86.18102563389962
2118
+ - type: euclidean_pearson
2119
+ value: 85.62977041471879
2120
+ - type: euclidean_spearman
2121
+ value: 86.02478990544347
2122
+ - type: manhattan_pearson
2123
+ value: 85.60786740521806
2124
+ - type: manhattan_spearman
2125
+ value: 85.99546210442547
2126
+ - task:
2127
+ type: Reranking
2128
+ dataset:
2129
+ type: mteb/scidocs-reranking
2130
+ name: MTEB SciDocsRR
2131
+ config: default
2132
+ split: test
2133
+ revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
2134
+ metrics:
2135
+ - type: map
2136
+ value: 82.89875069737266
2137
+ - type: mrr
2138
+ value: 95.42621322033087
2139
+ - task:
2140
+ type: Retrieval
2141
+ dataset:
2142
+ type: scifact
2143
+ name: MTEB SciFact
2144
+ config: default
2145
+ split: test
2146
+ revision: None
2147
+ metrics:
2148
+ - type: map_at_1
2149
+ value: 58.660999999999994
2150
+ - type: map_at_10
2151
+ value: 68.738
2152
+ - type: map_at_100
2153
+ value: 69.33200000000001
2154
+ - type: map_at_1000
2155
+ value: 69.352
2156
+ - type: map_at_3
2157
+ value: 66.502
2158
+ - type: map_at_5
2159
+ value: 67.686
2160
+ - type: mrr_at_1
2161
+ value: 61.667
2162
+ - type: mrr_at_10
2163
+ value: 70.003
2164
+ - type: mrr_at_100
2165
+ value: 70.441
2166
+ - type: mrr_at_1000
2167
+ value: 70.46
2168
+ - type: mrr_at_3
2169
+ value: 68.278
2170
+ - type: mrr_at_5
2171
+ value: 69.194
2172
+ - type: ndcg_at_1
2173
+ value: 61.667
2174
+ - type: ndcg_at_10
2175
+ value: 73.083
2176
+ - type: ndcg_at_100
2177
+ value: 75.56
2178
+ - type: ndcg_at_1000
2179
+ value: 76.01400000000001
2180
+ - type: ndcg_at_3
2181
+ value: 69.28699999999999
2182
+ - type: ndcg_at_5
2183
+ value: 70.85000000000001
2184
+ - type: precision_at_1
2185
+ value: 61.667
2186
+ - type: precision_at_10
2187
+ value: 9.6
2188
+ - type: precision_at_100
2189
+ value: 1.087
2190
+ - type: precision_at_1000
2191
+ value: 0.11199999999999999
2192
+ - type: precision_at_3
2193
+ value: 27.111
2194
+ - type: precision_at_5
2195
+ value: 17.467
2196
+ - type: recall_at_1
2197
+ value: 58.660999999999994
2198
+ - type: recall_at_10
2199
+ value: 85.02199999999999
2200
+ - type: recall_at_100
2201
+ value: 95.933
2202
+ - type: recall_at_1000
2203
+ value: 99.333
2204
+ - type: recall_at_3
2205
+ value: 74.506
2206
+ - type: recall_at_5
2207
+ value: 78.583
2208
+ - task:
2209
+ type: PairClassification
2210
+ dataset:
2211
+ type: mteb/sprintduplicatequestions-pairclassification
2212
+ name: MTEB SprintDuplicateQuestions
2213
+ config: default
2214
+ split: test
2215
+ revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
2216
+ metrics:
2217
+ - type: cos_sim_accuracy
2218
+ value: 99.8029702970297
2219
+ - type: cos_sim_ap
2220
+ value: 94.87673936635738
2221
+ - type: cos_sim_f1
2222
+ value: 90.00502260170768
2223
+ - type: cos_sim_precision
2224
+ value: 90.41372351160445
2225
+ - type: cos_sim_recall
2226
+ value: 89.60000000000001
2227
+ - type: dot_accuracy
2228
+ value: 99.57524752475247
2229
+ - type: dot_ap
2230
+ value: 84.81717934496321
2231
+ - type: dot_f1
2232
+ value: 78.23026646556059
2233
+ - type: dot_precision
2234
+ value: 78.66531850353893
2235
+ - type: dot_recall
2236
+ value: 77.8
2237
+ - type: euclidean_accuracy
2238
+ value: 99.8029702970297
2239
+ - type: euclidean_ap
2240
+ value: 94.74658253135284
2241
+ - type: euclidean_f1
2242
+ value: 90.08470353761834
2243
+ - type: euclidean_precision
2244
+ value: 89.77159880834161
2245
+ - type: euclidean_recall
2246
+ value: 90.4
2247
+ - type: manhattan_accuracy
2248
+ value: 99.8
2249
+ - type: manhattan_ap
2250
+ value: 94.69224030742787
2251
+ - type: manhattan_f1
2252
+ value: 89.9502487562189
2253
+ - type: manhattan_precision
2254
+ value: 89.50495049504951
2255
+ - type: manhattan_recall
2256
+ value: 90.4
2257
+ - type: max_accuracy
2258
+ value: 99.8029702970297
2259
+ - type: max_ap
2260
+ value: 94.87673936635738
2261
+ - type: max_f1
2262
+ value: 90.08470353761834
2263
+ - task:
2264
+ type: Clustering
2265
+ dataset:
2266
+ type: mteb/stackexchange-clustering
2267
+ name: MTEB StackExchangeClustering
2268
+ config: default
2269
+ split: test
2270
+ revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
2271
+ metrics:
2272
+ - type: v_measure
2273
+ value: 63.906039623153035
2274
+ - task:
2275
+ type: Clustering
2276
+ dataset:
2277
+ type: mteb/stackexchange-clustering-p2p
2278
+ name: MTEB StackExchangeClusteringP2P
2279
+ config: default
2280
+ split: test
2281
+ revision: 815ca46b2622cec33ccafc3735d572c266efdb44
2282
+ metrics:
2283
+ - type: v_measure
2284
+ value: 32.56053830923281
2285
+ - task:
2286
+ type: Reranking
2287
+ dataset:
2288
+ type: mteb/stackoverflowdupquestions-reranking
2289
+ name: MTEB StackOverflowDupQuestions
2290
+ config: default
2291
+ split: test
2292
+ revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
2293
+ metrics:
2294
+ - type: map
2295
+ value: 50.15326538775145
2296
+ - type: mrr
2297
+ value: 50.99279295051355
2298
+ - task:
2299
+ type: Summarization
2300
+ dataset:
2301
+ type: mteb/summeval
2302
+ name: MTEB SummEval
2303
+ config: default
2304
+ split: test
2305
+ revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
2306
+ metrics:
2307
+ - type: cos_sim_pearson
2308
+ value: 31.44030762047337
2309
+ - type: cos_sim_spearman
2310
+ value: 31.00910300264562
2311
+ - type: dot_pearson
2312
+ value: 26.88257194766013
2313
+ - type: dot_spearman
2314
+ value: 27.646202679013577
2315
+ - task:
2316
+ type: Retrieval
2317
+ dataset:
2318
+ type: trec-covid
2319
+ name: MTEB TRECCOVID
2320
+ config: default
2321
+ split: test
2322
+ revision: None
2323
+ metrics:
2324
+ - type: map_at_1
2325
+ value: 0.247
2326
+ - type: map_at_10
2327
+ value: 1.9429999999999998
2328
+ - type: map_at_100
2329
+ value: 10.82
2330
+ - type: map_at_1000
2331
+ value: 25.972
2332
+ - type: map_at_3
2333
+ value: 0.653
2334
+ - type: map_at_5
2335
+ value: 1.057
2336
+ - type: mrr_at_1
2337
+ value: 94
2338
+ - type: mrr_at_10
2339
+ value: 96.333
2340
+ - type: mrr_at_100
2341
+ value: 96.333
2342
+ - type: mrr_at_1000
2343
+ value: 96.333
2344
+ - type: mrr_at_3
2345
+ value: 96.333
2346
+ - type: mrr_at_5
2347
+ value: 96.333
2348
+ - type: ndcg_at_1
2349
+ value: 89
2350
+ - type: ndcg_at_10
2351
+ value: 79.63799999999999
2352
+ - type: ndcg_at_100
2353
+ value: 57.961
2354
+ - type: ndcg_at_1000
2355
+ value: 50.733
2356
+ - type: ndcg_at_3
2357
+ value: 84.224
2358
+ - type: ndcg_at_5
2359
+ value: 82.528
2360
+ - type: precision_at_1
2361
+ value: 94
2362
+ - type: precision_at_10
2363
+ value: 84.2
2364
+ - type: precision_at_100
2365
+ value: 59.36
2366
+ - type: precision_at_1000
2367
+ value: 22.738
2368
+ - type: precision_at_3
2369
+ value: 88
2370
+ - type: precision_at_5
2371
+ value: 86.8
2372
+ - type: recall_at_1
2373
+ value: 0.247
2374
+ - type: recall_at_10
2375
+ value: 2.131
2376
+ - type: recall_at_100
2377
+ value: 14.035
2378
+ - type: recall_at_1000
2379
+ value: 47.457
2380
+ - type: recall_at_3
2381
+ value: 0.6779999999999999
2382
+ - type: recall_at_5
2383
+ value: 1.124
2384
+ - task:
2385
+ type: Retrieval
2386
+ dataset:
2387
+ type: webis-touche2020
2388
+ name: MTEB Touche2020
2389
+ config: default
2390
+ split: test
2391
+ revision: None
2392
+ metrics:
2393
+ - type: map_at_1
2394
+ value: 2.603
2395
+ - type: map_at_10
2396
+ value: 11.667
2397
+ - type: map_at_100
2398
+ value: 16.474
2399
+ - type: map_at_1000
2400
+ value: 18.074
2401
+ - type: map_at_3
2402
+ value: 6.03
2403
+ - type: map_at_5
2404
+ value: 8.067
2405
+ - type: mrr_at_1
2406
+ value: 34.694
2407
+ - type: mrr_at_10
2408
+ value: 51.063
2409
+ - type: mrr_at_100
2410
+ value: 51.908
2411
+ - type: mrr_at_1000
2412
+ value: 51.908
2413
+ - type: mrr_at_3
2414
+ value: 47.959
2415
+ - type: mrr_at_5
2416
+ value: 49.694
2417
+ - type: ndcg_at_1
2418
+ value: 32.653
2419
+ - type: ndcg_at_10
2420
+ value: 28.305000000000003
2421
+ - type: ndcg_at_100
2422
+ value: 35.311
2423
+ - type: ndcg_at_1000
2424
+ value: 47.644999999999996
2425
+ - type: ndcg_at_3
2426
+ value: 32.187
2427
+ - type: ndcg_at_5
2428
+ value: 29.134999999999998
2429
+ - type: precision_at_1
2430
+ value: 34.694
2431
+ - type: precision_at_10
2432
+ value: 26.122
2433
+ - type: precision_at_100
2434
+ value: 6.755
2435
+ - type: precision_at_1000
2436
+ value: 1.467
2437
+ - type: precision_at_3
2438
+ value: 34.694
2439
+ - type: precision_at_5
2440
+ value: 30.203999999999997
2441
+ - type: recall_at_1
2442
+ value: 2.603
2443
+ - type: recall_at_10
2444
+ value: 18.716
2445
+ - type: recall_at_100
2446
+ value: 42.512
2447
+ - type: recall_at_1000
2448
+ value: 79.32000000000001
2449
+ - type: recall_at_3
2450
+ value: 7.59
2451
+ - type: recall_at_5
2452
+ value: 10.949
2453
+ - task:
2454
+ type: Classification
2455
+ dataset:
2456
+ type: mteb/toxic_conversations_50k
2457
+ name: MTEB ToxicConversationsClassification
2458
+ config: default
2459
+ split: test
2460
+ revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
2461
+ metrics:
2462
+ - type: accuracy
2463
+ value: 74.117
2464
+ - type: ap
2465
+ value: 15.89357321699319
2466
+ - type: f1
2467
+ value: 57.14385866369257
2468
+ - task:
2469
+ type: Classification
2470
+ dataset:
2471
+ type: mteb/tweet_sentiment_extraction
2472
+ name: MTEB TweetSentimentExtractionClassification
2473
+ config: default
2474
+ split: test
2475
+ revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
2476
+ metrics:
2477
+ - type: accuracy
2478
+ value: 61.38370118845502
2479
+ - type: f1
2480
+ value: 61.67038693866553
2481
+ - task:
2482
+ type: Clustering
2483
+ dataset:
2484
+ type: mteb/twentynewsgroups-clustering
2485
+ name: MTEB TwentyNewsgroupsClustering
2486
+ config: default
2487
+ split: test
2488
+ revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
2489
+ metrics:
2490
+ - type: v_measure
2491
+ value: 42.57754941537969
2492
+ - task:
2493
+ type: PairClassification
2494
+ dataset:
2495
+ type: mteb/twittersemeval2015-pairclassification
2496
+ name: MTEB TwitterSemEval2015
2497
+ config: default
2498
+ split: test
2499
+ revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
2500
+ metrics:
2501
+ - type: cos_sim_accuracy
2502
+ value: 86.1775049174465
2503
+ - type: cos_sim_ap
2504
+ value: 74.3994879581554
2505
+ - type: cos_sim_f1
2506
+ value: 69.32903671308551
2507
+ - type: cos_sim_precision
2508
+ value: 61.48193508879363
2509
+ - type: cos_sim_recall
2510
+ value: 79.47229551451187
2511
+ - type: dot_accuracy
2512
+ value: 81.65345413363534
2513
+ - type: dot_ap
2514
+ value: 59.690898346685096
2515
+ - type: dot_f1
2516
+ value: 57.27622826467499
2517
+ - type: dot_precision
2518
+ value: 51.34965473948525
2519
+ - type: dot_recall
2520
+ value: 64.74934036939314
2521
+ - type: euclidean_accuracy
2522
+ value: 86.04637301066937
2523
+ - type: euclidean_ap
2524
+ value: 74.33009001775268
2525
+ - type: euclidean_f1
2526
+ value: 69.2458374142997
2527
+ - type: euclidean_precision
2528
+ value: 64.59570580173595
2529
+ - type: euclidean_recall
2530
+ value: 74.6174142480211
2531
+ - type: manhattan_accuracy
2532
+ value: 86.11193896405793
2533
+ - type: manhattan_ap
2534
+ value: 74.2964140130421
2535
+ - type: manhattan_f1
2536
+ value: 69.11601528788066
2537
+ - type: manhattan_precision
2538
+ value: 64.86924323073363
2539
+ - type: manhattan_recall
2540
+ value: 73.95778364116094
2541
+ - type: max_accuracy
2542
+ value: 86.1775049174465
2543
+ - type: max_ap
2544
+ value: 74.3994879581554
2545
+ - type: max_f1
2546
+ value: 69.32903671308551
2547
+ - task:
2548
+ type: PairClassification
2549
+ dataset:
2550
+ type: mteb/twitterurlcorpus-pairclassification
2551
+ name: MTEB TwitterURLCorpus
2552
+ config: default
2553
+ split: test
2554
+ revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
2555
+ metrics:
2556
+ - type: cos_sim_accuracy
2557
+ value: 89.01501921061823
2558
+ - type: cos_sim_ap
2559
+ value: 85.97819287477351
2560
+ - type: cos_sim_f1
2561
+ value: 78.33882858518875
2562
+ - type: cos_sim_precision
2563
+ value: 75.49446626204926
2564
+ - type: cos_sim_recall
2565
+ value: 81.40591315060055
2566
+ - type: dot_accuracy
2567
+ value: 86.47494857763806
2568
+ - type: dot_ap
2569
+ value: 78.77420360340282
2570
+ - type: dot_f1
2571
+ value: 73.06433247936238
2572
+ - type: dot_precision
2573
+ value: 67.92140777983595
2574
+ - type: dot_recall
2575
+ value: 79.04989220819218
2576
+ - type: euclidean_accuracy
2577
+ value: 88.7297706368611
2578
+ - type: euclidean_ap
2579
+ value: 85.61550568529317
2580
+ - type: euclidean_f1
2581
+ value: 77.84805525263539
2582
+ - type: euclidean_precision
2583
+ value: 73.73639994491117
2584
+ - type: euclidean_recall
2585
+ value: 82.44533415460425
2586
+ - type: manhattan_accuracy
2587
+ value: 88.75111576823068
2588
+ - type: manhattan_ap
2589
+ value: 85.58701671476263
2590
+ - type: manhattan_f1
2591
+ value: 77.70169909067856
2592
+ - type: manhattan_precision
2593
+ value: 73.37666780704755
2594
+ - type: manhattan_recall
2595
+ value: 82.5685247921158
2596
+ - type: max_accuracy
2597
+ value: 89.01501921061823
2598
+ - type: max_ap
2599
+ value: 85.97819287477351
2600
+ - type: max_f1
2601
+ value: 78.33882858518875
2602
+ language:
2603
+ - en
2604
  license: mit
2605
  ---
2606
+
2607
+ ## E5-base
2608
+
2609
+ **News (May 2023): please switch to [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2), which has better performance and the same method of usage.**
2610
+
2611
+ [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
2612
+ Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
2613
+
2614
+ This model has 12 layers and an embedding size of 768.
2615
+
2616
+ ## Usage
2617
+
2618
+ Below is an example of encoding queries and passages from the MS-MARCO passage ranking dataset.
2619
+
2620
+ ```python
2621
+ import torch.nn.functional as F
2622
+
2623
+ from torch import Tensor
2624
+ from transformers import AutoTokenizer, AutoModel
2625
+
2626
+
2627
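+ # Mean-pool the token embeddings over the sequence length, ignoring padded positions via the attention mask.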
+ def average_pool(last_hidden_states: Tensor,
2628
+                  attention_mask: Tensor) -> Tensor:
2629
+     last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
2630
+     return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
2631
+
2632
+
2633
+ # Each input text should start with "query: " or "passage: ".
2634
+ # For tasks other than retrieval, you can simply use the "query: " prefix.
2635
+ input_texts = ['query: how much protein should a female eat',
2636
+                'query: summit define',
2637
+                "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
2638
+                "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
2639
+
2640
+ tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base')
2641
+ model = AutoModel.from_pretrained('intfloat/e5-base')
2642
+
2643
+ # Tokenize the input texts
2644
+ batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
2645
+
2646
+ outputs = model(**batch_dict)
2647
+ embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
2648
+
2649
+ # normalize embeddings
2650
+ embeddings = F.normalize(embeddings, p=2, dim=1)
2651
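+ # 2 x 2 matrix of query-passage cosine similarities, scaled by 100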
+ scores = (embeddings[:2] @ embeddings[2:].T) * 100
2652
+ print(scores.tolist())
2653
+ ```
2654
+
2655
+ ## Training Details
2656
+
2657
+ Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
2658
+
2659
+ ## Benchmark Evaluation
2660
+
2661
+ Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
2662
+ on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmarks.
2663
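+
+ As a lighter-weight alternative, the snippet below is a minimal sketch of scoring this model on a single MTEB task with the community `mteb` package (`pip install mteb`). The task name and the exact `MTEB`/`run` arguments are assumptions that may differ between `mteb` versions, and `sentence_transformers` will not add the "query: "/"passage: " prefixes for you, so the resulting numbers can deviate from those reported in this card.
+
+ ```python
+ from mteb import MTEB
+ from sentence_transformers import SentenceTransformer
+
+ # Hypothetical quick check on one STS task; results are written as JSON to the output folder.
+ model = SentenceTransformer('intfloat/e5-base')
+ evaluation = MTEB(tasks=["STSBenchmark"])
+ evaluation.run(model, output_folder="results/e5-base")
+ ```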
+
2664
+ ## Support for Sentence Transformers
2665
+
2666
+ Below is an example of using this model with `sentence_transformers`.
2667
+ ```python
2668
+ from sentence_transformers import SentenceTransformer
2669
+ model = SentenceTransformer('intfloat/e5-base')
2670
+ input_texts = [
2671
+     'query: how much protein should a female eat',
2672
+     'query: summit define',
2673
+     "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
2674
+     "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
2675
+ ]
2676
+ embeddings = model.encode(input_texts, normalize_embeddings=True)
2677
+ ```
2678
+
2679
+ Package requirements:
2680
+
2681
+ `pip install sentence_transformers~=2.2.2`
2682
+
2683
+ Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
2684
+
2685
+ ## FAQ
2686
+
2687
+ **1. Do I need to add the prefixes "query: " and "passage: " to input texts?**
2688
+
2689
+ Yes, this is how the model is trained; otherwise you will see a performance degradation.
2690
+
2691
+ Here are some rules of thumb:
2692
+ - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
2693
+
2694
+ - Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
2695
+
2696
+ - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
2697
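+
+ For example, for a symmetric task such as semantic similarity, a minimal sketch (the example sentences below are made up for illustration, not from the original card):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer('intfloat/e5-base')
+ # Symmetric task: the same "query: " prefix goes on both sides.
+ sentences = ['query: A man is eating food.',
+              'query: A man is eating a piece of bread.']
+ embeddings = model.encode(sentences, normalize_embeddings=True)
+ similarity = embeddings[0] @ embeddings[1]  # cosine similarity, since embeddings are L2-normalized
+ print(similarity)
+ ```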
+
2698
+ **2. Why are my reproduced results slightly different from those reported in the model card?**
2699
+
2700
+ Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
2701
+
2702
+ **3. Why do the cosine similarity scores distribute around 0.7 to 1.0?**
2703
+
2704
+ This is a known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
2705
+
2706
+ For text embedding tasks like text retrieval or semantic similarity,
2707
+ what matters is the relative order of the scores instead of the absolute values,
2708
+ so this should not be an issue.
2709
+
2710
+ ## Citation
2711
+
2712
+ If you find our paper or models helpful, please consider citing us as follows:
2713
+
2714
+ ```
2715
+ @article{wang2022text,
2716
+   title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
2717
+   author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
2718
+   journal={arXiv preprint arXiv:2212.03533},
2719
+   year={2022}
2720
+ }
2721
+ ```
2722
+
2723
+ ## Limitations
2724
+
2725
+ This model only works for English texts. Long texts will be truncated to at most 512 tokens.
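+
+ A minimal sketch for checking whether an input will be truncated (the 512-token limit includes the prefix and the special tokens; `long_text` below is a placeholder, not part of the original card):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base')
+ long_text = "passage: " + "your long document goes here"  # placeholder input
+ n_tokens = len(tokenizer(long_text)['input_ids'])
+ if n_tokens > 512:
+     print(f"{n_tokens} tokens: everything beyond the first 512 will be ignored at encoding time")
+ ```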
config.json ADDED
@@ -0,0 +1,26 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "_name_or_path": "tmp/",
3
+ "architectures": [
4
+ "BertModel"
5
+ ],
6
+ "attention_probs_dropout_prob": 0.1,
7
+ "classifier_dropout": null,
8
+ "gradient_checkpointing": false,
9
+ "hidden_act": "gelu",
10
+ "hidden_dropout_prob": 0.1,
11
+ "hidden_size": 768,
12
+ "initializer_range": 0.02,
13
+ "intermediate_size": 3072,
14
+ "layer_norm_eps": 1e-12,
15
+ "max_position_embeddings": 512,
16
+ "model_type": "bert",
17
+ "num_attention_heads": 12,
18
+ "num_hidden_layers": 12,
19
+ "pad_token_id": 0,
20
+ "position_embedding_type": "absolute",
21
+ "torch_dtype": "float32",
22
+ "transformers_version": "4.15.0",
23
+ "type_vocab_size": 2,
24
+ "use_cache": true,
25
+ "vocab_size": 30522
26
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1661c4d3b0de6a7e37821bcbab5f066c3499346e05cae7918094b0cd8dd34a02
3
+ size 437955512
modules.json ADDED
@@ -0,0 +1,20 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [
2
+ {
3
+ "idx": 0,
4
+ "name": "0",
5
+ "path": "",
6
+ "type": "sentence_transformers.models.Transformer"
7
+ },
8
+ {
9
+ "idx": 1,
10
+ "name": "1",
11
+ "path": "1_Pooling",
12
+ "type": "sentence_transformers.models.Pooling"
13
+ },
14
+ {
15
+ "idx": 2,
16
+ "name": "2",
17
+ "path": "2_Normalize",
18
+ "type": "sentence_transformers.models.Normalize"
19
+ }
20
+ ]
onnx/model.onnx ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a0c66860595a23e369b583295e7a21ef18fd5a44ebe52a8fdbeb58e1e08be875
3
+ size 435811539
onnx/model_quantized.onnx ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5dac57e73b6097d8335432adaaac25b517ef363c39f6399b3b7167bbda37e5be
3
+ size 110083338
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bd623a40c8b841b7c99a464e32e6629d19935a52d123d1ebda7b26606b5de637
3
+ size 438007537
quantize_config.json ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "per_channel": true,
3
+ "reduce_range": true,
4
+ "per_model_config": {
5
+ "model": {
6
+ "op_types": [
7
+ "Gather",
8
+ "Pow",
9
+ "Concat",
10
+ "Softmax",
11
+ "Unsqueeze",
12
+ "Sub",
13
+ "Reshape",
14
+ "Mul",
15
+ "Erf",
16
+ "ReduceMean",
17
+ "Sqrt",
18
+ "Constant",
19
+ "Add",
20
+ "Shape",
21
+ "MatMul",
22
+ "Div",
23
+ "Transpose",
24
+ "Slice",
25
+ "Cast"
26
+ ],
27
+ "weight_type": "QInt8"
28
+ }
29
+ }
30
+ }
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "max_seq_length": 512,
3
+ "do_lower_case": false
4
+ }
special_tokens_map.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "amlt/1031_add_qd_prompt_ft_random_swap_nli/all_kd_ft", "tokenizer_class": "BertTokenizer"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff