nanmoon committed on
Commit 7242800 (1 parent: 41b0193)

e5-base-v2

.gitattributes CHANGED
@@ -25,7 +25,6 @@
  *.safetensors filter=lfs diff=lfs merge=lfs -text
  saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
  *.tflite filter=lfs diff=lfs merge=lfs -text
  *.tgz filter=lfs diff=lfs merge=lfs -text
  *.wasm filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "word_embedding_dimension": 768,
+     "pooling_mode_cls_token": false,
+     "pooling_mode_mean_tokens": true,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false
+ }
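This pooling configuration says the sentence embedding is the mean of the 768-dimensional token embeddings (CLS, max, and sqrt-length pooling are all disabled). Below is a minimal sketch of that pooling step using plain transformers; the model id intfloat/e5-base-v2, the "query:"/"passage:" prefixes, and the mean_pool helper are illustrative assumptions, not part of this commit.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Assumed model id; substitute the local path of this repository if it differs.
MODEL_ID = "intfloat/e5-base-v2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # pooling_mode_mean_tokens: average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

# E5-style "query:"/"passage:" prefixes, shown here only as an illustration.
texts = [
    "query: how much protein should a female eat",
    "passage: The CDC recommends 46 grams of protein per day for women.",
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)

embeddings = mean_pool(output.last_hidden_state, batch["attention_mask"])  # shape (2, 768)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings @ embeddings.T)  # cosine similarity between the two texts
```

When the repository is loaded through the sentence-transformers library, this 1_Pooling/config.json is what configures the Pooling module, so the same averaging is applied automatically.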
README.md CHANGED
@@ -1,3 +1,2723 @@
1
  ---
2
  license: mit
3
  ---
1
  ---
2
+ tags:
3
+ - mteb
4
+ - Sentence Transformers
5
+ - sentence-similarity
6
+ - sentence-transformers
7
+ model-index:
8
+ - name: e5-base-v2
9
+ results:
10
+ - task:
11
+ type: Classification
12
+ dataset:
13
+ type: mteb/amazon_counterfactual
14
+ name: MTEB AmazonCounterfactualClassification (en)
15
+ config: en
16
+ split: test
17
+ revision: e8379541af4e31359cca9fbcf4b00f2671dba205
18
+ metrics:
19
+ - type: accuracy
20
+ value: 77.77611940298506
21
+ - type: ap
22
+ value: 42.052710266606056
23
+ - type: f1
24
+ value: 72.12040628266567
25
+ - task:
26
+ type: Classification
27
+ dataset:
28
+ type: mteb/amazon_polarity
29
+ name: MTEB AmazonPolarityClassification
30
+ config: default
31
+ split: test
32
+ revision: e2d317d38cd51312af73b3d32a06d1a08b442046
33
+ metrics:
34
+ - type: accuracy
35
+ value: 92.81012500000001
36
+ - type: ap
37
+ value: 89.4213700757244
38
+ - type: f1
39
+ value: 92.8039091197065
40
+ - task:
41
+ type: Classification
42
+ dataset:
43
+ type: mteb/amazon_reviews_multi
44
+ name: MTEB AmazonReviewsClassification (en)
45
+ config: en
46
+ split: test
47
+ revision: 1399c76144fd37290681b995c656ef9b2e06e26d
48
+ metrics:
49
+ - type: accuracy
50
+ value: 46.711999999999996
51
+ - type: f1
52
+ value: 46.11544975436018
53
+ - task:
54
+ type: Retrieval
55
+ dataset:
56
+ type: arguana
57
+ name: MTEB ArguAna
58
+ config: default
59
+ split: test
60
+ revision: None
61
+ metrics:
62
+ - type: map_at_1
63
+ value: 23.186
64
+ - type: map_at_10
65
+ value: 36.632999999999996
66
+ - type: map_at_100
67
+ value: 37.842
68
+ - type: map_at_1000
69
+ value: 37.865
70
+ - type: map_at_3
71
+ value: 32.278
72
+ - type: map_at_5
73
+ value: 34.760999999999996
74
+ - type: mrr_at_1
75
+ value: 23.400000000000002
76
+ - type: mrr_at_10
77
+ value: 36.721
78
+ - type: mrr_at_100
79
+ value: 37.937
80
+ - type: mrr_at_1000
81
+ value: 37.96
82
+ - type: mrr_at_3
83
+ value: 32.302
84
+ - type: mrr_at_5
85
+ value: 34.894
86
+ - type: ndcg_at_1
87
+ value: 23.186
88
+ - type: ndcg_at_10
89
+ value: 44.49
90
+ - type: ndcg_at_100
91
+ value: 50.065000000000005
92
+ - type: ndcg_at_1000
93
+ value: 50.629999999999995
94
+ - type: ndcg_at_3
95
+ value: 35.461
96
+ - type: ndcg_at_5
97
+ value: 39.969
98
+ - type: precision_at_1
99
+ value: 23.186
100
+ - type: precision_at_10
101
+ value: 6.97
102
+ - type: precision_at_100
103
+ value: 0.951
104
+ - type: precision_at_1000
105
+ value: 0.099
106
+ - type: precision_at_3
107
+ value: 14.912
108
+ - type: precision_at_5
109
+ value: 11.152
110
+ - type: recall_at_1
111
+ value: 23.186
112
+ - type: recall_at_10
113
+ value: 69.70100000000001
114
+ - type: recall_at_100
115
+ value: 95.092
116
+ - type: recall_at_1000
117
+ value: 99.431
118
+ - type: recall_at_3
119
+ value: 44.737
120
+ - type: recall_at_5
121
+ value: 55.761
122
+ - task:
123
+ type: Clustering
124
+ dataset:
125
+ type: mteb/arxiv-clustering-p2p
126
+ name: MTEB ArxivClusteringP2P
127
+ config: default
128
+ split: test
129
+ revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
130
+ metrics:
131
+ - type: v_measure
132
+ value: 46.10312401440185
133
+ - task:
134
+ type: Clustering
135
+ dataset:
136
+ type: mteb/arxiv-clustering-s2s
137
+ name: MTEB ArxivClusteringS2S
138
+ config: default
139
+ split: test
140
+ revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
141
+ metrics:
142
+ - type: v_measure
143
+ value: 39.67275326095384
144
+ - task:
145
+ type: Reranking
146
+ dataset:
147
+ type: mteb/askubuntudupquestions-reranking
148
+ name: MTEB AskUbuntuDupQuestions
149
+ config: default
150
+ split: test
151
+ revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
152
+ metrics:
153
+ - type: map
154
+ value: 58.97793816337376
155
+ - type: mrr
156
+ value: 72.76832431957087
157
+ - task:
158
+ type: STS
159
+ dataset:
160
+ type: mteb/biosses-sts
161
+ name: MTEB BIOSSES
162
+ config: default
163
+ split: test
164
+ revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
165
+ metrics:
166
+ - type: cos_sim_pearson
167
+ value: 83.11646947018187
168
+ - type: cos_sim_spearman
169
+ value: 81.40064994975234
170
+ - type: euclidean_pearson
171
+ value: 82.37355689019232
172
+ - type: euclidean_spearman
173
+ value: 81.6777646977348
174
+ - type: manhattan_pearson
175
+ value: 82.61101422716945
176
+ - type: manhattan_spearman
177
+ value: 81.80427360442245
178
+ - task:
179
+ type: Classification
180
+ dataset:
181
+ type: mteb/banking77
182
+ name: MTEB Banking77Classification
183
+ config: default
184
+ split: test
185
+ revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
186
+ metrics:
187
+ - type: accuracy
188
+ value: 83.52922077922076
189
+ - type: f1
190
+ value: 83.45298679360866
191
+ - task:
192
+ type: Clustering
193
+ dataset:
194
+ type: mteb/biorxiv-clustering-p2p
195
+ name: MTEB BiorxivClusteringP2P
196
+ config: default
197
+ split: test
198
+ revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
199
+ metrics:
200
+ - type: v_measure
201
+ value: 37.495115019668496
202
+ - task:
203
+ type: Clustering
204
+ dataset:
205
+ type: mteb/biorxiv-clustering-s2s
206
+ name: MTEB BiorxivClusteringS2S
207
+ config: default
208
+ split: test
209
+ revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
210
+ metrics:
211
+ - type: v_measure
212
+ value: 32.724792944166765
213
+ - task:
214
+ type: Retrieval
215
+ dataset:
216
+ type: BeIR/cqadupstack
217
+ name: MTEB CQADupstackAndroidRetrieval
218
+ config: default
219
+ split: test
220
+ revision: None
221
+ metrics:
222
+ - type: map_at_1
223
+ value: 32.361000000000004
224
+ - type: map_at_10
225
+ value: 43.765
226
+ - type: map_at_100
227
+ value: 45.224
228
+ - type: map_at_1000
229
+ value: 45.35
230
+ - type: map_at_3
231
+ value: 40.353
232
+ - type: map_at_5
233
+ value: 42.195
234
+ - type: mrr_at_1
235
+ value: 40.629
236
+ - type: mrr_at_10
237
+ value: 50.458000000000006
238
+ - type: mrr_at_100
239
+ value: 51.06699999999999
240
+ - type: mrr_at_1000
241
+ value: 51.12
242
+ - type: mrr_at_3
243
+ value: 47.902
244
+ - type: mrr_at_5
245
+ value: 49.447
246
+ - type: ndcg_at_1
247
+ value: 40.629
248
+ - type: ndcg_at_10
249
+ value: 50.376
250
+ - type: ndcg_at_100
251
+ value: 55.065
252
+ - type: ndcg_at_1000
253
+ value: 57.196000000000005
254
+ - type: ndcg_at_3
255
+ value: 45.616
256
+ - type: ndcg_at_5
257
+ value: 47.646
258
+ - type: precision_at_1
259
+ value: 40.629
260
+ - type: precision_at_10
261
+ value: 9.785
262
+ - type: precision_at_100
263
+ value: 1.562
264
+ - type: precision_at_1000
265
+ value: 0.2
266
+ - type: precision_at_3
267
+ value: 22.031
268
+ - type: precision_at_5
269
+ value: 15.737000000000002
270
+ - type: recall_at_1
271
+ value: 32.361000000000004
272
+ - type: recall_at_10
273
+ value: 62.214000000000006
274
+ - type: recall_at_100
275
+ value: 81.464
276
+ - type: recall_at_1000
277
+ value: 95.905
278
+ - type: recall_at_3
279
+ value: 47.5
280
+ - type: recall_at_5
281
+ value: 53.69500000000001
282
+ - task:
283
+ type: Retrieval
284
+ dataset:
285
+ type: BeIR/cqadupstack
286
+ name: MTEB CQADupstackEnglishRetrieval
287
+ config: default
288
+ split: test
289
+ revision: None
290
+ metrics:
291
+ - type: map_at_1
292
+ value: 27.971
293
+ - type: map_at_10
294
+ value: 37.444
295
+ - type: map_at_100
296
+ value: 38.607
297
+ - type: map_at_1000
298
+ value: 38.737
299
+ - type: map_at_3
300
+ value: 34.504000000000005
301
+ - type: map_at_5
302
+ value: 36.234
303
+ - type: mrr_at_1
304
+ value: 35.35
305
+ - type: mrr_at_10
306
+ value: 43.441
307
+ - type: mrr_at_100
308
+ value: 44.147999999999996
309
+ - type: mrr_at_1000
310
+ value: 44.196000000000005
311
+ - type: mrr_at_3
312
+ value: 41.285
313
+ - type: mrr_at_5
314
+ value: 42.552
315
+ - type: ndcg_at_1
316
+ value: 35.35
317
+ - type: ndcg_at_10
318
+ value: 42.903999999999996
319
+ - type: ndcg_at_100
320
+ value: 47.406
321
+ - type: ndcg_at_1000
322
+ value: 49.588
323
+ - type: ndcg_at_3
324
+ value: 38.778
325
+ - type: ndcg_at_5
326
+ value: 40.788000000000004
327
+ - type: precision_at_1
328
+ value: 35.35
329
+ - type: precision_at_10
330
+ value: 8.083
331
+ - type: precision_at_100
332
+ value: 1.313
333
+ - type: precision_at_1000
334
+ value: 0.18
335
+ - type: precision_at_3
336
+ value: 18.769
337
+ - type: precision_at_5
338
+ value: 13.439
339
+ - type: recall_at_1
340
+ value: 27.971
341
+ - type: recall_at_10
342
+ value: 52.492000000000004
343
+ - type: recall_at_100
344
+ value: 71.642
345
+ - type: recall_at_1000
346
+ value: 85.488
347
+ - type: recall_at_3
348
+ value: 40.1
349
+ - type: recall_at_5
350
+ value: 45.800000000000004
351
+ - task:
352
+ type: Retrieval
353
+ dataset:
354
+ type: BeIR/cqadupstack
355
+ name: MTEB CQADupstackGamingRetrieval
356
+ config: default
357
+ split: test
358
+ revision: None
359
+ metrics:
360
+ - type: map_at_1
361
+ value: 39.898
362
+ - type: map_at_10
363
+ value: 51.819
364
+ - type: map_at_100
365
+ value: 52.886
366
+ - type: map_at_1000
367
+ value: 52.941
368
+ - type: map_at_3
369
+ value: 48.619
370
+ - type: map_at_5
371
+ value: 50.493
372
+ - type: mrr_at_1
373
+ value: 45.391999999999996
374
+ - type: mrr_at_10
375
+ value: 55.230000000000004
376
+ - type: mrr_at_100
377
+ value: 55.887
378
+ - type: mrr_at_1000
379
+ value: 55.916
380
+ - type: mrr_at_3
381
+ value: 52.717000000000006
382
+ - type: mrr_at_5
383
+ value: 54.222
384
+ - type: ndcg_at_1
385
+ value: 45.391999999999996
386
+ - type: ndcg_at_10
387
+ value: 57.586999999999996
388
+ - type: ndcg_at_100
389
+ value: 61.745000000000005
390
+ - type: ndcg_at_1000
391
+ value: 62.83800000000001
392
+ - type: ndcg_at_3
393
+ value: 52.207
394
+ - type: ndcg_at_5
395
+ value: 54.925999999999995
396
+ - type: precision_at_1
397
+ value: 45.391999999999996
398
+ - type: precision_at_10
399
+ value: 9.21
400
+ - type: precision_at_100
401
+ value: 1.226
402
+ - type: precision_at_1000
403
+ value: 0.136
404
+ - type: precision_at_3
405
+ value: 23.177
406
+ - type: precision_at_5
407
+ value: 16.038
408
+ - type: recall_at_1
409
+ value: 39.898
410
+ - type: recall_at_10
411
+ value: 71.18900000000001
412
+ - type: recall_at_100
413
+ value: 89.082
414
+ - type: recall_at_1000
415
+ value: 96.865
416
+ - type: recall_at_3
417
+ value: 56.907
418
+ - type: recall_at_5
419
+ value: 63.397999999999996
420
+ - task:
421
+ type: Retrieval
422
+ dataset:
423
+ type: BeIR/cqadupstack
424
+ name: MTEB CQADupstackGisRetrieval
425
+ config: default
426
+ split: test
427
+ revision: None
428
+ metrics:
429
+ - type: map_at_1
430
+ value: 22.706
431
+ - type: map_at_10
432
+ value: 30.818
433
+ - type: map_at_100
434
+ value: 32.038
435
+ - type: map_at_1000
436
+ value: 32.123000000000005
437
+ - type: map_at_3
438
+ value: 28.077
439
+ - type: map_at_5
440
+ value: 29.709999999999997
441
+ - type: mrr_at_1
442
+ value: 24.407
443
+ - type: mrr_at_10
444
+ value: 32.555
445
+ - type: mrr_at_100
446
+ value: 33.692
447
+ - type: mrr_at_1000
448
+ value: 33.751
449
+ - type: mrr_at_3
450
+ value: 29.848999999999997
451
+ - type: mrr_at_5
452
+ value: 31.509999999999998
453
+ - type: ndcg_at_1
454
+ value: 24.407
455
+ - type: ndcg_at_10
456
+ value: 35.624
457
+ - type: ndcg_at_100
458
+ value: 41.454
459
+ - type: ndcg_at_1000
460
+ value: 43.556
461
+ - type: ndcg_at_3
462
+ value: 30.217
463
+ - type: ndcg_at_5
464
+ value: 33.111000000000004
465
+ - type: precision_at_1
466
+ value: 24.407
467
+ - type: precision_at_10
468
+ value: 5.548
469
+ - type: precision_at_100
470
+ value: 0.8869999999999999
471
+ - type: precision_at_1000
472
+ value: 0.11100000000000002
473
+ - type: precision_at_3
474
+ value: 12.731
475
+ - type: precision_at_5
476
+ value: 9.22
477
+ - type: recall_at_1
478
+ value: 22.706
479
+ - type: recall_at_10
480
+ value: 48.772
481
+ - type: recall_at_100
482
+ value: 75.053
483
+ - type: recall_at_1000
484
+ value: 90.731
485
+ - type: recall_at_3
486
+ value: 34.421
487
+ - type: recall_at_5
488
+ value: 41.427
489
+ - task:
490
+ type: Retrieval
491
+ dataset:
492
+ type: BeIR/cqadupstack
493
+ name: MTEB CQADupstackMathematicaRetrieval
494
+ config: default
495
+ split: test
496
+ revision: None
497
+ metrics:
498
+ - type: map_at_1
499
+ value: 13.424
500
+ - type: map_at_10
501
+ value: 21.09
502
+ - type: map_at_100
503
+ value: 22.264999999999997
504
+ - type: map_at_1000
505
+ value: 22.402
506
+ - type: map_at_3
507
+ value: 18.312
508
+ - type: map_at_5
509
+ value: 19.874
510
+ - type: mrr_at_1
511
+ value: 16.915
512
+ - type: mrr_at_10
513
+ value: 25.258000000000003
514
+ - type: mrr_at_100
515
+ value: 26.228
516
+ - type: mrr_at_1000
517
+ value: 26.31
518
+ - type: mrr_at_3
519
+ value: 22.492
520
+ - type: mrr_at_5
521
+ value: 24.04
522
+ - type: ndcg_at_1
523
+ value: 16.915
524
+ - type: ndcg_at_10
525
+ value: 26.266000000000002
526
+ - type: ndcg_at_100
527
+ value: 32.08
528
+ - type: ndcg_at_1000
529
+ value: 35.086
530
+ - type: ndcg_at_3
531
+ value: 21.049
532
+ - type: ndcg_at_5
533
+ value: 23.508000000000003
534
+ - type: precision_at_1
535
+ value: 16.915
536
+ - type: precision_at_10
537
+ value: 5.1
538
+ - type: precision_at_100
539
+ value: 0.9329999999999999
540
+ - type: precision_at_1000
541
+ value: 0.131
542
+ - type: precision_at_3
543
+ value: 10.282
544
+ - type: precision_at_5
545
+ value: 7.836
546
+ - type: recall_at_1
547
+ value: 13.424
548
+ - type: recall_at_10
549
+ value: 38.179
550
+ - type: recall_at_100
551
+ value: 63.906
552
+ - type: recall_at_1000
553
+ value: 84.933
554
+ - type: recall_at_3
555
+ value: 23.878
556
+ - type: recall_at_5
557
+ value: 30.037999999999997
558
+ - task:
559
+ type: Retrieval
560
+ dataset:
561
+ type: BeIR/cqadupstack
562
+ name: MTEB CQADupstackPhysicsRetrieval
563
+ config: default
564
+ split: test
565
+ revision: None
566
+ metrics:
567
+ - type: map_at_1
568
+ value: 26.154
569
+ - type: map_at_10
570
+ value: 35.912
571
+ - type: map_at_100
572
+ value: 37.211
573
+ - type: map_at_1000
574
+ value: 37.327
575
+ - type: map_at_3
576
+ value: 32.684999999999995
577
+ - type: map_at_5
578
+ value: 34.562
579
+ - type: mrr_at_1
580
+ value: 32.435
581
+ - type: mrr_at_10
582
+ value: 41.411
583
+ - type: mrr_at_100
584
+ value: 42.297000000000004
585
+ - type: mrr_at_1000
586
+ value: 42.345
587
+ - type: mrr_at_3
588
+ value: 38.771
589
+ - type: mrr_at_5
590
+ value: 40.33
591
+ - type: ndcg_at_1
592
+ value: 32.435
593
+ - type: ndcg_at_10
594
+ value: 41.785
595
+ - type: ndcg_at_100
596
+ value: 47.469
597
+ - type: ndcg_at_1000
598
+ value: 49.685
599
+ - type: ndcg_at_3
600
+ value: 36.618
601
+ - type: ndcg_at_5
602
+ value: 39.101
603
+ - type: precision_at_1
604
+ value: 32.435
605
+ - type: precision_at_10
606
+ value: 7.642
607
+ - type: precision_at_100
608
+ value: 1.244
609
+ - type: precision_at_1000
610
+ value: 0.163
611
+ - type: precision_at_3
612
+ value: 17.485
613
+ - type: precision_at_5
614
+ value: 12.57
615
+ - type: recall_at_1
616
+ value: 26.154
617
+ - type: recall_at_10
618
+ value: 54.111
619
+ - type: recall_at_100
620
+ value: 78.348
621
+ - type: recall_at_1000
622
+ value: 92.996
623
+ - type: recall_at_3
624
+ value: 39.189
625
+ - type: recall_at_5
626
+ value: 45.852
627
+ - task:
628
+ type: Retrieval
629
+ dataset:
630
+ type: BeIR/cqadupstack
631
+ name: MTEB CQADupstackProgrammersRetrieval
632
+ config: default
633
+ split: test
634
+ revision: None
635
+ metrics:
636
+ - type: map_at_1
637
+ value: 26.308999999999997
638
+ - type: map_at_10
639
+ value: 35.524
640
+ - type: map_at_100
641
+ value: 36.774
642
+ - type: map_at_1000
643
+ value: 36.891
644
+ - type: map_at_3
645
+ value: 32.561
646
+ - type: map_at_5
647
+ value: 34.034
648
+ - type: mrr_at_1
649
+ value: 31.735000000000003
650
+ - type: mrr_at_10
651
+ value: 40.391
652
+ - type: mrr_at_100
653
+ value: 41.227000000000004
654
+ - type: mrr_at_1000
655
+ value: 41.288000000000004
656
+ - type: mrr_at_3
657
+ value: 37.938
658
+ - type: mrr_at_5
659
+ value: 39.193
660
+ - type: ndcg_at_1
661
+ value: 31.735000000000003
662
+ - type: ndcg_at_10
663
+ value: 41.166000000000004
664
+ - type: ndcg_at_100
665
+ value: 46.702
666
+ - type: ndcg_at_1000
667
+ value: 49.157000000000004
668
+ - type: ndcg_at_3
669
+ value: 36.274
670
+ - type: ndcg_at_5
671
+ value: 38.177
672
+ - type: precision_at_1
673
+ value: 31.735000000000003
674
+ - type: precision_at_10
675
+ value: 7.5569999999999995
676
+ - type: precision_at_100
677
+ value: 1.2109999999999999
678
+ - type: precision_at_1000
679
+ value: 0.16
680
+ - type: precision_at_3
681
+ value: 17.199
682
+ - type: precision_at_5
683
+ value: 12.123000000000001
684
+ - type: recall_at_1
685
+ value: 26.308999999999997
686
+ - type: recall_at_10
687
+ value: 53.083000000000006
688
+ - type: recall_at_100
689
+ value: 76.922
690
+ - type: recall_at_1000
691
+ value: 93.767
692
+ - type: recall_at_3
693
+ value: 39.262
694
+ - type: recall_at_5
695
+ value: 44.413000000000004
696
+ - task:
697
+ type: Retrieval
698
+ dataset:
699
+ type: BeIR/cqadupstack
700
+ name: MTEB CQADupstackRetrieval
701
+ config: default
702
+ split: test
703
+ revision: None
704
+ metrics:
705
+ - type: map_at_1
706
+ value: 24.391250000000003
707
+ - type: map_at_10
708
+ value: 33.280166666666666
709
+ - type: map_at_100
710
+ value: 34.49566666666667
711
+ - type: map_at_1000
712
+ value: 34.61533333333333
713
+ - type: map_at_3
714
+ value: 30.52183333333333
715
+ - type: map_at_5
716
+ value: 32.06608333333333
717
+ - type: mrr_at_1
718
+ value: 29.105083333333337
719
+ - type: mrr_at_10
720
+ value: 37.44766666666666
721
+ - type: mrr_at_100
722
+ value: 38.32491666666667
723
+ - type: mrr_at_1000
724
+ value: 38.385666666666665
725
+ - type: mrr_at_3
726
+ value: 35.06883333333333
727
+ - type: mrr_at_5
728
+ value: 36.42066666666667
729
+ - type: ndcg_at_1
730
+ value: 29.105083333333337
731
+ - type: ndcg_at_10
732
+ value: 38.54358333333333
733
+ - type: ndcg_at_100
734
+ value: 43.833583333333344
735
+ - type: ndcg_at_1000
736
+ value: 46.215333333333334
737
+ - type: ndcg_at_3
738
+ value: 33.876
739
+ - type: ndcg_at_5
740
+ value: 36.05208333333333
741
+ - type: precision_at_1
742
+ value: 29.105083333333337
743
+ - type: precision_at_10
744
+ value: 6.823416666666665
745
+ - type: precision_at_100
746
+ value: 1.1270833333333334
747
+ - type: precision_at_1000
748
+ value: 0.15208333333333332
749
+ - type: precision_at_3
750
+ value: 15.696750000000002
751
+ - type: precision_at_5
752
+ value: 11.193499999999998
753
+ - type: recall_at_1
754
+ value: 24.391250000000003
755
+ - type: recall_at_10
756
+ value: 49.98808333333333
757
+ - type: recall_at_100
758
+ value: 73.31616666666666
759
+ - type: recall_at_1000
760
+ value: 89.96291666666667
761
+ - type: recall_at_3
762
+ value: 36.86666666666667
763
+ - type: recall_at_5
764
+ value: 42.54350000000001
765
+ - task:
766
+ type: Retrieval
767
+ dataset:
768
+ type: BeIR/cqadupstack
769
+ name: MTEB CQADupstackStatsRetrieval
770
+ config: default
771
+ split: test
772
+ revision: None
773
+ metrics:
774
+ - type: map_at_1
775
+ value: 21.995
776
+ - type: map_at_10
777
+ value: 28.807
778
+ - type: map_at_100
779
+ value: 29.813000000000002
780
+ - type: map_at_1000
781
+ value: 29.903000000000002
782
+ - type: map_at_3
783
+ value: 26.636
784
+ - type: map_at_5
785
+ value: 27.912
786
+ - type: mrr_at_1
787
+ value: 24.847
788
+ - type: mrr_at_10
789
+ value: 31.494
790
+ - type: mrr_at_100
791
+ value: 32.381
792
+ - type: mrr_at_1000
793
+ value: 32.446999999999996
794
+ - type: mrr_at_3
795
+ value: 29.473
796
+ - type: mrr_at_5
797
+ value: 30.7
798
+ - type: ndcg_at_1
799
+ value: 24.847
800
+ - type: ndcg_at_10
801
+ value: 32.818999999999996
802
+ - type: ndcg_at_100
803
+ value: 37.835
804
+ - type: ndcg_at_1000
805
+ value: 40.226
806
+ - type: ndcg_at_3
807
+ value: 28.811999999999998
808
+ - type: ndcg_at_5
809
+ value: 30.875999999999998
810
+ - type: precision_at_1
811
+ value: 24.847
812
+ - type: precision_at_10
813
+ value: 5.244999999999999
814
+ - type: precision_at_100
815
+ value: 0.856
816
+ - type: precision_at_1000
817
+ value: 0.11299999999999999
818
+ - type: precision_at_3
819
+ value: 12.577
820
+ - type: precision_at_5
821
+ value: 8.895999999999999
822
+ - type: recall_at_1
823
+ value: 21.995
824
+ - type: recall_at_10
825
+ value: 42.479
826
+ - type: recall_at_100
827
+ value: 65.337
828
+ - type: recall_at_1000
829
+ value: 83.23700000000001
830
+ - type: recall_at_3
831
+ value: 31.573
832
+ - type: recall_at_5
833
+ value: 36.684
834
+ - task:
835
+ type: Retrieval
836
+ dataset:
837
+ type: BeIR/cqadupstack
838
+ name: MTEB CQADupstackTexRetrieval
839
+ config: default
840
+ split: test
841
+ revision: None
842
+ metrics:
843
+ - type: map_at_1
844
+ value: 15.751000000000001
845
+ - type: map_at_10
846
+ value: 21.909
847
+ - type: map_at_100
848
+ value: 23.064
849
+ - type: map_at_1000
850
+ value: 23.205000000000002
851
+ - type: map_at_3
852
+ value: 20.138
853
+ - type: map_at_5
854
+ value: 20.973
855
+ - type: mrr_at_1
856
+ value: 19.305
857
+ - type: mrr_at_10
858
+ value: 25.647
859
+ - type: mrr_at_100
860
+ value: 26.659
861
+ - type: mrr_at_1000
862
+ value: 26.748
863
+ - type: mrr_at_3
864
+ value: 23.933
865
+ - type: mrr_at_5
866
+ value: 24.754
867
+ - type: ndcg_at_1
868
+ value: 19.305
869
+ - type: ndcg_at_10
870
+ value: 25.886
871
+ - type: ndcg_at_100
872
+ value: 31.56
873
+ - type: ndcg_at_1000
874
+ value: 34.799
875
+ - type: ndcg_at_3
876
+ value: 22.708000000000002
877
+ - type: ndcg_at_5
878
+ value: 23.838
879
+ - type: precision_at_1
880
+ value: 19.305
881
+ - type: precision_at_10
882
+ value: 4.677
883
+ - type: precision_at_100
884
+ value: 0.895
885
+ - type: precision_at_1000
886
+ value: 0.136
887
+ - type: precision_at_3
888
+ value: 10.771
889
+ - type: precision_at_5
890
+ value: 7.46
891
+ - type: recall_at_1
892
+ value: 15.751000000000001
893
+ - type: recall_at_10
894
+ value: 34.156
895
+ - type: recall_at_100
896
+ value: 59.899
897
+ - type: recall_at_1000
898
+ value: 83.08
899
+ - type: recall_at_3
900
+ value: 24.772
901
+ - type: recall_at_5
902
+ value: 28.009
903
+ - task:
904
+ type: Retrieval
905
+ dataset:
906
+ type: BeIR/cqadupstack
907
+ name: MTEB CQADupstackUnixRetrieval
908
+ config: default
909
+ split: test
910
+ revision: None
911
+ metrics:
912
+ - type: map_at_1
913
+ value: 23.34
914
+ - type: map_at_10
915
+ value: 32.383
916
+ - type: map_at_100
917
+ value: 33.629999999999995
918
+ - type: map_at_1000
919
+ value: 33.735
920
+ - type: map_at_3
921
+ value: 29.68
922
+ - type: map_at_5
923
+ value: 31.270999999999997
924
+ - type: mrr_at_1
925
+ value: 27.612
926
+ - type: mrr_at_10
927
+ value: 36.381
928
+ - type: mrr_at_100
929
+ value: 37.351
930
+ - type: mrr_at_1000
931
+ value: 37.411
932
+ - type: mrr_at_3
933
+ value: 33.893
934
+ - type: mrr_at_5
935
+ value: 35.353
936
+ - type: ndcg_at_1
937
+ value: 27.612
938
+ - type: ndcg_at_10
939
+ value: 37.714999999999996
940
+ - type: ndcg_at_100
941
+ value: 43.525000000000006
942
+ - type: ndcg_at_1000
943
+ value: 45.812999999999995
944
+ - type: ndcg_at_3
945
+ value: 32.796
946
+ - type: ndcg_at_5
947
+ value: 35.243
948
+ - type: precision_at_1
949
+ value: 27.612
950
+ - type: precision_at_10
951
+ value: 6.465
952
+ - type: precision_at_100
953
+ value: 1.0619999999999998
954
+ - type: precision_at_1000
955
+ value: 0.13699999999999998
956
+ - type: precision_at_3
957
+ value: 15.049999999999999
958
+ - type: precision_at_5
959
+ value: 10.764999999999999
960
+ - type: recall_at_1
961
+ value: 23.34
962
+ - type: recall_at_10
963
+ value: 49.856
964
+ - type: recall_at_100
965
+ value: 75.334
966
+ - type: recall_at_1000
967
+ value: 91.156
968
+ - type: recall_at_3
969
+ value: 36.497
970
+ - type: recall_at_5
971
+ value: 42.769
972
+ - task:
973
+ type: Retrieval
974
+ dataset:
975
+ type: BeIR/cqadupstack
976
+ name: MTEB CQADupstackWebmastersRetrieval
977
+ config: default
978
+ split: test
979
+ revision: None
980
+ metrics:
981
+ - type: map_at_1
982
+ value: 25.097
983
+ - type: map_at_10
984
+ value: 34.599999999999994
985
+ - type: map_at_100
986
+ value: 36.174
987
+ - type: map_at_1000
988
+ value: 36.398
989
+ - type: map_at_3
990
+ value: 31.781
991
+ - type: map_at_5
992
+ value: 33.22
993
+ - type: mrr_at_1
994
+ value: 31.225
995
+ - type: mrr_at_10
996
+ value: 39.873
997
+ - type: mrr_at_100
998
+ value: 40.853
999
+ - type: mrr_at_1000
1000
+ value: 40.904
1001
+ - type: mrr_at_3
1002
+ value: 37.681
1003
+ - type: mrr_at_5
1004
+ value: 38.669
1005
+ - type: ndcg_at_1
1006
+ value: 31.225
1007
+ - type: ndcg_at_10
1008
+ value: 40.586
1009
+ - type: ndcg_at_100
1010
+ value: 46.226
1011
+ - type: ndcg_at_1000
1012
+ value: 48.788
1013
+ - type: ndcg_at_3
1014
+ value: 36.258
1015
+ - type: ndcg_at_5
1016
+ value: 37.848
1017
+ - type: precision_at_1
1018
+ value: 31.225
1019
+ - type: precision_at_10
1020
+ value: 7.707999999999999
1021
+ - type: precision_at_100
1022
+ value: 1.536
1023
+ - type: precision_at_1000
1024
+ value: 0.242
1025
+ - type: precision_at_3
1026
+ value: 17.26
1027
+ - type: precision_at_5
1028
+ value: 12.253
1029
+ - type: recall_at_1
1030
+ value: 25.097
1031
+ - type: recall_at_10
1032
+ value: 51.602000000000004
1033
+ - type: recall_at_100
1034
+ value: 76.854
1035
+ - type: recall_at_1000
1036
+ value: 93.303
1037
+ - type: recall_at_3
1038
+ value: 38.68
1039
+ - type: recall_at_5
1040
+ value: 43.258
1041
+ - task:
1042
+ type: Retrieval
1043
+ dataset:
1044
+ type: BeIR/cqadupstack
1045
+ name: MTEB CQADupstackWordpressRetrieval
1046
+ config: default
1047
+ split: test
1048
+ revision: None
1049
+ metrics:
1050
+ - type: map_at_1
1051
+ value: 17.689
1052
+ - type: map_at_10
1053
+ value: 25.291000000000004
1054
+ - type: map_at_100
1055
+ value: 26.262
1056
+ - type: map_at_1000
1057
+ value: 26.372
1058
+ - type: map_at_3
1059
+ value: 22.916
1060
+ - type: map_at_5
1061
+ value: 24.315
1062
+ - type: mrr_at_1
1063
+ value: 19.409000000000002
1064
+ - type: mrr_at_10
1065
+ value: 27.233
1066
+ - type: mrr_at_100
1067
+ value: 28.109
1068
+ - type: mrr_at_1000
1069
+ value: 28.192
1070
+ - type: mrr_at_3
1071
+ value: 24.892
1072
+ - type: mrr_at_5
1073
+ value: 26.278000000000002
1074
+ - type: ndcg_at_1
1075
+ value: 19.409000000000002
1076
+ - type: ndcg_at_10
1077
+ value: 29.809
1078
+ - type: ndcg_at_100
1079
+ value: 34.936
1080
+ - type: ndcg_at_1000
1081
+ value: 37.852000000000004
1082
+ - type: ndcg_at_3
1083
+ value: 25.179000000000002
1084
+ - type: ndcg_at_5
1085
+ value: 27.563
1086
+ - type: precision_at_1
1087
+ value: 19.409000000000002
1088
+ - type: precision_at_10
1089
+ value: 4.861
1090
+ - type: precision_at_100
1091
+ value: 0.8
1092
+ - type: precision_at_1000
1093
+ value: 0.116
1094
+ - type: precision_at_3
1095
+ value: 11.029
1096
+ - type: precision_at_5
1097
+ value: 7.985
1098
+ - type: recall_at_1
1099
+ value: 17.689
1100
+ - type: recall_at_10
1101
+ value: 41.724
1102
+ - type: recall_at_100
1103
+ value: 65.95299999999999
1104
+ - type: recall_at_1000
1105
+ value: 88.094
1106
+ - type: recall_at_3
1107
+ value: 29.621
1108
+ - type: recall_at_5
1109
+ value: 35.179
1110
+ - task:
1111
+ type: Retrieval
1112
+ dataset:
1113
+ type: climate-fever
1114
+ name: MTEB ClimateFEVER
1115
+ config: default
1116
+ split: test
1117
+ revision: None
1118
+ metrics:
1119
+ - type: map_at_1
1120
+ value: 10.581
1121
+ - type: map_at_10
1122
+ value: 18.944
1123
+ - type: map_at_100
1124
+ value: 20.812
1125
+ - type: map_at_1000
1126
+ value: 21.002000000000002
1127
+ - type: map_at_3
1128
+ value: 15.661
1129
+ - type: map_at_5
1130
+ value: 17.502000000000002
1131
+ - type: mrr_at_1
1132
+ value: 23.388
1133
+ - type: mrr_at_10
1134
+ value: 34.263
1135
+ - type: mrr_at_100
1136
+ value: 35.364000000000004
1137
+ - type: mrr_at_1000
1138
+ value: 35.409
1139
+ - type: mrr_at_3
1140
+ value: 30.586000000000002
1141
+ - type: mrr_at_5
1142
+ value: 32.928000000000004
1143
+ - type: ndcg_at_1
1144
+ value: 23.388
1145
+ - type: ndcg_at_10
1146
+ value: 26.56
1147
+ - type: ndcg_at_100
1148
+ value: 34.248
1149
+ - type: ndcg_at_1000
1150
+ value: 37.779
1151
+ - type: ndcg_at_3
1152
+ value: 21.179000000000002
1153
+ - type: ndcg_at_5
1154
+ value: 23.504
1155
+ - type: precision_at_1
1156
+ value: 23.388
1157
+ - type: precision_at_10
1158
+ value: 8.476
1159
+ - type: precision_at_100
1160
+ value: 1.672
1161
+ - type: precision_at_1000
1162
+ value: 0.233
1163
+ - type: precision_at_3
1164
+ value: 15.852
1165
+ - type: precision_at_5
1166
+ value: 12.73
1167
+ - type: recall_at_1
1168
+ value: 10.581
1169
+ - type: recall_at_10
1170
+ value: 32.512
1171
+ - type: recall_at_100
1172
+ value: 59.313
1173
+ - type: recall_at_1000
1174
+ value: 79.25
1175
+ - type: recall_at_3
1176
+ value: 19.912
1177
+ - type: recall_at_5
1178
+ value: 25.832
1179
+ - task:
1180
+ type: Retrieval
1181
+ dataset:
1182
+ type: dbpedia-entity
1183
+ name: MTEB DBPedia
1184
+ config: default
1185
+ split: test
1186
+ revision: None
1187
+ metrics:
1188
+ - type: map_at_1
1189
+ value: 9.35
1190
+ - type: map_at_10
1191
+ value: 20.134
1192
+ - type: map_at_100
1193
+ value: 28.975
1194
+ - type: map_at_1000
1195
+ value: 30.709999999999997
1196
+ - type: map_at_3
1197
+ value: 14.513000000000002
1198
+ - type: map_at_5
1199
+ value: 16.671
1200
+ - type: mrr_at_1
1201
+ value: 69.75
1202
+ - type: mrr_at_10
1203
+ value: 77.67699999999999
1204
+ - type: mrr_at_100
1205
+ value: 77.97500000000001
1206
+ - type: mrr_at_1000
1207
+ value: 77.985
1208
+ - type: mrr_at_3
1209
+ value: 76.292
1210
+ - type: mrr_at_5
1211
+ value: 77.179
1212
+ - type: ndcg_at_1
1213
+ value: 56.49999999999999
1214
+ - type: ndcg_at_10
1215
+ value: 42.226
1216
+ - type: ndcg_at_100
1217
+ value: 47.562
1218
+ - type: ndcg_at_1000
1219
+ value: 54.923
1220
+ - type: ndcg_at_3
1221
+ value: 46.564
1222
+ - type: ndcg_at_5
1223
+ value: 43.830000000000005
1224
+ - type: precision_at_1
1225
+ value: 69.75
1226
+ - type: precision_at_10
1227
+ value: 33.525
1228
+ - type: precision_at_100
1229
+ value: 11.035
1230
+ - type: precision_at_1000
1231
+ value: 2.206
1232
+ - type: precision_at_3
1233
+ value: 49.75
1234
+ - type: precision_at_5
1235
+ value: 42
1236
+ - type: recall_at_1
1237
+ value: 9.35
1238
+ - type: recall_at_10
1239
+ value: 25.793
1240
+ - type: recall_at_100
1241
+ value: 54.186
1242
+ - type: recall_at_1000
1243
+ value: 77.81
1244
+ - type: recall_at_3
1245
+ value: 15.770000000000001
1246
+ - type: recall_at_5
1247
+ value: 19.09
1248
+ - task:
1249
+ type: Classification
1250
+ dataset:
1251
+ type: mteb/emotion
1252
+ name: MTEB EmotionClassification
1253
+ config: default
1254
+ split: test
1255
+ revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
1256
+ metrics:
1257
+ - type: accuracy
1258
+ value: 46.945
1259
+ - type: f1
1260
+ value: 42.07407842992542
1261
+ - task:
1262
+ type: Retrieval
1263
+ dataset:
1264
+ type: fever
1265
+ name: MTEB FEVER
1266
+ config: default
1267
+ split: test
1268
+ revision: None
1269
+ metrics:
1270
+ - type: map_at_1
1271
+ value: 71.04599999999999
1272
+ - type: map_at_10
1273
+ value: 80.718
1274
+ - type: map_at_100
1275
+ value: 80.961
1276
+ - type: map_at_1000
1277
+ value: 80.974
1278
+ - type: map_at_3
1279
+ value: 79.49199999999999
1280
+ - type: map_at_5
1281
+ value: 80.32000000000001
1282
+ - type: mrr_at_1
1283
+ value: 76.388
1284
+ - type: mrr_at_10
1285
+ value: 85.214
1286
+ - type: mrr_at_100
1287
+ value: 85.302
1288
+ - type: mrr_at_1000
1289
+ value: 85.302
1290
+ - type: mrr_at_3
1291
+ value: 84.373
1292
+ - type: mrr_at_5
1293
+ value: 84.979
1294
+ - type: ndcg_at_1
1295
+ value: 76.388
1296
+ - type: ndcg_at_10
1297
+ value: 84.987
1298
+ - type: ndcg_at_100
1299
+ value: 85.835
1300
+ - type: ndcg_at_1000
1301
+ value: 86.04899999999999
1302
+ - type: ndcg_at_3
1303
+ value: 83.04
1304
+ - type: ndcg_at_5
1305
+ value: 84.22500000000001
1306
+ - type: precision_at_1
1307
+ value: 76.388
1308
+ - type: precision_at_10
1309
+ value: 10.35
1310
+ - type: precision_at_100
1311
+ value: 1.099
1312
+ - type: precision_at_1000
1313
+ value: 0.11399999999999999
1314
+ - type: precision_at_3
1315
+ value: 32.108
1316
+ - type: precision_at_5
1317
+ value: 20.033
1318
+ - type: recall_at_1
1319
+ value: 71.04599999999999
1320
+ - type: recall_at_10
1321
+ value: 93.547
1322
+ - type: recall_at_100
1323
+ value: 96.887
1324
+ - type: recall_at_1000
1325
+ value: 98.158
1326
+ - type: recall_at_3
1327
+ value: 88.346
1328
+ - type: recall_at_5
1329
+ value: 91.321
1330
+ - task:
1331
+ type: Retrieval
1332
+ dataset:
1333
+ type: fiqa
1334
+ name: MTEB FiQA2018
1335
+ config: default
1336
+ split: test
1337
+ revision: None
1338
+ metrics:
1339
+ - type: map_at_1
1340
+ value: 19.8
1341
+ - type: map_at_10
1342
+ value: 31.979999999999997
1343
+ - type: map_at_100
1344
+ value: 33.876
1345
+ - type: map_at_1000
1346
+ value: 34.056999999999995
1347
+ - type: map_at_3
1348
+ value: 28.067999999999998
1349
+ - type: map_at_5
1350
+ value: 30.066
1351
+ - type: mrr_at_1
1352
+ value: 38.735
1353
+ - type: mrr_at_10
1354
+ value: 47.749
1355
+ - type: mrr_at_100
1356
+ value: 48.605
1357
+ - type: mrr_at_1000
1358
+ value: 48.644999999999996
1359
+ - type: mrr_at_3
1360
+ value: 45.165
1361
+ - type: mrr_at_5
1362
+ value: 46.646
1363
+ - type: ndcg_at_1
1364
+ value: 38.735
1365
+ - type: ndcg_at_10
1366
+ value: 39.883
1367
+ - type: ndcg_at_100
1368
+ value: 46.983000000000004
1369
+ - type: ndcg_at_1000
1370
+ value: 50.043000000000006
1371
+ - type: ndcg_at_3
1372
+ value: 35.943000000000005
1373
+ - type: ndcg_at_5
1374
+ value: 37.119
1375
+ - type: precision_at_1
1376
+ value: 38.735
1377
+ - type: precision_at_10
1378
+ value: 10.940999999999999
1379
+ - type: precision_at_100
1380
+ value: 1.836
1381
+ - type: precision_at_1000
1382
+ value: 0.23900000000000002
1383
+ - type: precision_at_3
1384
+ value: 23.817
1385
+ - type: precision_at_5
1386
+ value: 17.346
1387
+ - type: recall_at_1
1388
+ value: 19.8
1389
+ - type: recall_at_10
1390
+ value: 47.082
1391
+ - type: recall_at_100
1392
+ value: 73.247
1393
+ - type: recall_at_1000
1394
+ value: 91.633
1395
+ - type: recall_at_3
1396
+ value: 33.201
1397
+ - type: recall_at_5
1398
+ value: 38.81
1399
+ - task:
1400
+ type: Retrieval
1401
+ dataset:
1402
+ type: hotpotqa
1403
+ name: MTEB HotpotQA
1404
+ config: default
1405
+ split: test
1406
+ revision: None
1407
+ metrics:
1408
+ - type: map_at_1
1409
+ value: 38.102999999999994
1410
+ - type: map_at_10
1411
+ value: 60.547
1412
+ - type: map_at_100
1413
+ value: 61.466
1414
+ - type: map_at_1000
1415
+ value: 61.526
1416
+ - type: map_at_3
1417
+ value: 56.973
1418
+ - type: map_at_5
1419
+ value: 59.244
1420
+ - type: mrr_at_1
1421
+ value: 76.205
1422
+ - type: mrr_at_10
1423
+ value: 82.816
1424
+ - type: mrr_at_100
1425
+ value: 83.002
1426
+ - type: mrr_at_1000
1427
+ value: 83.009
1428
+ - type: mrr_at_3
1429
+ value: 81.747
1430
+ - type: mrr_at_5
1431
+ value: 82.467
1432
+ - type: ndcg_at_1
1433
+ value: 76.205
1434
+ - type: ndcg_at_10
1435
+ value: 69.15
1436
+ - type: ndcg_at_100
1437
+ value: 72.297
1438
+ - type: ndcg_at_1000
1439
+ value: 73.443
1440
+ - type: ndcg_at_3
1441
+ value: 64.07000000000001
1442
+ - type: ndcg_at_5
1443
+ value: 66.96600000000001
1444
+ - type: precision_at_1
1445
+ value: 76.205
1446
+ - type: precision_at_10
1447
+ value: 14.601
1448
+ - type: precision_at_100
1449
+ value: 1.7049999999999998
1450
+ - type: precision_at_1000
1451
+ value: 0.186
1452
+ - type: precision_at_3
1453
+ value: 41.202
1454
+ - type: precision_at_5
1455
+ value: 27.006000000000004
1456
+ - type: recall_at_1
1457
+ value: 38.102999999999994
1458
+ - type: recall_at_10
1459
+ value: 73.005
1460
+ - type: recall_at_100
1461
+ value: 85.253
1462
+ - type: recall_at_1000
1463
+ value: 92.795
1464
+ - type: recall_at_3
1465
+ value: 61.803
1466
+ - type: recall_at_5
1467
+ value: 67.515
1468
+ - task:
1469
+ type: Classification
1470
+ dataset:
1471
+ type: mteb/imdb
1472
+ name: MTEB ImdbClassification
1473
+ config: default
1474
+ split: test
1475
+ revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
1476
+ metrics:
1477
+ - type: accuracy
1478
+ value: 86.15
1479
+ - type: ap
1480
+ value: 80.36282825265391
1481
+ - type: f1
1482
+ value: 86.07368510726472
1483
+ - task:
1484
+ type: Retrieval
1485
+ dataset:
1486
+ type: msmarco
1487
+ name: MTEB MSMARCO
1488
+ config: default
1489
+ split: dev
1490
+ revision: None
1491
+ metrics:
1492
+ - type: map_at_1
1493
+ value: 22.6
1494
+ - type: map_at_10
1495
+ value: 34.887
1496
+ - type: map_at_100
1497
+ value: 36.069
1498
+ - type: map_at_1000
1499
+ value: 36.115
1500
+ - type: map_at_3
1501
+ value: 31.067
1502
+ - type: map_at_5
1503
+ value: 33.300000000000004
1504
+ - type: mrr_at_1
1505
+ value: 23.238
1506
+ - type: mrr_at_10
1507
+ value: 35.47
1508
+ - type: mrr_at_100
1509
+ value: 36.599
1510
+ - type: mrr_at_1000
1511
+ value: 36.64
1512
+ - type: mrr_at_3
1513
+ value: 31.735999999999997
1514
+ - type: mrr_at_5
1515
+ value: 33.939
1516
+ - type: ndcg_at_1
1517
+ value: 23.252
1518
+ - type: ndcg_at_10
1519
+ value: 41.765
1520
+ - type: ndcg_at_100
1521
+ value: 47.402
1522
+ - type: ndcg_at_1000
1523
+ value: 48.562
1524
+ - type: ndcg_at_3
1525
+ value: 34.016999999999996
1526
+ - type: ndcg_at_5
1527
+ value: 38.016
1528
+ - type: precision_at_1
1529
+ value: 23.252
1530
+ - type: precision_at_10
1531
+ value: 6.569
1532
+ - type: precision_at_100
1533
+ value: 0.938
1534
+ - type: precision_at_1000
1535
+ value: 0.104
1536
+ - type: precision_at_3
1537
+ value: 14.479000000000001
1538
+ - type: precision_at_5
1539
+ value: 10.722
1540
+ - type: recall_at_1
1541
+ value: 22.6
1542
+ - type: recall_at_10
1543
+ value: 62.919000000000004
1544
+ - type: recall_at_100
1545
+ value: 88.82
1546
+ - type: recall_at_1000
1547
+ value: 97.71600000000001
1548
+ - type: recall_at_3
1549
+ value: 41.896
1550
+ - type: recall_at_5
1551
+ value: 51.537
1552
+ - task:
1553
+ type: Classification
1554
+ dataset:
1555
+ type: mteb/mtop_domain
1556
+ name: MTEB MTOPDomainClassification (en)
1557
+ config: en
1558
+ split: test
1559
+ revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
1560
+ metrics:
1561
+ - type: accuracy
1562
+ value: 93.69357045143639
1563
+ - type: f1
1564
+ value: 93.55489858177597
1565
+ - task:
1566
+ type: Classification
1567
+ dataset:
1568
+ type: mteb/mtop_intent
1569
+ name: MTEB MTOPIntentClassification (en)
1570
+ config: en
1571
+ split: test
1572
+ revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
1573
+ metrics:
1574
+ - type: accuracy
1575
+ value: 75.31235750114
1576
+ - type: f1
1577
+ value: 57.891491963121155
1578
+ - task:
1579
+ type: Classification
1580
+ dataset:
1581
+ type: mteb/amazon_massive_intent
1582
+ name: MTEB MassiveIntentClassification (en)
1583
+ config: en
1584
+ split: test
1585
+ revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
1586
+ metrics:
1587
+ - type: accuracy
1588
+ value: 73.04303967720243
1589
+ - type: f1
1590
+ value: 70.51516022297616
1591
+ - task:
1592
+ type: Classification
1593
+ dataset:
1594
+ type: mteb/amazon_massive_scenario
1595
+ name: MTEB MassiveScenarioClassification (en)
1596
+ config: en
1597
+ split: test
1598
+ revision: 7d571f92784cd94a019292a1f45445077d0ef634
1599
+ metrics:
1600
+ - type: accuracy
1601
+ value: 77.65299260255549
1602
+ - type: f1
1603
+ value: 77.49059766538576
1604
+ - task:
1605
+ type: Clustering
1606
+ dataset:
1607
+ type: mteb/medrxiv-clustering-p2p
1608
+ name: MTEB MedrxivClusteringP2P
1609
+ config: default
1610
+ split: test
1611
+ revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
1612
+ metrics:
1613
+ - type: v_measure
1614
+ value: 31.458906115906597
1615
+ - task:
1616
+ type: Clustering
1617
+ dataset:
1618
+ type: mteb/medrxiv-clustering-s2s
1619
+ name: MTEB MedrxivClusteringS2S
1620
+ config: default
1621
+ split: test
1622
+ revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
1623
+ metrics:
1624
+ - type: v_measure
1625
+ value: 28.9851513122443
1626
+ - task:
1627
+ type: Reranking
1628
+ dataset:
1629
+ type: mteb/mind_small
1630
+ name: MTEB MindSmallReranking
1631
+ config: default
1632
+ split: test
1633
+ revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
1634
+ metrics:
1635
+ - type: map
1636
+ value: 31.2916268497217
1637
+ - type: mrr
1638
+ value: 32.328276715593816
1639
+ - task:
1640
+ type: Retrieval
1641
+ dataset:
1642
+ type: nfcorpus
1643
+ name: MTEB NFCorpus
1644
+ config: default
1645
+ split: test
1646
+ revision: None
1647
+ metrics:
1648
+ - type: map_at_1
1649
+ value: 6.3740000000000006
1650
+ - type: map_at_10
1651
+ value: 13.089999999999998
1652
+ - type: map_at_100
1653
+ value: 16.512
1654
+ - type: map_at_1000
1655
+ value: 18.014
1656
+ - type: map_at_3
1657
+ value: 9.671000000000001
1658
+ - type: map_at_5
1659
+ value: 11.199
1660
+ - type: mrr_at_1
1661
+ value: 46.749
1662
+ - type: mrr_at_10
1663
+ value: 55.367
1664
+ - type: mrr_at_100
1665
+ value: 56.021
1666
+ - type: mrr_at_1000
1667
+ value: 56.058
1668
+ - type: mrr_at_3
1669
+ value: 53.30200000000001
1670
+ - type: mrr_at_5
1671
+ value: 54.773
1672
+ - type: ndcg_at_1
1673
+ value: 45.046
1674
+ - type: ndcg_at_10
1675
+ value: 35.388999999999996
1676
+ - type: ndcg_at_100
1677
+ value: 32.175
1678
+ - type: ndcg_at_1000
1679
+ value: 41.018
1680
+ - type: ndcg_at_3
1681
+ value: 40.244
1682
+ - type: ndcg_at_5
1683
+ value: 38.267
1684
+ - type: precision_at_1
1685
+ value: 46.749
1686
+ - type: precision_at_10
1687
+ value: 26.563
1688
+ - type: precision_at_100
1689
+ value: 8.074
1690
+ - type: precision_at_1000
1691
+ value: 2.099
1692
+ - type: precision_at_3
1693
+ value: 37.358000000000004
1694
+ - type: precision_at_5
1695
+ value: 33.003
1696
+ - type: recall_at_1
1697
+ value: 6.3740000000000006
1698
+ - type: recall_at_10
1699
+ value: 16.805999999999997
1700
+ - type: recall_at_100
1701
+ value: 31.871
1702
+ - type: recall_at_1000
1703
+ value: 64.098
1704
+ - type: recall_at_3
1705
+ value: 10.383000000000001
1706
+ - type: recall_at_5
1707
+ value: 13.166
1708
+ - task:
1709
+ type: Retrieval
1710
+ dataset:
1711
+ type: nq
1712
+ name: MTEB NQ
1713
+ config: default
1714
+ split: test
1715
+ revision: None
1716
+ metrics:
1717
+ - type: map_at_1
1718
+ value: 34.847
1719
+ - type: map_at_10
1720
+ value: 50.532
1721
+ - type: map_at_100
1722
+ value: 51.504000000000005
1723
+ - type: map_at_1000
1724
+ value: 51.528
1725
+ - type: map_at_3
1726
+ value: 46.219
1727
+ - type: map_at_5
1728
+ value: 48.868
1729
+ - type: mrr_at_1
1730
+ value: 39.137
1731
+ - type: mrr_at_10
1732
+ value: 53.157
1733
+ - type: mrr_at_100
1734
+ value: 53.839999999999996
1735
+ - type: mrr_at_1000
1736
+ value: 53.857
1737
+ - type: mrr_at_3
1738
+ value: 49.667
1739
+ - type: mrr_at_5
1740
+ value: 51.847
1741
+ - type: ndcg_at_1
1742
+ value: 39.108
1743
+ - type: ndcg_at_10
1744
+ value: 58.221000000000004
1745
+ - type: ndcg_at_100
1746
+ value: 62.021
1747
+ - type: ndcg_at_1000
1748
+ value: 62.57
1749
+ - type: ndcg_at_3
1750
+ value: 50.27199999999999
1751
+ - type: ndcg_at_5
1752
+ value: 54.623999999999995
1753
+ - type: precision_at_1
1754
+ value: 39.108
1755
+ - type: precision_at_10
1756
+ value: 9.397
1757
+ - type: precision_at_100
1758
+ value: 1.1520000000000001
1759
+ - type: precision_at_1000
1760
+ value: 0.12
1761
+ - type: precision_at_3
1762
+ value: 22.644000000000002
1763
+ - type: precision_at_5
1764
+ value: 16.141
1765
+ - type: recall_at_1
1766
+ value: 34.847
1767
+ - type: recall_at_10
1768
+ value: 78.945
1769
+ - type: recall_at_100
1770
+ value: 94.793
1771
+ - type: recall_at_1000
1772
+ value: 98.904
1773
+ - type: recall_at_3
1774
+ value: 58.56
1775
+ - type: recall_at_5
1776
+ value: 68.535
1777
+ - task:
1778
+ type: Retrieval
1779
+ dataset:
1780
+ type: quora
1781
+ name: MTEB QuoraRetrieval
1782
+ config: default
1783
+ split: test
1784
+ revision: None
1785
+ metrics:
1786
+ - type: map_at_1
1787
+ value: 68.728
1788
+ - type: map_at_10
1789
+ value: 82.537
1790
+ - type: map_at_100
1791
+ value: 83.218
1792
+ - type: map_at_1000
1793
+ value: 83.238
1794
+ - type: map_at_3
1795
+ value: 79.586
1796
+ - type: map_at_5
1797
+ value: 81.416
1798
+ - type: mrr_at_1
1799
+ value: 79.17999999999999
1800
+ - type: mrr_at_10
1801
+ value: 85.79299999999999
1802
+ - type: mrr_at_100
1803
+ value: 85.937
1804
+ - type: mrr_at_1000
1805
+ value: 85.938
1806
+ - type: mrr_at_3
1807
+ value: 84.748
1808
+ - type: mrr_at_5
1809
+ value: 85.431
1810
+ - type: ndcg_at_1
1811
+ value: 79.17
1812
+ - type: ndcg_at_10
1813
+ value: 86.555
1814
+ - type: ndcg_at_100
1815
+ value: 88.005
1816
+ - type: ndcg_at_1000
1817
+ value: 88.146
1818
+ - type: ndcg_at_3
1819
+ value: 83.557
1820
+ - type: ndcg_at_5
1821
+ value: 85.152
1822
+ - type: precision_at_1
1823
+ value: 79.17
1824
+ - type: precision_at_10
1825
+ value: 13.163
1826
+ - type: precision_at_100
1827
+ value: 1.52
1828
+ - type: precision_at_1000
1829
+ value: 0.156
1830
+ - type: precision_at_3
1831
+ value: 36.53
1832
+ - type: precision_at_5
1833
+ value: 24.046
1834
+ - type: recall_at_1
1835
+ value: 68.728
1836
+ - type: recall_at_10
1837
+ value: 94.217
1838
+ - type: recall_at_100
1839
+ value: 99.295
1840
+ - type: recall_at_1000
1841
+ value: 99.964
1842
+ - type: recall_at_3
1843
+ value: 85.646
1844
+ - type: recall_at_5
1845
+ value: 90.113
1846
+ - task:
1847
+ type: Clustering
1848
+ dataset:
1849
+ type: mteb/reddit-clustering
1850
+ name: MTEB RedditClustering
1851
+ config: default
1852
+ split: test
1853
+ revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
1854
+ metrics:
1855
+ - type: v_measure
1856
+ value: 56.15680266226348
1857
+ - task:
1858
+ type: Clustering
1859
+ dataset:
1860
+ type: mteb/reddit-clustering-p2p
1861
+ name: MTEB RedditClusteringP2P
1862
+ config: default
1863
+ split: test
1864
+ revision: 282350215ef01743dc01b456c7f5241fa8937f16
1865
+ metrics:
1866
+ - type: v_measure
1867
+ value: 63.4318549229047
1868
+ - task:
1869
+ type: Retrieval
1870
+ dataset:
1871
+ type: scidocs
1872
+ name: MTEB SCIDOCS
1873
+ config: default
1874
+ split: test
1875
+ revision: None
1876
+ metrics:
1877
+ - type: map_at_1
1878
+ value: 4.353
1879
+ - type: map_at_10
1880
+ value: 10.956000000000001
1881
+ - type: map_at_100
1882
+ value: 12.873999999999999
1883
+ - type: map_at_1000
1884
+ value: 13.177
1885
+ - type: map_at_3
1886
+ value: 7.854
1887
+ - type: map_at_5
1888
+ value: 9.327
1889
+ - type: mrr_at_1
1890
+ value: 21.4
1891
+ - type: mrr_at_10
1892
+ value: 31.948999999999998
1893
+ - type: mrr_at_100
1894
+ value: 33.039
1895
+ - type: mrr_at_1000
1896
+ value: 33.106
1897
+ - type: mrr_at_3
1898
+ value: 28.449999999999996
1899
+ - type: mrr_at_5
1900
+ value: 30.535
1901
+ - type: ndcg_at_1
1902
+ value: 21.4
1903
+ - type: ndcg_at_10
1904
+ value: 18.694
1905
+ - type: ndcg_at_100
1906
+ value: 26.275
1907
+ - type: ndcg_at_1000
1908
+ value: 31.836
1909
+ - type: ndcg_at_3
1910
+ value: 17.559
1911
+ - type: ndcg_at_5
1912
+ value: 15.372
1913
+ - type: precision_at_1
1914
+ value: 21.4
1915
+ - type: precision_at_10
1916
+ value: 9.790000000000001
1917
+ - type: precision_at_100
1918
+ value: 2.0709999999999997
1919
+ - type: precision_at_1000
1920
+ value: 0.34099999999999997
1921
+ - type: precision_at_3
1922
+ value: 16.467000000000002
1923
+ - type: precision_at_5
1924
+ value: 13.54
1925
+ - type: recall_at_1
1926
+ value: 4.353
1927
+ - type: recall_at_10
1928
+ value: 19.892000000000003
1929
+ - type: recall_at_100
1930
+ value: 42.067
1931
+ - type: recall_at_1000
1932
+ value: 69.268
1933
+ - type: recall_at_3
1934
+ value: 10.042
1935
+ - type: recall_at_5
1936
+ value: 13.741999999999999
1937
+ - task:
1938
+ type: STS
1939
+ dataset:
1940
+ type: mteb/sickr-sts
1941
+ name: MTEB SICK-R
1942
+ config: default
1943
+ split: test
1944
+ revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
1945
+ metrics:
1946
+ - type: cos_sim_pearson
1947
+ value: 83.75433886279843
1948
+ - type: cos_sim_spearman
1949
+ value: 78.29727771767095
1950
+ - type: euclidean_pearson
1951
+ value: 80.83057828506621
1952
+ - type: euclidean_spearman
1953
+ value: 78.35203149750356
1954
+ - type: manhattan_pearson
1955
+ value: 80.7403553891142
1956
+ - type: manhattan_spearman
1957
+ value: 78.33670488531051
1958
+ - task:
1959
+ type: STS
1960
+ dataset:
1961
+ type: mteb/sts12-sts
1962
+ name: MTEB STS12
1963
+ config: default
1964
+ split: test
1965
+ revision: a0d554a64d88156834ff5ae9920b964011b16384
1966
+ metrics:
1967
+ - type: cos_sim_pearson
1968
+ value: 84.59999465280839
1969
+ - type: cos_sim_spearman
1970
+ value: 75.79279003980383
1971
+ - type: euclidean_pearson
1972
+ value: 82.29895375956758
1973
+ - type: euclidean_spearman
1974
+ value: 77.33856514102094
1975
+ - type: manhattan_pearson
1976
+ value: 82.22694214534756
1977
+ - type: manhattan_spearman
1978
+ value: 77.3028993008695
1979
+ - task:
1980
+ type: STS
1981
+ dataset:
1982
+ type: mteb/sts13-sts
1983
+ name: MTEB STS13
1984
+ config: default
1985
+ split: test
1986
+ revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
1987
+ metrics:
1988
+ - type: cos_sim_pearson
1989
+ value: 83.09296929691297
1990
+ - type: cos_sim_spearman
1991
+ value: 83.58056936846941
1992
+ - type: euclidean_pearson
1993
+ value: 83.84067483060005
1994
+ - type: euclidean_spearman
1995
+ value: 84.45155680480985
1996
+ - type: manhattan_pearson
1997
+ value: 83.82353052971942
1998
+ - type: manhattan_spearman
1999
+ value: 84.43030567861112
2000
+ - task:
2001
+ type: STS
2002
+ dataset:
2003
+ type: mteb/sts14-sts
2004
+ name: MTEB STS14
2005
+ config: default
2006
+ split: test
2007
+ revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
2008
+ metrics:
2009
+ - type: cos_sim_pearson
2010
+ value: 82.74616852320915
2011
+ - type: cos_sim_spearman
2012
+ value: 79.948683747966
2013
+ - type: euclidean_pearson
2014
+ value: 81.55702283757084
2015
+ - type: euclidean_spearman
2016
+ value: 80.1721505114231
2017
+ - type: manhattan_pearson
2018
+ value: 81.52251518619441
2019
+ - type: manhattan_spearman
2020
+ value: 80.1469800135577
2021
+ - task:
2022
+ type: STS
2023
+ dataset:
2024
+ type: mteb/sts15-sts
2025
+ name: MTEB STS15
2026
+ config: default
2027
+ split: test
2028
+ revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
2029
+ metrics:
2030
+ - type: cos_sim_pearson
2031
+ value: 87.97170104226318
2032
+ - type: cos_sim_spearman
2033
+ value: 88.82021731518206
2034
+ - type: euclidean_pearson
2035
+ value: 87.92950547187615
2036
+ - type: euclidean_spearman
2037
+ value: 88.67043634645866
2038
+ - type: manhattan_pearson
2039
+ value: 87.90668112827639
2040
+ - type: manhattan_spearman
2041
+ value: 88.64471082785317
2042
+ - task:
2043
+ type: STS
2044
+ dataset:
2045
+ type: mteb/sts16-sts
2046
+ name: MTEB STS16
2047
+ config: default
2048
+ split: test
2049
+ revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
2050
+ metrics:
2051
+ - type: cos_sim_pearson
2052
+ value: 83.02790375770599
2053
+ - type: cos_sim_spearman
2054
+ value: 84.46308496590792
2055
+ - type: euclidean_pearson
2056
+ value: 84.29430000414911
2057
+ - type: euclidean_spearman
2058
+ value: 84.77298303589936
2059
+ - type: manhattan_pearson
2060
+ value: 84.23919291368665
2061
+ - type: manhattan_spearman
2062
+ value: 84.75272234871308
2063
+ - task:
2064
+ type: STS
2065
+ dataset:
2066
+ type: mteb/sts17-crosslingual-sts
2067
+ name: MTEB STS17 (en-en)
2068
+ config: en-en
2069
+ split: test
2070
+ revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
2071
+ metrics:
2072
+ - type: cos_sim_pearson
2073
+ value: 87.62885108477064
2074
+ - type: cos_sim_spearman
2075
+ value: 87.58456196391622
2076
+ - type: euclidean_pearson
2077
+ value: 88.2602775281007
2078
+ - type: euclidean_spearman
2079
+ value: 87.51556278299846
2080
+ - type: manhattan_pearson
2081
+ value: 88.11224053672842
2082
+ - type: manhattan_spearman
2083
+ value: 87.4336094383095
2084
+ - task:
2085
+ type: STS
2086
+ dataset:
2087
+ type: mteb/sts22-crosslingual-sts
2088
+ name: MTEB STS22 (en)
2089
+ config: en
2090
+ split: test
2091
+ revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
2092
+ metrics:
2093
+ - type: cos_sim_pearson
2094
+ value: 63.98187965128411
2095
+ - type: cos_sim_spearman
2096
+ value: 64.0653163219731
2097
+ - type: euclidean_pearson
2098
+ value: 62.30616725924099
2099
+ - type: euclidean_spearman
2100
+ value: 61.556971332295916
2101
+ - type: manhattan_pearson
2102
+ value: 62.07642330128549
2103
+ - type: manhattan_spearman
2104
+ value: 61.155494129828
2105
+ - task:
2106
+ type: STS
2107
+ dataset:
2108
+ type: mteb/stsbenchmark-sts
2109
+ name: MTEB STSBenchmark
2110
+ config: default
2111
+ split: test
2112
+ revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
2113
+ metrics:
2114
+ - type: cos_sim_pearson
2115
+ value: 85.6089703921826
2116
+ - type: cos_sim_spearman
2117
+ value: 86.52303197250791
2118
+ - type: euclidean_pearson
2119
+ value: 85.95801955963246
2120
+ - type: euclidean_spearman
2121
+ value: 86.25242424112962
2122
+ - type: manhattan_pearson
2123
+ value: 85.88829100470312
2124
+ - type: manhattan_spearman
2125
+ value: 86.18742955805165
2126
+ - task:
2127
+ type: Reranking
2128
+ dataset:
2129
+ type: mteb/scidocs-reranking
2130
+ name: MTEB SciDocsRR
2131
+ config: default
2132
+ split: test
2133
+ revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
2134
+ metrics:
2135
+ - type: map
2136
+ value: 83.02282098487036
2137
+ - type: mrr
2138
+ value: 95.05126409538174
2139
+ - task:
2140
+ type: Retrieval
2141
+ dataset:
2142
+ type: scifact
2143
+ name: MTEB SciFact
2144
+ config: default
2145
+ split: test
2146
+ revision: None
2147
+ metrics:
2148
+ - type: map_at_1
2149
+ value: 55.928
2150
+ - type: map_at_10
2151
+ value: 67.308
2152
+ - type: map_at_100
2153
+ value: 67.89500000000001
2154
+ - type: map_at_1000
2155
+ value: 67.91199999999999
2156
+ - type: map_at_3
2157
+ value: 65.091
2158
+ - type: map_at_5
2159
+ value: 66.412
2160
+ - type: mrr_at_1
2161
+ value: 58.667
2162
+ - type: mrr_at_10
2163
+ value: 68.401
2164
+ - type: mrr_at_100
2165
+ value: 68.804
2166
+ - type: mrr_at_1000
2167
+ value: 68.819
2168
+ - type: mrr_at_3
2169
+ value: 66.72200000000001
2170
+ - type: mrr_at_5
2171
+ value: 67.72200000000001
2172
+ - type: ndcg_at_1
2173
+ value: 58.667
2174
+ - type: ndcg_at_10
2175
+ value: 71.944
2176
+ - type: ndcg_at_100
2177
+ value: 74.464
2178
+ - type: ndcg_at_1000
2179
+ value: 74.82799999999999
2180
+ - type: ndcg_at_3
2181
+ value: 68.257
2182
+ - type: ndcg_at_5
2183
+ value: 70.10300000000001
2184
+ - type: precision_at_1
2185
+ value: 58.667
2186
+ - type: precision_at_10
2187
+ value: 9.533
2188
+ - type: precision_at_100
2189
+ value: 1.09
2190
+ - type: precision_at_1000
2191
+ value: 0.11199999999999999
2192
+ - type: precision_at_3
2193
+ value: 27.222
2194
+ - type: precision_at_5
2195
+ value: 17.533
2196
+ - type: recall_at_1
2197
+ value: 55.928
2198
+ - type: recall_at_10
2199
+ value: 84.65
2200
+ - type: recall_at_100
2201
+ value: 96.267
2202
+ - type: recall_at_1000
2203
+ value: 99
2204
+ - type: recall_at_3
2205
+ value: 74.656
2206
+ - type: recall_at_5
2207
+ value: 79.489
2208
+ - task:
2209
+ type: PairClassification
2210
+ dataset:
2211
+ type: mteb/sprintduplicatequestions-pairclassification
2212
+ name: MTEB SprintDuplicateQuestions
2213
+ config: default
2214
+ split: test
2215
+ revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
2216
+ metrics:
2217
+ - type: cos_sim_accuracy
2218
+ value: 99.79009900990098
2219
+ - type: cos_sim_ap
2220
+ value: 94.5795129511524
2221
+ - type: cos_sim_f1
2222
+ value: 89.34673366834171
2223
+ - type: cos_sim_precision
2224
+ value: 89.79797979797979
2225
+ - type: cos_sim_recall
2226
+ value: 88.9
2227
+ - type: dot_accuracy
2228
+ value: 99.53465346534654
2229
+ - type: dot_ap
2230
+ value: 81.56492504352725
2231
+ - type: dot_f1
2232
+ value: 76.33816908454227
2233
+ - type: dot_precision
2234
+ value: 76.37637637637637
2235
+ - type: dot_recall
2236
+ value: 76.3
2237
+ - type: euclidean_accuracy
2238
+ value: 99.78514851485149
2239
+ - type: euclidean_ap
2240
+ value: 94.59134620408962
2241
+ - type: euclidean_f1
2242
+ value: 88.96484375
2243
+ - type: euclidean_precision
2244
+ value: 86.92748091603053
2245
+ - type: euclidean_recall
2246
+ value: 91.10000000000001
2247
+ - type: manhattan_accuracy
2248
+ value: 99.78415841584159
2249
+ - type: manhattan_ap
2250
+ value: 94.5190197328845
2251
+ - type: manhattan_f1
2252
+ value: 88.84462151394423
2253
+ - type: manhattan_precision
2254
+ value: 88.4920634920635
2255
+ - type: manhattan_recall
2256
+ value: 89.2
2257
+ - type: max_accuracy
2258
+ value: 99.79009900990098
2259
+ - type: max_ap
2260
+ value: 94.59134620408962
2261
+ - type: max_f1
2262
+ value: 89.34673366834171
2263
+ - task:
2264
+ type: Clustering
2265
+ dataset:
2266
+ type: mteb/stackexchange-clustering
2267
+ name: MTEB StackExchangeClustering
2268
+ config: default
2269
+ split: test
2270
+ revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
2271
+ metrics:
2272
+ - type: v_measure
2273
+ value: 65.1487505617497
2274
+ - task:
2275
+ type: Clustering
2276
+ dataset:
2277
+ type: mteb/stackexchange-clustering-p2p
2278
+ name: MTEB StackExchangeClusteringP2P
2279
+ config: default
2280
+ split: test
2281
+ revision: 815ca46b2622cec33ccafc3735d572c266efdb44
2282
+ metrics:
2283
+ - type: v_measure
2284
+ value: 32.502518166001856
2285
+ - task:
2286
+ type: Reranking
2287
+ dataset:
2288
+ type: mteb/stackoverflowdupquestions-reranking
2289
+ name: MTEB StackOverflowDupQuestions
2290
+ config: default
2291
+ split: test
2292
+ revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
2293
+ metrics:
2294
+ - type: map
2295
+ value: 50.33775480236701
2296
+ - type: mrr
2297
+ value: 51.17302223919871
2298
+ - task:
2299
+ type: Summarization
2300
+ dataset:
2301
+ type: mteb/summeval
2302
+ name: MTEB SummEval
2303
+ config: default
2304
+ split: test
2305
+ revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
2306
+ metrics:
2307
+ - type: cos_sim_pearson
2308
+ value: 30.561111309808208
2309
+ - type: cos_sim_spearman
2310
+ value: 30.2839254379273
2311
+ - type: dot_pearson
2312
+ value: 29.560242291401973
2313
+ - type: dot_spearman
2314
+ value: 30.51527274679116
2315
+ - task:
2316
+ type: Retrieval
2317
+ dataset:
2318
+ type: trec-covid
2319
+ name: MTEB TRECCOVID
2320
+ config: default
2321
+ split: test
2322
+ revision: None
2323
+ metrics:
2324
+ - type: map_at_1
2325
+ value: 0.215
2326
+ - type: map_at_10
2327
+ value: 1.752
2328
+ - type: map_at_100
2329
+ value: 9.258
2330
+ - type: map_at_1000
2331
+ value: 23.438
2332
+ - type: map_at_3
2333
+ value: 0.6
2334
+ - type: map_at_5
2335
+ value: 0.968
2336
+ - type: mrr_at_1
2337
+ value: 84
2338
+ - type: mrr_at_10
2339
+ value: 91.333
2340
+ - type: mrr_at_100
2341
+ value: 91.333
2342
+ - type: mrr_at_1000
2343
+ value: 91.333
2344
+ - type: mrr_at_3
2345
+ value: 91.333
2346
+ - type: mrr_at_5
2347
+ value: 91.333
2348
+ - type: ndcg_at_1
2349
+ value: 75
2350
+ - type: ndcg_at_10
2351
+ value: 69.596
2352
+ - type: ndcg_at_100
2353
+ value: 51.970000000000006
2354
+ - type: ndcg_at_1000
2355
+ value: 48.864999999999995
2356
+ - type: ndcg_at_3
2357
+ value: 73.92699999999999
2358
+ - type: ndcg_at_5
2359
+ value: 73.175
2360
+ - type: precision_at_1
2361
+ value: 84
2362
+ - type: precision_at_10
2363
+ value: 74
2364
+ - type: precision_at_100
2365
+ value: 53.2
2366
+ - type: precision_at_1000
2367
+ value: 21.836
2368
+ - type: precision_at_3
2369
+ value: 79.333
2370
+ - type: precision_at_5
2371
+ value: 78.4
2372
+ - type: recall_at_1
2373
+ value: 0.215
2374
+ - type: recall_at_10
2375
+ value: 1.9609999999999999
2376
+ - type: recall_at_100
2377
+ value: 12.809999999999999
2378
+ - type: recall_at_1000
2379
+ value: 46.418
2380
+ - type: recall_at_3
2381
+ value: 0.6479999999999999
2382
+ - type: recall_at_5
2383
+ value: 1.057
2384
+ - task:
2385
+ type: Retrieval
2386
+ dataset:
2387
+ type: webis-touche2020
2388
+ name: MTEB Touche2020
2389
+ config: default
2390
+ split: test
2391
+ revision: None
2392
+ metrics:
2393
+ - type: map_at_1
2394
+ value: 3.066
2395
+ - type: map_at_10
2396
+ value: 10.508000000000001
2397
+ - type: map_at_100
2398
+ value: 16.258
2399
+ - type: map_at_1000
2400
+ value: 17.705000000000002
2401
+ - type: map_at_3
2402
+ value: 6.157
2403
+ - type: map_at_5
2404
+ value: 7.510999999999999
2405
+ - type: mrr_at_1
2406
+ value: 34.694
2407
+ - type: mrr_at_10
2408
+ value: 48.786
2409
+ - type: mrr_at_100
2410
+ value: 49.619
2411
+ - type: mrr_at_1000
2412
+ value: 49.619
2413
+ - type: mrr_at_3
2414
+ value: 45.918
2415
+ - type: mrr_at_5
2416
+ value: 46.837
2417
+ - type: ndcg_at_1
2418
+ value: 31.633
2419
+ - type: ndcg_at_10
2420
+ value: 26.401999999999997
2421
+ - type: ndcg_at_100
2422
+ value: 37.139
2423
+ - type: ndcg_at_1000
2424
+ value: 48.012
2425
+ - type: ndcg_at_3
2426
+ value: 31.875999999999998
2427
+ - type: ndcg_at_5
2428
+ value: 27.383000000000003
2429
+ - type: precision_at_1
2430
+ value: 34.694
2431
+ - type: precision_at_10
2432
+ value: 22.857
2433
+ - type: precision_at_100
2434
+ value: 7.611999999999999
2435
+ - type: precision_at_1000
2436
+ value: 1.492
2437
+ - type: precision_at_3
2438
+ value: 33.333
2439
+ - type: precision_at_5
2440
+ value: 26.122
2441
+ - type: recall_at_1
2442
+ value: 3.066
2443
+ - type: recall_at_10
2444
+ value: 16.239
2445
+ - type: recall_at_100
2446
+ value: 47.29
2447
+ - type: recall_at_1000
2448
+ value: 81.137
2449
+ - type: recall_at_3
2450
+ value: 7.069
2451
+ - type: recall_at_5
2452
+ value: 9.483
2453
+ - task:
2454
+ type: Classification
2455
+ dataset:
2456
+ type: mteb/toxic_conversations_50k
2457
+ name: MTEB ToxicConversationsClassification
2458
+ config: default
2459
+ split: test
2460
+ revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
2461
+ metrics:
2462
+ - type: accuracy
2463
+ value: 72.1126
2464
+ - type: ap
2465
+ value: 14.710862719285753
2466
+ - type: f1
2467
+ value: 55.437808972378846
2468
+ - task:
2469
+ type: Classification
2470
+ dataset:
2471
+ type: mteb/tweet_sentiment_extraction
2472
+ name: MTEB TweetSentimentExtractionClassification
2473
+ config: default
2474
+ split: test
2475
+ revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
2476
+ metrics:
2477
+ - type: accuracy
2478
+ value: 60.39049235993209
2479
+ - type: f1
2480
+ value: 60.69810537250234
2481
+ - task:
2482
+ type: Clustering
2483
+ dataset:
2484
+ type: mteb/twentynewsgroups-clustering
2485
+ name: MTEB TwentyNewsgroupsClustering
2486
+ config: default
2487
+ split: test
2488
+ revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
2489
+ metrics:
2490
+ - type: v_measure
2491
+ value: 48.15576640316866
2492
+ - task:
2493
+ type: PairClassification
2494
+ dataset:
2495
+ type: mteb/twittersemeval2015-pairclassification
2496
+ name: MTEB TwitterSemEval2015
2497
+ config: default
2498
+ split: test
2499
+ revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
2500
+ metrics:
2501
+ - type: cos_sim_accuracy
2502
+ value: 86.52917684925792
2503
+ - type: cos_sim_ap
2504
+ value: 75.97497873817315
2505
+ - type: cos_sim_f1
2506
+ value: 70.01151926276718
2507
+ - type: cos_sim_precision
2508
+ value: 67.98409147402435
2509
+ - type: cos_sim_recall
2510
+ value: 72.16358839050132
2511
+ - type: dot_accuracy
2512
+ value: 82.47004828038385
2513
+ - type: dot_ap
2514
+ value: 62.48739894974198
2515
+ - type: dot_f1
2516
+ value: 59.13107511045656
2517
+ - type: dot_precision
2518
+ value: 55.27765029830197
2519
+ - type: dot_recall
2520
+ value: 63.562005277044854
2521
+ - type: euclidean_accuracy
2522
+ value: 86.46361089586935
2523
+ - type: euclidean_ap
2524
+ value: 75.59282886839452
2525
+ - type: euclidean_f1
2526
+ value: 69.6465443945099
2527
+ - type: euclidean_precision
2528
+ value: 64.52847175331982
2529
+ - type: euclidean_recall
2530
+ value: 75.64643799472296
2531
+ - type: manhattan_accuracy
2532
+ value: 86.43380818978363
2533
+ - type: manhattan_ap
2534
+ value: 75.5742420974403
2535
+ - type: manhattan_f1
2536
+ value: 69.8636926889715
2537
+ - type: manhattan_precision
2538
+ value: 65.8644859813084
2539
+ - type: manhattan_recall
2540
+ value: 74.37994722955145
2541
+ - type: max_accuracy
2542
+ value: 86.52917684925792
2543
+ - type: max_ap
2544
+ value: 75.97497873817315
2545
+ - type: max_f1
2546
+ value: 70.01151926276718
2547
+ - task:
2548
+ type: PairClassification
2549
+ dataset:
2550
+ type: mteb/twitterurlcorpus-pairclassification
2551
+ name: MTEB TwitterURLCorpus
2552
+ config: default
2553
+ split: test
2554
+ revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
2555
+ metrics:
2556
+ - type: cos_sim_accuracy
2557
+ value: 89.29056545193464
2558
+ - type: cos_sim_ap
2559
+ value: 86.63028865482376
2560
+ - type: cos_sim_f1
2561
+ value: 79.18166458532285
2562
+ - type: cos_sim_precision
2563
+ value: 75.70585756426465
2564
+ - type: cos_sim_recall
2565
+ value: 82.99199260856174
2566
+ - type: dot_accuracy
2567
+ value: 85.23305002522606
2568
+ - type: dot_ap
2569
+ value: 76.0482687263196
2570
+ - type: dot_f1
2571
+ value: 70.80484330484332
2572
+ - type: dot_precision
2573
+ value: 65.86933474688577
2574
+ - type: dot_recall
2575
+ value: 76.53988296889437
2576
+ - type: euclidean_accuracy
2577
+ value: 89.26145845461248
2578
+ - type: euclidean_ap
2579
+ value: 86.54073288416006
2580
+ - type: euclidean_f1
2581
+ value: 78.9721371479794
2582
+ - type: euclidean_precision
2583
+ value: 76.68649354417525
2584
+ - type: euclidean_recall
2585
+ value: 81.39821373575609
2586
+ - type: manhattan_accuracy
2587
+ value: 89.22847052431405
2588
+ - type: manhattan_ap
2589
+ value: 86.51250729037905
2590
+ - type: manhattan_f1
2591
+ value: 78.94601825044894
2592
+ - type: manhattan_precision
2593
+ value: 75.32694594027555
2594
+ - type: manhattan_recall
2595
+ value: 82.93039728980598
2596
+ - type: max_accuracy
2597
+ value: 89.29056545193464
2598
+ - type: max_ap
2599
+ value: 86.63028865482376
2600
+ - type: max_f1
2601
+ value: 79.18166458532285
2602
+ language:
2603
+ - en
2604
  license: mit
2605
  ---
2606
+
2607
+ # E5-base-v2
2608
+
2609
+ [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
2610
+ Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
2611
+
2612
+ This model has 12 layers and the embedding size is 768.
2613
+
2614
+ ## Usage
2615
+
2616
+ Below is an example of how to encode queries and passages from the MS-MARCO passage ranking dataset.
2617
+
2618
+ ```python
2619
+ import torch.nn.functional as F
2620
+
2621
+ from torch import Tensor
2622
+ from transformers import AutoTokenizer, AutoModel
2623
+
2624
+
2625
+ def average_pool(last_hidden_states: Tensor,
2626
+                  attention_mask: Tensor) -> Tensor:
2627
+     last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
2628
+     return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
2629
+
2630
+
2631
+ # Each input text should start with "query: " or "passage: ".
2632
+ # For tasks other than retrieval, you can simply use the "query: " prefix.
2633
+ input_texts = ['query: how much protein should a female eat',
2634
+ 'query: summit define',
2635
+ "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
2636
+ "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
2637
+
2638
+ tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base-v2')
2639
+ model = AutoModel.from_pretrained('intfloat/e5-base-v2')
2640
+
2641
+ # Tokenize the input texts
2642
+ batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
2643
+
2644
+ outputs = model(**batch_dict)
2645
+ embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
2646
+
2647
+ # normalize embeddings
2648
+ embeddings = F.normalize(embeddings, p=2, dim=1)
2649
+ scores = (embeddings[:2] @ embeddings[2:].T) * 100
2650
+ print(scores.tolist())
2651
+ ```
2652
+
2653
+ ## Training Details
2654
+
2655
+ Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
2656
+
2657
+ ## Benchmark Evaluation
2658
+
2659
+ Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
2660
+ on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmarks.
2661
+
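+ As a minimal local sketch (not part of the official instructions), a single MTEB task can be run with the [mteb](https://github.com/embeddings-benchmark/mteb) package; the task name and output folder below are only examples, and matching the reported numbers also relies on the "query: "/"passage: " prefixes described in the FAQ below.
+
+ ```python
+ # Illustrative sketch: evaluate intfloat/e5-base-v2 on one MTEB task.
+ from mteb import MTEB
+ from sentence_transformers import SentenceTransformer
+
+ # sentence-transformers models expose the encode() interface that MTEB expects
+ model = SentenceTransformer('intfloat/e5-base-v2')
+
+ evaluation = MTEB(tasks=["STS12"])  # any subset of MTEB task names works here
+ evaluation.run(model, output_folder="results/e5-base-v2")
+ ```
+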
2662
+ ## Support for Sentence Transformers
2663
+
2664
+ Below is an example of usage with `sentence_transformers`.
2665
+ ```python
2666
+ from sentence_transformers import SentenceTransformer
2667
+ model = SentenceTransformer('intfloat/e5-base-v2')
2668
+ input_texts = [
2669
+ 'query: how much protein should a female eat',
2670
+ 'query: summit define',
2671
+ "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
2672
+ "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
2673
+ ]
2674
+ embeddings = model.encode(input_texts, normalize_embeddings=True)
2675
+ ```
2676
+
2677
+ Package requirements:
2678
+
2679
+ `pip install sentence_transformers~=2.2.2`
2680
+
2681
+ Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
2682
+
2683
+ ## FAQ
2684
+
2685
+ **1. Do I need to add the prefixes "query: " and "passage: " to input texts?**
2686
+
2687
+ Yes, this is how the model is trained; otherwise you will see a performance degradation.
2688
+
2689
+ Here are some rules of thumb (an illustrative sketch follows the list):
2690
+ - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
2691
+
2692
+ - Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
2693
+
2694
+ - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
2695
+
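+ The snippet below is a minimal sketch of these rules; the texts are made up purely for illustration.
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer('intfloat/e5-base-v2')
+
+ # Asymmetric retrieval: "query: " for the question, "passage: " for the documents.
+ retrieval_texts = [
+     'query: what is the capital of France',
+     'passage: Paris is the capital and most populous city of France.',
+ ]
+
+ # Symmetric similarity / clustering / feature extraction: "query: " on every text.
+ similarity_texts = [
+     'query: A man is playing a guitar.',
+     'query: Someone is strumming an acoustic guitar.',
+ ]
+
+ retrieval_embeddings = model.encode(retrieval_texts, normalize_embeddings=True)
+ similarity_embeddings = model.encode(similarity_texts, normalize_embeddings=True)
+ ```
+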
2696
+ **2. Why are my reproduced results slightly different from those reported in the model card?**
2697
+
2698
+ Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
2699
+
2700
+ **3. Why are the cosine similarity scores distributed between 0.7 and 1.0?**
2701
+
2702
+ This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
2703
+
2704
+ For text embedding tasks like text retrieval or semantic similarity,
2705
+ what matters is the relative order of the scores rather than their absolute values,
2706
+ so this should not be an issue.
2707
+
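+ As a rough sketch (with made-up texts), downstream retrieval only uses the relative order of the scores:
+
+ ```python
+ import numpy as np
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer('intfloat/e5-base-v2')
+
+ query = 'query: how to bake bread'
+ candidates = [
+     'passage: Mix flour, water, yeast and salt, then bake the dough in a hot oven.',
+     'passage: The stock market closed higher on Friday.',
+ ]
+
+ embeddings = model.encode([query] + candidates, normalize_embeddings=True)
+ scores = embeddings[0] @ embeddings[1:].T  # cosine similarities; both values may look "high"
+ best = candidates[int(np.argmax(scores))]  # ranking depends only on the relative order
+ ```
+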
2708
+ ## Citation
2709
+
2710
+ If you find our paper or models helpful, please consider citing us as follows:
2711
+
2712
+ ```
2713
+ @article{wang2022text,
2714
+ title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
2715
+ author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
2716
+ journal={arXiv preprint arXiv:2212.03533},
2717
+ year={2022}
2718
+ }
2719
+ ```
2720
+
2721
+ ## Limitations
2722
+
2723
+ This model only works for English texts. Long texts will be truncated to at most 512 tokens.
config.json ADDED
@@ -0,0 +1,26 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "_name_or_path": "tmp/",
3
+ "architectures": [
4
+ "BertModel"
5
+ ],
6
+ "attention_probs_dropout_prob": 0.1,
7
+ "classifier_dropout": null,
8
+ "gradient_checkpointing": false,
9
+ "hidden_act": "gelu",
10
+ "hidden_dropout_prob": 0.1,
11
+ "hidden_size": 768,
12
+ "initializer_range": 0.02,
13
+ "intermediate_size": 3072,
14
+ "layer_norm_eps": 1e-12,
15
+ "max_position_embeddings": 512,
16
+ "model_type": "bert",
17
+ "num_attention_heads": 12,
18
+ "num_hidden_layers": 12,
19
+ "pad_token_id": 0,
20
+ "position_embedding_type": "absolute",
21
+ "torch_dtype": "float32",
22
+ "transformers_version": "4.29.0.dev0",
23
+ "type_vocab_size": 2,
24
+ "use_cache": true,
25
+ "vocab_size": 30522
26
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d0d559c47d5f71b1d280b13b62a2657f3e3bc70c0786f9ab91a36545e6a8f693
3
+ size 437955512
modules.json ADDED
@@ -0,0 +1,20 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [
2
+ {
3
+ "idx": 0,
4
+ "name": "0",
5
+ "path": "",
6
+ "type": "sentence_transformers.models.Transformer"
7
+ },
8
+ {
9
+ "idx": 1,
10
+ "name": "1",
11
+ "path": "1_Pooling",
12
+ "type": "sentence_transformers.models.Pooling"
13
+ },
14
+ {
15
+ "idx": 2,
16
+ "name": "2",
17
+ "path": "2_Normalize",
18
+ "type": "sentence_transformers.models.Normalize"
19
+ }
20
+ ]
onnx/model.onnx ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7c65bec2e3ae59c9f3ab86d4a9762c1a73677b5d7edbb41263cddb10b75a5dd5
3
+ size 435811539
onnx/model_quantized.onnx ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:251e9ea18f3228c049e6b89d418ffcdcd676f26ab9e17ee497e6cf9cbb7befbd
3
+ size 110083338
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5b119cd34c663c26fcd8bbe82e9873e9a16c6588b9817a4243947a6de478c273
3
+ size 437997357
quantize_config.json ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "per_channel": true,
3
+ "reduce_range": true,
4
+ "per_model_config": {
5
+ "model": {
6
+ "op_types": [
7
+ "MatMul",
8
+ "Concat",
9
+ "Sqrt",
10
+ "Reshape",
11
+ "Constant",
12
+ "Transpose",
13
+ "Gather",
14
+ "Unsqueeze",
15
+ "Mul",
16
+ "Div",
17
+ "Sub",
18
+ "Shape",
19
+ "Add",
20
+ "ReduceMean",
21
+ "Erf",
22
+ "Slice",
23
+ "Softmax",
24
+ "Cast",
25
+ "Pow"
26
+ ],
27
+ "weight_type": "QInt8"
28
+ }
29
+ }
30
+ }
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "max_seq_length": 512,
3
+ "do_lower_case": false
4
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ {
2
+ "cls_token": "[CLS]",
3
+ "mask_token": "[MASK]",
4
+ "pad_token": "[PAD]",
5
+ "sep_token": "[SEP]",
6
+ "unk_token": "[UNK]"
7
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,13 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "clean_up_tokenization_spaces": true,
3
+ "cls_token": "[CLS]",
4
+ "do_lower_case": true,
5
+ "mask_token": "[MASK]",
6
+ "model_max_length": 512,
7
+ "pad_token": "[PAD]",
8
+ "sep_token": "[SEP]",
9
+ "strip_accents": null,
10
+ "tokenize_chinese_chars": true,
11
+ "tokenizer_class": "BertTokenizer",
12
+ "unk_token": "[UNK]"
13
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff