Muennighoff committed
Commit 0d58f6f
Parent: 1aea373

Add MTEB benchmarking
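
This commit populates the card's `model-index` metadata with MTEB scores. As a minimal sketch (assuming the standard `mteb` + `sentence-transformers` workflow rather than the exact script behind this commit, and a Hub repo ID matching the card title), a single score of this kind can be reproduced like so:

```python
# Minimal sketch, not the exact command used for this commit: scoring the checkpoint
# on one of the MTEB tasks reported below with the `mteb` library.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Repo ID assumed from the card title; adjust if the Hub path differs.
model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit")

evaluation = MTEB(tasks=["Banking77Classification"])  # one task as a smoke test
evaluation.run(model, output_folder="results/sgpt-5.8b-weightedmean-nli-bitfit")
```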

Files changed (1)
  1. README.md +2499 -10
README.md CHANGED
@@ -4,9 +4,2498 @@ tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
  ---
 
- # SGPT-5.8B-weightedmean-nli-bitfit
 
  ## Usage
 
@@ -14,16 +2503,16 @@ For usage instructions, refer to our codebase: https://github.com/Muennighoff/sg
 
  ## Evaluation Results
 
- For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
 
  ## Training
  The model was trained with the parameters:
 
  **DataLoader**:
 
- `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
  ```
- {'batch_size': 6}
  ```
 
  **Loss**:
@@ -36,17 +2525,17 @@ The model was trained with the parameters:
  Parameters of the fit()-Method:
  ```
  {
- "epochs": 1,
- "evaluation_steps": 9394,
- "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
  "max_grad_norm": 1,
  "optimizer_class": "<class 'transformers.optimization.AdamW'>",
  "optimizer_params": {
- "lr": 0.0001
  },
  "scheduler": "WarmupLinear",
  "steps_per_epoch": null,
- "warmup_steps": 9395,
  "weight_decay": 0.01
  }
  ```
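
The fit() parameters above map directly onto a sentence-transformers training call. The sketch below is an assumption-laden reconstruction: the loss class and the toy training examples are placeholders (the loss entry falls outside this hunk), while the data loader, batch size, epochs, warmup steps, learning rate, weight decay, and gradient clipping come from the values listed above.

```python
# Sketch only: the loss class and the toy train_examples are assumptions, not read from
# this diff; the numeric hyperparameters are the ones listed in the card above.
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("path/to/base-checkpoint")  # placeholder path, not from the card

# Toy NLI-style triplets so the sketch runs end to end; the real loader had length 93941.
train_examples = [
    InputExample(texts=[f"anchor {i}", f"entailment {i}", f"contradiction {i}"])
    for i in range(12)
]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=6)

train_loss = losses.MultipleNegativesRankingLoss(model)  # assumed loss, see note above

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=9395,              # WarmupLinear scheduler
    optimizer_params={"lr": 1e-4},
    weight_decay=0.01,
    max_grad_norm=1,
)
```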
@@ -55,7 +2544,7 @@ Parameters of the fit()-Method:
  ## Full Model Architecture
  ```
  SentenceTransformer(
- (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTJModel
  (1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
  )
  ```
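
The card's Usage section defers to the SGPT codebase; as a minimal sketch (repo ID assumed from the card title), the checkpoint can also be loaded directly through sentence-transformers, which applies the weighted-mean pooling shown in the architecture block above:

```python
# Minimal sketch, assuming the Hub repo ID matches the card title; see the SGPT codebase
# for the authors' own usage instructions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit")

sentences = ["That is a happy person", "That is a very happy person"]
embeddings = model.encode(sentences)   # 4096-dim vectors, weighted-mean pooled
print(util.cos_sim(embeddings[0], embeddings[1]))
```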
 
4
  - sentence-transformers
5
  - feature-extraction
6
  - sentence-similarity
7
+ model-index:
8
+ - name: SGPT-5.8B-weightedmean-nli-bitfit
9
+ results:
10
+ - task:
11
+ type: Classification
12
+ dataset:
13
+ type: mteb/amazon_counterfactual
14
+ name: MTEB AmazonCounterfactualClassification (en)
15
+ config: en
16
+ split: test
17
+ metrics:
18
+ - type: accuracy
19
+ value: 74.07462686567165
20
+ - type: ap
21
+ value: 37.44692407529112
22
+ - type: f1
23
+ value: 68.28971003916419
24
+ - task:
25
+ type: Classification
26
+ dataset:
27
+ type: mteb/amazon_counterfactual
28
+ name: MTEB AmazonCounterfactualClassification (de)
29
+ config: de
30
+ split: test
31
+ metrics:
32
+ - type: accuracy
33
+ value: 66.63811563169165
34
+ - type: ap
35
+ value: 78.57252079915924
36
+ - type: f1
37
+ value: 64.5543087846584
38
+ - task:
39
+ type: Classification
40
+ dataset:
41
+ type: mteb/amazon_counterfactual
42
+ name: MTEB AmazonCounterfactualClassification (en-ext)
43
+ config: en-ext
44
+ split: test
45
+ metrics:
46
+ - type: accuracy
47
+ value: 77.21889055472263
48
+ - type: ap
49
+ value: 25.663426367826712
50
+ - type: f1
51
+ value: 64.26265688503176
52
+ - task:
53
+ type: Classification
54
+ dataset:
55
+ type: mteb/amazon_counterfactual
56
+ name: MTEB AmazonCounterfactualClassification (ja)
57
+ config: ja
58
+ split: test
59
+ metrics:
60
+ - type: accuracy
61
+ value: 58.06209850107067
62
+ - type: ap
63
+ value: 14.028219107023915
64
+ - type: f1
65
+ value: 48.10387189660778
66
+ - task:
67
+ type: Classification
68
+ dataset:
69
+ type: mteb/amazon_polarity
70
+ name: MTEB AmazonPolarityClassification
71
+ config: default
72
+ split: test
73
+ metrics:
74
+ - type: accuracy
75
+ value: 82.30920000000002
76
+ - type: ap
77
+ value: 76.88786578621213
78
+ - type: f1
79
+ value: 82.15455656065011
80
+ - task:
81
+ type: Classification
82
+ dataset:
83
+ type: mteb/amazon_reviews_multi
84
+ name: MTEB AmazonReviewsClassification (en)
85
+ config: en
86
+ split: test
87
+ metrics:
88
+ - type: accuracy
89
+ value: 41.584
90
+ - type: f1
91
+ value: 41.203137944390114
92
+ - task:
93
+ type: Classification
94
+ dataset:
95
+ type: mteb/amazon_reviews_multi
96
+ name: MTEB AmazonReviewsClassification (de)
97
+ config: de
98
+ split: test
99
+ metrics:
100
+ - type: accuracy
101
+ value: 35.288000000000004
102
+ - type: f1
103
+ value: 34.672995558518096
104
+ - task:
105
+ type: Classification
106
+ dataset:
107
+ type: mteb/amazon_reviews_multi
108
+ name: MTEB AmazonReviewsClassification (es)
109
+ config: es
110
+ split: test
111
+ metrics:
112
+ - type: accuracy
113
+ value: 38.34
114
+ - type: f1
115
+ value: 37.608755629529455
116
+ - task:
117
+ type: Classification
118
+ dataset:
119
+ type: mteb/amazon_reviews_multi
120
+ name: MTEB AmazonReviewsClassification (fr)
121
+ config: fr
122
+ split: test
123
+ metrics:
124
+ - type: accuracy
125
+ value: 37.839999999999996
126
+ - type: f1
127
+ value: 36.86898201563507
128
+ - task:
129
+ type: Classification
130
+ dataset:
131
+ type: mteb/amazon_reviews_multi
132
+ name: MTEB AmazonReviewsClassification (ja)
133
+ config: ja
134
+ split: test
135
+ metrics:
136
+ - type: accuracy
137
+ value: 30.936000000000003
138
+ - type: f1
139
+ value: 30.49401738527071
140
+ - task:
141
+ type: Classification
142
+ dataset:
143
+ type: mteb/amazon_reviews_multi
144
+ name: MTEB AmazonReviewsClassification (zh)
145
+ config: zh
146
+ split: test
147
+ metrics:
148
+ - type: accuracy
149
+ value: 33.75
150
+ - type: f1
151
+ value: 33.38338946025617
152
+ - task:
153
+ type: Retrieval
154
+ dataset:
155
+ type: arguana
156
+ name: MTEB ArguAna
157
+ config: default
158
+ split: test
159
+ metrics:
160
+ - type: map_at_1
161
+ value: 13.727
162
+ - type: map_at_10
163
+ value: 26.740000000000002
164
+ - type: map_at_100
165
+ value: 28.218
166
+ - type: map_at_1000
167
+ value: 28.246
168
+ - type: map_at_3
169
+ value: 21.728
170
+ - type: map_at_5
171
+ value: 24.371000000000002
172
+ - type: ndcg_at_1
173
+ value: 13.727
174
+ - type: ndcg_at_10
175
+ value: 35.07
176
+ - type: ndcg_at_100
177
+ value: 41.947
178
+ - type: ndcg_at_1000
179
+ value: 42.649
180
+ - type: ndcg_at_3
181
+ value: 24.484
182
+ - type: ndcg_at_5
183
+ value: 29.282999999999998
184
+ - type: precision_at_1
185
+ value: 13.727
186
+ - type: precision_at_10
187
+ value: 6.223
188
+ - type: precision_at_100
189
+ value: 0.9369999999999999
190
+ - type: precision_at_1000
191
+ value: 0.099
192
+ - type: precision_at_3
193
+ value: 10.835
194
+ - type: precision_at_5
195
+ value: 8.848
196
+ - type: recall_at_1
197
+ value: 13.727
198
+ - type: recall_at_10
199
+ value: 62.233000000000004
200
+ - type: recall_at_100
201
+ value: 93.67
202
+ - type: recall_at_1000
203
+ value: 99.14699999999999
204
+ - type: recall_at_3
205
+ value: 32.504
206
+ - type: recall_at_5
207
+ value: 44.239
208
+ - task:
209
+ type: Clustering
210
+ dataset:
211
+ type: mteb/arxiv-clustering-p2p
212
+ name: MTEB ArxivClusteringP2P
213
+ config: default
214
+ split: test
215
+ metrics:
216
+ - type: v_measure
217
+ value: 40.553923271901695
218
+ - task:
219
+ type: Clustering
220
+ dataset:
221
+ type: mteb/arxiv-clustering-s2s
222
+ name: MTEB ArxivClusteringS2S
223
+ config: default
224
+ split: test
225
+ metrics:
226
+ - type: v_measure
227
+ value: 32.49323183712211
228
+ - task:
229
+ type: Reranking
230
+ dataset:
231
+ type: mteb/askubuntudupquestions-reranking
232
+ name: MTEB AskUbuntuDupQuestions
233
+ config: default
234
+ split: test
235
+ metrics:
236
+ - type: map
237
+ value: 55.89811361443445
238
+ - type: mrr
239
+ value: 70.16235764850724
240
+ - task:
241
+ type: STS
242
+ dataset:
243
+ type: mteb/biosses-sts
244
+ name: MTEB BIOSSES
245
+ config: default
246
+ split: test
247
+ metrics:
248
+ - type: cos_sim_pearson
249
+ value: 82.50506557805856
250
+ - type: cos_sim_spearman
251
+ value: 79.50000423261176
252
+ - type: euclidean_pearson
253
+ value: 75.76190885392926
254
+ - type: euclidean_spearman
255
+ value: 76.7330737163434
256
+ - type: manhattan_pearson
257
+ value: 75.825318036112
258
+ - type: manhattan_spearman
259
+ value: 76.7415076434559
260
+ - task:
261
+ type: BitextMining
262
+ dataset:
263
+ type: mteb/bucc-bitext-mining
264
+ name: MTEB BUCC (de-en)
265
+ config: de-en
266
+ split: test
267
+ metrics:
268
+ - type: accuracy
269
+ value: 75.49060542797494
270
+ - type: f1
271
+ value: 75.15379262352123
272
+ - type: precision
273
+ value: 74.99391092553932
274
+ - type: recall
275
+ value: 75.49060542797494
276
+ - task:
277
+ type: BitextMining
278
+ dataset:
279
+ type: mteb/bucc-bitext-mining
280
+ name: MTEB BUCC (fr-en)
281
+ config: fr-en
282
+ split: test
283
+ metrics:
284
+ - type: accuracy
285
+ value: 0.4182258419546555
286
+ - type: f1
287
+ value: 0.4182258419546555
288
+ - type: precision
289
+ value: 0.4182258419546555
290
+ - type: recall
291
+ value: 0.4182258419546555
292
+ - task:
293
+ type: BitextMining
294
+ dataset:
295
+ type: mteb/bucc-bitext-mining
296
+ name: MTEB BUCC (ru-en)
297
+ config: ru-en
298
+ split: test
299
+ metrics:
300
+ - type: accuracy
301
+ value: 0.013855213023900243
302
+ - type: f1
303
+ value: 0.0115460108532502
304
+ - type: precision
305
+ value: 0.010391409767925183
306
+ - type: recall
307
+ value: 0.013855213023900243
308
+ - task:
309
+ type: BitextMining
310
+ dataset:
311
+ type: mteb/bucc-bitext-mining
312
+ name: MTEB BUCC (zh-en)
313
+ config: zh-en
314
+ split: test
315
+ metrics:
316
+ - type: accuracy
317
+ value: 0.315955766192733
318
+ - type: f1
319
+ value: 0.315955766192733
320
+ - type: precision
321
+ value: 0.315955766192733
322
+ - type: recall
323
+ value: 0.315955766192733
324
+ - task:
325
+ type: Classification
326
+ dataset:
327
+ type: mteb/banking77
328
+ name: MTEB Banking77Classification
329
+ config: default
330
+ split: test
331
+ metrics:
332
+ - type: accuracy
333
+ value: 81.74025974025973
334
+ - type: f1
335
+ value: 81.66568824876
336
+ - task:
337
+ type: Clustering
338
+ dataset:
339
+ type: mteb/biorxiv-clustering-p2p
340
+ name: MTEB BiorxivClusteringP2P
341
+ config: default
342
+ split: test
343
+ metrics:
344
+ - type: v_measure
345
+ value: 33.59451202614059
346
+ - task:
347
+ type: Clustering
348
+ dataset:
349
+ type: mteb/biorxiv-clustering-s2s
350
+ name: MTEB BiorxivClusteringS2S
351
+ config: default
352
+ split: test
353
+ metrics:
354
+ - type: v_measure
355
+ value: 29.128241446157165
356
+ - task:
357
+ type: Retrieval
358
+ dataset:
359
+ type: BeIR/cqadupstack
360
+ name: MTEB CQADupstackAndroidRetrieval
361
+ config: default
362
+ split: test
363
+ metrics:
364
+ - type: map_at_1
365
+ value: 26.715
366
+ - type: map_at_10
367
+ value: 35.007
368
+ - type: map_at_100
369
+ value: 36.352000000000004
370
+ - type: map_at_1000
371
+ value: 36.51
372
+ - type: map_at_3
373
+ value: 32.257999999999996
374
+ - type: map_at_5
375
+ value: 33.595000000000006
376
+ - type: ndcg_at_1
377
+ value: 33.906
378
+ - type: ndcg_at_10
379
+ value: 40.353
380
+ - type: ndcg_at_100
381
+ value: 45.562999999999995
382
+ - type: ndcg_at_1000
383
+ value: 48.454
384
+ - type: ndcg_at_3
385
+ value: 36.349
386
+ - type: ndcg_at_5
387
+ value: 37.856
388
+ - type: precision_at_1
389
+ value: 33.906
390
+ - type: precision_at_10
391
+ value: 7.854
392
+ - type: precision_at_100
393
+ value: 1.29
394
+ - type: precision_at_1000
395
+ value: 0.188
396
+ - type: precision_at_3
397
+ value: 17.549
398
+ - type: precision_at_5
399
+ value: 12.561
400
+ - type: recall_at_1
401
+ value: 26.715
402
+ - type: recall_at_10
403
+ value: 49.508
404
+ - type: recall_at_100
405
+ value: 71.76599999999999
406
+ - type: recall_at_1000
407
+ value: 91.118
408
+ - type: recall_at_3
409
+ value: 37.356
410
+ - type: recall_at_5
411
+ value: 41.836
412
+ - task:
413
+ type: Retrieval
414
+ dataset:
415
+ type: BeIR/cqadupstack
416
+ name: MTEB CQADupstackEnglishRetrieval
417
+ config: default
418
+ split: test
419
+ metrics:
420
+ - type: map_at_1
421
+ value: 19.663
422
+ - type: map_at_10
423
+ value: 27.086
424
+ - type: map_at_100
425
+ value: 28.066999999999997
426
+ - type: map_at_1000
427
+ value: 28.18
428
+ - type: map_at_3
429
+ value: 24.819
430
+ - type: map_at_5
431
+ value: 26.332
432
+ - type: ndcg_at_1
433
+ value: 25.732
434
+ - type: ndcg_at_10
435
+ value: 31.613999999999997
436
+ - type: ndcg_at_100
437
+ value: 35.757
438
+ - type: ndcg_at_1000
439
+ value: 38.21
440
+ - type: ndcg_at_3
441
+ value: 28.332
442
+ - type: ndcg_at_5
443
+ value: 30.264000000000003
444
+ - type: precision_at_1
445
+ value: 25.732
446
+ - type: precision_at_10
447
+ value: 6.038
448
+ - type: precision_at_100
449
+ value: 1.034
450
+ - type: precision_at_1000
451
+ value: 0.149
452
+ - type: precision_at_3
453
+ value: 13.864
454
+ - type: precision_at_5
455
+ value: 10.241999999999999
456
+ - type: recall_at_1
457
+ value: 19.663
458
+ - type: recall_at_10
459
+ value: 39.585
460
+ - type: recall_at_100
461
+ value: 57.718
462
+ - type: recall_at_1000
463
+ value: 74.26700000000001
464
+ - type: recall_at_3
465
+ value: 29.845
466
+ - type: recall_at_5
467
+ value: 35.105
468
+ - task:
469
+ type: Retrieval
470
+ dataset:
471
+ type: BeIR/cqadupstack
472
+ name: MTEB CQADupstackGamingRetrieval
473
+ config: default
474
+ split: test
475
+ metrics:
476
+ - type: map_at_1
477
+ value: 30.125
478
+ - type: map_at_10
479
+ value: 39.824
480
+ - type: map_at_100
481
+ value: 40.935
482
+ - type: map_at_1000
483
+ value: 41.019
484
+ - type: map_at_3
485
+ value: 37.144
486
+ - type: map_at_5
487
+ value: 38.647999999999996
488
+ - type: ndcg_at_1
489
+ value: 34.922
490
+ - type: ndcg_at_10
491
+ value: 45.072
492
+ - type: ndcg_at_100
493
+ value: 50.046
494
+ - type: ndcg_at_1000
495
+ value: 51.895
496
+ - type: ndcg_at_3
497
+ value: 40.251
498
+ - type: ndcg_at_5
499
+ value: 42.581
500
+ - type: precision_at_1
501
+ value: 34.922
502
+ - type: precision_at_10
503
+ value: 7.303999999999999
504
+ - type: precision_at_100
505
+ value: 1.0739999999999998
506
+ - type: precision_at_1000
507
+ value: 0.13
508
+ - type: precision_at_3
509
+ value: 17.994
510
+ - type: precision_at_5
511
+ value: 12.475999999999999
512
+ - type: recall_at_1
513
+ value: 30.125
514
+ - type: recall_at_10
515
+ value: 57.253
516
+ - type: recall_at_100
517
+ value: 79.35799999999999
518
+ - type: recall_at_1000
519
+ value: 92.523
520
+ - type: recall_at_3
521
+ value: 44.088
522
+ - type: recall_at_5
523
+ value: 49.893
524
+ - task:
525
+ type: Retrieval
526
+ dataset:
527
+ type: BeIR/cqadupstack
528
+ name: MTEB CQADupstackGisRetrieval
529
+ config: default
530
+ split: test
531
+ metrics:
532
+ - type: map_at_1
533
+ value: 16.298000000000002
534
+ - type: map_at_10
535
+ value: 21.479
536
+ - type: map_at_100
537
+ value: 22.387
538
+ - type: map_at_1000
539
+ value: 22.483
540
+ - type: map_at_3
541
+ value: 19.743
542
+ - type: map_at_5
543
+ value: 20.444000000000003
544
+ - type: ndcg_at_1
545
+ value: 17.740000000000002
546
+ - type: ndcg_at_10
547
+ value: 24.887
548
+ - type: ndcg_at_100
549
+ value: 29.544999999999998
550
+ - type: ndcg_at_1000
551
+ value: 32.417
552
+ - type: ndcg_at_3
553
+ value: 21.274
554
+ - type: ndcg_at_5
555
+ value: 22.399
556
+ - type: precision_at_1
557
+ value: 17.740000000000002
558
+ - type: precision_at_10
559
+ value: 3.932
560
+ - type: precision_at_100
561
+ value: 0.666
562
+ - type: precision_at_1000
563
+ value: 0.094
564
+ - type: precision_at_3
565
+ value: 8.927
566
+ - type: precision_at_5
567
+ value: 6.056
568
+ - type: recall_at_1
569
+ value: 16.298000000000002
570
+ - type: recall_at_10
571
+ value: 34.031
572
+ - type: recall_at_100
573
+ value: 55.769000000000005
574
+ - type: recall_at_1000
575
+ value: 78.19500000000001
576
+ - type: recall_at_3
577
+ value: 23.799999999999997
578
+ - type: recall_at_5
579
+ value: 26.562
580
+ - task:
581
+ type: Retrieval
582
+ dataset:
583
+ type: BeIR/cqadupstack
584
+ name: MTEB CQADupstackMathematicaRetrieval
585
+ config: default
586
+ split: test
587
+ metrics:
588
+ - type: map_at_1
589
+ value: 10.958
590
+ - type: map_at_10
591
+ value: 16.999
592
+ - type: map_at_100
593
+ value: 17.979
594
+ - type: map_at_1000
595
+ value: 18.112000000000002
596
+ - type: map_at_3
597
+ value: 15.010000000000002
598
+ - type: map_at_5
599
+ value: 16.256999999999998
600
+ - type: ndcg_at_1
601
+ value: 14.179
602
+ - type: ndcg_at_10
603
+ value: 20.985
604
+ - type: ndcg_at_100
605
+ value: 26.216
606
+ - type: ndcg_at_1000
607
+ value: 29.675
608
+ - type: ndcg_at_3
609
+ value: 17.28
610
+ - type: ndcg_at_5
611
+ value: 19.301
612
+ - type: precision_at_1
613
+ value: 14.179
614
+ - type: precision_at_10
615
+ value: 3.968
616
+ - type: precision_at_100
617
+ value: 0.784
618
+ - type: precision_at_1000
619
+ value: 0.121
620
+ - type: precision_at_3
621
+ value: 8.541
622
+ - type: precision_at_5
623
+ value: 6.468
624
+ - type: recall_at_1
625
+ value: 10.958
626
+ - type: recall_at_10
627
+ value: 29.903000000000002
628
+ - type: recall_at_100
629
+ value: 53.413
630
+ - type: recall_at_1000
631
+ value: 78.74799999999999
632
+ - type: recall_at_3
633
+ value: 19.717000000000002
634
+ - type: recall_at_5
635
+ value: 24.817
636
+ - task:
637
+ type: Retrieval
638
+ dataset:
639
+ type: BeIR/cqadupstack
640
+ name: MTEB CQADupstackPhysicsRetrieval
641
+ config: default
642
+ split: test
643
+ metrics:
644
+ - type: map_at_1
645
+ value: 21.217
646
+ - type: map_at_10
647
+ value: 29.677
648
+ - type: map_at_100
649
+ value: 30.928
650
+ - type: map_at_1000
651
+ value: 31.063000000000002
652
+ - type: map_at_3
653
+ value: 26.611
654
+ - type: map_at_5
655
+ value: 28.463
656
+ - type: ndcg_at_1
657
+ value: 26.083000000000002
658
+ - type: ndcg_at_10
659
+ value: 35.217
660
+ - type: ndcg_at_100
661
+ value: 40.715
662
+ - type: ndcg_at_1000
663
+ value: 43.559
664
+ - type: ndcg_at_3
665
+ value: 30.080000000000002
666
+ - type: ndcg_at_5
667
+ value: 32.701
668
+ - type: precision_at_1
669
+ value: 26.083000000000002
670
+ - type: precision_at_10
671
+ value: 6.622
672
+ - type: precision_at_100
673
+ value: 1.115
674
+ - type: precision_at_1000
675
+ value: 0.156
676
+ - type: precision_at_3
677
+ value: 14.629
678
+ - type: precision_at_5
679
+ value: 10.837
680
+ - type: recall_at_1
681
+ value: 21.217
682
+ - type: recall_at_10
683
+ value: 47.031
684
+ - type: recall_at_100
685
+ value: 70.378
686
+ - type: recall_at_1000
687
+ value: 89.704
688
+ - type: recall_at_3
689
+ value: 32.427
690
+ - type: recall_at_5
691
+ value: 39.31
692
+ - task:
693
+ type: Retrieval
694
+ dataset:
695
+ type: BeIR/cqadupstack
696
+ name: MTEB CQADupstackProgrammersRetrieval
697
+ config: default
698
+ split: test
699
+ metrics:
700
+ - type: map_at_1
701
+ value: 19.274
702
+ - type: map_at_10
703
+ value: 26.398
704
+ - type: map_at_100
705
+ value: 27.711000000000002
706
+ - type: map_at_1000
707
+ value: 27.833000000000002
708
+ - type: map_at_3
709
+ value: 24.294
710
+ - type: map_at_5
711
+ value: 25.385
712
+ - type: ndcg_at_1
713
+ value: 24.886
714
+ - type: ndcg_at_10
715
+ value: 30.909
716
+ - type: ndcg_at_100
717
+ value: 36.941
718
+ - type: ndcg_at_1000
719
+ value: 39.838
720
+ - type: ndcg_at_3
721
+ value: 27.455000000000002
722
+ - type: ndcg_at_5
723
+ value: 28.828
724
+ - type: precision_at_1
725
+ value: 24.886
726
+ - type: precision_at_10
727
+ value: 5.6739999999999995
728
+ - type: precision_at_100
729
+ value: 1.0290000000000001
730
+ - type: precision_at_1000
731
+ value: 0.146
732
+ - type: precision_at_3
733
+ value: 13.242
734
+ - type: precision_at_5
735
+ value: 9.292
736
+ - type: recall_at_1
737
+ value: 19.274
738
+ - type: recall_at_10
739
+ value: 39.643
740
+ - type: recall_at_100
741
+ value: 66.091
742
+ - type: recall_at_1000
743
+ value: 86.547
744
+ - type: recall_at_3
745
+ value: 29.602
746
+ - type: recall_at_5
747
+ value: 33.561
748
+ - task:
749
+ type: Retrieval
750
+ dataset:
751
+ type: BeIR/cqadupstack
752
+ name: MTEB CQADupstackRetrieval
753
+ config: default
754
+ split: test
755
+ metrics:
756
+ - type: map_at_1
757
+ value: 18.653666666666666
758
+ - type: map_at_10
759
+ value: 25.606666666666666
760
+ - type: map_at_100
761
+ value: 26.669333333333334
762
+ - type: map_at_1000
763
+ value: 26.795833333333334
764
+ - type: map_at_3
765
+ value: 23.43433333333333
766
+ - type: map_at_5
767
+ value: 24.609666666666666
768
+ - type: ndcg_at_1
769
+ value: 22.742083333333333
770
+ - type: ndcg_at_10
771
+ value: 29.978333333333335
772
+ - type: ndcg_at_100
773
+ value: 34.89808333333333
774
+ - type: ndcg_at_1000
775
+ value: 37.806583333333336
776
+ - type: ndcg_at_3
777
+ value: 26.223666666666674
778
+ - type: ndcg_at_5
779
+ value: 27.91033333333333
780
+ - type: precision_at_1
781
+ value: 22.742083333333333
782
+ - type: precision_at_10
783
+ value: 5.397083333333334
784
+ - type: precision_at_100
785
+ value: 0.9340000000000002
786
+ - type: precision_at_1000
787
+ value: 0.13691666666666663
788
+ - type: precision_at_3
789
+ value: 12.331083333333332
790
+ - type: precision_at_5
791
+ value: 8.805499999999999
792
+ - type: recall_at_1
793
+ value: 18.653666666666666
794
+ - type: recall_at_10
795
+ value: 39.22625000000001
796
+ - type: recall_at_100
797
+ value: 61.31049999999999
798
+ - type: recall_at_1000
799
+ value: 82.19058333333334
800
+ - type: recall_at_3
801
+ value: 28.517333333333333
802
+ - type: recall_at_5
803
+ value: 32.9565
804
+ - task:
805
+ type: Retrieval
806
+ dataset:
807
+ type: BeIR/cqadupstack
808
+ name: MTEB CQADupstackStatsRetrieval
809
+ config: default
810
+ split: test
811
+ metrics:
812
+ - type: map_at_1
813
+ value: 16.07
814
+ - type: map_at_10
815
+ value: 21.509
816
+ - type: map_at_100
817
+ value: 22.335
818
+ - type: map_at_1000
819
+ value: 22.437
820
+ - type: map_at_3
821
+ value: 19.717000000000002
822
+ - type: map_at_5
823
+ value: 20.574
824
+ - type: ndcg_at_1
825
+ value: 18.865000000000002
826
+ - type: ndcg_at_10
827
+ value: 25.135999999999996
828
+ - type: ndcg_at_100
829
+ value: 29.483999999999998
830
+ - type: ndcg_at_1000
831
+ value: 32.303
832
+ - type: ndcg_at_3
833
+ value: 21.719
834
+ - type: ndcg_at_5
835
+ value: 23.039
836
+ - type: precision_at_1
837
+ value: 18.865000000000002
838
+ - type: precision_at_10
839
+ value: 4.263999999999999
840
+ - type: precision_at_100
841
+ value: 0.696
842
+ - type: precision_at_1000
843
+ value: 0.1
844
+ - type: precision_at_3
845
+ value: 9.866999999999999
846
+ - type: precision_at_5
847
+ value: 6.902
848
+ - type: recall_at_1
849
+ value: 16.07
850
+ - type: recall_at_10
851
+ value: 33.661
852
+ - type: recall_at_100
853
+ value: 54.001999999999995
854
+ - type: recall_at_1000
855
+ value: 75.564
856
+ - type: recall_at_3
857
+ value: 23.956
858
+ - type: recall_at_5
859
+ value: 27.264
860
+ - task:
861
+ type: Retrieval
862
+ dataset:
863
+ type: BeIR/cqadupstack
864
+ name: MTEB CQADupstackTexRetrieval
865
+ config: default
866
+ split: test
867
+ metrics:
868
+ - type: map_at_1
869
+ value: 10.847
870
+ - type: map_at_10
871
+ value: 15.518
872
+ - type: map_at_100
873
+ value: 16.384
874
+ - type: map_at_1000
875
+ value: 16.506
876
+ - type: map_at_3
877
+ value: 14.093
878
+ - type: map_at_5
879
+ value: 14.868
880
+ - type: ndcg_at_1
881
+ value: 13.764999999999999
882
+ - type: ndcg_at_10
883
+ value: 18.766
884
+ - type: ndcg_at_100
885
+ value: 23.076
886
+ - type: ndcg_at_1000
887
+ value: 26.344
888
+ - type: ndcg_at_3
889
+ value: 16.150000000000002
890
+ - type: ndcg_at_5
891
+ value: 17.373
892
+ - type: precision_at_1
893
+ value: 13.764999999999999
894
+ - type: precision_at_10
895
+ value: 3.572
896
+ - type: precision_at_100
897
+ value: 0.6779999999999999
898
+ - type: precision_at_1000
899
+ value: 0.11199999999999999
900
+ - type: precision_at_3
901
+ value: 7.88
902
+ - type: precision_at_5
903
+ value: 5.712
904
+ - type: recall_at_1
905
+ value: 10.847
906
+ - type: recall_at_10
907
+ value: 25.141999999999996
908
+ - type: recall_at_100
909
+ value: 44.847
910
+ - type: recall_at_1000
911
+ value: 68.92099999999999
912
+ - type: recall_at_3
913
+ value: 17.721999999999998
914
+ - type: recall_at_5
915
+ value: 20.968999999999998
916
+ - task:
917
+ type: Retrieval
918
+ dataset:
919
+ type: BeIR/cqadupstack
920
+ name: MTEB CQADupstackUnixRetrieval
921
+ config: default
922
+ split: test
923
+ metrics:
924
+ - type: map_at_1
925
+ value: 18.377
926
+ - type: map_at_10
927
+ value: 26.005
928
+ - type: map_at_100
929
+ value: 26.996
930
+ - type: map_at_1000
931
+ value: 27.116
932
+ - type: map_at_3
933
+ value: 23.712
934
+ - type: map_at_5
935
+ value: 24.859
936
+ - type: ndcg_at_1
937
+ value: 22.201
938
+ - type: ndcg_at_10
939
+ value: 30.635
940
+ - type: ndcg_at_100
941
+ value: 35.623
942
+ - type: ndcg_at_1000
943
+ value: 38.551
944
+ - type: ndcg_at_3
945
+ value: 26.565
946
+ - type: ndcg_at_5
947
+ value: 28.28
948
+ - type: precision_at_1
949
+ value: 22.201
950
+ - type: precision_at_10
951
+ value: 5.41
952
+ - type: precision_at_100
953
+ value: 0.88
954
+ - type: precision_at_1000
955
+ value: 0.125
956
+ - type: precision_at_3
957
+ value: 12.531
958
+ - type: precision_at_5
959
+ value: 8.806
960
+ - type: recall_at_1
961
+ value: 18.377
962
+ - type: recall_at_10
963
+ value: 40.908
964
+ - type: recall_at_100
965
+ value: 63.563
966
+ - type: recall_at_1000
967
+ value: 84.503
968
+ - type: recall_at_3
969
+ value: 29.793999999999997
970
+ - type: recall_at_5
971
+ value: 34.144999999999996
972
+ - task:
973
+ type: Retrieval
974
+ dataset:
975
+ type: BeIR/cqadupstack
976
+ name: MTEB CQADupstackWebmastersRetrieval
977
+ config: default
978
+ split: test
979
+ metrics:
980
+ - type: map_at_1
981
+ value: 20.246
982
+ - type: map_at_10
983
+ value: 27.528000000000002
984
+ - type: map_at_100
985
+ value: 28.78
986
+ - type: map_at_1000
987
+ value: 29.002
988
+ - type: map_at_3
989
+ value: 25.226
990
+ - type: map_at_5
991
+ value: 26.355
992
+ - type: ndcg_at_1
993
+ value: 25.099
994
+ - type: ndcg_at_10
995
+ value: 32.421
996
+ - type: ndcg_at_100
997
+ value: 37.2
998
+ - type: ndcg_at_1000
999
+ value: 40.693
1000
+ - type: ndcg_at_3
1001
+ value: 28.768
1002
+ - type: ndcg_at_5
1003
+ value: 30.23
1004
+ - type: precision_at_1
1005
+ value: 25.099
1006
+ - type: precision_at_10
1007
+ value: 6.245
1008
+ - type: precision_at_100
1009
+ value: 1.269
1010
+ - type: precision_at_1000
1011
+ value: 0.218
1012
+ - type: precision_at_3
1013
+ value: 13.767999999999999
1014
+ - type: precision_at_5
1015
+ value: 9.881
1016
+ - type: recall_at_1
1017
+ value: 20.246
1018
+ - type: recall_at_10
1019
+ value: 41.336
1020
+ - type: recall_at_100
1021
+ value: 63.098
1022
+ - type: recall_at_1000
1023
+ value: 86.473
1024
+ - type: recall_at_3
1025
+ value: 30.069000000000003
1026
+ - type: recall_at_5
1027
+ value: 34.262
1028
+ - task:
1029
+ type: Retrieval
1030
+ dataset:
1031
+ type: BeIR/cqadupstack
1032
+ name: MTEB CQADupstackWordpressRetrieval
1033
+ config: default
1034
+ split: test
1035
+ metrics:
1036
+ - type: map_at_1
1037
+ value: 14.054
1038
+ - type: map_at_10
1039
+ value: 20.25
1040
+ - type: map_at_100
1041
+ value: 21.178
1042
+ - type: map_at_1000
1043
+ value: 21.288999999999998
1044
+ - type: map_at_3
1045
+ value: 18.584999999999997
1046
+ - type: map_at_5
1047
+ value: 19.536
1048
+ - type: ndcg_at_1
1049
+ value: 15.527
1050
+ - type: ndcg_at_10
1051
+ value: 23.745
1052
+ - type: ndcg_at_100
1053
+ value: 28.610999999999997
1054
+ - type: ndcg_at_1000
1055
+ value: 31.740000000000002
1056
+ - type: ndcg_at_3
1057
+ value: 20.461
1058
+ - type: ndcg_at_5
1059
+ value: 22.072
1060
+ - type: precision_at_1
1061
+ value: 15.527
1062
+ - type: precision_at_10
1063
+ value: 3.882
1064
+ - type: precision_at_100
1065
+ value: 0.6930000000000001
1066
+ - type: precision_at_1000
1067
+ value: 0.104
1068
+ - type: precision_at_3
1069
+ value: 9.181000000000001
1070
+ - type: precision_at_5
1071
+ value: 6.433
1072
+ - type: recall_at_1
1073
+ value: 14.054
1074
+ - type: recall_at_10
1075
+ value: 32.714
1076
+ - type: recall_at_100
1077
+ value: 55.723
1078
+ - type: recall_at_1000
1079
+ value: 79.72399999999999
1080
+ - type: recall_at_3
1081
+ value: 23.832
1082
+ - type: recall_at_5
1083
+ value: 27.754
1084
+ - task:
1085
+ type: Retrieval
1086
+ dataset:
1087
+ type: climate-fever
1088
+ name: MTEB ClimateFEVER
1089
+ config: default
1090
+ split: test
1091
+ metrics:
1092
+ - type: map_at_1
1093
+ value: 6.122
1094
+ - type: map_at_10
1095
+ value: 11.556
1096
+ - type: map_at_100
1097
+ value: 12.998000000000001
1098
+ - type: map_at_1000
1099
+ value: 13.202
1100
+ - type: map_at_3
1101
+ value: 9.657
1102
+ - type: map_at_5
1103
+ value: 10.585
1104
+ - type: ndcg_at_1
1105
+ value: 15.049000000000001
1106
+ - type: ndcg_at_10
1107
+ value: 17.574
1108
+ - type: ndcg_at_100
1109
+ value: 24.465999999999998
1110
+ - type: ndcg_at_1000
1111
+ value: 28.511999999999997
1112
+ - type: ndcg_at_3
1113
+ value: 13.931
1114
+ - type: ndcg_at_5
1115
+ value: 15.112
1116
+ - type: precision_at_1
1117
+ value: 15.049000000000001
1118
+ - type: precision_at_10
1119
+ value: 5.831
1120
+ - type: precision_at_100
1121
+ value: 1.322
1122
+ - type: precision_at_1000
1123
+ value: 0.20500000000000002
1124
+ - type: precision_at_3
1125
+ value: 10.749
1126
+ - type: precision_at_5
1127
+ value: 8.365
1128
+ - type: recall_at_1
1129
+ value: 6.122
1130
+ - type: recall_at_10
1131
+ value: 22.207
1132
+ - type: recall_at_100
1133
+ value: 47.08
1134
+ - type: recall_at_1000
1135
+ value: 70.182
1136
+ - type: recall_at_3
1137
+ value: 13.416
1138
+ - type: recall_at_5
1139
+ value: 16.672
1140
+ - task:
1141
+ type: Retrieval
1142
+ dataset:
1143
+ type: dbpedia-entity
1144
+ name: MTEB DBPedia
1145
+ config: default
1146
+ split: test
1147
+ metrics:
1148
+ - type: map_at_1
1149
+ value: 4.672
1150
+ - type: map_at_10
1151
+ value: 10.534
1152
+ - type: map_at_100
1153
+ value: 14.798
1154
+ - type: map_at_1000
1155
+ value: 15.927
1156
+ - type: map_at_3
1157
+ value: 7.317
1158
+ - type: map_at_5
1159
+ value: 8.726
1160
+ - type: ndcg_at_1
1161
+ value: 36.5
1162
+ - type: ndcg_at_10
1163
+ value: 26.098
1164
+ - type: ndcg_at_100
1165
+ value: 29.215999999999998
1166
+ - type: ndcg_at_1000
1167
+ value: 36.254999999999995
1168
+ - type: ndcg_at_3
1169
+ value: 29.247
1170
+ - type: ndcg_at_5
1171
+ value: 27.692
1172
+ - type: precision_at_1
1173
+ value: 47.25
1174
+ - type: precision_at_10
1175
+ value: 22.625
1176
+ - type: precision_at_100
1177
+ value: 7.042
1178
+ - type: precision_at_1000
1179
+ value: 1.6129999999999998
1180
+ - type: precision_at_3
1181
+ value: 34.083000000000006
1182
+ - type: precision_at_5
1183
+ value: 29.5
1184
+ - type: recall_at_1
1185
+ value: 4.672
1186
+ - type: recall_at_10
1187
+ value: 15.638
1188
+ - type: recall_at_100
1189
+ value: 36.228
1190
+ - type: recall_at_1000
1191
+ value: 58.831
1192
+ - type: recall_at_3
1193
+ value: 8.578
1194
+ - type: recall_at_5
1195
+ value: 11.18
1196
+ - task:
1197
+ type: Classification
1198
+ dataset:
1199
+ type: mteb/emotion
1200
+ name: MTEB EmotionClassification
1201
+ config: default
1202
+ split: test
1203
+ metrics:
1204
+ - type: accuracy
1205
+ value: 49.919999999999995
1206
+ - type: f1
1207
+ value: 45.37973678791632
1208
+ - task:
1209
+ type: Retrieval
1210
+ dataset:
1211
+ type: fever
1212
+ name: MTEB FEVER
1213
+ config: default
1214
+ split: test
1215
+ metrics:
1216
+ - type: map_at_1
1217
+ value: 25.801000000000002
1218
+ - type: map_at_10
1219
+ value: 33.941
1220
+ - type: map_at_100
1221
+ value: 34.73
1222
+ - type: map_at_1000
1223
+ value: 34.793
1224
+ - type: map_at_3
1225
+ value: 31.705
1226
+ - type: map_at_5
1227
+ value: 33.047
1228
+ - type: ndcg_at_1
1229
+ value: 27.933000000000003
1230
+ - type: ndcg_at_10
1231
+ value: 38.644
1232
+ - type: ndcg_at_100
1233
+ value: 42.594
1234
+ - type: ndcg_at_1000
1235
+ value: 44.352000000000004
1236
+ - type: ndcg_at_3
1237
+ value: 34.199
1238
+ - type: ndcg_at_5
1239
+ value: 36.573
1240
+ - type: precision_at_1
1241
+ value: 27.933000000000003
1242
+ - type: precision_at_10
1243
+ value: 5.603000000000001
1244
+ - type: precision_at_100
1245
+ value: 0.773
1246
+ - type: precision_at_1000
1247
+ value: 0.094
1248
+ - type: precision_at_3
1249
+ value: 14.171
1250
+ - type: precision_at_5
1251
+ value: 9.786999999999999
1252
+ - type: recall_at_1
1253
+ value: 25.801000000000002
1254
+ - type: recall_at_10
1255
+ value: 50.876
1256
+ - type: recall_at_100
1257
+ value: 69.253
1258
+ - type: recall_at_1000
1259
+ value: 82.907
1260
+ - type: recall_at_3
1261
+ value: 38.879000000000005
1262
+ - type: recall_at_5
1263
+ value: 44.651999999999994
1264
+ - task:
1265
+ type: Retrieval
1266
+ dataset:
1267
+ type: fiqa
1268
+ name: MTEB FiQA2018
1269
+ config: default
1270
+ split: test
1271
+ metrics:
1272
+ - type: map_at_1
1273
+ value: 9.142
1274
+ - type: map_at_10
1275
+ value: 13.841999999999999
1276
+ - type: map_at_100
1277
+ value: 14.960999999999999
1278
+ - type: map_at_1000
1279
+ value: 15.187000000000001
1280
+ - type: map_at_3
1281
+ value: 11.966000000000001
1282
+ - type: map_at_5
1283
+ value: 12.921
1284
+ - type: ndcg_at_1
1285
+ value: 18.364
1286
+ - type: ndcg_at_10
1287
+ value: 18.590999999999998
1288
+ - type: ndcg_at_100
1289
+ value: 24.153
1290
+ - type: ndcg_at_1000
1291
+ value: 29.104000000000003
1292
+ - type: ndcg_at_3
1293
+ value: 16.323
1294
+ - type: ndcg_at_5
1295
+ value: 17.000999999999998
1296
+ - type: precision_at_1
1297
+ value: 18.364
1298
+ - type: precision_at_10
1299
+ value: 5.216
1300
+ - type: precision_at_100
1301
+ value: 1.09
1302
+ - type: precision_at_1000
1303
+ value: 0.193
1304
+ - type: precision_at_3
1305
+ value: 10.751
1306
+ - type: precision_at_5
1307
+ value: 7.932
1308
+ - type: recall_at_1
1309
+ value: 9.142
1310
+ - type: recall_at_10
1311
+ value: 22.747
1312
+ - type: recall_at_100
1313
+ value: 44.585
1314
+ - type: recall_at_1000
1315
+ value: 75.481
1316
+ - type: recall_at_3
1317
+ value: 14.602
1318
+ - type: recall_at_5
1319
+ value: 17.957
1320
+ - task:
1321
+ type: Retrieval
1322
+ dataset:
1323
+ type: hotpotqa
1324
+ name: MTEB HotpotQA
1325
+ config: default
1326
+ split: test
1327
+ metrics:
1328
+ - type: map_at_1
1329
+ value: 18.677
1330
+ - type: map_at_10
1331
+ value: 26.616
1332
+ - type: map_at_100
1333
+ value: 27.605
1334
+ - type: map_at_1000
1335
+ value: 27.711999999999996
1336
+ - type: map_at_3
1337
+ value: 24.396
1338
+ - type: map_at_5
1339
+ value: 25.627
1340
+ - type: ndcg_at_1
1341
+ value: 37.352999999999994
1342
+ - type: ndcg_at_10
1343
+ value: 33.995
1344
+ - type: ndcg_at_100
1345
+ value: 38.423
1346
+ - type: ndcg_at_1000
1347
+ value: 40.947
1348
+ - type: ndcg_at_3
1349
+ value: 29.885
1350
+ - type: ndcg_at_5
1351
+ value: 31.874999999999996
1352
+ - type: precision_at_1
1353
+ value: 37.352999999999994
1354
+ - type: precision_at_10
1355
+ value: 7.539999999999999
1356
+ - type: precision_at_100
1357
+ value: 1.107
1358
+ - type: precision_at_1000
1359
+ value: 0.145
1360
+ - type: precision_at_3
1361
+ value: 18.938
1362
+ - type: precision_at_5
1363
+ value: 12.943
1364
+ - type: recall_at_1
1365
+ value: 18.677
1366
+ - type: recall_at_10
1367
+ value: 37.698
1368
+ - type: recall_at_100
1369
+ value: 55.354000000000006
1370
+ - type: recall_at_1000
1371
+ value: 72.255
1372
+ - type: recall_at_3
1373
+ value: 28.406
1374
+ - type: recall_at_5
1375
+ value: 32.357
1376
+ - task:
1377
+ type: Classification
1378
+ dataset:
1379
+ type: mteb/imdb
1380
+ name: MTEB ImdbClassification
1381
+ config: default
1382
+ split: test
1383
+ metrics:
1384
+ - type: accuracy
1385
+ value: 74.3292
1386
+ - type: ap
1387
+ value: 68.30186110189658
1388
+ - type: f1
1389
+ value: 74.20709636944783
1390
+ - task:
1391
+ type: Retrieval
1392
+ dataset:
1393
+ type: msmarco
1394
+ name: MTEB MSMARCO
1395
+ config: default
1396
+ split: validation
1397
+ metrics:
1398
+ - type: map_at_1
1399
+ value: 6.889000000000001
1400
+ - type: map_at_10
1401
+ value: 12.321
1402
+ - type: map_at_100
1403
+ value: 13.416
1404
+ - type: map_at_1000
1405
+ value: 13.525
1406
+ - type: map_at_3
1407
+ value: 10.205
1408
+ - type: map_at_5
1409
+ value: 11.342
1410
+ - type: ndcg_at_1
1411
+ value: 7.092
1412
+ - type: ndcg_at_10
1413
+ value: 15.827
1414
+ - type: ndcg_at_100
1415
+ value: 21.72
1416
+ - type: ndcg_at_1000
1417
+ value: 24.836
1418
+ - type: ndcg_at_3
1419
+ value: 11.393
1420
+ - type: ndcg_at_5
1421
+ value: 13.462
1422
+ - type: precision_at_1
1423
+ value: 7.092
1424
+ - type: precision_at_10
1425
+ value: 2.7969999999999997
1426
+ - type: precision_at_100
1427
+ value: 0.583
1428
+ - type: precision_at_1000
1429
+ value: 0.08499999999999999
1430
+ - type: precision_at_3
1431
+ value: 5.019
1432
+ - type: precision_at_5
1433
+ value: 4.06
1434
+ - type: recall_at_1
1435
+ value: 6.889000000000001
1436
+ - type: recall_at_10
1437
+ value: 26.791999999999998
1438
+ - type: recall_at_100
1439
+ value: 55.371
1440
+ - type: recall_at_1000
1441
+ value: 80.12899999999999
1442
+ - type: recall_at_3
1443
+ value: 14.573
1444
+ - type: recall_at_5
1445
+ value: 19.557
1446
+ - task:
1447
+ type: Classification
1448
+ dataset:
1449
+ type: mteb/mtop_domain
1450
+ name: MTEB MTOPDomainClassification (en)
1451
+ config: en
1452
+ split: test
1453
+ metrics:
1454
+ - type: accuracy
1455
+ value: 89.6374829001368
1456
+ - type: f1
1457
+ value: 89.20878379358307
1458
+ - task:
1459
+ type: Classification
1460
+ dataset:
1461
+ type: mteb/mtop_domain
1462
+ name: MTEB MTOPDomainClassification (de)
1463
+ config: de
1464
+ split: test
1465
+ metrics:
1466
+ - type: accuracy
1467
+ value: 84.54212454212454
1468
+ - type: f1
1469
+ value: 82.81080100037023
1470
+ - task:
1471
+ type: Classification
1472
+ dataset:
1473
+ type: mteb/mtop_domain
1474
+ name: MTEB MTOPDomainClassification (es)
1475
+ config: es
1476
+ split: test
1477
+ metrics:
1478
+ - type: accuracy
1479
+ value: 86.46430953969313
1480
+ - type: f1
1481
+ value: 86.00019824223267
1482
+ - task:
1483
+ type: Classification
1484
+ dataset:
1485
+ type: mteb/mtop_domain
1486
+ name: MTEB MTOPDomainClassification (fr)
1487
+ config: fr
1488
+ split: test
1489
+ metrics:
1490
+ - type: accuracy
1491
+ value: 81.31850923896022
1492
+ - type: f1
1493
+ value: 81.07860454762863
1494
+ - task:
1495
+ type: Classification
1496
+ dataset:
1497
+ type: mteb/mtop_domain
1498
+ name: MTEB MTOPDomainClassification (hi)
1499
+ config: hi
1500
+ split: test
1501
+ metrics:
1502
+ - type: accuracy
1503
+ value: 58.23234134098243
1504
+ - type: f1
1505
+ value: 56.63845098081841
1506
+ - task:
1507
+ type: Classification
1508
+ dataset:
1509
+ type: mteb/mtop_domain
1510
+ name: MTEB MTOPDomainClassification (th)
1511
+ config: th
1512
+ split: test
1513
+ metrics:
1514
+ - type: accuracy
1515
+ value: 72.28571428571429
1516
+ - type: f1
1517
+ value: 70.95796714592039
1518
+ - task:
1519
+ type: Classification
1520
+ dataset:
1521
+ type: mteb/mtop_intent
1522
+ name: MTEB MTOPIntentClassification (en)
1523
+ config: en
1524
+ split: test
1525
+ metrics:
1526
+ - type: accuracy
1527
+ value: 70.68171454628363
1528
+ - type: f1
1529
+ value: 52.57188062729139
1530
+ - task:
1531
+ type: Classification
1532
+ dataset:
1533
+ type: mteb/mtop_intent
1534
+ name: MTEB MTOPIntentClassification (de)
1535
+ config: de
1536
+ split: test
1537
+ metrics:
1538
+ - type: accuracy
1539
+ value: 60.521273598196665
1540
+ - type: f1
1541
+ value: 42.70492970339204
1542
+ - task:
1543
+ type: Classification
1544
+ dataset:
1545
+ type: mteb/mtop_intent
1546
+ name: MTEB MTOPIntentClassification (es)
1547
+ config: es
1548
+ split: test
1549
+ metrics:
1550
+ - type: accuracy
1551
+ value: 64.32288192128087
1552
+ - type: f1
1553
+ value: 45.97360620220273
1554
+ - task:
1555
+ type: Classification
1556
+ dataset:
1557
+ type: mteb/mtop_intent
1558
+ name: MTEB MTOPIntentClassification (fr)
1559
+ config: fr
1560
+ split: test
1561
+ metrics:
1562
+ - type: accuracy
1563
+ value: 58.67209520826808
1564
+ - type: f1
1565
+ value: 42.82844991304579
1566
+ - task:
1567
+ type: Classification
1568
+ dataset:
1569
+ type: mteb/mtop_intent
1570
+ name: MTEB MTOPIntentClassification (hi)
1571
+ config: hi
1572
+ split: test
1573
+ metrics:
1574
+ - type: accuracy
1575
+ value: 41.95769092864826
1576
+ - type: f1
1577
+ value: 28.914127631431263
1578
+ - task:
1579
+ type: Classification
1580
+ dataset:
1581
+ type: mteb/mtop_intent
1582
+ name: MTEB MTOPIntentClassification (th)
1583
+ config: th
1584
+ split: test
1585
+ metrics:
1586
+ - type: accuracy
1587
+ value: 55.28390596745027
1588
+ - type: f1
1589
+ value: 38.33899250561289
1590
+ - task:
1591
+ type: Classification
1592
+ dataset:
1593
+ type: mteb/amazon_massive_intent
1594
+ name: MTEB MassiveIntentClassification (en)
1595
+ config: en
1596
+ split: test
1597
+ metrics:
1598
+ - type: accuracy
1599
+ value: 70.00336247478144
1600
+ - type: f1
1601
+ value: 68.72041942191649
1602
+ - task:
1603
+ type: Classification
1604
+ dataset:
1605
+ type: mteb/amazon_massive_scenario
1606
+ name: MTEB MassiveScenarioClassification (en)
1607
+ config: en
1608
+ split: test
1609
+ metrics:
1610
+ - type: accuracy
1611
+ value: 75.0268997982515
1612
+ - type: f1
1613
+ value: 75.29844481506652
1614
+ - task:
1615
+ type: Clustering
1616
+ dataset:
1617
+ type: mteb/medrxiv-clustering-p2p
1618
+ name: MTEB MedrxivClusteringP2P
1619
+ config: default
1620
+ split: test
1621
+ metrics:
1622
+ - type: v_measure
1623
+ value: 30.327566856300813
1624
+ - task:
1625
+ type: Clustering
1626
+ dataset:
1627
+ type: mteb/medrxiv-clustering-s2s
1628
+ name: MTEB MedrxivClusteringS2S
1629
+ config: default
1630
+ split: test
1631
+ metrics:
1632
+ - type: v_measure
1633
+ value: 28.01650210863619
1634
+ - task:
1635
+ type: Reranking
1636
+ dataset:
1637
+ type: mteb/mind_small
1638
+ name: MTEB MindSmallReranking
1639
+ config: default
1640
+ split: test
1641
+ metrics:
1642
+ - type: map
1643
+ value: 31.11041256752524
1644
+ - type: mrr
1645
+ value: 32.14172939750204
1646
+ - task:
1647
+ type: Retrieval
1648
+ dataset:
1649
+ type: nfcorpus
1650
+ name: MTEB NFCorpus
1651
+ config: default
1652
+ split: test
1653
+ metrics:
1654
+ - type: map_at_1
1655
+ value: 3.527
1656
+ - type: map_at_10
1657
+ value: 9.283
1658
+ - type: map_at_100
1659
+ value: 11.995000000000001
1660
+ - type: map_at_1000
1661
+ value: 13.33
1662
+ - type: map_at_3
1663
+ value: 6.223
1664
+ - type: map_at_5
1665
+ value: 7.68
1666
+ - type: ndcg_at_1
1667
+ value: 36.223
1668
+ - type: ndcg_at_10
1669
+ value: 28.255999999999997
1670
+ - type: ndcg_at_100
1671
+ value: 26.355
1672
+ - type: ndcg_at_1000
1673
+ value: 35.536
1674
+ - type: ndcg_at_3
1675
+ value: 31.962000000000003
1676
+ - type: ndcg_at_5
1677
+ value: 30.61
1678
+ - type: precision_at_1
1679
+ value: 37.771
1680
+ - type: precision_at_10
1681
+ value: 21.889
1682
+ - type: precision_at_100
1683
+ value: 7.1080000000000005
1684
+ - type: precision_at_1000
1685
+ value: 1.989
1686
+ - type: precision_at_3
1687
+ value: 30.857
1688
+ - type: precision_at_5
1689
+ value: 27.307
1690
+ - type: recall_at_1
1691
+ value: 3.527
1692
+ - type: recall_at_10
1693
+ value: 14.015
1694
+ - type: recall_at_100
1695
+ value: 28.402
1696
+ - type: recall_at_1000
1697
+ value: 59.795
1698
+ - type: recall_at_3
1699
+ value: 7.5969999999999995
1700
+ - type: recall_at_5
1701
+ value: 10.641
1702
+ - task:
1703
+ type: Retrieval
1704
+ dataset:
1705
+ type: nq
1706
+ name: MTEB NQ
1707
+ config: default
1708
+ split: test
1709
+ metrics:
1710
+ - type: map_at_1
1711
+ value: 11.631
1712
+ - type: map_at_10
1713
+ value: 19.532
1714
+ - type: map_at_100
1715
+ value: 20.821
1716
+ - type: map_at_1000
1717
+ value: 20.910999999999998
1718
+ - type: map_at_3
1719
+ value: 16.597
1720
+ - type: map_at_5
1721
+ value: 18.197
1722
+ - type: ndcg_at_1
1723
+ value: 13.413
1724
+ - type: ndcg_at_10
1725
+ value: 24.628
1726
+ - type: ndcg_at_100
1727
+ value: 30.883
1728
+ - type: ndcg_at_1000
1729
+ value: 33.216
1730
+ - type: ndcg_at_3
1731
+ value: 18.697
1732
+ - type: ndcg_at_5
1733
+ value: 21.501
1734
+ - type: precision_at_1
1735
+ value: 13.413
1736
+ - type: precision_at_10
1737
+ value: 4.571
1738
+ - type: precision_at_100
1739
+ value: 0.812
1740
+ - type: precision_at_1000
1741
+ value: 0.10300000000000001
1742
+ - type: precision_at_3
1743
+ value: 8.845
1744
+ - type: precision_at_5
1745
+ value: 6.889000000000001
1746
+ - type: recall_at_1
1747
+ value: 11.631
1748
+ - type: recall_at_10
1749
+ value: 38.429
1750
+ - type: recall_at_100
1751
+ value: 67.009
1752
+ - type: recall_at_1000
1753
+ value: 84.796
1754
+ - type: recall_at_3
1755
+ value: 22.74
1756
+ - type: recall_at_5
1757
+ value: 29.266
1758
+ - task:
1759
+ type: Retrieval
1760
+ dataset:
1761
+ type: quora
1762
+ name: MTEB QuoraRetrieval
1763
+ config: default
1764
+ split: test
1765
+ metrics:
1766
+ - type: map_at_1
1767
+ value: 66.64
1768
+ - type: map_at_10
1769
+ value: 80.394
1770
+ - type: map_at_100
1771
+ value: 81.099
1772
+ - type: map_at_1000
1773
+ value: 81.122
1774
+ - type: map_at_3
1775
+ value: 77.289
1776
+ - type: map_at_5
1777
+ value: 79.25999999999999
1778
+ - type: ndcg_at_1
1779
+ value: 76.85
1780
+ - type: ndcg_at_10
1781
+ value: 84.68
1782
+ - type: ndcg_at_100
1783
+ value: 86.311
1784
+ - type: ndcg_at_1000
1785
+ value: 86.49900000000001
1786
+ - type: ndcg_at_3
1787
+ value: 81.295
1788
+ - type: ndcg_at_5
1789
+ value: 83.199
1790
+ - type: precision_at_1
1791
+ value: 76.85
1792
+ - type: precision_at_10
1793
+ value: 12.928999999999998
1794
+ - type: precision_at_100
1795
+ value: 1.51
1796
+ - type: precision_at_1000
1797
+ value: 0.156
1798
+ - type: precision_at_3
1799
+ value: 35.557
1800
+ - type: precision_at_5
1801
+ value: 23.576
1802
+ - type: recall_at_1
1803
+ value: 66.64
1804
+ - type: recall_at_10
1805
+ value: 93.059
1806
+ - type: recall_at_100
1807
+ value: 98.922
1808
+ - type: recall_at_1000
1809
+ value: 99.883
1810
+ - type: recall_at_3
1811
+ value: 83.49499999999999
1812
+ - type: recall_at_5
1813
+ value: 88.729
1814
+ - task:
1815
+ type: Clustering
1816
+ dataset:
1817
+ type: mteb/reddit-clustering
1818
+ name: MTEB RedditClustering
1819
+ config: default
1820
+ split: test
1821
+ metrics:
1822
+ - type: v_measure
1823
+ value: 42.17131361041068
1824
+ - task:
1825
+ type: Clustering
1826
+ dataset:
1827
+ type: mteb/reddit-clustering-p2p
1828
+ name: MTEB RedditClusteringP2P
1829
+ config: default
1830
+ split: test
1831
+ metrics:
1832
+ - type: v_measure
1833
+ value: 48.01815621479994
1834
+ - task:
1835
+ type: Retrieval
1836
+ dataset:
1837
+ type: scidocs
1838
+ name: MTEB SCIDOCS
1839
+ config: default
1840
+ split: test
1841
+ metrics:
1842
+ - type: map_at_1
1843
+ value: 3.198
1844
+ - type: map_at_10
1845
+ value: 7.550999999999999
1846
+ - type: map_at_100
1847
+ value: 9.232
1848
+ - type: map_at_1000
1849
+ value: 9.51
1850
+ - type: map_at_3
1851
+ value: 5.2940000000000005
1852
+ - type: map_at_5
1853
+ value: 6.343999999999999
1854
+ - type: ndcg_at_1
1855
+ value: 15.8
1856
+ - type: ndcg_at_10
1857
+ value: 13.553999999999998
1858
+ - type: ndcg_at_100
1859
+ value: 20.776
1860
+ - type: ndcg_at_1000
1861
+ value: 26.204
1862
+ - type: ndcg_at_3
1863
+ value: 12.306000000000001
1864
+ - type: ndcg_at_5
1865
+ value: 10.952
1866
+ - type: precision_at_1
1867
+ value: 15.8
1868
+ - type: precision_at_10
1869
+ value: 7.180000000000001
1870
+ - type: precision_at_100
1871
+ value: 1.762
1872
+ - type: precision_at_1000
1873
+ value: 0.307
1874
+ - type: precision_at_3
1875
+ value: 11.333
1876
+ - type: precision_at_5
1877
+ value: 9.62
1878
+ - type: recall_at_1
1879
+ value: 3.198
1880
+ - type: recall_at_10
1881
+ value: 14.575
1882
+ - type: recall_at_100
1883
+ value: 35.758
1884
+ - type: recall_at_1000
1885
+ value: 62.317
1886
+ - type: recall_at_3
1887
+ value: 6.922000000000001
1888
+ - type: recall_at_5
1889
+ value: 9.767000000000001
1890
+ - task:
1891
+ type: STS
1892
+ dataset:
1893
+ type: mteb/sickr-sts
1894
+ name: MTEB SICK-R
1895
+ config: default
1896
+ split: test
1897
+ metrics:
1898
+ - type: cos_sim_pearson
1899
+ value: 84.5217161312271
1900
+ - type: cos_sim_spearman
1901
+ value: 79.58562467776268
1902
+ - type: euclidean_pearson
1903
+ value: 76.69364353942403
1904
+ - type: euclidean_spearman
1905
+ value: 74.68959282070473
1906
+ - type: manhattan_pearson
1907
+ value: 76.81159265133732
1908
+ - type: manhattan_spearman
1909
+ value: 74.7519444048176
1910
+ - task:
1911
+ type: STS
1912
+ dataset:
1913
+ type: mteb/sts12-sts
1914
+ name: MTEB STS12
1915
+ config: default
1916
+ split: test
1917
+ metrics:
1918
+ - type: cos_sim_pearson
1919
+ value: 83.70403706922605
1920
+ - type: cos_sim_spearman
1921
+ value: 74.28502198729447
1922
+ - type: euclidean_pearson
1923
+ value: 83.32719404608066
1924
+ - type: euclidean_spearman
1925
+ value: 75.92189433460788
1926
+ - type: manhattan_pearson
1927
+ value: 83.35841543005293
1928
+ - type: manhattan_spearman
1929
+ value: 75.94458615451978
1930
+ - task:
1931
+ type: STS
1932
+ dataset:
1933
+ type: mteb/sts13-sts
1934
+ name: MTEB STS13
1935
+ config: default
1936
+ split: test
1937
+ metrics:
1938
+ - type: cos_sim_pearson
1939
+ value: 84.94127878986795
1940
+ - type: cos_sim_spearman
1941
+ value: 85.35148434923192
1942
+ - type: euclidean_pearson
1943
+ value: 81.71127467071571
1944
+ - type: euclidean_spearman
1945
+ value: 82.88240481546771
1946
+ - type: manhattan_pearson
1947
+ value: 81.72826221967252
1948
+ - type: manhattan_spearman
1949
+ value: 82.90725064625128
1950
+ - task:
1951
+ type: STS
1952
+ dataset:
1953
+ type: mteb/sts14-sts
1954
+ name: MTEB STS14
1955
+ config: default
1956
+ split: test
1957
+ metrics:
1958
+ - type: cos_sim_pearson
1959
+ value: 83.1474704168523
1960
+ - type: cos_sim_spearman
1961
+ value: 79.20612995350827
1962
+ - type: euclidean_pearson
1963
+ value: 78.85993329596555
1964
+ - type: euclidean_spearman
1965
+ value: 78.91956572744715
1966
+ - type: manhattan_pearson
1967
+ value: 78.89999720522347
1968
+ - type: manhattan_spearman
1969
+ value: 78.93956842550107
1970
+ - task:
1971
+ type: STS
1972
+ dataset:
1973
+ type: mteb/sts15-sts
1974
+ name: MTEB STS15
1975
+ config: default
1976
+ split: test
1977
+ metrics:
1978
+ - type: cos_sim_pearson
1979
+ value: 84.81255514055894
1980
+ - type: cos_sim_spearman
1981
+ value: 85.5217140762934
1982
+ - type: euclidean_pearson
1983
+ value: 82.15024353784499
1984
+ - type: euclidean_spearman
1985
+ value: 83.04155334389833
1986
+ - type: manhattan_pearson
1987
+ value: 82.18598945053624
1988
+ - type: manhattan_spearman
1989
+ value: 83.07248357693301
1990
+ - task:
1991
+ type: STS
1992
+ dataset:
1993
+ type: mteb/sts16-sts
1994
+ name: MTEB STS16
1995
+ config: default
1996
+ split: test
1997
+ metrics:
1998
+ - type: cos_sim_pearson
1999
+ value: 80.63248465157822
2000
+ - type: cos_sim_spearman
2001
+ value: 82.53853238521991
2002
+ - type: euclidean_pearson
2003
+ value: 78.33936863828221
2004
+ - type: euclidean_spearman
2005
+ value: 79.16305579487414
2006
+ - type: manhattan_pearson
2007
+ value: 78.3888359870894
2008
+ - type: manhattan_spearman
2009
+ value: 79.18504473136467
2010
+ - task:
2011
+ type: STS
2012
+ dataset:
2013
+ type: mteb/sts17-crosslingual-sts
2014
+ name: MTEB STS17 (en-en)
2015
+ config: en-en
2016
+ split: test
2017
+ metrics:
2018
+ - type: cos_sim_pearson
2019
+ value: 90.09066290639687
2020
+ - type: cos_sim_spearman
2021
+ value: 90.43893699357069
2022
+ - type: euclidean_pearson
2023
+ value: 82.39520777222396
2024
+ - type: euclidean_spearman
2025
+ value: 81.23948185395952
2026
+ - type: manhattan_pearson
2027
+ value: 82.35529784653383
2028
+ - type: manhattan_spearman
2029
+ value: 81.12681522483975
2030
+ - task:
2031
+ type: STS
2032
+ dataset:
2033
+ type: mteb/sts22-crosslingual-sts
2034
+ name: MTEB STS22 (en)
2035
+ config: en
2036
+ split: test
2037
+ metrics:
2038
+ - type: cos_sim_pearson
2039
+ value: 63.52752323046846
2040
+ - type: cos_sim_spearman
2041
+ value: 63.19719780439462
2042
+ - type: euclidean_pearson
2043
+ value: 58.29085490641428
2044
+ - type: euclidean_spearman
2045
+ value: 58.975178656335046
2046
+ - type: manhattan_pearson
2047
+ value: 58.183542772416985
2048
+ - type: manhattan_spearman
2049
+ value: 59.190630462178994
2050
+ - task:
2051
+ type: STS
2052
+ dataset:
2053
+ type: mteb/stsbenchmark-sts
2054
+ name: MTEB STSBenchmark
2055
+ config: default
2056
+ split: test
2057
+ metrics:
2058
+ - type: cos_sim_pearson
2059
+ value: 85.45100366635687
2060
+ - type: cos_sim_spearman
2061
+ value: 85.66816193002651
2062
+ - type: euclidean_pearson
2063
+ value: 81.87976731329091
2064
+ - type: euclidean_spearman
2065
+ value: 82.01382867690964
2066
+ - type: manhattan_pearson
2067
+ value: 81.88260155706726
2068
+ - type: manhattan_spearman
2069
+ value: 82.05258597906492
2070
+ - task:
2071
+ type: Reranking
2072
+ dataset:
2073
+ type: mteb/scidocs-reranking
2074
+ name: MTEB SciDocsRR
2075
+ config: default
2076
+ split: test
2077
+ metrics:
2078
+ - type: map
2079
+ value: 77.53549990038017
2080
+ - type: mrr
2081
+ value: 93.37474163454556
2082
+ - task:
2083
+ type: Retrieval
2084
+ dataset:
2085
+ type: scifact
2086
+ name: MTEB SciFact
2087
+ config: default
2088
+ split: test
2089
+ metrics:
2090
+ - type: map_at_1
2091
+ value: 31.167
2092
+ - type: map_at_10
2093
+ value: 40.778
2094
+ - type: map_at_100
2095
+ value: 42.063
2096
+ - type: map_at_1000
2097
+ value: 42.103
2098
+ - type: map_at_3
2099
+ value: 37.12
2100
+ - type: map_at_5
2101
+ value: 39.205
2102
+ - type: ndcg_at_1
2103
+ value: 33.667
2104
+ - type: ndcg_at_10
2105
+ value: 46.662
2106
+ - type: ndcg_at_100
2107
+ value: 51.995999999999995
2108
+ - type: ndcg_at_1000
2109
+ value: 53.254999999999995
2110
+ - type: ndcg_at_3
2111
+ value: 39.397999999999996
2112
+ - type: ndcg_at_5
2113
+ value: 42.934
2114
+ - type: precision_at_1
2115
+ value: 33.667
2116
+ - type: precision_at_10
2117
+ value: 7.1
2118
+ - type: precision_at_100
2119
+ value: 0.993
2120
+ - type: precision_at_1000
2121
+ value: 0.11
2122
+ - type: precision_at_3
2123
+ value: 16.111
2124
+ - type: precision_at_5
2125
+ value: 11.600000000000001
2126
+ - type: recall_at_1
2127
+ value: 31.167
2128
+ - type: recall_at_10
2129
+ value: 63.744
2130
+ - type: recall_at_100
2131
+ value: 87.156
2132
+ - type: recall_at_1000
2133
+ value: 97.556
2134
+ - type: recall_at_3
2135
+ value: 44.0
2136
+ - type: recall_at_5
2137
+ value: 52.556000000000004
2138
+ - task:
2139
+ type: PairClassification
2140
+ dataset:
2141
+ type: mteb/sprintduplicatequestions-pairclassification
2142
+ name: MTEB SprintDuplicateQuestions
2143
+ config: default
2144
+ split: test
2145
+ metrics:
2146
+ - type: cos_sim_accuracy
2147
+ value: 99.55148514851486
2148
+ - type: cos_sim_ap
2149
+ value: 80.535236573428
2150
+ - type: cos_sim_f1
2151
+ value: 75.01331912626532
2152
+ - type: cos_sim_precision
2153
+ value: 80.27366020524515
2154
+ - type: cos_sim_recall
2155
+ value: 70.39999999999999
2156
+ - type: dot_accuracy
2157
+ value: 99.04851485148515
2158
+ - type: dot_ap
2159
+ value: 28.505358821499726
2160
+ - type: dot_f1
2161
+ value: 36.36363636363637
2162
+ - type: dot_precision
2163
+ value: 37.160751565762006
2164
+ - type: dot_recall
2165
+ value: 35.6
2166
+ - type: euclidean_accuracy
2167
+ value: 99.4990099009901
2168
+ - type: euclidean_ap
2169
+ value: 74.95819047075476
2170
+ - type: euclidean_f1
2171
+ value: 71.15489874110564
2172
+ - type: euclidean_precision
2173
+ value: 78.59733978234583
2174
+ - type: euclidean_recall
2175
+ value: 65.0
2176
+ - type: manhattan_accuracy
2177
+ value: 99.50198019801981
2178
+ - type: manhattan_ap
2179
+ value: 75.02070096015086
2180
+ - type: manhattan_f1
2181
+ value: 71.20535714285712
2182
+ - type: manhattan_precision
2183
+ value: 80.55555555555556
2184
+ - type: manhattan_recall
2185
+ value: 63.800000000000004
2186
+ - type: max_accuracy
2187
+ value: 99.55148514851486
2188
+ - type: max_ap
2189
+ value: 80.535236573428
2190
+ - type: max_f1
2191
+ value: 75.01331912626532
2192
+ - task:
2193
+ type: Clustering
2194
+ dataset:
2195
+ type: mteb/stackexchange-clustering
2196
+ name: MTEB StackExchangeClustering
2197
+ config: default
2198
+ split: test
2199
+ metrics:
2200
+ - type: v_measure
2201
+ value: 54.13314692311623
2202
+ - task:
2203
+ type: Clustering
2204
+ dataset:
2205
+ type: mteb/stackexchange-clustering-p2p
2206
+ name: MTEB StackExchangeClusteringP2P
2207
+ config: default
2208
+ split: test
2209
+ metrics:
2210
+ - type: v_measure
2211
+ value: 31.115181648287145
2212
+ - task:
2213
+ type: Reranking
2214
+ dataset:
2215
+ type: mteb/stackoverflowdupquestions-reranking
2216
+ name: MTEB StackOverflowDupQuestions
2217
+ config: default
2218
+ split: test
2219
+ metrics:
2220
+ - type: map
2221
+ value: 44.771112666694336
2222
+ - type: mrr
2223
+ value: 45.30415764790765
2224
+ - task:
2225
+ type: Summarization
2226
+ dataset:
2227
+ type: mteb/summeval
2228
+ name: MTEB SummEval
2229
+ config: default
2230
+ split: test
2231
+ metrics:
2232
+ - type: cos_sim_pearson
2233
+ value: 30.849429597669374
2234
+ - type: cos_sim_spearman
2235
+ value: 30.384175038360194
2236
+ - type: dot_pearson
2237
+ value: 29.030383429536823
2238
+ - type: dot_spearman
2239
+ value: 28.03273624951732
2240
+ - task:
2241
+ type: Retrieval
2242
+ dataset:
2243
+ type: trec-covid
2244
+ name: MTEB TRECCOVID
2245
+ config: default
2246
+ split: test
2247
+ metrics:
2248
+ - type: map_at_1
2249
+ value: 0.19499999999999998
2250
+ - type: map_at_10
2251
+ value: 1.0959999999999999
2252
+ - type: map_at_100
2253
+ value: 5.726
2254
+ - type: map_at_1000
2255
+ value: 13.611999999999998
2256
+ - type: map_at_3
2257
+ value: 0.45399999999999996
2258
+ - type: map_at_5
2259
+ value: 0.67
2260
+ - type: ndcg_at_1
2261
+ value: 71.0
2262
+ - type: ndcg_at_10
2263
+ value: 55.352999999999994
2264
+ - type: ndcg_at_100
2265
+ value: 40.797
2266
+ - type: ndcg_at_1000
2267
+ value: 35.955999999999996
2268
+ - type: ndcg_at_3
2269
+ value: 63.263000000000005
2270
+ - type: ndcg_at_5
2271
+ value: 60.14000000000001
2272
+ - type: precision_at_1
2273
+ value: 78.0
2274
+ - type: precision_at_10
2275
+ value: 56.99999999999999
2276
+ - type: precision_at_100
2277
+ value: 41.199999999999996
2278
+ - type: precision_at_1000
2279
+ value: 16.154
2280
+ - type: precision_at_3
2281
+ value: 66.667
2282
+ - type: precision_at_5
2283
+ value: 62.8
2284
+ - type: recall_at_1
2285
+ value: 0.19499999999999998
2286
+ - type: recall_at_10
2287
+ value: 1.3639999999999999
2288
+ - type: recall_at_100
2289
+ value: 9.317
2290
+ - type: recall_at_1000
2291
+ value: 33.629999999999995
2292
+ - type: recall_at_3
2293
+ value: 0.49300000000000005
2294
+ - type: recall_at_5
2295
+ value: 0.756
2296
+ - task:
2297
+ type: Retrieval
2298
+ dataset:
2299
+ type: webis-touche2020
2300
+ name: MTEB Touche2020
2301
+ config: default
2302
+ split: test
2303
+ metrics:
2304
+ - type: map_at_1
2305
+ value: 1.335
2306
+ - type: map_at_10
2307
+ value: 6.293
2308
+ - type: map_at_100
2309
+ value: 10.928
2310
+ - type: map_at_1000
2311
+ value: 12.359
2312
+ - type: map_at_3
2313
+ value: 3.472
2314
+ - type: map_at_5
2315
+ value: 4.935
2316
+ - type: ndcg_at_1
2317
+ value: 19.387999999999998
2318
+ - type: ndcg_at_10
2319
+ value: 16.178
2320
+ - type: ndcg_at_100
2321
+ value: 28.149
2322
+ - type: ndcg_at_1000
2323
+ value: 39.845000000000006
2324
+ - type: ndcg_at_3
2325
+ value: 19.171
2326
+ - type: ndcg_at_5
2327
+ value: 17.864
2328
+ - type: precision_at_1
2329
+ value: 20.408
2330
+ - type: precision_at_10
2331
+ value: 14.49
2332
+ - type: precision_at_100
2333
+ value: 6.306000000000001
2334
+ - type: precision_at_1000
2335
+ value: 1.3860000000000001
2336
+ - type: precision_at_3
2337
+ value: 21.088
2338
+ - type: precision_at_5
2339
+ value: 18.367
2340
+ - type: recall_at_1
2341
+ value: 1.335
2342
+ - type: recall_at_10
2343
+ value: 10.825999999999999
2344
+ - type: recall_at_100
2345
+ value: 39.251000000000005
2346
+ - type: recall_at_1000
2347
+ value: 74.952
2348
+ - type: recall_at_3
2349
+ value: 4.9110000000000005
2350
+ - type: recall_at_5
2351
+ value: 7.312
2352
+ - task:
2353
+ type: Classification
2354
+ dataset:
2355
+ type: mteb/toxic_conversations_50k
2356
+ name: MTEB ToxicConversationsClassification
2357
+ config: default
2358
+ split: test
2359
+ metrics:
2360
+ - type: accuracy
2361
+ value: 69.93339999999999
2362
+ - type: ap
2363
+ value: 13.87476602492533
2364
+ - type: f1
2365
+ value: 53.867357615848555
2366
+ - task:
2367
+ type: Classification
2368
+ dataset:
2369
+ type: mteb/tweet_sentiment_extraction
2370
+ name: MTEB TweetSentimentExtractionClassification
2371
+ config: default
2372
+ split: test
2373
+ metrics:
2374
+ - type: accuracy
2375
+ value: 62.43916242218449
2376
+ - type: f1
2377
+ value: 62.870386304954685
2378
+ - task:
2379
+ type: Clustering
2380
+ dataset:
2381
+ type: mteb/twentynewsgroups-clustering
2382
+ name: MTEB TwentyNewsgroupsClustering
2383
+ config: default
2384
+ split: test
2385
+ metrics:
2386
+ - type: v_measure
2387
+ value: 37.202082549859796
2388
+ - task:
2389
+ type: PairClassification
2390
+ dataset:
2391
+ type: mteb/twittersemeval2015-pairclassification
2392
+ name: MTEB TwitterSemEval2015
2393
+ config: default
2394
+ split: test
2395
+ metrics:
2396
+ - type: cos_sim_accuracy
2397
+ value: 83.65023544137807
2398
+ - type: cos_sim_ap
2399
+ value: 65.99787692764193
2400
+ - type: cos_sim_f1
2401
+ value: 62.10650887573965
2402
+ - type: cos_sim_precision
2403
+ value: 56.30901287553648
2404
+ - type: cos_sim_recall
2405
+ value: 69.23482849604221
2406
+ - type: dot_accuracy
2407
+ value: 79.10830303391549
2408
+ - type: dot_ap
2409
+ value: 48.80109642320246
2410
+ - type: dot_f1
2411
+ value: 51.418744625967314
2412
+ - type: dot_precision
2413
+ value: 40.30253107683091
2414
+ - type: dot_recall
2415
+ value: 71.00263852242745
2416
+ - type: euclidean_accuracy
2417
+ value: 82.45812719794957
2418
+ - type: euclidean_ap
2419
+ value: 60.09969493259607
2420
+ - type: euclidean_f1
2421
+ value: 57.658573789246226
2422
+ - type: euclidean_precision
2423
+ value: 55.62913907284768
2424
+ - type: euclidean_recall
2425
+ value: 59.84168865435356
2426
+ - type: manhattan_accuracy
2427
+ value: 82.46408773916671
2428
+ - type: manhattan_ap
2429
+ value: 60.116199786815116
2430
+ - type: manhattan_f1
2431
+ value: 57.683903860160235
2432
+ - type: manhattan_precision
2433
+ value: 53.41726618705036
2434
+ - type: manhattan_recall
2435
+ value: 62.69129287598945
2436
+ - type: max_accuracy
2437
+ value: 83.65023544137807
2438
+ - type: max_ap
2439
+ value: 65.99787692764193
2440
+ - type: max_f1
2441
+ value: 62.10650887573965
2442
+ - task:
2443
+ type: PairClassification
2444
+ dataset:
2445
+ type: mteb/twitterurlcorpus-pairclassification
2446
+ name: MTEB TwitterURLCorpus
2447
+ config: default
2448
+ split: test
2449
+ metrics:
2450
+ - type: cos_sim_accuracy
2451
+ value: 88.34943920518494
2452
+ - type: cos_sim_ap
2453
+ value: 84.5428891020442
2454
+ - type: cos_sim_f1
2455
+ value: 77.09709933923172
2456
+ - type: cos_sim_precision
2457
+ value: 74.83150952967607
2458
+ - type: cos_sim_recall
2459
+ value: 79.50415768401602
2460
+ - type: dot_accuracy
2461
+ value: 84.53448208949432
2462
+ - type: dot_ap
2463
+ value: 73.96328242371995
2464
+ - type: dot_f1
2465
+ value: 70.00553786515299
2466
+ - type: dot_precision
2467
+ value: 63.58777665995976
2468
+ - type: dot_recall
2469
+ value: 77.86418232214352
2470
+ - type: euclidean_accuracy
2471
+ value: 86.87662514068381
2472
+ - type: euclidean_ap
2473
+ value: 81.45499631520235
2474
+ - type: euclidean_f1
2475
+ value: 73.46567109816063
2476
+ - type: euclidean_precision
2477
+ value: 69.71037533697381
2478
+ - type: euclidean_recall
2479
+ value: 77.6485987064983
2480
+ - type: manhattan_accuracy
2481
+ value: 86.88244654014825
2482
+ - type: manhattan_ap
2483
+ value: 81.47180273946366
2484
+ - type: manhattan_f1
2485
+ value: 73.44624393136418
2486
+ - type: manhattan_precision
2487
+ value: 70.80385852090032
2488
+ - type: manhattan_recall
2489
+ value: 76.29350169387126
2490
+ - type: max_accuracy
2491
+ value: 88.34943920518494
2492
+ - type: max_ap
2493
+ value: 84.5428891020442
2494
+ - type: max_f1
2495
+ value: 77.09709933923172
2496
  ---
2497
 
2498
+ # SGPT-5.8B-weightedmean-msmarco-specb-bitfit
2499
 
2500
  ## Usage
2501
 
 
2503
 
2504
  ## Evaluation Results
2505
 
2506
+ For evaluation results, see our paper: https://arxiv.org/abs/2202.08904
2507
 
2508
  ## Training
2509
  The model was trained with the following parameters:
2510
 
2511
  **DataLoader**:
2512
 
2513
+ `torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
2514
  ```
2515
+ {'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
2516
  ```
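
For reference, a minimal sketch of how a dataloader with these parameters could be built with `sentence-transformers`. The actual MS MARCO training pairs are not part of this card, so `train_examples` below is a placeholder:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample

# Placeholder pairs; the real query/passage training data is not included here.
train_examples = [
    InputExample(texts=["example query", "example relevant passage"]),
    InputExample(texts=["another query", "another relevant passage"]),
]

# shuffle=True makes PyTorch wrap a RandomSampler in a BatchSampler,
# matching the sampler/batch_sampler classes listed above.
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
# SentenceTransformer.fit() later assigns its own collate_fn
# (smart_batching_collate) to this dataloader before training.
```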
2517
 
2518
  **Loss**:
 
2525
  Parameters of the fit() method:
2526
  ```
2527
  {
2528
+ "epochs": 10,
2529
+ "evaluation_steps": 0,
2530
+ "evaluator": "NoneType",
2531
  "max_grad_norm": 1,
2532
  "optimizer_class": "<class 'transformers.optimization.AdamW'>",
2533
  "optimizer_params": {
2534
+ "lr": 5e-05
2535
  },
2536
  "scheduler": "WarmupLinear",
2537
  "steps_per_epoch": null,
2538
+ "warmup_steps": 1000,
2539
  "weight_decay": 0.01
2540
  }
2541
  ```
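
Putting the above together, a hedged sketch of the corresponding `fit()` call. Here `model` is the loaded `SentenceTransformer`, and `train_loss` stands in for whichever `sentence_transformers.losses` class was actually used (the loss object is elided in this card):

```python
from transformers import AdamW  # the optimizer class listed above

model.fit(
    train_objectives=[(train_dataloader, train_loss)],  # train_loss is a placeholder
    epochs=10,
    evaluation_steps=0,
    evaluator=None,
    max_grad_norm=1,
    optimizer_class=AdamW,
    optimizer_params={"lr": 5e-05},
    scheduler="WarmupLinear",
    steps_per_epoch=None,
    warmup_steps=1000,
    weight_decay=0.01,
)
```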
 
2544
  ## Full Model Architecture
2545
  ```
2546
  SentenceTransformer(
2547
+ (0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
2548
  (1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
2549
  )
2550
  ```
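
The `pooling_mode_weightedmean_tokens: True` flag means token embeddings are averaged with weights that increase linearly with position, so later tokens contribute more, as proposed in the SGPT paper. A minimal sketch of that pooling step, assuming `token_embeddings` of shape `(batch, seq_len, 4096)` and a binary `attention_mask` of shape `(batch, seq_len)`:

```python
import torch

def weighted_mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Weight token i by its 1-based position so later tokens count more,
    # and zero out padding positions via the attention mask.
    positions = torch.arange(1, token_embeddings.size(1) + 1, device=token_embeddings.device).float()
    weights = positions.unsqueeze(0).unsqueeze(-1) * attention_mask.unsqueeze(-1).float()
    summed = (token_embeddings * weights).sum(dim=1)       # (batch, 4096)
    return summed / weights.sum(dim=1).clamp(min=1e-9)     # normalize by total weight
```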