nanos-hpe committed
Commit 522f0a4
1 Parent(s): 2280003

Upload folder using huggingface_hub
.ipynb_checkpoints/config-checkpoint.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "BAAI/bge-small-en",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
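For a quick sanity check of the uploaded config, a minimal sketch in plain Python (using only a subset of the fields shown in the diff above) that derives the dimensions the values imply:

```python
import json

# A subset of the config.json added in this commit, reproduced for inspection.
config = json.loads("""
{
  "_name_or_path": "BAAI/bge-small-en",
  "architectures": ["BertModel"],
  "hidden_size": 384,
  "intermediate_size": 1536,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "vocab_size": 30522
}
""")

# Each of the 12 attention heads operates on hidden_size / num_attention_heads dims.
head_dim = config["hidden_size"] // config["num_attention_heads"]
# The feed-forward block uses the conventional 4x expansion of the hidden size.
ffn_ratio = config["intermediate_size"] // config["hidden_size"]
print(head_dim, ffn_ratio)  # 32 4
```

So this is a standard small BERT encoder: 12 layers, 384-dim hidden states, 32-dim attention heads, and a 4x feed-forward expansion.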
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
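This pooling config enables only `pooling_mode_cls_token`, meaning the sentence embedding is simply the hidden state of the first ([CLS]) token. A minimal numpy sketch of that behavior (an illustration, not the actual sentence-transformers implementation):

```python
import numpy as np

def cls_pool(token_embeddings: np.ndarray) -> np.ndarray:
    """CLS pooling: return the hidden state of the first ([CLS]) token.

    token_embeddings has shape (seq_len, word_embedding_dimension); with
    pooling_mode_cls_token=true, all other token positions are ignored."""
    return token_embeddings[0]

# Toy example: one sentence of 5 tokens, 384 dims (word_embedding_dimension above).
tokens = np.random.randn(5, 384)
sentence_embedding = cls_pool(tokens)
print(sentence_embedding.shape)  # (384,)
```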
README.md CHANGED
@@ -1,3 +1,1427 @@
- ---
- license: apache-2.0
- ---
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:3221
+ - loss:MultipleNegativesRankingLoss
+ base_model: BAAI/bge-small-en
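The `loss:MultipleNegativesRankingLoss` tag above names the in-batch-negatives contrastive objective used for training: each (query, passage) pair in a batch treats every other passage in the same batch as a negative. A rough numpy sketch of that objective (the real implementation lives in sentence-transformers; `scale=20.0` is assumed here as its customary default for cosine similarity):

```python
import numpy as np

def multiple_negatives_ranking_loss(queries, passages, scale=20.0):
    """In-batch contrastive loss sketch: row i of `passages` is the positive
    for row i of `queries`; every other row in the batch acts as a negative."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = passages / np.linalg.norm(passages, axis=1, keepdims=True)
    scores = scale * (q @ p.T)  # (batch, batch) scaled cosine similarities
    # softmax cross-entropy with the diagonal (the true pair) as the target class
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Perfectly aligned, mutually orthogonal pairs drive the loss toward zero.
q = np.eye(2)
print(multiple_negatives_ranking_loss(q, q))
```

Training on (question, QuickSpecs-chunk) pairs like the widget examples below pulls each question toward its source chunk and pushes it away from the other chunks in the batch.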
+ widget:
+ - source_sentence: • 2 x AMD EPYC 7763 64-core processors
+ sentences:
+ - 'QuickSpecs
+
+ HPE Cray XD670
+
+ Configuration Information
+
+ DA - 16896   Worldwide QuickSpecs — Version 16 — 10/7/2024
+
+ Page  22'
+ - 'HPE 1.8TB SAS 12G Mission Critical 10K SFF SC 3-year Warranty 512e Multi Vendor
+ HDD 872481-H21
+
+ Hard Drive Blank Kits
+
+ HPE Small Form Factor Hard Drive Blank Kit 666987-B21
+
+ Notes: Hard Drives require the selection of appropriate Drive Cage.
+
+
+
+ SSD Selection
+
+ To streamline the configuration process for HPE ProLiant Gen10 servers and to
+ provide the best product
+
+ availability, HPE recommends SSDs from the list located here: http://www.hpe.com/products/recommend
+ .
+
+
+
+ All SSD options listed are compatible on both the XL675d and XL645d servers, except
+ where explicitly
+
+ marked.
+
+ Read Intensive - 12G SAS - SFF - Solid State Drives
+
+ HPE 960GB SAS 12G Read Intensive SFF SC Value SAS Multi Vendor SSD P36997-H21
+
+ HPE 1.92TB SAS 12G Read Intensive SFF SC Value SAS Multi Vendor SSD P36999-H21
+
+ HPE 3.84TB SAS 12G Read Intensive SFF SC Value SAS Multi Vendor SSD P37001-H21
+
+ HPE 7.68TB SAS 12G Read Intensive SFF SC Value SAS Multi Vendor SSD P37003-H21
+
+ Mixed Use - 12G SAS - SFF - Solid State Drives
+
+ HPE 960GB SAS 12G Mixed Use SFF SC Value SAS Multi Vendor SSD P37005-H21
+
+ HPE 1.92TB SAS 12G Mixed Use SFF SC Value SAS Multi Vendor SSD P37011-H21
+
+ HPE 3.84TB SAS 12G Mixed Use SFF SC Value SAS Multi Vendor SSD P37017-H21
+
+ Mixed Use - 6G SATA - SFF - Solid State Drives
+
+ HPE 480GB SATA 6G Mixed Use SFF SC Multi Vendor SSD P18432-H21
+
+ HPE 960GB SATA 6G Mixed Use SFF SC Multi Vendor SSD P18434-H21
+
+ HPE 1.92TB SATA 6G Mixed Use SFF SC Multi Vendor SSD P18436-H21
+
+ HPE 3.84TB SATA 6G Mixed Use SFF SC Multi Vendor SSD P18438-H21
+
+ Read Intensive - 6G SATA - SFF - Solid State Drives
+
+ HPE 240GB SATA 6G Read Intensive SFF SC Multi Vendor SSD P18420-H21
+
+ HPE 480GB SATA 6G Read Intensive SFF SC Multi Vendor SSD P18422-H21
+
+ HPE 960GB SATA 6G Read Intensive SFF SC Multi Vendor SSD P18424-H21
+
+ HPE 1.92TB SATA 6G Read Intensive SFF SC Multi Vendor SSD P18426-H21
+
+ HPE 3.84TB SATA 6G Read Intensive SFF SC Multi Vendor SSD P18428-H21
+
+ HPE 7.68TB SATA 6G Read Intensive SFF SC Multi Vendor SSD P18430-H21
+
+
+
+ Read Intensive - NVMe - SFF - Solid State Drives
+
+ HPE 480GB NVMe Gen3 Mainstream Performance Read Intensive M.2 Multi Vendor SSD
+ P40513-H21
+
+ HPE 960GB NVMe Gen3 Mainstream Performance Read Intensive M.2 Multi Vendor SSD
+ P40514-H21
+
+ HPE 1.92TB NVMe Gen3 Mainstream Performance Read Intensive M.2 Multi Vendor SSD
+ P40515-H21
+
+ HPE 3.84TB NVMe Gen4 Mainstream Performance Read Intensive SFF SC U.3 Static V2
+
+ Multi Vendor SSD
+
+ P64845-H21
+
+ HPE 480GB NVMe Gen4 Mainstream Performance Read Intensive M.2 PM9A3 SSD P69543-H21
+
+ Mixed Use - NVMe - SFF - Solid State Drives
+
+ HPE 1.6TB NVMe Gen4 Mainstream Performance Mixed Use SFF SC U.3 Static V2 Multi
+
+ Vendor SSD
+
+ P65003-H21
+
+ QuickSpecs
+
+ HPE Apollo 6500 Gen10 Plus System
+
+ Additional Options
+
+ DA - 16700   Worldwide QuickSpecs — Version 25 — 6/3/2024
+
+ Page  45'
+ - 'Intel Xeon-Gold 6434 3.7GHz 8-core 195W Processor Kit for HPE Cray XD
+
+ P56395-
+
+ B21
+
+ Intel Xeon-Gold 5415+ 2.9GHz 8-core 150W Processor Kit for HPE Cray XD
+
+ P56391-
+
+ B21
+
+
+
+ Notes:
+
+ − "HPE Cray XD220v CPU 1 Rear FIO Heat Sink Kit" (P49855-B21) must be ordered
+ for 1st
+
+ Processor.
+
+ − "HPE Cray XD220v CPU 2 Front Heat Sink Kit" (P49854-B21) must be ordered for
+ 2nd
+
+ Processor.'
+ - source_sentence: '- 2 x 16 GB DDR4-2933-MHz memory modules'
+ sentences:
+ - 'HPE Performance Cluster Manager 1 Node 3yr 24x7 Support Perpetual LTU Q9V60A
+
+ Notes:
+
+ − One license per node.
+
+ − Includes three years of support.
+
+ − This is a perpetual license. The software will continue working even when the
+ support term ends.
+
+ HPE Performance Cluster Manager FIO Software Q9V61A
+
+ Notes:
+
+ − This SKU does not include the license. Please order with Q9V60AAE.
+
+ − Order one per node
+
+ HPE Performance Cluster Manager Media Kit Q9V62A
+
+ Notes: One media kit per solution.
+
+
+
+ HPE Power Distribution Units
+
+ Power Distribution Units (PDUs) are an integral piece to this data center solution
+ and HPE offers several
+
+ types. Basic PDUs provide reliable power with 0U or 1U installation options. Metered
+ PDUs have added
+
+ intelligence to precisely track power usage and switched PDUs provide both local
+ and remote power
+
+ management. There are additional metered PDUs that are recommended for this solution
+ that are not part
+
+ of the mainstream PDU product offering. They are as follows:
+
+ HPE Switched 3-phase 66.5kVA/60309 5-wire 100A/277V 21-breaker Vertical NA PDU
+ R8P19A
+
+ HPE Metered 3Ph 66.5kVA/60309 100A 5-wire 480/277V Outlets (21) SDG23/Vertical
+ NA
+
+ PDU
+
+ 879034-B21
+
+ HPE Metered 3Ph 39.9kVA/60309 60A 5-wire 480/277V Outlets (21) SDG23/Vertical
+ NA
+
+ PDU
+
+ 880459-B21
+
+ HPE Metered 3Ph 57.6kVA/60309 100A 5-wire 80A/230V Outlets (3) C13 (18) C19/Vertical
+
+ NA PDU
+
+ 880460-B21
+
+ HPE Metered 3Ph 34.5kVA/60309 60A 5-wire 48A/230V Outlets (3) C13 (18) C19/Vertical
+ NA
+
+ FIO PDU
+
+ 880461-B21
+
+ HPE Cray Supercomputer 60A 415V 3 Phase 24 CX PDU R4N30A
+
+ HPE Mtrd 3P 69.1kVA 125A 96A230V FIO PDU 880462-B21
+
+ HPE Metered 3Ph 45.1kVA/60309 63A 5-wire 63A/230V Outlets (3) C13 (18) C19/Vertical
+
+ INTL FIO PDU
+
+ 880463-B21
+
+ HPE Cray Supercomputer 63A 400V 3 Phase 24 CX PDU   R4N29A
+
+ HPE G2 Metered/Switched 3Ph 17.3kVA/60309 4-wire 48A/208V Out (12) C13 (12)
+
+ C19/Vertical NA/JP PDU
+
+ P9S22A
+
+ HPE G2 Metered 3Ph 17.3kVA/60309 60A 4-wire 48A/208V Outlets (12) C13 (12)
+
+ C19/Vertical NA/JP PDU
+
+ P9R86A
+
+ HPE G2 Metered Modular 3Ph 17.3kVA/60309 60A 4-wire 48A/208V Outlets (6) C19/1U
+
+ Horizontal NA/JP PDU
+
+ P9R80A
+
+ QuickSpecs
+
+ HPE Cray XD2000
+
+ Configuration Information
+
+ DA - 16905   Worldwide QuickSpecs — Version 20 — 10/7/2024
+
+ Page  28'
+ - 'World''s most secure industry standard server using HPE iLO5
+
+
+
+ HPE ProLiant XL675d Gen10 Plus - Front Panel View
+
+ 1. Serial number / iLO Information pull tab 4. Chassis front door lever button
+
+ 2. Power Switch module 5. Drive Box 2
+
+ 3. Drive Box 1 6. Dedicated iLO management port
+
+
+
+ QuickSpecs
+
+ HPE Apollo 6500 Gen10 Plus System
+
+ Overview
+
+ DA - 16700   Worldwide QuickSpecs — Version 25 — 6/3/2024
+
+ Page  2'
+ - '8SFF Front View - 8 SFF + optional Universal Media Bay, optical Drive, Display
+ Port, USB2.0, and
+
+ SATA Drive shown
+
+
+
+ 1. Quick removal access panel 9. Health LED
+
+ 2. Serial number/iLO information pull tab 10. NIC Status 1
+
+ 3. Display Port (optional - shown) 11. Unit ID Button/LED
+
+ 4. Universal Media Bay (optional): 12. USB 3.2 Gen1 port
+
+   Option1: Optical drive bay + Display port &
+
+ USB 2.0 port kit (shown)
+
+ 13. Drive bays; backplanes options
+
+   Option1: 8SFF x1 Tri-Mode 24G U.3 BC
+
+ Backplane  Option2: 2 SFF x4 Tri-Mode 24G U.3 BC
+
+ Drive Cage
+
+ 5. USB 3.2 Gen1 port (optional - shown)   Option2: 8SFF x4 Tri-Mode 24G U.3
+
+ BC
+
+ Backplane
+
+
+
+ 6. Optical Drive (optional- shown)
+
+ 7. iLO Service Port 14. Drive support label
+
+ 8. Power On / Standby button and system power LED
+
+ Notes: 1 Front NIC LED display doesn''t support NIC LED ACT/LINK indication from
+ ALOM/PCIE/FLOM
+
+ NIC''s
+
+
+
+ 12 LFF Front View - 12 LFF + SAS drives shown
+
+
+
+ 1. Serial number/iLO information pull tab 6. NIC Status 1
+
+ 2. USB 3.2 Gen1 Port 7. Unit ID Button/LED
+
+ 3. iLO Service Port 8. SAS/SATA drive bays
+
+ QuickSpecs
+
+ HPE ProLiant DL320 Gen11
+
+ Overview
+
+ DA - 16919   Worldwide QuickSpecs — Version 28 — 10/7/2024
+
+ Page  2'
+ - source_sentence: '* 2 x PCIe x4 slots for HPE iLO 5 management per node'
+ sentences:
+ - "HPE ProLiant XL645d System Block Diagrams - PCIe GPU Configuration \n \nQuickSpecs\n\
+ HPE Apollo 6500 Gen10 Plus System\nStandard Features\nDA - 16700   Worldwide QuickSpecs\
+ \ — Version 25 — 6/3/2024\nPage  18"
+ - 'Date Version
+
+ History
+
+ Action Description of Change
+
+ 03-Jun-2024 Version 24 Changed Additional Options section was updated.
+
+ Obsolete SKUs were removed
+
+ 04-Mar-2024 Version 23 Changed Configuration Information section was updated
+
+ Obsolete SKUs were removed
+
+ 04-Dec-2023 Version 22 Changed Configuration Information section was updated
+
+ Obsolete SKUs were removed
+
+ 06-Nov-2023 Version 21 Changed Configuration Information section was updated
+
+ Obsolete SKUs were removed
+
+ 05-Sep-2023 Version 20 Changed Configuration Information section was updated
+
+ 10-Jul-2023 Version 19 Changed Standard Features section was updated.
+
+ 20-Jun-2023 Version 18 Changed Overview section was updated
+
+ 03-Apr-2023 Version 17 Changed Optional Features and Configuration Information
+ sections
+
+ were updated
+
+ 06-Feb-2023 Version 16 Changed Overview and Configuration Information sections
+ were
+
+ updated
+
+ 05-Dec-2022 Version 15 Changed Core Options section was updated
+
+ 07-Nov-2022 Version 14 Changed Configuration Information section was updated
+
+ Obsolete SKUs were removed
+
+ 01-Aug-2022 Version 13 Changed Configuration Information section was updated
+
+ 05-Jul-2022 Version 12 Changed Configuration Information section was updated
+
+ Obsolete SKUs were removed
+
+ 16-May-2022 Version 11 Changed Configuration Information section was updated
+
+ Obsolete SKUs were removed
+
+ 21-Mar-2022 Version 10 Changed Standard Features and Configuration Information
+ were
+
+ removed.
+
+ 07-Feb-2022 Version 9 Changed Configuration Information section was updated
+
+ 10-Jan-2022 Version 8 Changed Additional Options section was updated.
+
+ Obsolete SKUs were removed
+
+ 01-Nov-2021 Version 7 Changed Added Software Development Tools
+
+ Overview, Standard features and Additional Options sections
+
+ were updated.
+
+ Obsolete SKUs were removed
+
+ 07-Sep-2021 Version 6 Changed Additional Options section was updated.
+
+ Obsolete SKUs were removed
+
+ 02-Aug-2021 Version 5 Changed Obsolete SKUs were removed
+
+ 04-May-2021 Version 4 Changed Overview, Standard Features, Optional Features,
+
+ Configuration Information and Additional Options were
+
+ removed.
+
+ 06-Apr-2021 Version 3 Changed Standard Features, Configuration Information, Additional
+
+ Options and Technical Specifications sections were updated.
+
+ 01-Feb-2021 Version 2 Changed Overview, Standard Features, Configuration Information,
+
+ Additional Options and Technical Specifications sections were
+
+ updated.
+
+ Obsolete SKUs were removed
+
+ QuickSpecs
+
+ HPE Apollo 6500 Gen10 Plus System
+
+ Summary of Changes
+
+ DA - 16700   Worldwide QuickSpecs — Version 25 — 6/3/2024
+
+ Page  52'
+ - 'Storage Controller Cable Kits
+
+ HPE XL22xn Gen10+ E208ip/P408ip Cbl Kit
+
+ HPE XL225n Gen10+ SATA Cbl Kit
+
+ Notes: By default, Embedded Controller will work in AHCI Mode. If "P28417-B21
+ - HPE SR100i Gen10+
+
+ Software RAID" is selected, then Embedded Controller will work in SR100i Mode.
+
+ Maximum Internal Storage Per node
+
+ Drive Capacity Configuration
+
+ Hot Plug SFF SATA SSD 46TB 6 x 7.68TB
+
+ Hot Plug SFF SAS SSD 91TB 6 x 15.3TB
+
+ Hot Plug NVMe SSD (AMD) 30TB 2x 15.3TB
+
+ Hot Plug NVMe SSD (Intel) 92TB 6x 15.3TB
+
+
+
+ Notes: NVMe is x2 for Intel
+
+ Internal Storage Devices
+
+ Optional USB Mezz Riser Kit
+
+
+
+ Interfaces
+
+ KVM Serial USB Video Port (SUV)
+
+ USB Ports 3 external USB ports via SUV (2 regular USB, 1
+
+ USB management); 1 USB 3.2 Gen1 Type A Port
+
+ (external)
+
+ HPE iLO Remote Management Network Port NIC/Shares iLO network port (AMD only)
+ Separate
+
+ NIC and iLO ports on Intel node
+
+ Health LED 1
+
+ Power 1
+
+ UID 1
+
+ Do not remove LED 1
+
+ Industry Standard Compliance
+
+ ACPI 6.3 Compliant
+
+ PCIe 4.0 Compliant
+
+ WOL Support
+
+ Microsoft® Logo certifications
+
+ PXE Support
+
+ USB 3.0 Compliant (internal); USB 2.0 compliant (external ports via SUV)
+
+ SMBIOS 3.2
+
+ UEFI 2.8
+
+ Redfish API
+
+ European Union ErP Lot 9 Regulation European Union (EU) eco-design regulations
+ for server and
+
+ storage products, known as Lot 9, establishes power thresholds for idle state,
+ as well as efficiency
+
+ and performance in active state which vary among configurations. HPE ProLiant
+ Gen10 Plus servers
+
+ QuickSpecs
+
+ HPE Apollo 2000 Gen10 Plus System
+
+ Standard Features
+
+ DA - 16526   Worldwide QuickSpecs — Version 41 — 7/1/2024
+
+ Page  14'
+ - source_sentence: What type of processors are supported by the HPE Cray XD665 System?
+ sentences:
+ - 'HPE Cray XD675 Server Top View
+
+ Item Description
+
+ 1. 8x AMD MI300X OAM Accelerator
+
+
+
+ QuickSpecs
+
+ HPE Cray XD675
+
+ Overview
+
+ DA - 17239   Worldwide QuickSpecs — Version 4 — 8/19/2024
+
+ Page  2'
+ - 'HPE Cray XD665
+
+ HPE is bringing the power of supercomputing to datacenters of any size with the
+ HPE Cray XD665 System.
+
+ HPE Cray XD665 System is a top-performing GPU-accelerated server, delivering mixed-HPC/AI
+ workload
+
+ solutions to rack-scale, in a rack and roll fashion.
+
+ HPE Cray XD665 System is a 4U chassis system that contains a single 2x CPU node
+ with 4x Nvidia H100
+
+ Tensor Core SXM5 GPUs. It offers a complete, scalable solution for AI & HPC customers
+ everywhere, with
+
+ flexibility of fabric, memory, storage and operating system. HPE Cray XD665 System
+ provides maximum
+
+ performance for advanced HPC Simulations, AI Training and Deep Learning.
+
+ Built with Exascale-ready networking technologies, integrated storage, extensive
+ software portfolio and
+
+ management tools, HPE Cray XD665 Systems can enable customers to innovate and
+ prepare for tomorrow''s
+
+ challenges.
+
+ HPE Cray XD665 Server System Key Features
+
+ 4U Single-Node Chassis (Air & Liquid-Cooled)
+
+ GPUs: 4x NVIDIA® H100 Tensor Core SXM5 GPUs providing leadership performance for
+ AI Training,
+
+ Deep Learning and advanced HPC simulations. PCIe GPUs are not supported on Cray
+ XD665.
+
+ CPUs: Support for 4th Generation AMD® EPYC® Scalable Processors: "Genoa"
+
+ DRAM: Support for up to 24x DDR5 4800MT/s DIMMs
+
+ High-Speed Fabric: 5x PCIe Gen 5.0 Half-Height, Half-Length slots supporting Slingshot
+ 11, InfiniBand
+
+ NDR and Ethernet, providing direct switchable connections between High-Speed Fabric,
+ GPUs, NVMe
+
+ drives and CPUs.
+
+ Storage: Up to 8 SFF NVMe U.3 and 2 M.2 RAID SSDs
+
+ Power Supplies: 6x 3,000-Watt capacity per server system, providing full N+N redundancy.
+
+ PCIe Expansion: 1x HHHL PCIe 5.0, 1x OCP 3.0 expansion slot with embedded 2-port
+ 10G Base-T
+
+ (RJ45), 1 1GbE NIC, 1x BMC Port, 1x VGA, 1x USB3.0, PWR Button/Reset/ID Button/Status
+ LEDs
+
+ NVIDIA and NVLink are trademarks and/or registered trademarks of NVIDIA Corporation
+ in the U.S. and
+
+ other countries. All third-party marks are the property of their respective owners.
+
+ AMD and EPYC are trademarks and/or registered trademarks of Advanced Micro Devices,
+ Inc. in the U.S.
+
+ and other countries. All third-party marks are property of their respective owners
+
+
+
+ QuickSpecs
+
+ HPE Cray Supercomputing XD665 System
+
+ Overview
+
+ DA - 17114   Worldwide QuickSpecs — Version 9 — 10/21/2024
+
+ Page  1'
+ - "EPYC\n7543P\n32 2.8GHz 3.7GHz 2TB 225 256MB 3200MT/s\nEPYC\n7443P\n24 2.85GHz\
+ \ 4.0GHz 2TB 200 128MB 3200MT/s\nEPYC\n7313P\n16 3.0GHz 3.7GHz 2TB 155 128MB\
+ \ 3200MT/s\nIntel Xeon \nProcessor\nCores Base\nFrequency\nMax\nFrequency\n\
+ Max\nMemory\nWattage Cache\n1.5MB/core\nMemory\nXeon 8380 40 2.3GHz 3.4GHz 6TB/socket\
+ \ 270 60MB 3200MT/s\nXeon 8368 38 2.4GHz 3.4GHz 6TB/socket 270 57MB 3200MT/s\n\
+ Xeon\n8360Y\n36 2.4GHz 3.5GHz 6TB/socket 250 54MB 3200MT/s\nXeon 8358 32 2.6GHz\
+ \ 3.4GHz 6TB/socket 250 48MB 3200MT/s\nXeon\n8352Y\n32 2.2GHz 3.4GHz 6TB/socket\
+ \ 205 48MB 3200MT/s\nXeon 6354 18 3.0GHz 3.6GHz 6TB/socket 205 39MB 3200MT/s\n\
+ Xeon 6348 28 2.6GHz 3.5GHz 6TB/socket 235 42MB 3200MT/s\nXeon 6346 16 3.1GHz 3.6GHz\
+ \ 6TB/socket 205 36MB 3200MT/s\nXeon 6338 32 2.0GHz 3.2GHz 6TB/socket 205 48MB\
+ \ 3200MT/s\nXeon 6330 28 2.0GHz 3.1GHz 6TB/socket 205 42MB 3200MT/s\nXeon 6342\
+ \ 24 2.8GHz 3.5GHz 6TB/socket 230 36MB 3200MT/s\nXeon\n4309Y\n8 2.8GHz 3.6GHz\
+ \ 6TB/socket 105 12MB 3200MT/s\nXeon 4310 12 2.1GHz 3.3GHz 6TB/socket 120 18MB\
+ \ 3200MT/s\nXeon 4314 16 2.4GHz 3.4GHz 6TB/socket 135 24MB 3200MT/s\nXeon 4316\
+ \ 20 2.3GHz 3.4GHz 6TB/socket 150 30MB 3200MT/s\nXeon\n5318Y\n24 2.1GHz 3.4GHz\
+ \ 6TB/socket 165 36MB 3200MT/s\nXeon 5320 26 2.2GHz 3.4GHz 6TB/socket 185 39MB\
+ \ 3200MT/s\nXeon\n6336Y\n24 2.4GHz 3.6GHz 6TB/socket 185 36MB 3200MT/s\nXeon\n\
+ 5315Y\n8 3.2GHz 3.6GHz 6TB/socket 140 12MB 3200MT/s\nXeon 5317 12 3.0GHz 3.6GHz\
+ \ 6TB/socket 150 18MB 3200MT/s\nXeon 6326 16 2.9GHz 3.5GHz 6TB/socket 185 24MB\
+ \ 3200MT/s\nXeon 6334 8 3.6GHz 3.7GHz 6TB/socket 165 18MB 3200MT/s\nChipset\n\
+ No Chipset - System on Chip (SoC) design\nOn System Management Chipset\nHPE iLO\
+ \ 5 ASIC\nRead and learn more in the iLO QuickSpecs\nQuickSpecs\nHPE Apollo\
+ \ 2000 Gen10 Plus System\nStandard Features\nDA - 16526   Worldwide QuickSpecs\
+ \ — Version 41 — 7/1/2024\nPage  9"
+ - source_sentence: What is the website to find services for customers purchasing from
+ a commercial reseller?
+ sentences:
+ - 'HPE Cray XD675 Server Top View
+
+ Item Description
+
+ 1. 8x AMD MI300X OAM Accelerator
+
+
+
+ QuickSpecs
+
+ HPE Cray XD675
+
+ Overview
+
+ DA - 17239   Worldwide QuickSpecs — Version 4 — 8/19/2024
+
+ Page  2'
+ - 'AMD EPYC 7443P 2.85GHz 24-core 200W FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus
+
+ P38737-L21
+
+ AMD EPYC 7313P 3.0GHz 16-core 155W FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus
+
+ P38736-L21
+
+ AMD EPYC 7552 (2.2GHz/48-core/200W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24258-L21
+
+ AMD EPYC 7542 (2.9GHz/32-core/225W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24259-L21
+
+ AMD EPYC 7502 (2.5GHz/32-core/180W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24260-L21
+
+ AMD EPYC 7452 (2.35GHz/32-core/155W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24261-L21
+
+ AMD EPYC 7402 (2.8GHz/24-core/180W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24262-L21
+
+ AMD EPYC 7352 (2.3GHz/24-core/155W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24263-L21
+
+ AMD EPYC 7302 (3.0GHz/16-core/155W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24264-L21
+
+ AMD EPYC 7702P (2.0GHz/64-core/200W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24266-L21
+
+ AMD EPYC 7402P (2.8GHz/24-core/180W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24268-L21
+
+ AMD EPYC 7302P (3.0GHz/16-core/155W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24269-L21
+
+ AMD EPYC 7662 (2.0GHz/64-core/225W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24392-L21
+
+ AMD EPYC 7642 (2.3GHz/48-core/225W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24393-L21
+
+ AMD EPYC 7252 (3.1GHz/8-core/120W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus P24397-L21
+
+ AMD EPYC 7F32 (3.7GHz/8-core/180W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus
+
+ P26686-L21
+
+ AMD EPYC 7F52 (3.5GHz/16-core/240W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus
+
+ P26687-L21
+
+ AMD EPYC 7F72 (3.2GHz/24-core/240W) FIO Processor Kit for HPE ProLiant XL225n
+
+ Gen10 Plus
+
+ P26688-L21
+
+
+
+ Intel Processors - Factory Integrated Processor Kit for XL220n & XL290n
+
+ Intel Xeon-Platinum 8380 2.3GHz 40-core 270W FIO Processor Kit for HPE ProLiant
+
+ XL2x0n Gen10 Plus P36816-L21
+
+ Intel Xeon-Platinum 8368 2.4GHz 38-core 270W FIO Processor Kit for HPE ProLiant
+
+ XL2x0n Gen10 Plus P36815-L21
+
+ QuickSpecs
+
+ HPE Apollo 2000 Gen10 Plus System
+
+ Additional Options
+
+ DA - 16526   Worldwide QuickSpecs — Version 41 — 7/1/2024
+
+ Page  33'
+ - 'Parts and Materials
+
+ HPE will provide HPE-supported replacement parts and materials necessary to maintain
+ the covered hardware
+
+ product in operating condition, including parts and materials for available and
+ recommended engineering
+
+ improvements.
+
+ Parts and components that have reached their maximum supported lifetime and/or
+ the maximum usage
+
+ limitations as set forth in the manufacturer''s operating manual, product quick-specs,
+ or the technical product
+
+ data sheet will not be provided, repaired, or replaced as part of these services.
+
+
+
+ How to Purchase Services
+
+ Services are sold by Hewlett Packard Enterprise and Hewlett Packard Enterprise
+ Authorized Service Partners:
+
+ Services for customers purchasing from HPE or an enterprise reseller are quoted
+ using HPE order
+
+ configuration tools.
+
+ Customers purchasing from a commercial reseller can find services at
+
+ https://ssc.hpe.com/portal/site/ssc/
+
+
+
+ AI Powered and Digitally Enabled Support Experience
+
+ Achieve faster time to resolution with access to product-specific resources and
+ expertise through a digital and
+
+ data driven customer experience
+
+ Sign into the HPE Support Center experience, featuring streamlined self-serve
+ case creation and
+
+ management capabilities with inline knowledge recommendations. You will also find
+ personalized task alerts
+
+ and powerful troubleshooting support through an intelligent virtual agent with
+ seamless transition when needed
+
+ to a live support agent.
+
+ https://support.hpe.com/hpesc/public/home/signin
+
+ Consume IT On Your Terms
+
+ HPE GreenLake edge-to-cloud platform brings the cloud experience directly to
+ your apps and data wherever
+
+ they are: the edge, colocations, or your data center. It delivers cloud services
+ for on-premises IT infrastructure
+
+ specifically tailored to your most demanding workloads. With a pay-per-use, scalable,
+ point-and-click self-
+
+ service experience that is managed for you, HPE GreenLake edge-to-cloud platform
+ accelerates digital
+
+ transformation in a distributed, edge-to-cloud world.
+
+ Get faster time to market
+
+ Save on TCO, align costs to business
+
+ Scale quickly, meet unpredictable demand
+
+ Simplify IT operations across your data centers and clouds
+
+ To learn more about HPE Services, please contact your Hewlett Packard Enterprise
+ sales representative or
+
+ Hewlett Packard Enterprise Authorized Channel Partner. Contact information for
+ a representative in your area
+
+ can be found at "Contact HPE" https://www.hpe.com/us/en/contact-hpe.html
+
+ For more information
+
+ http://www.hpe.com/services
+
+ QuickSpecs
+
+ HPE Cray XD675
+
+ Service and Support
+
+ DA - 17239   Worldwide QuickSpecs — Version 4 — 8/19/2024
+
+ Page  13'
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ - dot_accuracy@1
+ - dot_accuracy@3
+ - dot_accuracy@5
+ - dot_accuracy@10
+ - dot_precision@1
+ - dot_precision@3
+ - dot_precision@5
+ - dot_precision@10
+ - dot_recall@1
+ - dot_recall@3
+ - dot_recall@5
+ - dot_recall@10
+ - dot_ndcg@10
+ - dot_mrr@10
+ - dot_map@100
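The retrieval metrics listed above all derive from the rank at which the relevant passage is retrieved for each evaluation query. Since each query here has a single relevant passage, accuracy@k and recall@k coincide, which the identical values reported below confirm. A small illustrative sketch with hypothetical ranks:

```python
def accuracy_at_k(ranks, k):
    """Fraction of queries whose relevant passage appears in the top k.

    `ranks` holds the 1-based rank of the single relevant passage per query
    (with one relevant passage per query this equals recall@k)."""
    return sum(r <= k for r in ranks) / len(ranks)

def mrr_at_k(ranks, k=10):
    """Mean reciprocal rank, counting only hits within the top k."""
    return sum(1.0 / r if r <= k else 0.0 for r in ranks) / len(ranks)

ranks = [1, 3, 2, 11]  # hypothetical first-relevant ranks for four queries
print(accuracy_at_k(ranks, 1))   # 0.25
print(accuracy_at_k(ranks, 10))  # 0.75
print(mrr_at_k(ranks))           # (1 + 1/3 + 1/2 + 0) / 4
```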
+ model-index:
+ - name: SentenceTransformer based on BAAI/bge-small-en
+ results:
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: Unknown
+ type: unknown
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.4857142857142857
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 0.8047619047619048
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 0.861904761904762
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 0.9095238095238095
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.4857142857142857
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.26825396825396824
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.17238095238095236
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.09095238095238094
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.4857142857142857
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 0.8047619047619048
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 0.861904761904762
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 0.9095238095238095
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.718420116457893
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.6551719576719578
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.6598918961837339
+ name: Cosine Map@100
+ - type: dot_accuracy@1
+ value: 0.4857142857142857
+ name: Dot Accuracy@1
+ - type: dot_accuracy@3
+ value: 0.8047619047619048
+ name: Dot Accuracy@3
+ - type: dot_accuracy@5
+ value: 0.861904761904762
+ name: Dot Accuracy@5
+ - type: dot_accuracy@10
+ value: 0.9095238095238095
+ name: Dot Accuracy@10
+ - type: dot_precision@1
+ value: 0.4857142857142857
+ name: Dot Precision@1
1019
+ - type: dot_precision@3
1020
+ value: 0.26825396825396824
1021
+ name: Dot Precision@3
1022
+ - type: dot_precision@5
1023
+ value: 0.17238095238095236
1024
+ name: Dot Precision@5
1025
+ - type: dot_precision@10
1026
+ value: 0.09095238095238094
1027
+ name: Dot Precision@10
1028
+ - type: dot_recall@1
1029
+ value: 0.4857142857142857
1030
+ name: Dot Recall@1
1031
+ - type: dot_recall@3
1032
+ value: 0.8047619047619048
1033
+ name: Dot Recall@3
1034
+ - type: dot_recall@5
1035
+ value: 0.861904761904762
1036
+ name: Dot Recall@5
1037
+ - type: dot_recall@10
1038
+ value: 0.9095238095238095
1039
+ name: Dot Recall@10
1040
+ - type: dot_ndcg@10
1041
+ value: 0.718420116457893
1042
+ name: Dot Ndcg@10
1043
+ - type: dot_mrr@10
1044
+ value: 0.6551719576719578
1045
+ name: Dot Mrr@10
1046
+ - type: dot_map@100
1047
+ value: 0.6598918961837339
1048
+ name: Dot Map@100
1049
+ ---
1050
+
+ # SentenceTransformer based on BAAI/bge-small-en
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) <!-- at revision 2275a7bdee235e9b4f01fa73aa60d3311983cfea -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
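The three modules above run in sequence: BERT produces per-token hidden states, the Pooling module keeps only the `[CLS]` token (per `pooling_mode_cls_token: true` in `1_Pooling/config.json`), and the final module L2-normalizes the result. As a minimal sketch of those last two steps, here is the pooling and normalization logic applied to a random array standing in for real BERT hidden states (the array shapes are illustrative, not taken from the model):

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """Mimic the Pooling + Normalize modules: keep the [CLS] (first) token
    of each sequence, then scale each vector to unit L2 norm."""
    cls = token_embeddings[:, 0, :]                      # (batch, hidden): CLS pooling
    norms = np.linalg.norm(cls, axis=1, keepdims=True)   # per-row L2 norms
    return cls / norms                                   # unit-length embeddings

# Stand-in for BERT output: batch of 3 sequences, 10 tokens, 384 hidden dims
hidden = np.random.default_rng(0).normal(size=(3, 10, 384))
emb = cls_pool_and_normalize(hidden)
print(emb.shape)  # (3, 384)
```

Because of the final normalization step, every embedding this model emits has unit length.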
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+     'What is the website to find services for customers purchasing from a commercial reseller?',
+     'Parts and Materials\nHPE will provide HPE-supported replacement parts and materials necessary to maintain the covered hardware\nproduct in operating condition, including parts and materials for available and recommended engineering\nimprovements. \xa0\nParts and components that have reached their maximum supported lifetime and/or the maximum usage\nlimitations as set forth in the manufacturer\'s operating manual, product quick-specs, or the technical product\ndata sheet will not be provided, repaired, or replaced as part of these services.\n\xa0\nHow to Purchase Services\nServices are sold by Hewlett Packard Enterprise and Hewlett Packard Enterprise Authorized Service Partners:\nServices for customers purchasing from HPE or an enterprise reseller are quoted using HPE order\nconfiguration tools.\nCustomers purchasing from a commercial reseller can find services at\nhttps://ssc.hpe.com/portal/site/ssc/\n\xa0\nAI Powered and Digitally Enabled Support Experience\nAchieve faster time to resolution with access to product-specific resources and expertise through a digital and\ndata driven customer experience \xa0\nSign into the HPE Support Center experience, featuring streamlined self-serve case creation and\nmanagement capabilities with inline knowledge recommendations. You will also find personalized task alerts\nand powerful troubleshooting support through an intelligent virtual agent with seamless transition when needed\nto a live support agent. \xa0\nhttps://support.hpe.com/hpesc/public/home/signin\nConsume IT On Your Terms\nHPE GreenLake edge-to-cloud platform brings the cloud experience directly to your apps and data wherever\nthey are-the edge, colocations, or your data center. It delivers cloud services for on-premises IT infrastructure\nspecifically tailored to your most demanding workloads. With a pay-per-use, scalable, point-and-click self-\nservice experience that is managed for you, HPE GreenLake edge-to-cloud platform accelerates digital\ntransformation in a distributed, edge-to-cloud world.\nGet faster time to market\nSave on TCO, align costs to business\nScale quickly, meet unpredictable demand\nSimplify IT operations across your data centers and clouds\nTo learn more about HPE Services, please contact your Hewlett Packard Enterprise sales representative or\nHewlett Packard Enterprise Authorized Channel Partner. \xa0 Contact information for a representative in your area\ncan be found at "Contact HPE" https://www.hpe.com/us/en/contact-hpe.html \xa0\nFor more information\nhttp://www.hpe.com/services\nQuickSpecs\nHPE Cray XD675\nService and Support\nDA - 17239\xa0\xa0\xa0Worldwide QuickSpecs — Version 4 — 8/19/2024\nPage\xa0 13',
+     'HPE Cray XD675 Server Top View\nItem Description \xa0 \xa0\n1. 8x AMD MI300X OAM Accelerator \xa0 \xa0\n\xa0\nQuickSpecs\nHPE Cray XD675\nOverview\nDA - 17239\xa0\xa0\xa0Worldwide QuickSpecs — Version 4 — 8/19/2024\nPage\xa0 2',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.4857     |
+ | cosine_accuracy@3   | 0.8048     |
+ | cosine_accuracy@5   | 0.8619     |
+ | cosine_accuracy@10  | 0.9095     |
+ | cosine_precision@1  | 0.4857     |
+ | cosine_precision@3  | 0.2683     |
+ | cosine_precision@5  | 0.1724     |
+ | cosine_precision@10 | 0.091      |
+ | cosine_recall@1     | 0.4857     |
+ | cosine_recall@3     | 0.8048     |
+ | cosine_recall@5     | 0.8619     |
+ | cosine_recall@10    | 0.9095     |
+ | cosine_ndcg@10      | 0.7184     |
+ | cosine_mrr@10       | 0.6552     |
+ | **cosine_map@100**  | **0.6599** |
+ | dot_accuracy@1      | 0.4857     |
+ | dot_accuracy@3      | 0.8048     |
+ | dot_accuracy@5      | 0.8619     |
+ | dot_accuracy@10     | 0.9095     |
+ | dot_precision@1     | 0.4857     |
+ | dot_precision@3     | 0.2683     |
+ | dot_precision@5     | 0.1724     |
+ | dot_precision@10    | 0.091      |
+ | dot_recall@1        | 0.4857     |
+ | dot_recall@3        | 0.8048     |
+ | dot_recall@5        | 0.8619     |
+ | dot_recall@10       | 0.9095     |
+ | dot_ndcg@10         | 0.7184     |
+ | dot_mrr@10          | 0.6552     |
+ | dot_map@100         | 0.6599     |
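The dot-product rows duplicate the cosine rows exactly. That is expected: the model ends in a Normalize module, so every embedding has unit length, and for unit vectors the dot product equals the cosine similarity. A small NumPy check of that identity, using random unit vectors standing in for real embeddings:

```python
import numpy as np

rng = np.random.default_rng(42)
a, b = rng.normal(size=(2, 384))
a /= np.linalg.norm(a)  # unit-normalize, as the model's Normalize module does
b /= np.linalg.norm(b)

dot = a @ b
cosine = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.isclose(dot, cosine))  # True: identical for unit vectors
```

In practice this means either similarity function can be used interchangeably with this model's embeddings.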
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+
+ * Size: 3,221 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 1000 samples:
+   | | sentence_0 | sentence_1 |
+   |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
+   | type | string | string |
+   | details | <ul><li>min: 3 tokens</li><li>mean: 22.72 tokens</li><li>max: 80 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 328.94 tokens</li><li>max: 512 tokens</li></ul> |
+ * Samples:
1208
+ |:-----------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
1209
+   | <code>What is the maximum number of Apollo n2X00 series chassis that can fit in a 42U rack?</code> | <code>HPE Apollo 2000 Gen10 Plus System  <br>HPE is bringing the power of supercomputing to datacenters of any size with the Apollo 2000 Gen10 Plus<br>system.  <br>The HPE Apollo 2000 Gen10 Plus System is a dense, multi-server platform that packs incredible<br>performance and workload flexibility into a small datacenter space, while delivering the efficiencies of a<br>shared infrastructure. It is designed to provide a bridge to scale-out architecture for traditional data centers,<br>so enterprise and SME customers can achieve the space-saving value of density-optimized infrastructure in a<br>cost-effective and non-disruptive manner.  <br>The Apollo 2000 Gen10 Plus offers a density optimized, shared infrastructure with a flexible scale-out<br>architecture to support a variety of workloads from remote site systems to large HPC clusters and everything<br>in between. HPE iLO5 provides built-in firmware-level server security with silicon root of trust. It can be<br>deployed cost-effectively starting with a single 2U, shared infrastructure chassis and configured with a variety<br>of storage options to meet the configuration needs of a wide variety of scale-out workloads.<br> <br>The Apollo 2000 Gen10 Plus System delivers up to four times the density of a traditional rack mount server<br>with up to four ProLiant Gen10 Plus independent servers per 2U mounted in standard racks with rear-aisle<br>serviceability access. A 42U rack fits up to 20 Apollo n2X00 series chassis accommodating up to 80 servers<br>per rack.<br>What's New  <br>Support for up to four Xilinx Alveo U50 single wide  GPU's  in XL290n node.<br>Enables a robust stack of Intel 3 rd generation Xeon Scalable Processors to increase your power density<br>and increase datacenter efficiency. Intel AVX-512 * feature increases memory bandwidth, improves<br>frequency management to enable greater performance. Also Speed Select Technology (SST) allows<br>Core count and frequency flexibility *<br>The Direct Liquid Cooling (DLC) option for the Apollo 2000 Gen10 Plus System comes ready to plug and<br>play. Choose from either CPU only or CPU plus memory cooling options.<br>Enables flexible choices with Intel 3 rd Generation Xeon Scalable Processors and AMD 2 nd and 3 rd <br>generation EPYC Processors<br>New flexible infrastructure offers multiple storage options, 8 memory channels and 3200 MT/s memory,<br>PCIe Gen4 and support for processors over 250W for improved application performance.<br>Complete software portfolio for all customer workloads, for node to rack management, including<br>comprehensive integrated cluster management software<br>Secure from the start with firmware anchored into silicon with iLO5 and silicon root of trust for the<br>highest level of system security  <br>Notes: *Available on select processors<br> <br>QuickSpecs<br>HPE Apollo 2000 Gen10 Plus System<br>Overview<br>DA - 16526   Worldwide QuickSpecs — Version 41 — 7/1/2024<br>Page  1</code> |
+   | <code>What is the maximum number of independent servers that can be mounted in a single 2U Apollo 2000 Gen10 Plus System chassis?</code> | <code>HPE Apollo 2000 Gen10 Plus System  <br>HPE is bringing the power of supercomputing to datacenters of any size with the Apollo 2000 Gen10 Plus<br>system.  <br>The HPE Apollo 2000 Gen10 Plus System is a dense, multi-server platform that packs incredible<br>performance and workload flexibility into a small datacenter space, while delivering the efficiencies of a<br>shared infrastructure. It is designed to provide a bridge to scale-out architecture for traditional data centers,<br>so enterprise and SME customers can achieve the space-saving value of density-optimized infrastructure in a<br>cost-effective and non-disruptive manner.  <br>The Apollo 2000 Gen10 Plus offers a density optimized, shared infrastructure with a flexible scale-out<br>architecture to support a variety of workloads from remote site systems to large HPC clusters and everything<br>in between. HPE iLO5 provides built-in firmware-level server security with silicon root of trust. It can be<br>deployed cost-effectively starting with a single 2U, shared infrastructure chassis and configured with a variety<br>of storage options to meet the configuration needs of a wide variety of scale-out workloads.<br> <br>The Apollo 2000 Gen10 Plus System delivers up to four times the density of a traditional rack mount server<br>with up to four ProLiant Gen10 Plus independent servers per 2U mounted in standard racks with rear-aisle<br>serviceability access. A 42U rack fits up to 20 Apollo n2X00 series chassis accommodating up to 80 servers<br>per rack.<br>What's New  <br>Support for up to four Xilinx Alveo U50 single wide  GPU's  in XL290n node.<br>Enables a robust stack of Intel 3 rd generation Xeon Scalable Processors to increase your power density<br>and increase datacenter efficiency. Intel AVX-512 * feature increases memory bandwidth, improves<br>frequency management to enable greater performance. Also Speed Select Technology (SST) allows<br>Core count and frequency flexibility *<br>The Direct Liquid Cooling (DLC) option for the Apollo 2000 Gen10 Plus System comes ready to plug and<br>play. Choose from either CPU only or CPU plus memory cooling options.<br>Enables flexible choices with Intel 3 rd Generation Xeon Scalable Processors and AMD 2 nd and 3 rd <br>generation EPYC Processors<br>New flexible infrastructure offers multiple storage options, 8 memory channels and 3200 MT/s memory,<br>PCIe Gen4 and support for processors over 250W for improved application performance.<br>Complete software portfolio for all customer workloads, for node to rack management, including<br>comprehensive integrated cluster management software<br>Secure from the start with firmware anchored into silicon with iLO5 and silicon root of trust for the<br>highest level of system security  <br>Notes: *Available on select processors<br> <br>QuickSpecs<br>HPE Apollo 2000 Gen10 Plus System<br>Overview<br>DA - 16526   Worldwide QuickSpecs — Version 41 — 7/1/2024<br>Page  1</code> |
+ | <code>What is the processor type supported by the HPE Apollo n2800 Gen10 Plus 24 SFF Flexible CTO chassis?</code> | <code>HPE Apollo n2600 Gen10 Plus SFF CTO Chassis supports both Intel and AMD based server nodes<br>HPE Apollo n2800 Gen10 Plus 24 SFF Flexible CTO chassis supports Intel based server nodes<br>Backplane selection determines number and type of drives supported<br>Item Description Item Description<br>1 SFF hot-plug drives 3 Health LED<br>2 Serial number/iLO information pull tab 4 UID button LED<br> <br>Chassis Rear Panel Components - 4 x 1U nodes<br>Item Description Item Description<br>1 Server 3 & 4 4 iLO Ports<br>2 HPE Apollo Platform Manager (APM) 2.0 port 5 Server 1 & 2<br>3 Power supply 1 & 2 6 Optional Rack Consolidation Module (RCM)<br> <br> <br>QuickSpecs<br>HPE Apollo 2000 Gen10 Plus System<br>Overview<br>DA - 16526   Worldwide QuickSpecs — Version 41 — 7/1/2024<br>Page  2</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
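The idea behind this loss: within each batch, every (sentence_0, sentence_1) pair is a positive, and every other sentence_1 in the same batch serves as an in-batch negative; a cross-entropy loss is then taken over the cosine similarities scaled by 20. A minimal NumPy sketch of that computation (not the library's implementation, and `mnr_loss` is a name introduced here for illustration):

```python
import numpy as np

def mnr_loss(q: np.ndarray, d: np.ndarray, scale: float = 20.0) -> float:
    """In-batch negatives ranking loss over unit-normalized embeddings.

    q, d: (batch, dim) arrays; q[i] should match d[i], and every d[j], j != i,
    acts as a negative. Mirrors the spirit of MultipleNegativesRankingLoss
    with similarity_fct=cos_sim and scale=20."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    logits = scale * (q @ d.T)  # (batch, batch) scaled cosine similarities
    # cross-entropy with target i for row i (the diagonal holds the positives)
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))

# Toy batch: matching rows are identical, so the diagonal dominates
emb = np.random.default_rng(0).normal(size=(4, 384))
print(mnr_loss(emb, emb))  # small: each query's own document wins the softmax
```

Mismatching the pairs (e.g. rolling one side by one row) drives the loss up, which is exactly the signal the training loop minimizes.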
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 256
+ - `per_device_eval_batch_size`: 256
+ - `num_train_epochs`: 20
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 256
+ - `per_device_eval_batch_size`: 256
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 20
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
+ ### Training Logs
+ | Epoch   | Step | cosine_map@100 |
+ |:-------:|:----:|:--------------:|
+ | 1.0     | 7    | 0.4864         |
+ | 2.0     | 14   | 0.5209         |
+ | 3.0     | 21   | 0.5131         |
+ | 4.0     | 28   | 0.5047         |
+ | 5.0     | 35   | 0.5480         |
+ | 6.0     | 42   | 0.5808         |
+ | 7.0     | 49   | 0.5950         |
+ | 7.1429  | 50   | 0.5975         |
+ | 8.0     | 56   | 0.6145         |
+ | 9.0     | 63   | 0.6268         |
+ | 10.0    | 70   | 0.6292         |
+ | 11.0    | 77   | 0.6385         |
+ | 12.0    | 84   | 0.6445         |
+ | 13.0    | 91   | 0.6279         |
+ | 14.0    | 98   | 0.6296         |
+ | 14.2857 | 100  | 0.6321         |
+ | 15.0    | 105  | 0.6317         |
+ | 16.0    | 112  | 0.6401         |
+ | 17.0    | 119  | 0.6590         |
+ | 18.0    | 126  | 0.6562         |
+ | 19.0    | 133  | 0.6599         |
+
+
+ ### Framework Versions
+ - Python: 3.11.8
+ - Sentence Transformers: 3.1.1
+ - Transformers: 4.45.2
+ - PyTorch: 2.2.2+cu121
+ - Accelerate: 1.1.1
+ - Datasets: 3.1.0
+ - Tokenizers: 0.20.3
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "BAAI/bge-small-en",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.1.1",
+     "transformers": "4.45.2",
+     "pytorch": "2.2.2+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ac4f5f8449662cdebbd438b0b0e3104266c6118d823dfee05e7448e280d5f1d
+ size 133462128
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff