KnutJaegersberg committed on
Commit
b09fe7b
1 Parent(s): 81515e4

Upload 10 files

README.md CHANGED
@@ -1,3 +1,467 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ ---
+
+ [cognitivecomputations/dolphin-2.9.1-mixtral-1x22b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-mixtral-1x22b) converted to the Mistral format. A Mixtral model with a single expert is mathematically equivalent to the corresponding Mistral model, so the conversion removes 344k parameters (the per-layer expert-routing gates, which serve no purpose with a single expert) and avoids software bugs that can occur when code encounters a Mixtral with only one expert.
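
For illustration only, a conversion of this kind can be done by renaming the single expert's tensors to the Mistral MLP layout and dropping the router gates. The sketch below is not the script used for this repo; it assumes the Hugging Face Mixtral/Mistral tensor names, where `experts.0.w1/w2/w3` correspond to `gate_proj`/`down_proj`/`up_proj`.

```python
# Illustrative sketch only (not the conversion script used for this repository).
# Assumes Hugging Face Mixtral tensor naming; with a single expert,
# experts.0.w1 -> mlp.gate_proj, experts.0.w2 -> mlp.down_proj,
# experts.0.w3 -> mlp.up_proj, and the router gates are simply dropped.
def mixtral_1e_to_mistral(state_dict: dict) -> dict:
    converted = {}
    for name, tensor in state_dict.items():
        if ".block_sparse_moe.gate." in name:
            # Router gate: unused with one expert (~6144 params per layer here).
            continue
        name = name.replace(".block_sparse_moe.experts.0.w1.", ".mlp.gate_proj.")
        name = name.replace(".block_sparse_moe.experts.0.w2.", ".mlp.down_proj.")
        name = name.replace(".block_sparse_moe.experts.0.w3.", ".mlp.up_proj.")
        converted[name] = tensor
    return converted
```

The config is adjusted accordingly: `model_type` becomes `mistral` and the MoE-specific fields (such as `num_local_experts` and `num_experts_per_tok`) are removed, as the config.json later in this commit shows.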
+
+ Note that ChatML is entirely broken in both the original and the converted model. I have no plausible explanation for why it is broken. Alpaca seems to work, even though the model was not trained on it.
+
+ The original model card follows below.
+
+ ---
+
+ # Dolphin 2.9.1 Mixtral 1x22b 🐬
+
+ Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations.
+
+ [![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)
+ Discord: https://discord.gg/cognitivecomputations
+
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
+
+ This model is based on Dolphin-2.9-Mixtral-8x22b and is Apache-2.0 licensed.
+
+ The base model has 64k context, and the full-weight fine-tuning used a 16k sequence length.
+
+ Training took 27 hours on 8x H100 GPUs provided by Crusoe Cloud.
+
+ The model was fully fine-tuned, targeting all layers.
+
+ The model is an extracted expert produced with SLERP and a custom script that we have open-sourced. It extracts a single expert that is the combined SLERP of all 8 experts of the Mixtral architecture. We decided not to fully convert to a dense model in order to keep as much of the original model's performance as possible; the process is already quite surgical and there are many variables to take into account.
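
As a rough illustration of the idea (not the open-sourced extraction script itself), a pairwise SLERP reduction over one layer's expert weights might look like the following; the interpolation factor `t=0.5` and the fold order are assumptions.

```python
# Illustrative sketch only -- not the actual extraction script.
# SLERP between two weight tensors treated as flat vectors, folded pairwise
# over all experts of one layer; t=0.5 and the reduction order are assumptions.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.item() < eps:
        # Nearly parallel weights: fall back to plain linear interpolation.
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        mixed = ((torch.sin((1.0 - t) * omega) / sin_omega) * a_flat
                 + (torch.sin(t * omega) / sin_omega) * b_flat)
    return mixed.reshape(a.shape).to(a.dtype)

def merge_experts(expert_tensors: list) -> torch.Tensor:
    """Fold SLERP pairwise over one layer's expert weight tensors."""
    merged = expert_tensors[0]
    for expert in expert_tensors[1:]:
        merged = slerp(merged, expert, t=0.5)
    return merged
```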
+
+ Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
+
+ Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service, as it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
+
+ Dolphin is licensed under Apache 2.0. We grant permission for any use, including commercial, as long as it complies with the Apache-2.0 license. Dolphin was trained on data generated from GPT-4, among other models. For more details on the expert-extraction process, visit our GitHub repository: https://github.com/cognitivecomputations/extract-expert/tree/main
+
+ ## Evals
+
+ ![image/png](https://i.ibb.co/yNmCv76/file-nkvf-Q9-Mg-X57-GB7-Ayrl-YA2-Zsp.png)
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.0`
+ ```yaml
47
+ base_model: cognitivecomputations/mixtral-1x22b-base
48
+ model_type: AutoModelForCausalLM
49
+ tokenizer_type: AutoTokenizer
50
+
51
+ # trust_remote_code: true
52
+
53
+ # load_in_8bit: true
54
+ # load_in_4bit: true
55
+ # strict: false
56
+
57
+ datasets:
58
+ - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
59
+ type: sharegpt
60
+ conversation: chatml
61
+ - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
62
+ type: sharegpt
63
+ conversation: chatml
64
+ - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
65
+ type: sharegpt
66
+ conversation: chatml
67
+ - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
68
+ type: sharegpt
69
+ conversation: chatml
70
+ - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
71
+ type: sharegpt
72
+ conversation: chatml
73
+ - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
74
+ type: sharegpt
75
+ conversation: chatml
76
+ - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
77
+ type: sharegpt
78
+ conversation: chatml
79
+ - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
80
+ type: sharegpt
81
+ conversation: chatml
82
+ - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
83
+ type: sharegpt
84
+ conversation: chatml
85
+ - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
86
+ type: sharegpt
87
+ conversation: chatml
88
+ - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
89
+ type: sharegpt
90
+ conversation: chatml
91
+ - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
92
+ type: sharegpt
93
+ conversation: chatml
94
+ - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
95
+ type: sharegpt
96
+ conversation: chatml
97
+
98
+ chat_template: chatml
99
+ dataset_prepared_path: yi34b-prepared
100
+ val_set_size: 0.01
101
+ output_dir: ./1x22b-out
102
+
103
+ # adapter: qlora
104
+ # lora_r: 16
105
+ # lora_alpha: 16
106
+ # lora_modules_to_save: [embed_tokens, lm_head]
107
+ # lora_dropout: 0.05
108
+ # lora_target_linear: true
109
+
110
+ # unfrozen_parameters:
111
+ # - ^lm_head.weight$
112
+ # - ^model.embed_tokens.weight$
113
+ # # input_layernorm layers
114
+ # - model.layers.0.input_layernorm
115
+ # - model.layers.1.input_layernorm
116
+ # - model.layers.2.input_layernorm
117
+ # - model.layers.3.input_layernorm
118
+ # - model.layers.4.input_layernorm
119
+ # - model.layers.5.input_layernorm
120
+ # - model.layers.6.input_layernorm
121
+ # - model.layers.7.input_layernorm
122
+ # - model.layers.8.input_layernorm
123
+ # - model.layers.9.input_layernorm
124
+ # - model.layers.10.input_layernorm
125
+ # - model.layers.11.input_layernorm
126
+ # - model.layers.12.input_layernorm
127
+ # - model.layers.13.input_layernorm
128
+ # - model.layers.14.input_layernorm
129
+ # - model.layers.15.input_layernorm
130
+ # - model.layers.16.input_layernorm
131
+ # - model.layers.17.input_layernorm
132
+ # - model.layers.18.input_layernorm
133
+ # - model.layers.19.input_layernorm
134
+ # - model.layers.20.input_layernorm
135
+ # - model.layers.21.input_layernorm
136
+ # - model.layers.22.input_layernorm
137
+ # - model.layers.23.input_layernorm
138
+ # # lm_head layers
139
+ # # mlp.down_proj layers
140
+ # - model.layers.17.mlp.down_proj
141
+ # - model.layers.18.mlp.down_proj
142
+ # - model.layers.19.mlp.down_proj
143
+ # - model.layers.20.mlp.down_proj
144
+ # - model.layers.21.mlp.down_proj
145
+ # - model.layers.22.mlp.down_proj
146
+ # - model.layers.23.mlp.down_proj
147
+ # - model.layers.24.mlp.down_proj
148
+ # - model.layers.25.mlp.down_proj
149
+ # - model.layers.26.mlp.down_proj
150
+ # - model.layers.27.mlp.down_proj
151
+ # - model.layers.28.mlp.down_proj
152
+ # - model.layers.29.mlp.down_proj
153
+ # - model.layers.30.mlp.down_proj
154
+ # - model.layers.31.mlp.down_proj
155
+ # - model.layers.32.mlp.down_proj
156
+ # - model.layers.33.mlp.down_proj
157
+ # - model.layers.34.mlp.down_proj
158
+ # - model.layers.35.mlp.down_proj
159
+ # - model.layers.36.mlp.down_proj
160
+ # - model.layers.37.mlp.down_proj
161
+ # - model.layers.38.mlp.down_proj
162
+ # - model.layers.39.mlp.down_proj
163
+ # - model.layers.40.mlp.down_proj
164
+ # # mlp.gate_proj layers
165
+ # - model.layers.51.mlp.gate_proj
166
+ # - model.layers.50.mlp.gate_proj
167
+ # - model.layers.53.mlp.gate_proj
168
+ # - model.layers.52.mlp.gate_proj
169
+ # - model.layers.49.mlp.gate_proj
170
+ # - model.layers.45.mlp.gate_proj
171
+ # - model.layers.46.mlp.gate_proj
172
+ # - model.layers.47.mlp.gate_proj
173
+ # - model.layers.57.mlp.gate_proj
174
+ # - model.layers.48.mlp.gate_proj
175
+ # - model.layers.56.mlp.gate_proj
176
+ # - model.layers.41.mlp.gate_proj
177
+ # - model.layers.54.mlp.gate_proj
178
+ # - model.layers.43.mlp.gate_proj
179
+ # - model.layers.44.mlp.gate_proj
180
+ # - model.layers.60.mlp.gate_proj
181
+ # - model.layers.55.mlp.gate_proj
182
+ # - model.layers.40.mlp.gate_proj
183
+ # - model.layers.42.mlp.gate_proj
184
+ # - model.layers.58.mlp.gate_proj
185
+ # - model.layers.36.mlp.gate_proj
186
+ # - model.layers.37.mlp.gate_proj
187
+ # - model.layers.38.mlp.gate_proj
188
+ # - model.layers.39.mlp.gate_proj
189
+ # # mlp.up_proj layers
190
+ # - model.layers.50.mlp.up_proj
191
+ # - model.layers.51.mlp.up_proj
192
+ # - model.layers.41.mlp.up_proj
193
+ # - model.layers.49.mlp.up_proj
194
+ # - model.layers.43.mlp.up_proj
195
+ # - model.layers.44.mlp.up_proj
196
+ # - model.layers.40.mlp.up_proj
197
+ # - model.layers.45.mlp.up_proj
198
+ # - model.layers.47.mlp.up_proj
199
+ # - model.layers.48.mlp.up_proj
200
+ # - model.layers.46.mlp.up_proj
201
+ # - model.layers.42.mlp.up_proj
202
+ # - model.layers.39.mlp.up_proj
203
+ # - model.layers.36.mlp.up_proj
204
+ # - model.layers.37.mlp.up_proj
205
+ # - model.layers.38.mlp.up_proj
206
+ # - model.layers.56.mlp.up_proj
207
+ # - model.layers.57.mlp.up_proj
208
+ # - model.layers.53.mlp.up_proj
209
+ # - model.layers.31.mlp.up_proj
210
+ # - model.layers.32.mlp.up_proj
211
+ # - model.layers.34.mlp.up_proj
212
+ # - model.layers.35.mlp.up_proj
213
+ # - model.layers.33.mlp.up_proj
214
+ # # model.embed_tokens layers
215
+ # # model.norm layers
216
+ # # post_attention_layernorm layers
217
+ # - model.layers.0.post_attention_layernorm
218
+ # - model.layers.1.post_attention_layernorm
219
+ # - model.layers.2.post_attention_layernorm
220
+ # - model.layers.3.post_attention_layernorm
221
+ # - model.layers.4.post_attention_layernorm
222
+ # - model.layers.5.post_attention_layernorm
223
+ # - model.layers.6.post_attention_layernorm
224
+ # - model.layers.7.post_attention_layernorm
225
+ # - model.layers.8.post_attention_layernorm
226
+ # - model.layers.9.post_attention_layernorm
227
+ # - model.layers.10.post_attention_layernorm
228
+ # - model.layers.11.post_attention_layernorm
229
+ # - model.layers.12.post_attention_layernorm
230
+ # - model.layers.13.post_attention_layernorm
231
+ # - model.layers.14.post_attention_layernorm
232
+ # - model.layers.15.post_attention_layernorm
233
+ # - model.layers.16.post_attention_layernorm
234
+ # - model.layers.17.post_attention_layernorm
235
+ # - model.layers.18.post_attention_layernorm
236
+ # - model.layers.19.post_attention_layernorm
237
+ # - model.layers.20.post_attention_layernorm
238
+ # - model.layers.21.post_attention_layernorm
239
+ # - model.layers.22.post_attention_layernorm
240
+ # - model.layers.23.post_attention_layernorm
241
+ # # self_attn.k_proj layers
242
+ # - model.layers.42.self_attn.k_proj
243
+ # - model.layers.41.self_attn.k_proj
244
+ # - model.layers.39.self_attn.k_proj
245
+ # - model.layers.35.self_attn.k_proj
246
+ # - model.layers.28.self_attn.k_proj
247
+ # - model.layers.79.self_attn.k_proj
248
+ # - model.layers.43.self_attn.k_proj
249
+ # - model.layers.32.self_attn.k_proj
250
+ # - model.layers.73.self_attn.k_proj
251
+ # - model.layers.31.self_attn.k_proj
252
+ # - model.layers.29.self_attn.k_proj
253
+ # - model.layers.76.self_attn.k_proj
254
+ # - model.layers.30.self_attn.k_proj
255
+ # - model.layers.40.self_attn.k_proj
256
+ # - model.layers.33.self_attn.k_proj
257
+ # - model.layers.78.self_attn.k_proj
258
+ # - model.layers.34.self_attn.k_proj
259
+ # - model.layers.37.self_attn.k_proj
260
+ # - model.layers.45.self_attn.k_proj
261
+ # - model.layers.44.self_attn.k_proj
262
+ # - model.layers.71.self_attn.k_proj
263
+ # - model.layers.26.self_attn.k_proj
264
+ # - model.layers.74.self_attn.k_proj
265
+ # - model.layers.27.self_attn.k_proj
266
+ # # self_attn.o_proj layers
267
+ # - model.layers.35.self_attn.o_proj
268
+ # - model.layers.34.self_attn.o_proj
269
+ # - model.layers.37.self_attn.o_proj
270
+ # - model.layers.33.self_attn.o_proj
271
+ # - model.layers.31.self_attn.o_proj
272
+ # - model.layers.27.self_attn.o_proj
273
+ # - model.layers.38.self_attn.o_proj
274
+ # - model.layers.24.self_attn.o_proj
275
+ # - model.layers.39.self_attn.o_proj
276
+ # - model.layers.43.self_attn.o_proj
277
+ # - model.layers.29.self_attn.o_proj
278
+ # - model.layers.0.self_attn.o_proj
279
+ # - model.layers.50.self_attn.o_proj
280
+ # - model.layers.32.self_attn.o_proj
281
+ # - model.layers.45.self_attn.o_proj
282
+ # - model.layers.30.self_attn.o_proj
283
+ # - model.layers.60.self_attn.o_proj
284
+ # - model.layers.23.self_attn.o_proj
285
+ # - model.layers.18.self_attn.o_proj
286
+ # - model.layers.67.self_attn.o_proj
287
+ # - model.layers.57.self_attn.o_proj
288
+ # - model.layers.20.self_attn.o_proj
289
+ # - model.layers.76.self_attn.o_proj
290
+ # - model.layers.28.self_attn.o_proj
291
+ # # self_attn.q_proj layers
292
+ # - model.layers.1.self_attn.q_proj
293
+ # - model.layers.6.self_attn.q_proj
294
+ # - model.layers.0.self_attn.q_proj
295
+ # - model.layers.5.self_attn.q_proj
296
+ # - model.layers.2.self_attn.q_proj
297
+ # - model.layers.7.self_attn.q_proj
298
+ # - model.layers.3.self_attn.q_proj
299
+ # - model.layers.4.self_attn.q_proj
300
+ # - model.layers.8.self_attn.q_proj
301
+ # - model.layers.9.self_attn.q_proj
302
+ # - model.layers.61.self_attn.q_proj
303
+ # - model.layers.10.self_attn.q_proj
304
+ # - model.layers.62.self_attn.q_proj
305
+ # - model.layers.36.self_attn.q_proj
306
+ # - model.layers.15.self_attn.q_proj
307
+ # - model.layers.11.self_attn.q_proj
308
+ # - model.layers.17.self_attn.q_proj
309
+ # - model.layers.60.self_attn.q_proj
310
+ # - model.layers.63.self_attn.q_proj
311
+ # - model.layers.64.self_attn.q_proj
312
+ # - model.layers.29.self_attn.q_proj
313
+ # - model.layers.30.self_attn.q_proj
314
+ # - model.layers.55.self_attn.q_proj
315
+ # - model.layers.34.self_attn.q_proj
316
+ # # self_attn.v_proj layers
317
+ # - model.layers.12.self_attn.v_proj
318
+ # - model.layers.16.self_attn.v_proj
319
+ # - model.layers.18.self_attn.v_proj
320
+ # - model.layers.19.self_attn.v_proj
321
+ # - model.layers.20.self_attn.v_proj
322
+ # - model.layers.21.self_attn.v_proj
323
+ # - model.layers.22.self_attn.v_proj
324
+ # - model.layers.23.self_attn.v_proj
325
+ # - model.layers.24.self_attn.v_proj
326
+ # - model.layers.25.self_attn.v_proj
327
+ # - model.layers.26.self_attn.v_proj
328
+ # - model.layers.27.self_attn.v_proj
329
+ # - model.layers.28.self_attn.v_proj
330
+ # - model.layers.29.self_attn.v_proj
331
+ # - model.layers.30.self_attn.v_proj
332
+ # - model.layers.31.self_attn.v_proj
333
+ # - model.layers.32.self_attn.v_proj
334
+ # - model.layers.33.self_attn.v_proj
335
+ # - model.layers.34.self_attn.v_proj
336
+ # - model.layers.35.self_attn.v_proj
337
+ # - model.layers.36.self_attn.v_proj
338
+ # - model.layers.37.self_attn.v_proj
339
+ # - model.layers.38.self_attn.v_proj
340
+ # - model.layers.39.self_attn.v_proj
341
+
342
+
343
+
344
+ sequence_len: 16384
345
+ sample_packing: true
346
+ pad_to_sequence_len: true
347
+
348
+ # adapter: lora
349
+ # lora_model_dir:
350
+ # lora_r: 32
351
+ # lora_alpha: 16
352
+ # lora_dropout: 0.05
353
+ # lora_target_linear: true
354
+ # lora_fan_in_fan_out:
355
+
356
+ wandb_project: dolphin-mixtral1x22b
357
+ wandb_entity:
358
+ wandb_watch:
359
+ wandb_name:
360
+ wandb_log_model:
361
+
362
+ gradient_accumulation_steps: 8
363
+ micro_batch_size: 1
364
+ num_epochs: 3
365
+ optimizer: adamw_8bit
366
+ lr_scheduler: cosine
367
+ learning_rate: 1e-5
368
+
369
+ train_on_inputs: false
370
+ group_by_length: false
371
+ bf16: auto
372
+ fp16:
373
+ tf32: false
374
+
375
+ gradient_checkpointing: true
376
+ early_stopping_patience:
377
+ resume_from_checkpoint: /workspace/axolotl2/axolotl/1x22b-out/checkpoint-507
378
+ local_rank:
379
+ logging_steps: 1
380
+ xformers_attention:
381
+ flash_attention: true
382
+
383
+ warmup_steps: 10
384
+ evals_per_epoch: 4
385
+ eval_table_size:
386
+ eval_max_new_tokens: 128
387
+ saves_per_epoch: 4
388
+ save_total_limit: 2
389
+ debug:
390
+ deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
391
+ weight_decay: 0.01
392
+ fsdp:
393
+ fsdp_config:
394
+ special_tokens:
395
+ eos_token: "<|im_end|>"
396
+ bos_token: "<s>"
397
+ # pad_token: "<unk>"
398
+ unk_token: "<unk>"
399
+ tokens:
400
+ - "<|im_start|>"
401
+
402
+
403
+
404
+
405
+
406
+ ```
407
+
408
+ </details><br>
+
+ # 1x22b-out
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 10
+ - num_epochs: 3
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.9818 | 0.0015 | 1 | 0.9854 |
+ | 0.4783 | 0.2499 | 169 | 0.5042 |
+ | 0.464 | 0.4997 | 338 | 0.4755 |
+ | 0.4561 | 0.7496 | 507 | 0.4593 |
+ | 0.3981 | 0.9994 | 676 | 0.4553 |
+ | 0.3725 | 1.2378 | 845 | 0.4525 |
+ | 0.3624 | 1.4877 | 1014 | 0.4457 |
+ | 0.359 | 1.7376 | 1183 | 0.4393 |
+ | 0.375 | 1.9874 | 1352 | 0.4345 |
+ | 0.2899 | 2.2260 | 1521 | 0.4488 |
+ | 0.2848 | 2.4759 | 1690 | 0.4473 |
+ | 0.2935 | 2.7257 | 1859 | 0.4470 |
+ | 0.2065 | 2.9756 | 2028 | 0.4572 |
+
+ ### Framework versions
+
+ - Transformers 4.40.2
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "<|im_end|>": 32000,
+   "<|im_start|>": 32001
+ }
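
These two added tokens are the ChatML role delimiters (ids 32000 and 32001). For reference, a ChatML-formatted prompt, which the note at the top of this README reports as currently broken for this model, is laid out as in the sketch below; the system prompt text is a placeholder.

```python
# ChatML layout used in the Dolphin training data; <|im_start|> / <|im_end|>
# are the added tokens above (ids 32001 and 32000). System text is a placeholder.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are Dolphin, a helpful assistant.", "Hello!"))
```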
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "dolphin-2.9.1-mixtral-1x22b",
+   "architectures": [
+     "MistralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 32000,
+   "hidden_act": "silu",
+   "hidden_size": 6144,
+   "initializer_range": 0.02,
+   "intermediate_size": 16384,
+   "max_position_embeddings": 65536,
+   "model_type": "mistral",
+   "num_attention_heads": 48,
+   "num_hidden_layers": 56,
+   "num_key_value_heads": 8,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.40.1",
+   "use_cache": false,
+   "vocab_size": 32002
+ }
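
Since the converted checkpoint is declared as a plain `MistralForCausalLM`, it should load with the stock Transformers classes. A minimal sketch follows; the repository id is a placeholder, and the Alpaca-style prompt reflects the note above that ChatML is currently broken.

```python
# Minimal loading sketch; "REPO_ID" is a placeholder for this repository's id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "REPO_ID"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt, since the note above reports ChatML as broken.
prompt = "### Instruction:\nWrite one sentence about dolphins.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```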
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "do_sample": true,
+   "eos_token_id": 2,
+   "transformers_version": "4.40.2"
+ }
model.safetensors.index.json ADDED
@@ -0,0 +1,514 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 44475740160
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "model-00009-of-00009.safetensors",
7
+ "model.embed_tokens.weight": "model-00001-of-00009.safetensors",
8
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00009.safetensors",
9
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
10
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
11
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
12
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
13
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
14
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
15
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
16
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
17
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00009.safetensors",
18
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
19
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
20
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
21
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
22
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
23
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
24
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
25
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
26
+ "model.layers.10.input_layernorm.weight": "model-00002-of-00009.safetensors",
27
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
28
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
29
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
30
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
31
+ "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
32
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
33
+ "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
34
+ "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
35
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00009.safetensors",
36
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
37
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
38
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
39
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
40
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
41
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
42
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
43
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
44
+ "model.layers.12.input_layernorm.weight": "model-00003-of-00009.safetensors",
45
+ "model.layers.12.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
46
+ "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
47
+ "model.layers.12.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
48
+ "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
49
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
50
+ "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
51
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
52
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
53
+ "model.layers.13.input_layernorm.weight": "model-00003-of-00009.safetensors",
54
+ "model.layers.13.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
55
+ "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
56
+ "model.layers.13.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
57
+ "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
58
+ "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
59
+ "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
60
+ "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
61
+ "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
62
+ "model.layers.14.input_layernorm.weight": "model-00003-of-00009.safetensors",
63
+ "model.layers.14.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
64
+ "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
65
+ "model.layers.14.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
66
+ "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
67
+ "model.layers.14.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
68
+ "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
69
+ "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
70
+ "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
71
+ "model.layers.15.input_layernorm.weight": "model-00003-of-00009.safetensors",
72
+ "model.layers.15.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
73
+ "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
74
+ "model.layers.15.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
75
+ "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
76
+ "model.layers.15.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
77
+ "model.layers.15.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
78
+ "model.layers.15.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
79
+ "model.layers.15.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
80
+ "model.layers.16.input_layernorm.weight": "model-00003-of-00009.safetensors",
81
+ "model.layers.16.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
82
+ "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
83
+ "model.layers.16.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
84
+ "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
85
+ "model.layers.16.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
86
+ "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
87
+ "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
88
+ "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
89
+ "model.layers.17.input_layernorm.weight": "model-00003-of-00009.safetensors",
90
+ "model.layers.17.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
91
+ "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
92
+ "model.layers.17.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
93
+ "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
94
+ "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
95
+ "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
96
+ "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
97
+ "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
98
+ "model.layers.18.input_layernorm.weight": "model-00004-of-00009.safetensors",
99
+ "model.layers.18.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
100
+ "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
101
+ "model.layers.18.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
102
+ "model.layers.18.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
103
+ "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
104
+ "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
105
+ "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
106
+ "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
107
+ "model.layers.19.input_layernorm.weight": "model-00004-of-00009.safetensors",
108
+ "model.layers.19.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
109
+ "model.layers.19.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
110
+ "model.layers.19.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
111
+ "model.layers.19.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
112
+ "model.layers.19.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
113
+ "model.layers.19.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
114
+ "model.layers.19.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
115
+ "model.layers.19.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
116
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00009.safetensors",
117
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
118
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
119
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
120
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
121
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
122
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
123
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
124
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
125
+ "model.layers.20.input_layernorm.weight": "model-00004-of-00009.safetensors",
126
+ "model.layers.20.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
127
+ "model.layers.20.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
128
+ "model.layers.20.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
129
+ "model.layers.20.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
130
+ "model.layers.20.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
131
+ "model.layers.20.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
132
+ "model.layers.20.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
133
+ "model.layers.20.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
134
+ "model.layers.21.input_layernorm.weight": "model-00004-of-00009.safetensors",
135
+ "model.layers.21.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
136
+ "model.layers.21.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
137
+ "model.layers.21.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
138
+ "model.layers.21.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
139
+ "model.layers.21.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
140
+ "model.layers.21.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
141
+ "model.layers.21.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
142
+ "model.layers.21.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
143
+ "model.layers.22.input_layernorm.weight": "model-00004-of-00009.safetensors",
144
+ "model.layers.22.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
145
+ "model.layers.22.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
146
+ "model.layers.22.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
147
+ "model.layers.22.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
148
+ "model.layers.22.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
149
+ "model.layers.22.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
150
+ "model.layers.22.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
151
+ "model.layers.22.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
152
+ "model.layers.23.input_layernorm.weight": "model-00004-of-00009.safetensors",
153
+ "model.layers.23.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
154
+ "model.layers.23.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
155
+ "model.layers.23.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
156
+ "model.layers.23.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
157
+ "model.layers.23.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
158
+ "model.layers.23.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
159
+ "model.layers.23.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
160
+ "model.layers.23.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
161
+ "model.layers.24.input_layernorm.weight": "model-00005-of-00009.safetensors",
162
+ "model.layers.24.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
163
+ "model.layers.24.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
164
+ "model.layers.24.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
165
+ "model.layers.24.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
166
+ "model.layers.24.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
167
+ "model.layers.24.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
168
+ "model.layers.24.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
169
+ "model.layers.24.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
170
+ "model.layers.25.input_layernorm.weight": "model-00005-of-00009.safetensors",
171
+ "model.layers.25.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
172
+ "model.layers.25.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
173
+ "model.layers.25.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
174
+ "model.layers.25.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
175
+ "model.layers.25.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
176
+ "model.layers.25.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
177
+ "model.layers.25.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
178
+ "model.layers.25.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
179
+ "model.layers.26.input_layernorm.weight": "model-00005-of-00009.safetensors",
180
+ "model.layers.26.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
181
+ "model.layers.26.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
182
+ "model.layers.26.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
183
+ "model.layers.26.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
184
+ "model.layers.26.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
185
+ "model.layers.26.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
186
+ "model.layers.26.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
187
+ "model.layers.26.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
188
+ "model.layers.27.input_layernorm.weight": "model-00005-of-00009.safetensors",
189
+ "model.layers.27.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
190
+ "model.layers.27.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
191
+ "model.layers.27.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
192
+ "model.layers.27.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
193
+ "model.layers.27.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
194
+ "model.layers.27.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
195
+ "model.layers.27.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
196
+ "model.layers.27.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
197
+ "model.layers.28.input_layernorm.weight": "model-00005-of-00009.safetensors",
198
+ "model.layers.28.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
199
+ "model.layers.28.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
200
+ "model.layers.28.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
201
+ "model.layers.28.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
202
+ "model.layers.28.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
203
+ "model.layers.28.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
204
+ "model.layers.28.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
205
+ "model.layers.28.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
206
+ "model.layers.29.input_layernorm.weight": "model-00005-of-00009.safetensors",
207
+ "model.layers.29.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
208
+ "model.layers.29.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
209
+ "model.layers.29.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
210
+ "model.layers.29.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
211
+ "model.layers.29.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
212
+ "model.layers.29.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
213
+ "model.layers.29.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
214
+ "model.layers.29.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
215
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00009.safetensors",
216
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
217
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
218
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
219
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
220
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
221
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
222
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
223
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
224
+ "model.layers.30.input_layernorm.weight": "model-00005-of-00009.safetensors",
225
+ "model.layers.30.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
226
+ "model.layers.30.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
227
+ "model.layers.30.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
228
+ "model.layers.30.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
229
+ "model.layers.30.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
230
+ "model.layers.30.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
231
+ "model.layers.30.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
232
+ "model.layers.30.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
233
+ "model.layers.31.input_layernorm.weight": "model-00006-of-00009.safetensors",
234
+ "model.layers.31.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
235
+ "model.layers.31.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
236
+ "model.layers.31.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
237
+ "model.layers.31.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
238
+ "model.layers.31.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
239
+ "model.layers.31.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
240
+ "model.layers.31.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
241
+ "model.layers.31.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
242
+ "model.layers.32.input_layernorm.weight": "model-00006-of-00009.safetensors",
243
+ "model.layers.32.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
244
+ "model.layers.32.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
245
+ "model.layers.32.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
246
+ "model.layers.32.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
247
+ "model.layers.32.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
248
+ "model.layers.32.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
249
+ "model.layers.32.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
250
+ "model.layers.32.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
251
+ "model.layers.33.input_layernorm.weight": "model-00006-of-00009.safetensors",
252
+ "model.layers.33.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
253
+ "model.layers.33.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
254
+ "model.layers.33.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
255
+ "model.layers.33.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
256
+ "model.layers.33.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
257
+ "model.layers.33.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
258
+ "model.layers.33.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
259
+ "model.layers.33.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
260
+ "model.layers.34.input_layernorm.weight": "model-00006-of-00009.safetensors",
261
+ "model.layers.34.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
262
+ "model.layers.34.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
263
+ "model.layers.34.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
264
+ "model.layers.34.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
265
+ "model.layers.34.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
266
+ "model.layers.34.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
267
+ "model.layers.34.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
268
+ "model.layers.34.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
269
+ "model.layers.35.input_layernorm.weight": "model-00006-of-00009.safetensors",
270
+ "model.layers.35.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
271
+ "model.layers.35.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
272
+ "model.layers.35.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
273
+ "model.layers.35.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
274
+ "model.layers.35.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
275
+ "model.layers.35.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
276
+ "model.layers.35.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
277
+ "model.layers.35.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
278
+ "model.layers.36.input_layernorm.weight": "model-00006-of-00009.safetensors",
279
+ "model.layers.36.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
280
+ "model.layers.36.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
281
+ "model.layers.36.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
282
+ "model.layers.36.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
283
+ "model.layers.36.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
284
+ "model.layers.36.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
285
+ "model.layers.36.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
286
+ "model.layers.36.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
287
+ "model.layers.37.input_layernorm.weight": "model-00007-of-00009.safetensors",
288
+ "model.layers.37.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
289
+ "model.layers.37.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
290
+ "model.layers.37.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
291
+ "model.layers.37.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
292
+ "model.layers.37.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
293
+ "model.layers.37.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
294
+ "model.layers.37.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
295
+ "model.layers.37.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
296
+ "model.layers.38.input_layernorm.weight": "model-00007-of-00009.safetensors",
297
+ "model.layers.38.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
298
+ "model.layers.38.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
299
+ "model.layers.38.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
300
+ "model.layers.38.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
301
+ "model.layers.38.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
302
+ "model.layers.38.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
303
+ "model.layers.38.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
304
+ "model.layers.38.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
305
+ "model.layers.39.input_layernorm.weight": "model-00007-of-00009.safetensors",
306
+ "model.layers.39.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
307
+ "model.layers.39.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
308
+ "model.layers.39.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
309
+ "model.layers.39.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
310
+ "model.layers.39.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
311
+ "model.layers.39.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
312
+ "model.layers.39.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
313
+ "model.layers.39.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
314
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00009.safetensors",
315
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
316
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
317
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
318
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
319
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
320
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
321
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
322
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
323
+ "model.layers.40.input_layernorm.weight": "model-00007-of-00009.safetensors",
324
+ "model.layers.40.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
325
+ "model.layers.40.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
326
+ "model.layers.40.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
327
+ "model.layers.40.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
328
+ "model.layers.40.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
329
+ "model.layers.40.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
330
+ "model.layers.40.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
331
+ "model.layers.40.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
332
+ "model.layers.41.input_layernorm.weight": "model-00007-of-00009.safetensors",
333
+ "model.layers.41.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
334
+ "model.layers.41.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
335
+ "model.layers.41.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
336
+ "model.layers.41.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
337
+ "model.layers.41.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
338
+ "model.layers.41.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
339
+ "model.layers.41.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
340
+ "model.layers.41.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
341
+ "model.layers.42.input_layernorm.weight": "model-00007-of-00009.safetensors",
342
+ "model.layers.42.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
343
+ "model.layers.42.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
344
+ "model.layers.42.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
345
+ "model.layers.42.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
346
+ "model.layers.42.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
347
+ "model.layers.42.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
348
+ "model.layers.42.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
349
+ "model.layers.42.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
350
+ "model.layers.43.input_layernorm.weight": "model-00008-of-00009.safetensors",
351
+ "model.layers.43.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
352
+ "model.layers.43.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
353
+ "model.layers.43.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
354
+ "model.layers.43.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
355
+ "model.layers.43.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
356
+ "model.layers.43.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
357
+ "model.layers.43.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
358
+ "model.layers.43.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
359
+ "model.layers.44.input_layernorm.weight": "model-00008-of-00009.safetensors",
360
+ "model.layers.44.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
361
+ "model.layers.44.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
362
+ "model.layers.44.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
363
+ "model.layers.44.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
364
+ "model.layers.44.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
365
+ "model.layers.44.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
366
+ "model.layers.44.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
367
+ "model.layers.44.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
368
+ "model.layers.45.input_layernorm.weight": "model-00008-of-00009.safetensors",
369
+ "model.layers.45.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
370
+ "model.layers.45.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
371
+ "model.layers.45.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
372
+ "model.layers.45.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
373
+ "model.layers.45.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
374
+ "model.layers.45.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
375
+ "model.layers.45.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
376
+ "model.layers.45.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
377
+ "model.layers.46.input_layernorm.weight": "model-00008-of-00009.safetensors",
378
+ "model.layers.46.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
379
+ "model.layers.46.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
380
+ "model.layers.46.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.46.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
+ "model.layers.46.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.46.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.46.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.46.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.47.input_layernorm.weight": "model-00008-of-00009.safetensors",
+ "model.layers.47.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.47.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.47.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.47.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
+ "model.layers.47.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.47.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.47.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.47.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.48.input_layernorm.weight": "model-00008-of-00009.safetensors",
+ "model.layers.48.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.48.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.48.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.48.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
+ "model.layers.48.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.48.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.48.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.48.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.49.input_layernorm.weight": "model-00008-of-00009.safetensors",
+ "model.layers.49.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.49.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.49.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.49.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
+ "model.layers.49.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.49.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.49.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.49.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.5.input_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.5.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.50.input_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.50.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.50.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.50.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.50.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.50.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.50.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.50.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.50.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
+ "model.layers.51.input_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.51.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.51.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.51.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.51.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.51.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.51.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.51.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.51.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.52.input_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.52.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.52.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.52.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.52.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.52.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.52.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.52.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.52.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.53.input_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.53.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.53.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.53.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.53.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.53.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.53.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.53.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.53.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.54.input_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.54.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.54.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.54.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.54.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.54.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.54.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.54.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.54.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.55.input_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.55.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.55.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.55.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.55.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
+ "model.layers.55.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.55.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.55.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.55.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
+ "model.layers.6.input_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.6.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.6.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.7.input_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.7.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.7.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.8.input_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.8.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.8.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.9.input_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.9.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.9.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
+ "model.norm.weight": "model-00009-of-00009.safetensors"
+ }
+ }
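
For reference, the `weight_map` above is what tells loaders which shard file holds each tensor. A minimal sketch of reading a single tensor directly, assuming the index and shard files have already been downloaded locally (file paths are illustrative):

```python
# Sketch only: resolve a tensor name to its shard via the index, then read it.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.norm.weight"
shard = index["weight_map"][name]  # e.g. "model-00009-of-00009.safetensors"

with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)
print(name, tuple(tensor.shape))
```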
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
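
Note that the file above is a Git LFS pointer, not the SentencePiece model itself; the actual blob is fetched by LFS on clone or download. A minimal sketch for verifying a downloaded `tokenizer.model` against the size and digest recorded in the pointer:

```python
# Sketch only: compare the local file with the LFS pointer's recorded metadata.
import hashlib
from pathlib import Path

data = Path("tokenizer.model").read_bytes()
assert len(data) == 493443
assert hashlib.sha256(data).hexdigest() == (
    "dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055"
)
```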
tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32000": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32001": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [],
+ "bos_token": "<s>",
+ "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "legacy": true,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "</s>",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
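
The `chat_template` defined above is ChatML-style, wrapping each turn in `<|im_start|>`/`<|im_end|>`. A minimal sketch of rendering a prompt with it, where `"path-to-this-repo"` is a placeholder for a local checkout or the repository id:

```python
# Sketch only: apply the ChatML chat_template from tokenizer_config.json.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path-to-this-repo")
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Per the template, each message renders as:
# <|im_start|>{role}\n{content}<|im_end|>\n
# followed by "<|im_start|>assistant\n" when add_generation_prompt=True.
```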