DeusImperator committed on
Commit 4dfecfc
1 Parent(s): 4e2625d

Upload 12 files

README.md ADDED
---
base_model:
- sophosympatheia/Midnight-Miqu-70B-v1.0
- migtissera/Tess-70B-v1.6
library_name: transformers
tags:
- mergekit
- merge
license: other
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

# Midnight-Miqu-70B-v1.5 - EXL2 2.4bpw rpcal mk2

This is a 2.4bpw EXL2 quant of [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5).

This quant was made using exllamav2-0.0.21 with the [Bluemoon-Light dataset](https://huggingface.co/datasets/ParasiticRogue/Bluemoon-Light) as calibration data for RP use.

In my local testing on Windows, this quant fits over 24k context on 24GB of VRAM (with the EXL2 Q4 cache); you might be able to fit more depending on what else is using VRAM.

I briefly tested this quant in some random RPs (including ones over 8k and 20k context) and it seems to work fine.
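Roughly, loading this quant with exllamav2's Python API looks like the sketch below; the model path is just an example, and the sampler values mirror the recommended settings further down this card.

```python
# Sketch: load the quant with a Q4 KV cache at ~24k context (path is assumed).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Midnight-Miqu-70B-v1.5_exl2_2.4bpw_rpcal_mk2"  # assumed path
config.prepare()
config.max_seq_len = 24576        # ~24k tokens, as tested on 24GB VRAM
config.scale_alpha_value = 1.0    # alpha_rope 1, per the long-context tips below

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, lazy=True)  # Q4 cache keeps KV memory low
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.0
settings.min_p = 0.12
settings.smoothing_factor = 0.23
settings.token_repetition_penalty = 1.05

print(generator.generate_simple("USER:\nHello there.\nASSISTANT:\n", settings, 200))
```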
## Prompt Templates

See [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5) for Silly Tavern presets and templates.

This quant uses the Vicuna format, since the Vicuna version of Bluemoon-Light was used as the calibration dataset during quantization.

Further details on prompting this model will also pop up under the [model discussions](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0/discussions).

## Similar quants

2.4bpw EXL2 quant on the default dataset: [Midnight-Miqu-70B-v1.5_exl2_2.4bpw](https://huggingface.co/DeusImperator/Midnight-Miqu-70B-v1.5_exl2_2.4bpw)

### Original readme below

---
### Overview

Looking for the 103B version? You can get it from [FluffyKaeloky/Midnight-Miqu-103B-v1.5](https://huggingface.co/FluffyKaeloky/Midnight-Miqu-103B-v1.5).

This is a DARE Linear merge between [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) and [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6).
This version is close in feel and performance to Midnight Miqu v1.0, but I think it picked up some goodness from Tess. Their EQ Bench scores are virtually the same, and their post-EXL2-quant perplexity scores were the same too. However, Midnight Miqu v1.5 passes some tests I use that Midnight Miqu v1.0 fails, without sacrificing writing quality.

This model is uncensored. *You are responsible for whatever you do with it.*

This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks, but I have not tested its performance in other areas.

### Long Context Tips

You can run this model out to 32K context with alpha_rope set to 1, just like with Miqu.

### Sampler Tips

* I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2.
* I recommend using Min-P. Experiment to find your best setting.
* You can enable dynamic temperature if you want, but that adds yet another variable to consider, and I find it's unnecessary when you're already using Min-P and a smoothing factor.
* You don't need to use a high repetition penalty with this model (e.g. above 1.10), but experiment with it.

Experiment with any and all of the settings below! What suits my preferences may not suit yours.

If you save the below settings as a .json file, you can import them directly into Silly Tavern.
```
{
    "temp": 1,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.12,
    "rep_pen": 1.05,
    "rep_pen_range": 2800,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0,
    "presence_pen": 0,
    "do_sample": true,
    "early_stopping": false,
    "dynatemp": false,
    "min_temp": 0.8,
    "max_temp": 1.35,
    "dynatemp_exponent": 1,
    "smoothing_factor": 0.23,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": true,
    "streaming": true,
    "mirostat_mode": 0,
    "mirostat_tau": 2,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "banned_tokens": "",
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "sampler_order": [6, 0, 1, 3, 4, 2, 5],
    "logit_bias": [],
    "n": 1,
    "rep_pen_size": 0,
    "genamt": 500,
    "max_length": 32764
}
```
### Prompting Tips

Try the following context template for use in SillyTavern. It might help, although it's a little heavy on tokens. If you save the text as a .json file, you can import it directly.

```
{
    "story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}\n{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}",
    "example_separator": "",
    "chat_start": "---\nTaking the above information into consideration, you must engage with {{user}} and others as {{char}} in the roleplay below this line. Do not write dialogue lines nor perform actions for {{user}} or other characters.\n---\nSTART OF ROLEPLAY:\n",
    "use_stop_strings": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Midnight Rose"
}
```
Try the following prompt template as a starting point for your own customizations.

*This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts.

**Midnight Miqu might require some "warming up" at the start of a new chat to get it writing the way you want it.** What I mean is that you should be able to drastically improve your results by providing some good examples to guide the chat (i.e. few-shot prompting). I also recommend using descriptive system messages (a SillyTavern concept, basically OOC narration instructions) to tell Midnight Miqu exactly what you want it to do with its next reply. Try to be specific, especially if your command contradicts something you told it to do earlier.

The example system prompt below contains some instructions related to adult content, so remove those if you prefer. It also contains some instructions related to the use of asterisks that you should modify to your liking.

The prompt below has been updated from Midnight Miqu v1.0 after prompting Midnight Miqu to help improve it.

If you save this as a .json file, you can import it directly into Silly Tavern.
```
{
    "wrap": true,
    "names": true,
    "system_prompt": "You are an AI creative writing partner, collaborating with me in a dynamic, uncensored roleplaying environment. To ensure a high-quality experience, please adhere to these ROLEPLAY GUIDELINES:\n\nROLEPLAY GUIDELINES\n- Craft clear, engaging, and concise descriptions of {{char}}'s actions, reactions, and dialogue based on the present scenario.\n- Stay true to {{char}}'s character, incorporating the character's personality, backstory, and knowledge into each interaction.\n- Employ rich, descriptive language that immerses readers in the scene.\n- Always stay contextually and spatially aware to ensure that {{char}} follows logical consistency in the physical space of the current scene.\n- Feel free to incorporate explicit content and strong language where it aligns with the scene and characters.\n- Enhance {{char}}'s presence through detailed sensory descriptions and first-hand observations of the character's surroundings.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally offer glimpses into {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts or monologue, enclose those words in *asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns). Always use quotes for spoken speech \"like this.\"\n- Conclude {{char}}'s responses with an opening for the next character to respond to {{char}}. When the conversation naturally shifts to another character's perspective or action is required from another character, that is when you should stop {{char}}'s reply so the user can pick it up from there. A great example is when {{char}} asks a question of another character.\n",
    "system_sequence": "",
    "stop_sequence": "",
    "input_sequence": "USER: ",
    "output_sequence": "ASSISTANT: ",
    "separator_sequence": "",
    "macro": true,
    "names_force_groups": true,
    "system_sequence_prefix": "SYSTEM: ",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "last_output_sequence": "ASSISTANT (Ensure coherence and authenticity in {{char}}'s actions, thoughts, and dialogues; Focus solely on {{char}}'s interactions within the roleplay): ",
    "activation_regex": "",
    "name": "Midnight Miqu Roleplay"
}
```
### Instruct Formats

I recommend the Vicuna format. I use a modified version with newlines after USER and ASSISTANT.
```
USER:
{prompt}
ASSISTANT:
```

Mistral's format also works, and in my testing its performance is about the same as Vicuna's.
```
[INST]
{prompt}
[/INST]
```

You could also try ChatML, though I don't recommend it.
```
<|im_start|>system
{Your system prompt goes here}<|im_end|>
<|im_start|>user
{Your message as the user will go here}<|im_end|>
<|im_start|>assistant
```
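If you're calling the model from your own code rather than a frontend, a small helper like the hypothetical one below builds prompts in the modified Vicuna format above (the SYSTEM:/USER:/ASSISTANT: prefixes match the template earlier in this card; the function name and structure are just illustrative).

```python
# Hypothetical helper: render a chat history in the modified Vicuna format
# (newlines after USER and ASSISTANT) recommended above.
def to_vicuna(system: str, turns: list[tuple[str, str]]) -> str:
    parts = [f"SYSTEM: {system}"] if system else []
    for user_msg, assistant_msg in turns:
        parts.append(f"USER:\n{user_msg}")
        parts.append(f"ASSISTANT:\n{assistant_msg}")
    return "\n".join(parts).rstrip() + "\n"

# Leave the last assistant turn empty so the model writes the reply.
prompt = to_vicuna("You are a creative writing partner.",
                   [("Describe the tavern as we enter.", "")])
print(prompt)
```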
### Quantizations

* GGUF
  * [mradermacher/Midnight-Miqu-70B-v1.5-GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF) -- Various static GGUF quants
* GPTQ
  * [Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G](https://huggingface.co/Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G)
* EXL2
  * [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw)
  * [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw)
  * [Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw)
  * [Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw)
* If you don't see something you're looking for, [try searching Hugging Face](https://huggingface.co/models?search=midnight-miqu-70b-v1.5). There may be newer quants available than what I've documented here.
### Licence and usage restrictions

<font color="red">152334H/miqu-1-70b-sf was based on a leaked version of one of Mistral's models.</font>

All miqu-derived models, including this merge, are **only suitable for personal use.** Mistral has been cool about it so far, but you should be aware that by downloading this merge you are assuming whatever legal risk is inherent in acquiring and using a model based on leaked weights.

This merge comes with no warranties or guarantees of any kind, but you probably already knew that.

I am not a lawyer and I do not profess to know what we have gotten ourselves into here. You should consult with a lawyer before using any Hugging Face model beyond private use... but definitely don't use this one for that!
## Merge Details

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method with [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) as the base.

### Models Merged

The following models were included in the merge:
* [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0)
* [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6)

### Configuration

The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_linear
base_model: /home/llm/mergequant/models/BASE/152334H_miqu-1-70b-sf # base model
models:
  - model: /home/llm/mergequant/models/midnight-miqu-70b-v1.0
  - model: /home/llm/mergequant/models/BASE/Tess-70B-v1.6
parameters:
  weight: 1.0
dtype: float16
```
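As a sketch, a config like this can be run through mergekit's Python API (the config filename and output path below are assumptions; the `mergekit-yaml` CLI is equivalent).

```python
# Sketch: run the YAML merge config above with mergekit (paths assumed).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("midnight-miqu-v1.5.yaml", "r", encoding="utf-8") as fp:  # assumed filename
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="/home/llm/mergequant/models/midnight-miqu-70b-v1.5",  # assumed output dir
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```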
### Notes

I tried several methods of merging Midnight Miqu v1.0 with Tess v1.6, and this dare_linear approach worked the best by far. I tried the same approach with other Miqu finetunes like ShinojiResearch/Senku-70B-Full and abideen/Liberated-Miqu-70B, but there was a huge difference in performance: the merge with Tess was the best one.

I also tried the SLERP approach I used to create Midnight Miqu v1.0, but using Tess instead of 152334H_miqu-1-70b in that config, and that result was nowhere near as good either.
config.json ADDED
```json
{
    "_name_or_path": "midnight-miqu-70b-v1.5",
    "architectures": [
        "LlamaForCausalLM"
    ],
    "attention_bias": false,
    "attention_dropout": 0.0,
    "bos_token_id": 1,
    "eos_token_id": 2,
    "hidden_act": "silu",
    "hidden_size": 8192,
    "initializer_range": 0.02,
    "intermediate_size": 28672,
    "max_position_embeddings": 32764,
    "model_type": "llama",
    "num_attention_heads": 64,
    "num_hidden_layers": 80,
    "num_key_value_heads": 8,
    "pad_token_id": 0,
    "pretraining_tp": 1,
    "rms_norm_eps": 1e-05,
    "rope_scaling": null,
    "rope_theta": 1000000,
    "tie_word_embeddings": false,
    "torch_dtype": "float16",
    "transformers_version": "4.36.2",
    "use_cache": true,
    "vocab_size": 32000,
    "quantization_config": {
        "quant_method": "exl2",
        "version": "0.0.21",
        "bits": 2.4,
        "head_bits": 6,
        "calibration": {
            "rows": 100,
            "length": 2048,
            "dataset": "blue_moon_light_vicuna.parquet"
        }
    }
}
```
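The quantization_config block records how the quant was produced (EXL2 2.4 bits per weight, 6-bit head, calibrated on the Bluemoon-Light Vicuna parquet). A quick way to inspect it, assuming config.json is in the working directory:

```python
# Print the EXL2 quantization metadata recorded in config.json.
import json

with open("config.json") as f:
    cfg = json.load(f)

q = cfg["quantization_config"]
print(q["quant_method"], q["bits"], q["head_bits"])  # exl2 2.4 6
print(q["calibration"]["dataset"])                   # blue_moon_light_vicuna.parquet
```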
huggingface-metadata.txt ADDED
```
url: https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5
branch: main
download date: 2024-05-16 11:16:37
sha256sum:
    277ccfb69f6e39a72a60201406c85a039e74bb30f555d38bfbd975fa59ba7862 model-00001-of-00015.safetensors
    2105e736d2051680a67fbfae24b2bd60a157c9282771852d70b64df56303e143 model-00002-of-00015.safetensors
    a053bb33fdc87e16d58db98a3f66421beeb8a37d1e1b52b81c0872c9f970f32a model-00003-of-00015.safetensors
    941af94e4b1f32614ec500c22c90a9ae55632dee41239d72ff588b1bbd7cdfeb model-00004-of-00015.safetensors
    c605120e96b6c5c085677b1863d69cd4c3c10e6775d8799b07ac5fe4d89c0425 model-00005-of-00015.safetensors
    4068bf7a69b64ba2650370be0c31b32012b884c4e8276462c1329c758a9ec6a0 model-00006-of-00015.safetensors
    75c5813489a9b168397ca2ad53077ee943d931f6583772aa0334e0ea1a456d27 model-00007-of-00015.safetensors
    5eb1f7550851f190b297a95228200b35a2f16460bcce3f78b712f9eb23f6262a model-00008-of-00015.safetensors
    107859ea427f1317172abfa37a65b6fedef8b96ae1a20f86f8627aa560b6d052 model-00009-of-00015.safetensors
    d34e66d90a3f00f9d63b4b27c670a775e6a8583def3ce6e03d10b2bbddecd015 model-00010-of-00015.safetensors
    8f686a90d59711e1b9f339d1406df6e1866b4c528608153b208d03708a114fe5 model-00011-of-00015.safetensors
    42bad4654ff1a55e646729b0c1e7435526b4b9a860fc27e2899486feef285d86 model-00012-of-00015.safetensors
    bab9a9fd214f998c85eeaea79f3207abadccd8144b44711d2e20f3d515bde112 model-00013-of-00015.safetensors
    b26333140accd817df874fa29780a3e1bec278e5cec3ae912bcac1f44b9ccdb2 model-00014-of-00015.safetensors
    21a4d02589a0bf22bb1dbce7201560d12508fa28d4b80ae25b4095397bc58664 model-00015-of-00015.safetensors
    9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347 tokenizer.model
```
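A minimal sketch for checking downloaded files against the sha256 sums above (run from the model directory; the expected hash below is copied from the list).

```python
# Verify a downloaded file against its recorded sha256 sum.
import hashlib

def sha256sum(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):  # stream in 1 MiB chunks
            h.update(block)
    return h.hexdigest()

expected = "9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347"
assert sha256sum("tokenizer.model") == expected, "hash mismatch"
```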
measurement_Midnight-Miqu-70B-v1.5_exl2_2.4bpw_rpcal_mk2.json ADDED
The diff for this file is too large to render.
 
model.safetensors.index.json ADDED
+ {"metadata": {"mergekit_version": "0.0.4.1"}, "weight_map": {"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.0.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.embed_tokens.weight": "model-00001-of-00015.safetensors", "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.3.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", 
"model.layers.3.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.2.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.2.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.7.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.6.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.5.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.5.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.5.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.10.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.9.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.self_attn.v_proj.weight": 
"model-00002-of-00015.safetensors", "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.8.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.8.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.8.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.mlp.up_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.mlp.up_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.11.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.mlp.up_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.16.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.mlp.up_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.15.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", 
"model.layers.15.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.14.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.mlp.up_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.13.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.13.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.18.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.18.self_attn.o_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.17.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.17.self_attn.o_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.16.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.16.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.16.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.16.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.22.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", "model.layers.22.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.22.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", 
"model.layers.21.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.21.self_attn.o_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.20.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.20.self_attn.o_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.19.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.19.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", "model.layers.19.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.19.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.19.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.24.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.24.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.24.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.input_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.23.mlp.down_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.23.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.input_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.22.mlp.down_proj.weight": "model-00005-of-00015.safetensors", "model.layers.22.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.22.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.22.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.22.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.22.input_layernorm.weight": "model-00005-of-00015.safetensors", 
"model.layers.27.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.27.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.27.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.27.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.27.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.27.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.mlp.down_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.26.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.input_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.25.mlp.down_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.25.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.input_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.30.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.30.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.30.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.30.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.30.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.29.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.29.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.28.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.28.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", 
"model.layers.28.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.27.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.27.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.27.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.33.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.33.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.33.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.33.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.32.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.31.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.31.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.30.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.30.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.30.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.30.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.36.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.36.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.36.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.35.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.35.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.35.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.35.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.self_attn.q_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.34.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", 
"model.layers.34.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.34.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.self_attn.q_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.33.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.33.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.33.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.33.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.33.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.38.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.38.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.self_attn.q_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.37.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.37.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.self_attn.q_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.36.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.36.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.36.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.36.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.36.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.36.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.41.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.41.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.41.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.41.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.41.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.41.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.mlp.up_proj.weight": "model-00008-of-00015.safetensors", 
"model.layers.40.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.40.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.39.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.mlp.up_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.39.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.44.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.44.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.44.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.44.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.44.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.mlp.up_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.43.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.42.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.mlp.up_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.42.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.41.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.41.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.41.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.47.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.47.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.47.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.47.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", 
"model.layers.46.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.46.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.46.mlp.up_proj.weight": "model-00008-of-00015.safetensors", "model.layers.46.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.46.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.46.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.46.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.46.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.46.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.45.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.45.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.44.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.44.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.44.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.44.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.50.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.50.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.50.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.49.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.48.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.48.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.47.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.47.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", 
"model.layers.47.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.47.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.47.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.52.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.52.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.51.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.mlp.up_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.post_attention_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.51.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.input_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.50.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.50.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.50.mlp.up_proj.weight": "model-00010-of-00015.safetensors", "model.layers.50.post_attention_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.50.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.50.input_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.55.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.55.mlp.up_proj.weight": "model-00010-of-00015.safetensors", "model.layers.55.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.55.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", "model.layers.55.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.55.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.mlp.up_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.post_attention_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.54.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.input_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.53.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.mlp.up_proj.weight": "model-00010-of-00015.safetensors", 
"model.layers.53.post_attention_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.53.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.input_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.58.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.58.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.58.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", "model.layers.58.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.58.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.57.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.57.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.57.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.57.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.56.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.mlp.gate_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.56.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.55.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.55.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.55.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.61.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.61.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.61.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.61.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.mlp.gate_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.60.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.input_layernorm.weight": "model-00011-of-00015.safetensors", 
"model.layers.59.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.mlp.gate_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.59.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.58.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.58.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.58.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.58.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.64.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.64.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.64.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.mlp.gate_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.63.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.62.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.62.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.62.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.self_attn.v_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.self_attn.k_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.self_attn.q_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.input_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.61.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.61.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.61.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.61.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.61.input_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.66.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.66.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.self_attn.v_proj.weight": "model-00012-of-00015.safetensors", 
"model.layers.66.self_attn.k_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.self_attn.q_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.input_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.65.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.65.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.self_attn.v_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.self_attn.k_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.self_attn.q_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.input_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.64.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.64.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.64.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.64.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.64.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.64.input_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.69.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.69.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.69.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.69.self_attn.v_proj.weight": "model-00012-of-00015.safetensors", "model.layers.69.self_attn.k_proj.weight": "model-00012-of-00015.safetensors", "model.layers.69.self_attn.q_proj.weight": "model-00012-of-00015.safetensors", "model.layers.68.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.68.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.68.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.68.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.67.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.67.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.72.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.72.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.72.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", 
"model.layers.72.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.72.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.71.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.70.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.70.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.69.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.69.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.69.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.75.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.75.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.75.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.75.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.74.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.73.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.73.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.73.self_attn.o_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.input_layernorm.weight": "model-00014-of-00015.safetensors", 
"model.layers.72.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.72.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.72.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.72.input_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.78.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", "model.layers.78.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.78.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.77.self_attn.o_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.input_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.76.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.76.self_attn.o_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.input_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.75.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.75.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.75.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.75.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.75.input_layernorm.weight": "model-00014-of-00015.safetensors", "lm_head.weight": "model-00014-of-00015.safetensors", "model.norm.weight": "model-00014-of-00015.safetensors", "model.layers.79.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.79.self_attn.o_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.input_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.78.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.78.mlp.gate_proj.weight": "model-00015-of-00015.safetensors", "model.layers.78.mlp.up_proj.weight": "model-00015-of-00015.safetensors", "model.layers.78.post_attention_layernorm.weight": "model-00015-of-00015.safetensors", "model.layers.78.self_attn.o_proj.weight": 
"model-00015-of-00015.safetensors", "model.layers.78.input_layernorm.weight": "model-00015-of-00015.safetensors"}}
output-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4fb9a8ddb9580fae0e9dcbfd68fdf42eba3e04e543fd97bc208ea92b80157feb
+ size 8546871628
output-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdf679b3167e235bc597e74c621f10bd4f3d0eb0cbe1c306f849254f11eb6e1c
+ size 8578761100
output-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:621b26aef02e51c45fefbdbf46ae7e7902bab1bb24ee4e79f720c6525daa460d
+ size 4144237576
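The three `output-*.safetensors` entries above are Git LFS pointer files rather than the weights themselves: each stores the LFS spec version, the SHA-256 (`oid`) of the real file, and its size in bytes. A small sketch, assuming the actual shard has been downloaded next to the script, for verifying it against its pointer:

```python
import hashlib
import os

def verify_lfs_file(path: str, expected_oid: str, expected_size: int) -> bool:
    """Compare a downloaded file against its Git LFS pointer metadata."""
    if os.path.getsize(path) != expected_size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so multi-GB shards don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_oid

# Values copied from the pointer for output-00001-of-00003.safetensors above.
ok = verify_lfs_file(
    "output-00001-of-00003.safetensors",
    "4fb9a8ddb9580fae0e9dcbfd68fdf42eba3e04e543fd97bc208ea92b80157feb",
    8546871628,
)
print("checksum OK" if ok else "checksum mismatch")
```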
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
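`special_tokens_map.json` above defines `<s>`/`</s>` as the BOS/EOS markers and reuses `<unk>` for both the pad and unk tokens, as is typical for Llama-family tokenizers. A quick sketch (the local path is a placeholder for wherever this repo is cloned) to confirm what transformers loads from it:

```python
from transformers import AutoTokenizer

# Placeholder path: point this at a local clone of the repo.
tok = AutoTokenizer.from_pretrained("./Midnight-Miqu-70B-v1.5_exl2")

print(tok.bos_token, tok.eos_token)  # <s> </s>
print(tok.pad_token, tok.unk_token)  # <unk> <unk>
```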
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": false,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<unk>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
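The `chat_template` in `tokenizer_config.json` above is the Mistral-style `[INST] ... [/INST]` template, which requires strictly alternating user/assistant turns. A short sketch of rendering it through transformers (local path again a placeholder):

```python
from transformers import AutoTokenizer

# Placeholder path: point this at a local clone of the repo.
tok = AutoTokenizer.from_pretrained("./Midnight-Miqu-70B-v1.5_exl2")

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
    {"role": "user", "content": "Continue the story."},
]

# Expands per the template above to:
# <s>[INST] Hello! [/INST]Hi there.</s>[INST] Continue the story. [/INST]
prompt = tok.apply_chat_template(messages, tokenize=False)
print(prompt)
```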