XelotX sophosympatheia committed on
Commit
e846c4d
0 Parent(s):

Duplicate from sophosympatheia/Midnight-Miqu-70B-v1.5

Co-authored-by: Sophosympatheia <sophosympatheia@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
File without changes
README.md ADDED
@@ -0,0 +1,222 @@
+ ---
+ base_model:
+ - sophosympatheia/Midnight-Miqu-70B-v1.0
+ - migtissera/Tess-70B-v1.6
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: other
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+
+ ### Overview
+
+ Looking for the 103B version? You can get it from [FluffyKaeloky/Midnight-Miqu-103B-v1.5](https://huggingface.co/FluffyKaeloky/Midnight-Miqu-103B-v1.5).
+
+ This is a DARE Linear merge between [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) and [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6).
+ This version is close in feel and performance to Midnight Miqu v1.0, but I think it picked up some goodness from Tess. Their EQ Bench scores are virtually identical, and their post-EXL2-quant perplexity scores were the same too. However, Midnight Miqu v1.5 passes some tests I use that Midnight Miqu v1.0 fails, without sacrificing writing quality.
+
+ This model is uncensored. *You are responsible for whatever you do with it.*
+
+ This model was designed for roleplaying and storytelling, and I think it does well at both. It may also perform well at other tasks, but I have not tested its performance in other areas.
+
+ ### Long Context Tips
+
+ You can run this model out to 32K context with alpha_rope set to 1, just like with Miqu.
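As a rough sketch of why alpha_rope 1 suffices: EXL2/exllama-style loaders scale the RoPE base theta by alpha raised to d/(d-2), where d is the per-head dimension (this is my understanding of the convention, not something stated in this card). Miqu already ships with rope_theta = 1,000,000 in its config, so alpha 1 applies no extra scaling:

```python
# Sketch of NTK-style "alpha" RoPE scaling as EXL2-style loaders apply it
# (my understanding; verify against your loader). With alpha = 1 the RoPE
# base is left untouched, which works here because config.json already sets
# rope_theta = 1,000,000.

def scaled_rope_theta(theta: float, alpha: float, head_dim: int) -> float:
    # NTK-aware scaling: theta' = theta * alpha ** (d / (d - 2))
    return theta * alpha ** (head_dim / (head_dim - 2))

head_dim = 8192 // 64  # hidden_size / num_attention_heads = 128
print(scaled_rope_theta(1_000_000, 1.0, head_dim))  # 1000000.0
```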
+
+ ### Sampler Tips
+
+ * I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2.
+ * I recommend using Min-P. Experiment to find your best setting.
+ * You can enable dynamic temperature if you want, but that adds yet another variable to consider, and I find it's unnecessary when you're already using Min-P and smoothing factor.
+ * You don't need a high repetition penalty with this model (e.g. above 1.10), but experiment with it.
+
+ Experiment with any and all of the settings below! What suits my preferences may not suit yours.
+
+ If you save the settings below as a .json file, you can import them directly into SillyTavern.
+ ```
+ {
+   "temp": 1,
+   "temperature_last": true,
+   "top_p": 1,
+   "top_k": 0,
+   "top_a": 0,
+   "tfs": 1,
+   "epsilon_cutoff": 0,
+   "eta_cutoff": 0,
+   "typical_p": 1,
+   "min_p": 0.12,
+   "rep_pen": 1.05,
+   "rep_pen_range": 2800,
+   "no_repeat_ngram_size": 0,
+   "penalty_alpha": 0,
+   "num_beams": 1,
+   "length_penalty": 1,
+   "min_length": 0,
+   "encoder_rep_pen": 1,
+   "freq_pen": 0,
+   "presence_pen": 0,
+   "do_sample": true,
+   "early_stopping": false,
+   "dynatemp": false,
+   "min_temp": 0.8,
+   "max_temp": 1.35,
+   "dynatemp_exponent": 1,
+   "smoothing_factor": 0.23,
+   "add_bos_token": true,
+   "truncation_length": 2048,
+   "ban_eos_token": false,
+   "skip_special_tokens": true,
+   "streaming": true,
+   "mirostat_mode": 0,
+   "mirostat_tau": 2,
+   "mirostat_eta": 0.1,
+   "guidance_scale": 1,
+   "negative_prompt": "",
+   "grammar_string": "",
+   "banned_tokens": "",
+   "ignore_eos_token_aphrodite": false,
+   "spaces_between_special_tokens_aphrodite": true,
+   "sampler_order": [
+     6,
+     0,
+     1,
+     3,
+     4,
+     2,
+     5
+   ],
+   "logit_bias": [],
+   "n": 1,
+   "rep_pen_size": 0,
+   "genamt": 500,
+   "max_length": 32764
+ }
+ ```
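If you prefer to adjust the preset programmatically before importing it, a minimal sketch (the subset of keys shown and the new values are just examples, not recommendations from this card):

```python
import json

# Tweak a couple of knobs from the sampler preset above before importing it
# into SillyTavern. Only a subset of the preset's keys is shown here.
preset = {"min_p": 0.12, "smoothing_factor": 0.23, "rep_pen": 1.05}

preset["smoothing_factor"] = 0.2  # the tips suggest a value close to 0.2
preset["min_p"] = 0.1             # experiment to find your best setting

print(json.dumps(preset, indent=2))
```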
+
+ ### Prompting Tips
+
+ Try the following context template for use in SillyTavern. It might help, although it's a little heavy on tokens. If you save the text as a .json file, you can import it directly.
+
+ ```
+ {
+   "story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}\n{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}",
+   "example_separator": "",
+   "chat_start": "---\nTaking the above information into consideration, you must engage with {{user}} and others as {{char}} in the roleplay below this line. Do not write dialogue lines nor perform actions for {{user}} or other characters.\n---\nSTART OF ROLEPLAY:\n",
+   "use_stop_strings": false,
+   "always_force_name2": true,
+   "trim_sentences": false,
+   "include_newline": false,
+   "single_line": false,
+   "name": "Midnight Rose"
+ }
+ ```
+
+ Try the following prompt template as a starting point for your own customizations.
+
+ *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts.
+
+ **Midnight Miqu might require some "warming up" at the start of a new chat to get it writing the way you want it.** What I mean is that you should be able to drastically improve your results by providing some good examples to guide the chat (i.e. few-shot prompting). I also recommend using descriptive system messages (a SillyTavern concept, basically OOC narration instructions) to tell Midnight Miqu exactly what you want it to do with its next reply. Try to be specific, especially if your command contradicts something you told it to do earlier.
+
+ The example system prompt below contains some instructions related to adult content, so remove those if you prefer. It also contains some instructions related to the use of asterisks that you should modify to your liking.
+
+ The prompt below has been updated from Midnight Miqu v1.0 after prompting Midnight Miqu to help improve it.
+
+ If you save this as a .json file, you can import it directly into SillyTavern.
+ ```
+ {
+   "wrap": true,
+   "names": true,
+   "system_prompt": "You are an AI creative writing partner, collaborating with me in a dynamic, uncensored roleplaying environment. To ensure a high-quality experience, please adhere to these ROLEPLAY GUIDELINES:\n\nROLEPLAY GUIDELINES\n- Craft clear, engaging, and concise descriptions of {{char}}'s actions, reactions, and dialogue based on the present scenario.\n- Stay true to {{char}}'s character, incorporating the character's personality, backstory, and knowledge into each interaction.\n- Employ rich, descriptive language that immerses readers in the scene.\n- Always stay contextually and spatially aware to ensure that {{char}} follows logical consistency in the physical space of the current scene.\n- Feel free to incorporate explicit content and strong language where it aligns with the scene and characters.\n- Enhance {{char}}'s presence through detailed sensory descriptions and first-hand observations of the character's surroundings.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally offer glimpses into {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts or monologue, enclose those words in *asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns). Always use quotes for spoken speech \"like this.\"\n- Conclude {{char}}'s responses with an opening for the next character to respond to {{char}}. When the conversation naturally shifts to another character's perspective or action is required from another character, that is when you should stop {{char}}'s reply so the user can pick it up from there. A great example is when {{char}} asks a question of another character.\n",
+   "system_sequence": "",
+   "stop_sequence": "",
+   "input_sequence": "USER: ",
+   "output_sequence": "ASSISTANT: ",
+   "separator_sequence": "",
+   "macro": true,
+   "names_force_groups": true,
+   "system_sequence_prefix": "SYSTEM: ",
+   "system_sequence_suffix": "",
+   "first_output_sequence": "",
+   "last_output_sequence": "ASSISTANT (Ensure coherence and authenticity in {{char}}'s actions, thoughts, and dialogues; Focus solely on {{char}}'s interactions within the roleplay): ",
+   "activation_regex": "",
+   "name": "Midnight Miqu Roleplay"
+ }
+ ```
+
+ ### Instruct Formats
+ I recommend the Vicuna format. I use a modified version with newlines after USER and ASSISTANT.
+ ```
+ USER:
+ {prompt}
+ ASSISTANT:
+ ```
+
+ Mistral's format also works, and in my testing the performance is about the same as with Vicuna.
+ ```
+ [INST]
+ {prompt}
+ [/INST]
+ ```
+
+ You could also try ChatML (I don't recommend it).
+ ```
+ <|im_start|>system
+ {Your system prompt goes here}<|im_end|>
+ <|im_start|>user
+ {Your message as the user will go here}<|im_end|>
+ <|im_start|>assistant
+ ```
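If you are driving the model directly rather than through SillyTavern, the modified Vicuna format above can be assembled with a small helper like this (the function and its system-prompt handling are my own sketch, not part of any frontend):

```python
# Assemble a prompt in the modified Vicuna format above: newlines after
# USER: and ASSISTANT:. The SYSTEM: prefix mirrors the SillyTavern preset's
# system_sequence_prefix; adjust to match your own setup.

def vicuna_prompt(user_message: str, system_prompt: str = "") -> str:
    parts = []
    if system_prompt:
        parts.append(f"SYSTEM: {system_prompt}")
    parts.append(f"USER:\n{user_message}")
    parts.append("ASSISTANT:\n")  # generation continues after this line
    return "\n".join(parts)

print(vicuna_prompt("Describe the inn at midnight.", system_prompt="You are a storyteller."))
```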
+
+ ### Quantizations
+ * GGUF
+   * [mradermacher/Midnight-Miqu-70B-v1.5-GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF) -- Various static GGUF quants
+ * GPTQ
+   * [Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G](https://huggingface.co/Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G)
+ * EXL2
+   * [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw)
+   * [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw)
+   * [Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw)
+   * [Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw)
+ * If you don't see something you're looking for, [try searching Hugging Face](https://huggingface.co/models?search=midnight-miqu-70b-v1.5). There may be newer quants available than what I've documented here.
+
+ ### Licence and usage restrictions
+
+ <font color="red">152334H/miqu-1-70b-sf was based on a leaked version of one of Mistral's models.</font>
+ All miqu-derived models, including this merge, are **only suitable for personal use.** Mistral has been cool about it so far, but you should be aware that by downloading this merge you are assuming whatever legal risk is inherent in acquiring and using a model based on leaked weights.
+ This merge comes with no warranties or guarantees of any kind, but you probably already knew that.
+ I am not a lawyer and I do not profess to know what we have gotten ourselves into here. You should consult with a lawyer before using any Hugging Face model beyond private use... but definitely don't use this one for that!
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with [152334H_miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) as the base.
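As a rough illustration of what dare_linear does (a toy sketch based on my reading of the DARE paper, not mergekit's actual tensor-level implementation): each fine-tuned model's delta from the base is randomly sparsified, the surviving entries are rescaled by 1/(1 - drop rate), and the weighted deltas are added back onto the base.

```python
import random

# Toy DARE-linear merge over plain lists of parameters. Real merges operate
# on full model tensors; the drop rate and seed here are illustrative only.

def dare_delta(base, tuned, drop_rate, rng):
    out = []
    for b, t in zip(base, tuned):
        delta = t - b
        if rng.random() < drop_rate:
            out.append(0.0)                        # dropped entry
        else:
            out.append(delta / (1.0 - drop_rate))  # rescaled survivor
    return out

def dare_linear(base, models, weights, drop_rate, seed=0):
    rng = random.Random(seed)
    deltas = [dare_delta(base, m, drop_rate, rng) for m in models]
    return [b + sum(w * d[i] for w, d in zip(weights, deltas))
            for i, b in enumerate(base)]

base = [0.1, 0.2, 0.3, 0.4]
tuned = [0.3, 0.1, 0.6, 0.2]
print(dare_linear(base, [tuned], [1.0], drop_rate=0.5))
```

In expectation the rescaling preserves each delta's contribution, which is why aggressive dropping can still leave the merge close to the fine-tuned behavior.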
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0)
+ * [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ merge_method: dare_linear
+ base_model: /home/llm/mergequant/models/BASE/152334H_miqu-1-70b-sf # base model
+ models:
+   - model: /home/llm/mergequant/models/midnight-miqu-70b-v1.0
+   - model: /home/llm/mergequant/models/BASE/Tess-70B-v1.6
+ parameters:
+   weight: 1.0
+ dtype: float16
+ ```
+
+ ### Notes
+
+ I tried several methods of merging Midnight Miqu v1.0 with Tess v1.6, and this dare_linear approach worked the best by far. I tried the same approach with other Miqu finetunes like ShinojiResearch/Senku-70B-Full and abideen/Liberated-Miqu-70B, but there was a huge difference in performance: the merge with Tess was the best one.
+ I also tried the SLERP approach I used to create Midnight Miqu v1.0, only using Tess instead of 152334H_miqu-1-70b in that config, and that result was nowhere near as good either.
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "_name_or_path": "midnight-miqu-70b-v1.5",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 8192,
+   "initializer_range": 0.02,
+   "intermediate_size": 28672,
+   "max_position_embeddings": 32764,
+   "model_type": "llama",
+   "num_attention_heads": 64,
+   "num_hidden_layers": 80,
+   "num_key_value_heads": 8,
+   "pad_token_id": 0,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 1000000,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.36.2",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
mergekit_config.yml ADDED
@@ -0,0 +1,8 @@
+ merge_method: dare_linear
+ base_model: /home/llm/mergequant/models/BASE/152334H_miqu-1-70b-sf # base model
+ models:
+   - model: /home/llm/mergequant/models/midnight-miqu-70b-v1.0 # your merge goes here
+   - model: /home/llm/mergequant/models/BASE/Tess-70B-v1.6 # new "base model" goes here
+ parameters:
+   weight: 1.0
+ dtype: float16
model-00001-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:277ccfb69f6e39a72a60201406c85a039e74bb30f555d38bfbd975fa59ba7862
+ size 9550600160
model-00002-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2105e736d2051680a67fbfae24b2bd60a157c9282771852d70b64df56303e143
+ size 9798096168
model-00003-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a053bb33fdc87e16d58db98a3f66421beeb8a37d1e1b52b81c0872c9f970f32a
+ size 9630292104
model-00004-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:941af94e4b1f32614ec500c22c90a9ae55632dee41239d72ff588b1bbd7cdfeb
+ size 9831650864
model-00005-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c605120e96b6c5c085677b1863d69cd4c3c10e6775d8799b07ac5fe4d89c0425
+ size 9932281752
model-00006-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4068bf7a69b64ba2650370be0c31b32012b884c4e8276462c1329c758a9ec6a0
+ size 9663879064
model-00007-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75c5813489a9b168397ca2ad53077ee943d931f6583772aa0334e0ea1a456d27
+ size 9965868712
model-00008-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5eb1f7550851f190b297a95228200b35a2f16460bcce3f78b712f9eb23f6262a
+ size 9932298264
model-00009-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:107859ea427f1317172abfa37a65b6fedef8b96ae1a20f86f8627aa560b6d052
+ size 9663895552
model-00010-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d34e66d90a3f00f9d63b4b27c670a775e6a8583def3ce6e03d10b2bbddecd015
+ size 9630258400
model-00011-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f686a90d59711e1b9f339d1406df6e1866b4c528608153b208d03708a114fe5
+ size 9965902416
model-00012-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42bad4654ff1a55e646729b0c1e7435526b4b9a860fc27e2899486feef285d86
+ size 9798063208
model-00013-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bab9a9fd214f998c85eeaea79f3207abadccd8144b44711d2e20f3d515bde112
+ size 9630325104
model-00014-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b26333140accd817df874fa29780a3e1bec278e5cec3ae912bcac1f44b9ccdb2
+ size 9886193280
model-00015-of-00015.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21a4d02589a0bf22bb1dbce7201560d12508fa28d4b80ae25b4095397bc58664
+ size 1073775184
model.safetensors.index.json ADDED
@@ -0,0 +1 @@
+ {"metadata": {"mergekit_version": "0.0.4.1"}, "weight_map": {"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.1.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.0.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.0.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.embed_tokens.weight": "model-00001-of-00015.safetensors", "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", 
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.4.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.3.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.3.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.2.mlp.down_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.mlp.up_proj.weight": "model-00001-of-00015.safetensors", "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.2.input_layernorm.weight": "model-00001-of-00015.safetensors", "model.layers.8.self_attn.v_proj.weight": 
"model-00001-of-00015.safetensors", "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00015.safetensors", "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00015.safetensors", "model.layers.7.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00015.safetensors", "model.layers.7.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.6.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00015.safetensors", "model.layers.6.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.5.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.5.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.5.input_layernorm.weight": "model-00002-of-00015.safetensors", 
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00015.safetensors", "model.layers.10.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.9.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00015.safetensors", "model.layers.9.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.8.mlp.down_proj.weight": "model-00002-of-00015.safetensors", "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00015.safetensors", "model.layers.8.mlp.up_proj.weight": "model-00002-of-00015.safetensors", "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00015.safetensors", "model.layers.8.input_layernorm.weight": "model-00002-of-00015.safetensors", "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.mlp.up_proj.weight": 
"model-00003-of-00015.safetensors", "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.mlp.up_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.12.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.11.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.mlp.up_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.11.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", 
"model.layers.16.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.mlp.up_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.15.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.15.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.14.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.mlp.up_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.14.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.13.mlp.down_proj.weight": "model-00003-of-00015.safetensors", "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.13.input_layernorm.weight": "model-00003-of-00015.safetensors", "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00015.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00015.safetensors", 
"model.layers.19.self_attn.k_proj.weight": "model-00003-of-00015.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00015.safetensors", "model.layers.18.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.18.self_attn.o_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.18.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.17.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.17.self_attn.o_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.17.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.16.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.16.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.16.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.16.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.22.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", 
"model.layers.22.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.22.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.21.self_attn.o_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.21.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.20.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.20.self_attn.o_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.self_attn.v_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.self_attn.k_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.self_attn.q_proj.weight": "model-00004-of-00015.safetensors", "model.layers.20.input_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.19.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.19.mlp.gate_proj.weight": "model-00004-of-00015.safetensors", "model.layers.19.mlp.up_proj.weight": "model-00004-of-00015.safetensors", "model.layers.19.post_attention_layernorm.weight": "model-00004-of-00015.safetensors", "model.layers.19.input_layernorm.weight": "model-00004-of-00015.safetensors", 
"model.layers.24.mlp.down_proj.weight": "model-00004-of-00015.safetensors", "model.layers.24.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.24.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.24.input_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.23.mlp.down_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.23.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.23.input_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.22.mlp.down_proj.weight": "model-00005-of-00015.safetensors", "model.layers.22.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.22.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.22.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.22.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.22.input_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.27.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.27.mlp.up_proj.weight": 
"model-00005-of-00015.safetensors", "model.layers.27.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.27.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.27.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.27.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.mlp.down_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.26.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.26.input_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.25.mlp.down_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.mlp.up_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.post_attention_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.25.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.25.input_layernorm.weight": "model-00005-of-00015.safetensors", "model.layers.30.mlp.gate_proj.weight": "model-00005-of-00015.safetensors", "model.layers.30.self_attn.o_proj.weight": "model-00005-of-00015.safetensors", "model.layers.30.self_attn.v_proj.weight": "model-00005-of-00015.safetensors", 
"model.layers.30.self_attn.k_proj.weight": "model-00005-of-00015.safetensors", "model.layers.30.self_attn.q_proj.weight": "model-00005-of-00015.safetensors", "model.layers.29.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.29.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.29.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.28.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.28.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.28.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.27.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.27.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.27.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.33.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.33.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", 
"model.layers.33.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.33.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.32.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.32.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.31.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.31.self_attn.o_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.31.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.30.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.30.mlp.up_proj.weight": "model-00006-of-00015.safetensors", "model.layers.30.post_attention_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.30.input_layernorm.weight": "model-00006-of-00015.safetensors", "model.layers.36.self_attn.v_proj.weight": "model-00006-of-00015.safetensors", 
"model.layers.36.self_attn.k_proj.weight": "model-00006-of-00015.safetensors", "model.layers.36.self_attn.q_proj.weight": "model-00006-of-00015.safetensors", "model.layers.35.mlp.down_proj.weight": "model-00006-of-00015.safetensors", "model.layers.35.mlp.gate_proj.weight": "model-00006-of-00015.safetensors", "model.layers.35.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.35.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.self_attn.q_proj.weight": "model-00007-of-00015.safetensors", "model.layers.35.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.34.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.34.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.self_attn.q_proj.weight": "model-00007-of-00015.safetensors", "model.layers.34.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.33.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.33.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.33.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.33.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.33.input_layernorm.weight": "model-00007-of-00015.safetensors", 
"model.layers.38.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.38.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.self_attn.q_proj.weight": "model-00007-of-00015.safetensors", "model.layers.38.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.37.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.37.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.self_attn.q_proj.weight": "model-00007-of-00015.safetensors", "model.layers.37.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.36.mlp.down_proj.weight": "model-00007-of-00015.safetensors", "model.layers.36.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.36.mlp.up_proj.weight": "model-00007-of-00015.safetensors", "model.layers.36.post_attention_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.36.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.36.input_layernorm.weight": "model-00007-of-00015.safetensors", "model.layers.41.mlp.gate_proj.weight": "model-00007-of-00015.safetensors", "model.layers.41.mlp.up_proj.weight": 
"model-00007-of-00015.safetensors", "model.layers.41.self_attn.o_proj.weight": "model-00007-of-00015.safetensors", "model.layers.41.self_attn.v_proj.weight": "model-00007-of-00015.safetensors", "model.layers.41.self_attn.k_proj.weight": "model-00007-of-00015.safetensors", "model.layers.41.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.mlp.up_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.40.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.40.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.39.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.mlp.up_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.39.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.39.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.44.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.44.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.44.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", 
"model.layers.44.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.44.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.mlp.up_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.43.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.43.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.42.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.mlp.up_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.42.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.42.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.41.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.41.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.41.input_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.47.self_attn.o_proj.weight": "model-00008-of-00015.safetensors", "model.layers.47.self_attn.v_proj.weight": "model-00008-of-00015.safetensors", 
"model.layers.47.self_attn.k_proj.weight": "model-00008-of-00015.safetensors", "model.layers.47.self_attn.q_proj.weight": "model-00008-of-00015.safetensors", "model.layers.46.mlp.down_proj.weight": "model-00008-of-00015.safetensors", "model.layers.46.mlp.gate_proj.weight": "model-00008-of-00015.safetensors", "model.layers.46.mlp.up_proj.weight": "model-00008-of-00015.safetensors", "model.layers.46.post_attention_layernorm.weight": "model-00008-of-00015.safetensors", "model.layers.46.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.46.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.46.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.46.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.46.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.45.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.45.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.45.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.44.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.44.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.44.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.44.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.50.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", 
"model.layers.50.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.50.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.49.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.49.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.48.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.48.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.48.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.47.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.47.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", "model.layers.47.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.47.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.47.input_layernorm.weight": "model-00009-of-00015.safetensors", 
"model.layers.52.mlp.down_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.mlp.gate_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.mlp.up_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.post_attention_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.52.self_attn.o_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.self_attn.v_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.self_attn.k_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.self_attn.q_proj.weight": "model-00009-of-00015.safetensors", "model.layers.52.input_layernorm.weight": "model-00009-of-00015.safetensors", "model.layers.51.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.mlp.up_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.post_attention_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.51.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.51.input_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.50.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.50.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.50.mlp.up_proj.weight": "model-00010-of-00015.safetensors", "model.layers.50.post_attention_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.50.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.50.input_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.55.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.55.mlp.up_proj.weight": 
"model-00010-of-00015.safetensors", "model.layers.55.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.55.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", "model.layers.55.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.55.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.mlp.up_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.post_attention_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.54.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.54.input_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.53.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.mlp.up_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.post_attention_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.53.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.53.input_layernorm.weight": "model-00010-of-00015.safetensors", "model.layers.58.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.58.self_attn.o_proj.weight": "model-00010-of-00015.safetensors", "model.layers.58.self_attn.v_proj.weight": "model-00010-of-00015.safetensors", 
"model.layers.58.self_attn.k_proj.weight": "model-00010-of-00015.safetensors", "model.layers.58.self_attn.q_proj.weight": "model-00010-of-00015.safetensors", "model.layers.57.mlp.down_proj.weight": "model-00010-of-00015.safetensors", "model.layers.57.mlp.gate_proj.weight": "model-00010-of-00015.safetensors", "model.layers.57.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.57.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.57.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.56.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.mlp.gate_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.56.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.56.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.55.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.55.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.55.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.61.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.61.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", 
"model.layers.61.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.61.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.mlp.gate_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.60.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.60.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.59.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.mlp.gate_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.59.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.59.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.58.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.58.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.58.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.58.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.64.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", 
"model.layers.64.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.64.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.mlp.gate_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.mlp.up_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.post_attention_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.63.self_attn.o_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.self_attn.v_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.self_attn.k_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.self_attn.q_proj.weight": "model-00011-of-00015.safetensors", "model.layers.63.input_layernorm.weight": "model-00011-of-00015.safetensors", "model.layers.62.mlp.down_proj.weight": "model-00011-of-00015.safetensors", "model.layers.62.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.62.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.self_attn.v_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.self_attn.k_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.self_attn.q_proj.weight": "model-00012-of-00015.safetensors", "model.layers.62.input_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.61.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.61.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.61.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.61.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.61.input_layernorm.weight": "model-00012-of-00015.safetensors", 
"model.layers.66.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.66.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.self_attn.v_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.self_attn.k_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.self_attn.q_proj.weight": "model-00012-of-00015.safetensors", "model.layers.66.input_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.65.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.65.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.self_attn.v_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.self_attn.k_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.self_attn.q_proj.weight": "model-00012-of-00015.safetensors", "model.layers.65.input_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.64.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.64.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.64.mlp.up_proj.weight": "model-00012-of-00015.safetensors", "model.layers.64.post_attention_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.64.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.64.input_layernorm.weight": "model-00012-of-00015.safetensors", "model.layers.69.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.69.mlp.up_proj.weight": 
"model-00012-of-00015.safetensors", "model.layers.69.self_attn.o_proj.weight": "model-00012-of-00015.safetensors", "model.layers.69.self_attn.v_proj.weight": "model-00012-of-00015.safetensors", "model.layers.69.self_attn.k_proj.weight": "model-00012-of-00015.safetensors", "model.layers.69.self_attn.q_proj.weight": "model-00012-of-00015.safetensors", "model.layers.68.mlp.down_proj.weight": "model-00012-of-00015.safetensors", "model.layers.68.mlp.gate_proj.weight": "model-00012-of-00015.safetensors", "model.layers.68.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.68.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.68.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.67.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.67.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.67.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.72.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.72.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.72.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", 
"model.layers.72.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.72.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.71.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.71.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.70.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.70.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.70.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.69.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.69.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.69.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.75.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.75.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", 
"model.layers.75.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.75.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.mlp.gate_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.mlp.up_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.post_attention_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.74.self_attn.o_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.self_attn.v_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.self_attn.k_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.self_attn.q_proj.weight": "model-00013-of-00015.safetensors", "model.layers.74.input_layernorm.weight": "model-00013-of-00015.safetensors", "model.layers.73.mlp.down_proj.weight": "model-00013-of-00015.safetensors", "model.layers.73.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.73.self_attn.o_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.73.input_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.72.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.72.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.72.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.72.input_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.78.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", 
"model.layers.78.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.78.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.77.self_attn.o_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.77.input_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.76.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.76.self_attn.o_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.76.input_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.75.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.75.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.75.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.75.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.75.input_layernorm.weight": "model-00014-of-00015.safetensors", "lm_head.weight": 
"model-00014-of-00015.safetensors", "model.norm.weight": "model-00014-of-00015.safetensors", "model.layers.79.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.mlp.gate_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.mlp.up_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.post_attention_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.79.self_attn.o_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.self_attn.v_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.self_attn.k_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.self_attn.q_proj.weight": "model-00014-of-00015.safetensors", "model.layers.79.input_layernorm.weight": "model-00014-of-00015.safetensors", "model.layers.78.mlp.down_proj.weight": "model-00014-of-00015.safetensors", "model.layers.78.mlp.gate_proj.weight": "model-00015-of-00015.safetensors", "model.layers.78.mlp.up_proj.weight": "model-00015-of-00015.safetensors", "model.layers.78.post_attention_layernorm.weight": "model-00015-of-00015.safetensors", "model.layers.78.self_attn.o_proj.weight": "model-00015-of-00015.safetensors", "model.layers.78.input_layernorm.weight": "model-00015-of-00015.safetensors"}}
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
1 + {
2 +   "bos_token": {
3 +     "content": "<s>",
4 +     "lstrip": false,
5 +     "normalized": true,
6 +     "rstrip": false,
7 +     "single_word": false
8 +   },
9 +   "eos_token": {
10 +     "content": "</s>",
11 +     "lstrip": false,
12 +     "normalized": true,
13 +     "rstrip": false,
14 +     "single_word": false
15 +   },
16 +   "pad_token": {
17 +     "content": "<unk>",
18 +     "lstrip": false,
19 +     "normalized": true,
20 +     "rstrip": false,
21 +     "single_word": false
22 +   },
23 +   "unk_token": {
24 +     "content": "<unk>",
25 +     "lstrip": false,
26 +     "normalized": true,
27 +     "rstrip": false,
28 +     "single_word": false
29 +   }
30 + }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
3 + size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
1 + {
2 +   "add_bos_token": true,
3 +   "add_eos_token": false,
4 +   "added_tokens_decoder": {
5 +     "0": {
6 +       "content": "<unk>",
7 +       "lstrip": false,
8 +       "normalized": true,
9 +       "rstrip": false,
10 +       "single_word": false,
11 +       "special": true
12 +     },
13 +     "1": {
14 +       "content": "<s>",
15 +       "lstrip": false,
16 +       "normalized": true,
17 +       "rstrip": false,
18 +       "single_word": false,
19 +       "special": true
20 +     },
21 +     "2": {
22 +       "content": "</s>",
23 +       "lstrip": false,
24 +       "normalized": true,
25 +       "rstrip": false,
26 +       "single_word": false,
27 +       "special": true
28 +     }
29 +   },
30 +   "bos_token": "<s>",
31 +   "chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
32 +   "clean_up_tokenization_spaces": false,
33 +   "eos_token": "</s>",
34 +   "legacy": false,
35 +   "model_max_length": 1000000000000000019884624838656,
36 +   "pad_token": "<unk>",
37 +   "sp_model_kwargs": {},
38 +   "spaces_between_special_tokens": false,
39 +   "tokenizer_class": "LlamaTokenizer",
40 +   "unk_token": "<unk>",
41 +   "use_default_system_prompt": false
42 + }
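
The `chat_template` in tokenizer_config.json encodes the Mistral-style `[INST]` prompt format. As a sketch of what that Jinja template renders, the same logic can be written in plain Python (in practice you would call `tokenizer.apply_chat_template()` from `transformers` rather than reimplementing it; the `render_chat` helper below is illustrative, not part of the repo):

```python
# Plain-Python sketch of the Jinja chat_template above: alternating
# user/assistant turns, user turns wrapped in [INST] ... [/INST],
# assistant turns terminated with the EOS token.

BOS, EOS = "<s>", "</s>"

def render_chat(messages):
    out = BOS  # template starts with {{ bos_token }}
    for i, m in enumerate(messages):
        # Roles must alternate, starting with "user" at index 0.
        if (m["role"] == "user") != (i % 2 == 0):
            raise ValueError(
                "Conversation roles must alternate user/assistant/user/assistant/..."
            )
        if m["role"] == "user":
            out += "[INST] " + m["content"] + " [/INST]"
        elif m["role"] == "assistant":
            out += m["content"] + EOS
        else:
            raise ValueError("Only user and assistant roles are supported!")
    return out

prompt = render_chat([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there."},
    {"role": "user", "content": "How are you?"},
])
# → "<s>[INST] Hello [/INST]Hi there.</s>[INST] How are you? [/INST]"
```

Note that there is no separate system-role slot (`use_default_system_prompt` is `false`); a system prompt must be folded into the first user message.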