Upload folder using huggingface_hub
- README.md +269 -0
- config.json +40 -0
- evathene-v1.3-exl2-measurement-default.json +0 -0
- huggingface-metadata.txt +35 -0
- model.safetensors.index.json +1 -0
- output-00001-of-00008.safetensors +3 -0
- output-00002-of-00008.safetensors +3 -0
- output-00003-of-00008.safetensors +3 -0
- output-00004-of-00008.safetensors +3 -0
- output-00005-of-00008.safetensors +3 -0
- output-00006-of-00008.safetensors +3 -0
- output-00007-of-00008.safetensors +3 -0
- output-00008-of-00008.safetensors +3 -0
- tokenizer.json +0 -0
- tokenizer_config.json +207 -0
README.md
ADDED
@@ -0,0 +1,269 @@
---
base_model:
- sophosympatheia/Evathene-v1.1
- sophosympatheia/Evathene-v1.2
library_name: transformers
tags:
- mergekit
- merge
- Not-for-all-Audiences
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/OxX2Usi.png" alt="Evathene" style="width: 80%; min-width: 400px; display: block; margin: auto;">
</div>

# Evathene-v1.3

This 72B parameter model is a merge of [sophosympatheia/Evathene-v1.1](https://huggingface.co/sophosympatheia/Evathene-v1.1) and [sophosympatheia/Evathene-v1.2](https://huggingface.co/sophosympatheia/Evathene-v1.2). See the merge recipe below for details.

This model is uncensored. *You are responsible for whatever you do with it.*

This model was designed for roleplaying and storytelling, and I think it does well at both. It may also perform well at other tasks, but I have not tested its performance in other areas.

# Evathene Versions Comparison Table
<table>
  <tr>
    <td><strong>Model Version</strong></td>
    <td><strong>Description</strong></td>
  </tr>
  <tr>
    <td><a href="https://huggingface.co/sophosympatheia/Evathene-v1.0">Evathene-v1.0</a></td>
    <td>The original Evathene release based on Athene-V2-Chat and EVA-Qwen2.5-72B-v0.1. It's quite solid, but I think the newer versions are better.</td>
  </tr>
  <tr>
    <td><a href="https://huggingface.co/sophosympatheia/Evathene-v1.1">Evathene-v1.1</a></td>
    <td>Updated Evathene release based on Athene-V2-Chat and EVA-Qwen2.5-72B-v0.2. Uses the same recipe as v1.0, but I think it came out a little better thanks to EVA-v0.2. It's smart and writes competently. I think v1.3 improves on its prose, but some users might prefer v1.1's "formal" style, and people might want to use it in their own LLM merge recipes.</td>
  </tr>
  <tr>
    <td><a href="https://huggingface.co/sophosympatheia/Evathene-v1.2">Evathene-v1.2</a></td>
    <td>Evathene based on Athene-V2-Chat and EVA-Qwen2.5-72B-v0.1, but I inverted their relationship in the recipe used for v1.0. The result is a model that has a lot of personality and is great fun in the right context. (Before you ask: yes, I tried a version of this recipe using EVA-v0.2, but it came out totally different and wasn't exciting at all.) If you like a lewd ERP writing style or intend to RP with characters who have big personalities, you'll want to check this one out. You might have to reroll responses more often than with the other versions, but you won't regret it.</td>
  </tr>
  <tr>
    <td><strong>Evathene-v1.3 (this model)</strong></td>
    <td><strong>A merge of Evathene-v1.1 and Evathene-v1.2. It combines the essence of both models and is the version I recommend for most use cases. It has plenty of personality, is quite smart, and will teach you new words while you're RPing. (You've been warned: its vocabulary is impressive.) With some prompting, you can also get it to channel some of v1.2's energy and writing style, but you should check out v1.2 if you prefer a less formal, more "crazy" experience.</strong></td>
  </tr>
</table>

# Sampler Tips

* I recommend using Min-P. Experiment to find your best setting. Values between 0.02 and 0.1 are typically good. (See the sketch after the settings block below for how Min-P filtering works.)
* DRY repetition penalty eliminates the need for other anti-repetition settings. I like to run it around 0.5 - 0.6 with the base set to 1.5.
* Experiment with temperature settings in the 0.8 - 1.2 range. Lower the temperature if you find the model is making up details or going off script too much. Raise the temperature if you need to juice the creativity or break it out of a repetitive writing pattern.

Experiment with any and all of the settings below! What suits my preferences may not suit yours.

If you save the settings below as a .json file, you can import them directly into Silly Tavern.

```json
{
    "temp": 0.8,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.05,
    "rep_pen": 1,
    "rep_pen_range": 0,
    "rep_pen_decay": 0,
    "rep_pen_slope": 1,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0,
    "presence_pen": 0,
    "skew": 0,
    "do_sample": true,
    "early_stopping": false,
    "dynatemp": false,
    "min_temp": 0.8,
    "max_temp": 1.5,
    "dynatemp_exponent": 1,
    "smoothing_factor": 0,
    "smoothing_curve": 1,
    "dry_allowed_length": 2,
    "dry_multiplier": 0.55,
    "dry_base": 1.5,
    "dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]",
    "dry_penalty_last_n": 0,
    "add_bos_token": true,
    "ban_eos_token": false,
    "skip_special_tokens": false,
    "mirostat_mode": 0,
    "mirostat_tau": 2,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "json_schema": {},
    "banned_tokens": "",
    "sampler_priority": [
        "top_k",
        "top_p",
        "typical_p",
        "epsilon_cutoff",
        "eta_cutoff",
        "tfs",
        "top_a",
        "min_p",
        "mirostat",
        "quadratic_sampling",
        "dynamic_temperature",
        "temperature"
    ],
    "samplers": [
        "top_k",
        "tfs_z",
        "typical_p",
        "top_p",
        "min_p",
        "temperature"
    ],
    "ignore_eos_token": false,
    "spaces_between_special_tokens": true,
    "speculative_ngram": false,
    "sampler_order": [
        6,
        0,
        1,
        3,
        4,
        2,
        5
    ],
    "logit_bias": [],
    "xtc_threshold": 0.1,
    "xtc_probability": 0,
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "rep_pen_size": 0,
    "genamt": 800,
    "max_length": 16384
}
```
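
If you are applying these settings outside Silly Tavern, the core of Min-P is easy to replicate. Below is a minimal PyTorch sketch of the filtering step; the function name and shapes are illustrative, not any particular backend's API. Note that it applies temperature after filtering, matching `"temperature_last": true` above.

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float = 0.05) -> torch.Tensor:
    """Mask tokens whose probability falls below min_p * P(most likely token).

    Illustrative sketch of Min-P sampling; real backends combine this with
    temperature, DRY, and the other samplers in the configured order.
    """
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    return logits.masked_fill(probs < threshold, float("-inf"))

# Example: filter first, then sample at temperature 0.8 (temperature last).
logits = torch.randn(1, 152064)  # vocab_size taken from config.json
filtered = min_p_filter(logits, min_p=0.05)
probs = torch.softmax(filtered / 0.8, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```
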
# Prompting Tips

This merge seems to have preserved much of Athene's intelligence. I've found that it responds competently to out-of-character (OOC) prompts and even to requests to rewrite a previous reply with some additional guidance.
If you're not getting quite the results you wanted, consider backing up and trying a more descriptive prompt.
Like all current LLMs, this model isn't perfect and won't give you miracles, but you can generally expect it to work with you.

## Instruct Template

If you save this as a .json file, you can import it directly into Silly Tavern.

```json
{
    "wrap": false,
    "system_sequence": "<|im_start|>system\n",
    "stop_sequence": "<|im_end|>",
    "input_sequence": "<|im_start|>user\n",
    "output_sequence": "<|im_start|>assistant\n",
    "macro": true,
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "last_output_sequence": "<|im_start|>assistant\nRoleplaying Tips {\n- Only write as {{char}} for this story beat.\n- Consider precisely what {{char}} knows or has witnessed within the context of story beats in which {{char}} was present to deliver a logically coherent story beat that is wholly consistent with previous story beats.\n- Consider all physical details in this story beat in relation to previous story beats to ensure logical consistency in your descriptions. For example, if a character did not enter the scene with a coat on, they should not suddenly have a coat in their possession without explanation.\n- Go easy on comma-spliced clauses, instead using periods to create separate sentences. You can also try using transitions and connective words.\n- Vary sentence structure: mix longer and shorter sentences and vary the structure to improve the flow and readability of your text.\n}\n",
    "activation_regex": "",
    "skip_examples": true,
    "output_suffix": "<|im_end|>\n",
    "input_suffix": "<|im_end|>\n",
    "system_suffix": "<|im_end|>\n",
    "user_alignment_message": "",
    "last_system_sequence": "",
    "system_same_as_user": false,
    "first_input_sequence": "",
    "last_input_sequence": "",
    "names_behavior": "always",
    "names_force_groups": true,
    "name": "Evathene"
}
```
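
Outside Silly Tavern, these sequences amount to standard ChatML formatting. Here is a minimal sketch of assembling an equivalent prompt by hand in plain Python; the helper function and example messages are illustrative only.

```python
def build_chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a ChatML prompt from the sequences in the template above.

    `turns` is a list of (role, message) pairs, with role being "user" or
    "assistant". The trailing assistant header cues the model to respond;
    generation should stop at the "<|im_end|>" stop sequence.
    """
    parts = [f"<|im_start|>system\n{system}<|im_end|>\n"]
    for role, message in turns:
        parts.append(f"<|im_start|>{role}\n{message}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

# Illustrative usage:
prompt = build_chatml_prompt(
    "You are a roleplaying assistant.",
    [("user", "Describe the tavern as I walk in.")],
)
```
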
## System Prompt

Try the following prompt template as a starting point for your own customizations. **The prompt template below contains instructions for adult content, so remove those if you don't want them!** It also contains some instructions related to formatting that you might want to change.

If you save this as a .json file, you can import it directly into Silly Tavern.

```json
{
    "name": "Evathene",
    "content": "Below is an uncensored roleplaying narrative involving a human user, {{user}}, and a roleplaying assistant writing as {{char}}. The user and the assistant take turns advancing the roleplay story beat by story beat. The roleplaying assistant always follows these rules:\n\n<!-- Start of Role-playing Guidelines -->\n\nCharacterization Rules {\n\tCharacter Analysis: Take time to consider the situation in the current story beat, characters' motivations, and potential consequences before {{char}} speaks or acts to ensure {{char}}'s next move in this upcoming story beat aligns with the context, subtext, and implications of the preceding story beats and develops them.\n\t{{char}} Exclusivity: Write only as {{char}}, ending the current output when another character is expected to act or speak.\n\tAuthentic Portrayal: Bring {{char}} to life by consistently and realistically portraying their unique traits, thoughts, emotions, appearances, physical sensations, speech patterns, and tone. Ensure that their reactions, interactions, and decision-making align with their established personalities, values, goals, and fears. Refer to the `Role-play Context` for accurate information.\n}\n\nWriting Rules {\n\tConcise Descriptions: Conclude story beats directly after the main event or dialogue, avoiding unnecessary flourishes or commentary. Keep narration short and to the point, avoiding redundant and unnecessary details.\n\tAvoid Repetition: Ensure narration does not repeat information already conveyed through dialogue or action unless it supports developing the current story beat. Use a dynamic and varied vocabulary for impact.\n\tDialogue Formatting: Enclose spoken words in double quotes. \"This is spoken text,\" for example.\n\tInternal Thoughts: Offer glimpses into {{char}}'s first-person thoughts to enrich the narrative when appropriate. Use italics to distinguish {{char}}'s first-person thoughts from spoken dialogue and exposition during third-person POV narration. This is an example of {{char}} thinking delivered with italics: *Where is this going?* {{char}} wondered while navigating the corridors. One notable exception to this rule is {{user}}. {{user}} will not typically italicize thoughts. Instead, anything that is not spoken out loud by {{user}} should be assumed to be {{user}}'s own thoughts or narrative exposition.\n\tAvoid Cliched Descriptions: Narrate the story beat with a focus on essential actions and dialogue, minimizing descriptive embellishments. Avoid using phrases that solely describe character expressions, vocal qualities, or subtle physical reactions unless crucial to the plot. Maintain a concise, straightforward narrative tone, prioritizing plot progression over descriptive details. Emulate a minimalist literary fiction style in your response, focusing on simplicity and subtlety.\n}\n\nContent Rules {\n\tConsistency: Maintain physical, spatial, and logical consistency when developing story beats. Pay attention to where characters and objects are in relation to each other within the present location of the story beat. Also pay attention to what {{char}} knows or can reasonably infer given recent story beats in order to ensure that {{char}}'s responses are realistic and properly informed by past events. Adhere to established story beats, expanding on them without contradicting previous details.\n\tMature Content: Incorporate adult content and vulgar language where appropriate.\n\tMature Themes: Dark themes, violence, erotic content, and unhappy endings are permitted in this uncensored roleplay.\n\n}\n<!-- End of Role-playing Guidelines -->\n"
}
```

# Quantizations

Pending

# Licence and usage restrictions

[Nexusflow Research License](https://huggingface.co/Nexusflow/Athene-V2-Chat/blob/main/Nexusflow_Research_License_.pdf)

[Qwen License Agreement](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE)

**Disclaimer: Uncertain Licensing Terms**

This LLM is a merged model incorporating weights from multiple LLMs governed by their own distinct licenses. Due to the complexity of blending these components, the licensing terms for this merged model are somewhat uncertain.
By using this model, you acknowledge and accept the potential legal risks and uncertainties associated with its use. Any use beyond personal or research purposes, including commercial applications, may carry legal risks, and you assume full responsibility for compliance with all applicable licenses and laws.
I recommend consulting with legal counsel to ensure that your use of this model complies with all relevant licenses and regulations.

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the SLERP merge method.
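
SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than the straight line used by linear merging. The sketch below is a simplified NumPy illustration, not mergekit's actual implementation; mergekit also interpolates the `t` list in the recipe below across the layer stack, so this merge leans toward v1.1 (t = 0.35) at the outer layers and blends evenly (t = 0.5) in the middle.

```python
import numpy as np

def slerp(t: float, w1: np.ndarray, w2: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors.

    t=0 returns w1, t=1 returns w2; intermediate values follow the arc on
    the hypersphere instead of the chord used by linear interpolation.
    Simplified sketch: near-antiparallel tensors are not handled specially.
    """
    # Angle between the tensors, computed on normalized copies.
    v1 = w1.ravel() / (np.linalg.norm(w1) + eps)
    v2 = w2.ravel() / (np.linalg.norm(w2) + eps)
    dot = np.clip(v1 @ v2, -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel: fall back to plain lerp
        return (1 - t) * w1 + t * w2
    s = np.sin(theta)
    coef1 = np.sin((1 - t) * theta) / s
    coef2 = np.sin(t * theta) / s
    return (coef1 * w1.ravel() + coef2 * w2.ravel()).reshape(w1.shape)
```
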
### Models Merged

The following models were included in the merge:
* sophosympatheia/Evathene-v1.1
* sophosympatheia/Evathene-v1.2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: sophosympatheia/Evathene-v1.1
  - model: sophosympatheia/Evathene-v1.2
merge_method: slerp
base_model: sophosympatheia/Evathene-v1.1
parameters:
  t:
    - value: [0.35, 0.5, 0.35]
dtype: bfloat16
```
config.json
ADDED
@@ -0,0 +1,40 @@
{
  "_name_or_path": "evathene-v1.3",
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "hidden_act": "silu",
  "hidden_size": 8192,
  "initializer_range": 0.02,
  "intermediate_size": 29568,
  "max_position_embeddings": 131072,
  "max_window_layers": 80,
  "model_type": "qwen2",
  "num_attention_heads": 64,
  "num_hidden_layers": 80,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.46.2",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 152064,
  "quantization_config": {
    "quant_method": "exl2",
    "version": "0.2.3",
    "bits": 8.0,
    "head_bits": 8,
    "calibration": {
      "rows": 115,
      "length": 2048,
      "dataset": "(default)"
    }
  }
}
evathene-v1.3-exl2-measurement-default.json
ADDED
The diff for this file is too large to render.
huggingface-metadata.txt
ADDED
@@ -0,0 +1,35 @@
url: https://huggingface.co/sophosympatheia/Evathene-v1.3
branch: main
download date: 2024-12-03 12:10:59
sha256sum:
    f651d8e2ea89a314cf66706e0f19c107c8f70c6db28912329d6facc2efdb8e62 model-00001-of-00031.safetensors
    1f686d38d2b763a42bab4a9f311d09b21fe1c026bf6c6056f70eafc85d7086e8 model-00002-of-00031.safetensors
    bf5c86cd3674fa160554422cd7b0646d6a1199e8029483ad6599e82a41c1f224 model-00003-of-00031.safetensors
    8f422e6d06df7a8284e7e264cf955806baed9d36a311c1eb84712f54dd2e6335 model-00004-of-00031.safetensors
    89862c3144a2273d041a21171282655b0277ac9bd6564c485d4278cab516e710 model-00005-of-00031.safetensors
    d83de9484a41e63592e1166079a2b93389eb54cf46327174bc3f629586e68422 model-00006-of-00031.safetensors
    637b4a6f20e03efff2321ca4b99145e3d83ae6ecb7cefdde7ef57bf5fdfd2064 model-00007-of-00031.safetensors
    7d49806c17ea0d8436d45a65e8376ce17e2bfe5aa77fdc106426e1147d781c94 model-00008-of-00031.safetensors
    91cf37533c5d0bb2a9ad9d5afbeaa70a1ed771807c41ba3d5924d070ed82312b model-00009-of-00031.safetensors
    a2674c363f08b77d2c7046d02ba31f090116f1f7bf4c1ed4016ac822218a26c5 model-00010-of-00031.safetensors
    dc97ae720ad5186590f58feb23f350df5c1704a5e4c3334c264c91f1bcbcbb1b model-00011-of-00031.safetensors
    c8500382e8f32d75875d0b00defb6299f1f93747b5e3b47b16bd7df54e397f07 model-00012-of-00031.safetensors
    1168aa9508df844d8a4721b8801709c9a73cae283ed48c44025bfd10fd013b58 model-00013-of-00031.safetensors
    3884326545f979fe62e6cfe44809be30d28101dcced7243ffdbc38cfe3b70416 model-00014-of-00031.safetensors
    bd2b95264db61cf0fd404fd3480ab1cf7520404308c5358ebc8b96a6b41a9e5a model-00015-of-00031.safetensors
    952071cd3a9b3e6f84e085be43e7727dab186e5707f45ee76bc67d82d1dfcca7 model-00016-of-00031.safetensors
    f39e73c5f28d19e735a4e7b20501f5d15b4aff171e483cf6639aca193ba65e65 model-00017-of-00031.safetensors
    c8c707442f40b72610d347255a5809ccb7937b2c8cec88fb8b4f09e69560ea02 model-00018-of-00031.safetensors
    18b3aacba8db749d86be090762427e1574a018c7f254f742b051080982bcd55c model-00019-of-00031.safetensors
    86c931de73dcc157706e9db0ab6a8a88c2e0fdf78a7375ad2e33592d86c1f33b model-00020-of-00031.safetensors
    e5cacd7297e4ec52583a7749681b6ca5aead1e857da6470ad672ff559ea91e2d model-00021-of-00031.safetensors
    3902617ae409c4bd97cb1c23f31404012129849ef4b09c2e293284edd0625265 model-00022-of-00031.safetensors
    ed24ceb01bef834e153464810016cddea3b5815fa8febd2c7c14ce2c1a5a766b model-00023-of-00031.safetensors
    cfe471d146895ea009c7b61f7ee74212152533d361c78a6a6c860f6df777ed0b model-00024-of-00031.safetensors
    6f3fcbe738fead2c933bdee3561d85ac27b7a050461ad1534481fbc413672ea8 model-00025-of-00031.safetensors
    70cca3cf4271a2ecad46153727240d0d61d35e6490914813169c07d072f9d1a8 model-00026-of-00031.safetensors
    8295e89420ca627c938ef31b471e15eba6664b13ff6be01eef9e5ec9dfaa70c7 model-00027-of-00031.safetensors
    b75fc954754a19d99c564f3a2e2b9a2ff5e20a54c378527dca224618cb74fb60 model-00028-of-00031.safetensors
    431b0e9a67559c52e79c908920eee146b49b89c54341f87bf5b02efe2b32e637 model-00029-of-00031.safetensors
    5c4086d5a13f7c0e86f556a7382427949eec28e308febed08eb222737bcc074d model-00030-of-00031.safetensors
    39bf218e755ff1d7457d2f9a39aaf92a5197f3a85a030908b4f7bcb7f3cb9560 model-00031-of-00031.safetensors
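
To confirm a local download matches these digests, here is a small sketch using only the Python standard library; the local directory name is illustrative.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large shards never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare each shard in a local download directory against the list above.
model_dir = Path("Evathene-v1.3")  # illustrative local path
for line in Path("huggingface-metadata.txt").read_text().splitlines():
    parts = line.split()
    if len(parts) == 2 and parts[1].endswith(".safetensors"):
        expected, name = parts
        actual = sha256_of(model_dir / name)
        print(name, "OK" if actual == expected else "MISMATCH")
```
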
model.safetensors.index.json
ADDED
@@ -0,0 +1 @@
{"metadata": {"mergekit_version": "0.0.5.1", "total_size": 145412407296}, "weight_map": {"lm_head.weight": "model-00001-of-00031.safetensors", "model.embed_tokens.weight": "model-00001-of-00031.safetensors", "model.layers.0.input_layernorm.weight": "model-00001-of-00031.safetensors", "model.layers.0.mlp.down_proj.weight": "model-00002-of-00031.safetensors", "model.layers.0.mlp.gate_proj.weight": "model-00002-of-00031.safetensors", "model.layers.0.mlp.up_proj.weight": "model-00002-of-00031.safetensors", "model.layers.0.post_attention_layernorm.weight": "model-00002-of-00031.safetensors", "model.layers.0.self_attn.k_proj.bias": "model-00002-of-00031.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00002-of-00031.safetensors", "model.layers.0.self_attn.o_proj.weight": "model-00002-of-00031.safetensors", "model.layers.0.self_attn.q_proj.bias": "model-00002-of-00031.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00002-of-00031.safetensors", "model.layers.0.self_attn.v_proj.bias": "model-00002-of-00031.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00002-of-00031.safetensors", "model.layers.1.input_layernorm.weight": "model-00002-of-00031.safetensors", "model.layers.1.mlp.down_proj.weight": "model-00002-of-00031.safetensors", "model.layers.1.mlp.gate_proj.weight": "model-00002-of-00031.safetensors", "model.layers.1.mlp.up_proj.weight": "model-00002-of-00031.safetensors", "model.layers.1.post_attention_layernorm.weight": "model-00002-of-00031.safetensors", "model.layers.1.self_attn.k_proj.bias": "model-00002-of-00031.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00002-of-00031.safetensors", "model.layers.1.self_attn.o_proj.weight": "model-00002-of-00031.safetensors", "model.layers.1.self_attn.q_proj.bias": "model-00002-of-00031.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00002-of-00031.safetensors", "model.layers.1.self_attn.v_proj.bias": "model-00002-of-00031.safetensors", "model.layers.1.self_attn.v_proj.weight": "model-00002-of-00031.safetensors", "model.layers.10.input_layernorm.weight": "model-00002-of-00031.safetensors", "model.layers.10.mlp.down_proj.weight": "model-00002-of-00031.safetensors", "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00031.safetensors", "model.layers.10.mlp.up_proj.weight": "model-00002-of-00031.safetensors", "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00031.safetensors", "model.layers.10.self_attn.k_proj.bias": "model-00002-of-00031.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00031.safetensors", "model.layers.10.self_attn.o_proj.weight": "model-00003-of-00031.safetensors", "model.layers.10.self_attn.q_proj.bias": "model-00003-of-00031.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00003-of-00031.safetensors", "model.layers.10.self_attn.v_proj.bias": "model-00003-of-00031.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00003-of-00031.safetensors", "model.layers.11.input_layernorm.weight": "model-00003-of-00031.safetensors", "model.layers.11.mlp.down_proj.weight": "model-00003-of-00031.safetensors", "model.layers.11.mlp.gate_proj.weight": "model-00003-of-00031.safetensors", "model.layers.11.mlp.up_proj.weight": "model-00003-of-00031.safetensors", "model.layers.11.post_attention_layernorm.weight": "model-00003-of-00031.safetensors", "model.layers.11.self_attn.k_proj.bias": "model-00003-of-00031.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00031.safetensors", 
"model.layers.11.self_attn.o_proj.weight": "model-00003-of-00031.safetensors", "model.layers.11.self_attn.q_proj.bias": "model-00003-of-00031.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00003-of-00031.safetensors", "model.layers.11.self_attn.v_proj.bias": "model-00003-of-00031.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00031.safetensors", "model.layers.12.input_layernorm.weight": "model-00003-of-00031.safetensors", "model.layers.12.mlp.down_proj.weight": "model-00003-of-00031.safetensors", "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00031.safetensors", "model.layers.12.mlp.up_proj.weight": "model-00003-of-00031.safetensors", "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00031.safetensors", "model.layers.12.self_attn.k_proj.bias": "model-00003-of-00031.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00003-of-00031.safetensors", "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00031.safetensors", "model.layers.12.self_attn.q_proj.bias": "model-00003-of-00031.safetensors", "model.layers.12.self_attn.q_proj.weight": "model-00003-of-00031.safetensors", "model.layers.12.self_attn.v_proj.bias": "model-00003-of-00031.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00031.safetensors", "model.layers.13.input_layernorm.weight": "model-00003-of-00031.safetensors", "model.layers.13.mlp.down_proj.weight": "model-00003-of-00031.safetensors", "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00031.safetensors", "model.layers.13.mlp.up_proj.weight": "model-00004-of-00031.safetensors", "model.layers.13.post_attention_layernorm.weight": "model-00004-of-00031.safetensors", "model.layers.13.self_attn.k_proj.bias": "model-00004-of-00031.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00004-of-00031.safetensors", "model.layers.13.self_attn.o_proj.weight": "model-00004-of-00031.safetensors", "model.layers.13.self_attn.q_proj.bias": "model-00004-of-00031.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00004-of-00031.safetensors", "model.layers.13.self_attn.v_proj.bias": "model-00004-of-00031.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00004-of-00031.safetensors", "model.layers.14.input_layernorm.weight": "model-00004-of-00031.safetensors", "model.layers.14.mlp.down_proj.weight": "model-00004-of-00031.safetensors", "model.layers.14.mlp.gate_proj.weight": "model-00004-of-00031.safetensors", "model.layers.14.mlp.up_proj.weight": "model-00004-of-00031.safetensors", "model.layers.14.post_attention_layernorm.weight": "model-00004-of-00031.safetensors", "model.layers.14.self_attn.k_proj.bias": "model-00004-of-00031.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00004-of-00031.safetensors", "model.layers.14.self_attn.o_proj.weight": "model-00004-of-00031.safetensors", "model.layers.14.self_attn.q_proj.bias": "model-00004-of-00031.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00004-of-00031.safetensors", "model.layers.14.self_attn.v_proj.bias": "model-00004-of-00031.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00004-of-00031.safetensors", "model.layers.15.input_layernorm.weight": "model-00004-of-00031.safetensors", "model.layers.15.mlp.down_proj.weight": "model-00004-of-00031.safetensors", "model.layers.15.mlp.gate_proj.weight": "model-00004-of-00031.safetensors", "model.layers.15.mlp.up_proj.weight": "model-00004-of-00031.safetensors", 
"model.layers.15.post_attention_layernorm.weight": "model-00004-of-00031.safetensors", "model.layers.15.self_attn.k_proj.bias": "model-00004-of-00031.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00004-of-00031.safetensors", "model.layers.15.self_attn.o_proj.weight": "model-00004-of-00031.safetensors", "model.layers.15.self_attn.q_proj.bias": "model-00004-of-00031.safetensors", "model.layers.15.self_attn.q_proj.weight": "model-00004-of-00031.safetensors", "model.layers.15.self_attn.v_proj.bias": "model-00004-of-00031.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00004-of-00031.safetensors", "model.layers.16.input_layernorm.weight": "model-00004-of-00031.safetensors", "model.layers.16.mlp.down_proj.weight": "model-00004-of-00031.safetensors", "model.layers.16.mlp.gate_proj.weight": "model-00005-of-00031.safetensors", "model.layers.16.mlp.up_proj.weight": "model-00005-of-00031.safetensors", "model.layers.16.post_attention_layernorm.weight": "model-00005-of-00031.safetensors", "model.layers.16.self_attn.k_proj.bias": "model-00005-of-00031.safetensors", "model.layers.16.self_attn.k_proj.weight": "model-00005-of-00031.safetensors", "model.layers.16.self_attn.o_proj.weight": "model-00005-of-00031.safetensors", "model.layers.16.self_attn.q_proj.bias": "model-00005-of-00031.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00005-of-00031.safetensors", "model.layers.16.self_attn.v_proj.bias": "model-00005-of-00031.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00005-of-00031.safetensors", "model.layers.17.input_layernorm.weight": "model-00005-of-00031.safetensors", "model.layers.17.mlp.down_proj.weight": "model-00005-of-00031.safetensors", "model.layers.17.mlp.gate_proj.weight": "model-00005-of-00031.safetensors", "model.layers.17.mlp.up_proj.weight": "model-00005-of-00031.safetensors", "model.layers.17.post_attention_layernorm.weight": "model-00005-of-00031.safetensors", "model.layers.17.self_attn.k_proj.bias": "model-00005-of-00031.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00005-of-00031.safetensors", "model.layers.17.self_attn.o_proj.weight": "model-00005-of-00031.safetensors", "model.layers.17.self_attn.q_proj.bias": "model-00005-of-00031.safetensors", "model.layers.17.self_attn.q_proj.weight": "model-00005-of-00031.safetensors", "model.layers.17.self_attn.v_proj.bias": "model-00005-of-00031.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00005-of-00031.safetensors", "model.layers.18.input_layernorm.weight": "model-00005-of-00031.safetensors", "model.layers.18.mlp.down_proj.weight": "model-00005-of-00031.safetensors", "model.layers.18.mlp.gate_proj.weight": "model-00005-of-00031.safetensors", "model.layers.18.mlp.up_proj.weight": "model-00005-of-00031.safetensors", "model.layers.18.post_attention_layernorm.weight": "model-00005-of-00031.safetensors", "model.layers.18.self_attn.k_proj.bias": "model-00005-of-00031.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00005-of-00031.safetensors", "model.layers.18.self_attn.o_proj.weight": "model-00005-of-00031.safetensors", "model.layers.18.self_attn.q_proj.bias": "model-00005-of-00031.safetensors", "model.layers.18.self_attn.q_proj.weight": "model-00005-of-00031.safetensors", "model.layers.18.self_attn.v_proj.bias": "model-00005-of-00031.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00005-of-00031.safetensors", "model.layers.19.input_layernorm.weight": "model-00005-of-00031.safetensors", 
"model.layers.19.mlp.down_proj.weight": "model-00006-of-00031.safetensors", "model.layers.19.mlp.gate_proj.weight": "model-00006-of-00031.safetensors", "model.layers.19.mlp.up_proj.weight": "model-00006-of-00031.safetensors", "model.layers.19.post_attention_layernorm.weight": "model-00006-of-00031.safetensors", "model.layers.19.self_attn.k_proj.bias": "model-00006-of-00031.safetensors", "model.layers.19.self_attn.k_proj.weight": "model-00006-of-00031.safetensors", "model.layers.19.self_attn.o_proj.weight": "model-00006-of-00031.safetensors", "model.layers.19.self_attn.q_proj.bias": "model-00006-of-00031.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00006-of-00031.safetensors", "model.layers.19.self_attn.v_proj.bias": "model-00006-of-00031.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00006-of-00031.safetensors", "model.layers.2.input_layernorm.weight": "model-00006-of-00031.safetensors", "model.layers.2.mlp.down_proj.weight": "model-00006-of-00031.safetensors", "model.layers.2.mlp.gate_proj.weight": "model-00006-of-00031.safetensors", "model.layers.2.mlp.up_proj.weight": "model-00006-of-00031.safetensors", "model.layers.2.post_attention_layernorm.weight": "model-00006-of-00031.safetensors", "model.layers.2.self_attn.k_proj.bias": "model-00006-of-00031.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00006-of-00031.safetensors", "model.layers.2.self_attn.o_proj.weight": "model-00006-of-00031.safetensors", "model.layers.2.self_attn.q_proj.bias": "model-00006-of-00031.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00006-of-00031.safetensors", "model.layers.2.self_attn.v_proj.bias": "model-00006-of-00031.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00006-of-00031.safetensors", "model.layers.20.input_layernorm.weight": "model-00006-of-00031.safetensors", "model.layers.20.mlp.down_proj.weight": "model-00006-of-00031.safetensors", "model.layers.20.mlp.gate_proj.weight": "model-00006-of-00031.safetensors", "model.layers.20.mlp.up_proj.weight": "model-00006-of-00031.safetensors", "model.layers.20.post_attention_layernorm.weight": "model-00006-of-00031.safetensors", "model.layers.20.self_attn.k_proj.bias": "model-00006-of-00031.safetensors", "model.layers.20.self_attn.k_proj.weight": "model-00006-of-00031.safetensors", "model.layers.20.self_attn.o_proj.weight": "model-00007-of-00031.safetensors", "model.layers.20.self_attn.q_proj.bias": "model-00007-of-00031.safetensors", "model.layers.20.self_attn.q_proj.weight": "model-00007-of-00031.safetensors", "model.layers.20.self_attn.v_proj.bias": "model-00007-of-00031.safetensors", "model.layers.20.self_attn.v_proj.weight": "model-00007-of-00031.safetensors", "model.layers.21.input_layernorm.weight": "model-00007-of-00031.safetensors", "model.layers.21.mlp.down_proj.weight": "model-00007-of-00031.safetensors", "model.layers.21.mlp.gate_proj.weight": "model-00007-of-00031.safetensors", "model.layers.21.mlp.up_proj.weight": "model-00007-of-00031.safetensors", "model.layers.21.post_attention_layernorm.weight": "model-00007-of-00031.safetensors", "model.layers.21.self_attn.k_proj.bias": "model-00007-of-00031.safetensors", "model.layers.21.self_attn.k_proj.weight": "model-00007-of-00031.safetensors", "model.layers.21.self_attn.o_proj.weight": "model-00007-of-00031.safetensors", "model.layers.21.self_attn.q_proj.bias": "model-00007-of-00031.safetensors", "model.layers.21.self_attn.q_proj.weight": "model-00007-of-00031.safetensors", "model.layers.21.self_attn.v_proj.bias": 
"model-00007-of-00031.safetensors", "model.layers.21.self_attn.v_proj.weight": "model-00007-of-00031.safetensors", "model.layers.22.input_layernorm.weight": "model-00007-of-00031.safetensors", "model.layers.22.mlp.down_proj.weight": "model-00007-of-00031.safetensors", "model.layers.22.mlp.gate_proj.weight": "model-00007-of-00031.safetensors", "model.layers.22.mlp.up_proj.weight": "model-00007-of-00031.safetensors", "model.layers.22.post_attention_layernorm.weight": "model-00007-of-00031.safetensors", "model.layers.22.self_attn.k_proj.bias": "model-00007-of-00031.safetensors", "model.layers.22.self_attn.k_proj.weight": "model-00007-of-00031.safetensors", "model.layers.22.self_attn.o_proj.weight": "model-00007-of-00031.safetensors", "model.layers.22.self_attn.q_proj.bias": "model-00007-of-00031.safetensors", "model.layers.22.self_attn.q_proj.weight": "model-00007-of-00031.safetensors", "model.layers.22.self_attn.v_proj.bias": "model-00007-of-00031.safetensors", "model.layers.22.self_attn.v_proj.weight": "model-00007-of-00031.safetensors", "model.layers.23.input_layernorm.weight": "model-00007-of-00031.safetensors", "model.layers.23.mlp.down_proj.weight": "model-00007-of-00031.safetensors", "model.layers.23.mlp.gate_proj.weight": "model-00007-of-00031.safetensors", "model.layers.23.mlp.up_proj.weight": "model-00008-of-00031.safetensors", "model.layers.23.post_attention_layernorm.weight": "model-00008-of-00031.safetensors", "model.layers.23.self_attn.k_proj.bias": "model-00008-of-00031.safetensors", "model.layers.23.self_attn.k_proj.weight": "model-00008-of-00031.safetensors", "model.layers.23.self_attn.o_proj.weight": "model-00008-of-00031.safetensors", "model.layers.23.self_attn.q_proj.bias": "model-00008-of-00031.safetensors", "model.layers.23.self_attn.q_proj.weight": "model-00008-of-00031.safetensors", "model.layers.23.self_attn.v_proj.bias": "model-00008-of-00031.safetensors", "model.layers.23.self_attn.v_proj.weight": "model-00008-of-00031.safetensors", "model.layers.24.input_layernorm.weight": "model-00008-of-00031.safetensors", "model.layers.24.mlp.down_proj.weight": "model-00008-of-00031.safetensors", "model.layers.24.mlp.gate_proj.weight": "model-00008-of-00031.safetensors", "model.layers.24.mlp.up_proj.weight": "model-00008-of-00031.safetensors", "model.layers.24.post_attention_layernorm.weight": "model-00008-of-00031.safetensors", "model.layers.24.self_attn.k_proj.bias": "model-00008-of-00031.safetensors", "model.layers.24.self_attn.k_proj.weight": "model-00008-of-00031.safetensors", "model.layers.24.self_attn.o_proj.weight": "model-00008-of-00031.safetensors", "model.layers.24.self_attn.q_proj.bias": "model-00008-of-00031.safetensors", "model.layers.24.self_attn.q_proj.weight": "model-00008-of-00031.safetensors", "model.layers.24.self_attn.v_proj.bias": "model-00008-of-00031.safetensors", "model.layers.24.self_attn.v_proj.weight": "model-00008-of-00031.safetensors", "model.layers.25.input_layernorm.weight": "model-00008-of-00031.safetensors", "model.layers.25.mlp.down_proj.weight": "model-00008-of-00031.safetensors", "model.layers.25.mlp.gate_proj.weight": "model-00008-of-00031.safetensors", "model.layers.25.mlp.up_proj.weight": "model-00008-of-00031.safetensors", "model.layers.25.post_attention_layernorm.weight": "model-00008-of-00031.safetensors", "model.layers.25.self_attn.k_proj.bias": "model-00008-of-00031.safetensors", "model.layers.25.self_attn.k_proj.weight": "model-00008-of-00031.safetensors", "model.layers.25.self_attn.o_proj.weight": "model-00008-of-00031.safetensors", 
"model.layers.25.self_attn.q_proj.bias": "model-00008-of-00031.safetensors", "model.layers.25.self_attn.q_proj.weight": "model-00008-of-00031.safetensors", "model.layers.25.self_attn.v_proj.bias": "model-00008-of-00031.safetensors", "model.layers.25.self_attn.v_proj.weight": "model-00008-of-00031.safetensors", "model.layers.26.input_layernorm.weight": "model-00008-of-00031.safetensors", "model.layers.26.mlp.down_proj.weight": "model-00008-of-00031.safetensors", "model.layers.26.mlp.gate_proj.weight": "model-00009-of-00031.safetensors", "model.layers.26.mlp.up_proj.weight": "model-00009-of-00031.safetensors", "model.layers.26.post_attention_layernorm.weight": "model-00009-of-00031.safetensors", "model.layers.26.self_attn.k_proj.bias": "model-00009-of-00031.safetensors", "model.layers.26.self_attn.k_proj.weight": "model-00009-of-00031.safetensors", "model.layers.26.self_attn.o_proj.weight": "model-00009-of-00031.safetensors", "model.layers.26.self_attn.q_proj.bias": "model-00009-of-00031.safetensors", "model.layers.26.self_attn.q_proj.weight": "model-00009-of-00031.safetensors", "model.layers.26.self_attn.v_proj.bias": "model-00009-of-00031.safetensors", "model.layers.26.self_attn.v_proj.weight": "model-00009-of-00031.safetensors", "model.layers.27.input_layernorm.weight": "model-00009-of-00031.safetensors", "model.layers.27.mlp.down_proj.weight": "model-00009-of-00031.safetensors", "model.layers.27.mlp.gate_proj.weight": "model-00009-of-00031.safetensors", "model.layers.27.mlp.up_proj.weight": "model-00009-of-00031.safetensors", "model.layers.27.post_attention_layernorm.weight": "model-00009-of-00031.safetensors", "model.layers.27.self_attn.k_proj.bias": "model-00009-of-00031.safetensors", "model.layers.27.self_attn.k_proj.weight": "model-00009-of-00031.safetensors", "model.layers.27.self_attn.o_proj.weight": "model-00009-of-00031.safetensors", "model.layers.27.self_attn.q_proj.bias": "model-00009-of-00031.safetensors", "model.layers.27.self_attn.q_proj.weight": "model-00009-of-00031.safetensors", "model.layers.27.self_attn.v_proj.bias": "model-00009-of-00031.safetensors", "model.layers.27.self_attn.v_proj.weight": "model-00009-of-00031.safetensors", "model.layers.28.input_layernorm.weight": "model-00009-of-00031.safetensors", "model.layers.28.mlp.down_proj.weight": "model-00009-of-00031.safetensors", "model.layers.28.mlp.gate_proj.weight": "model-00009-of-00031.safetensors", "model.layers.28.mlp.up_proj.weight": "model-00009-of-00031.safetensors", "model.layers.28.post_attention_layernorm.weight": "model-00009-of-00031.safetensors", "model.layers.28.self_attn.k_proj.bias": "model-00009-of-00031.safetensors", "model.layers.28.self_attn.k_proj.weight": "model-00009-of-00031.safetensors", "model.layers.28.self_attn.o_proj.weight": "model-00009-of-00031.safetensors", "model.layers.28.self_attn.q_proj.bias": "model-00009-of-00031.safetensors", "model.layers.28.self_attn.q_proj.weight": "model-00009-of-00031.safetensors", "model.layers.28.self_attn.v_proj.bias": "model-00009-of-00031.safetensors", "model.layers.28.self_attn.v_proj.weight": "model-00009-of-00031.safetensors", "model.layers.29.input_layernorm.weight": "model-00009-of-00031.safetensors", "model.layers.29.mlp.down_proj.weight": "model-00010-of-00031.safetensors", "model.layers.29.mlp.gate_proj.weight": "model-00010-of-00031.safetensors", "model.layers.29.mlp.up_proj.weight": "model-00010-of-00031.safetensors", "model.layers.29.post_attention_layernorm.weight": "model-00010-of-00031.safetensors", 
"model.layers.29.self_attn.k_proj.bias": "model-00010-of-00031.safetensors", "model.layers.29.self_attn.k_proj.weight": "model-00010-of-00031.safetensors", "model.layers.29.self_attn.o_proj.weight": "model-00010-of-00031.safetensors", "model.layers.29.self_attn.q_proj.bias": "model-00010-of-00031.safetensors", "model.layers.29.self_attn.q_proj.weight": "model-00010-of-00031.safetensors", "model.layers.29.self_attn.v_proj.bias": "model-00010-of-00031.safetensors", "model.layers.29.self_attn.v_proj.weight": "model-00010-of-00031.safetensors", "model.layers.3.input_layernorm.weight": "model-00010-of-00031.safetensors", "model.layers.3.mlp.down_proj.weight": "model-00010-of-00031.safetensors", "model.layers.3.mlp.gate_proj.weight": "model-00010-of-00031.safetensors", "model.layers.3.mlp.up_proj.weight": "model-00010-of-00031.safetensors", "model.layers.3.post_attention_layernorm.weight": "model-00010-of-00031.safetensors", "model.layers.3.self_attn.k_proj.bias": "model-00010-of-00031.safetensors", "model.layers.3.self_attn.k_proj.weight": "model-00010-of-00031.safetensors", "model.layers.3.self_attn.o_proj.weight": "model-00010-of-00031.safetensors", "model.layers.3.self_attn.q_proj.bias": "model-00010-of-00031.safetensors", "model.layers.3.self_attn.q_proj.weight": "model-00010-of-00031.safetensors", "model.layers.3.self_attn.v_proj.bias": "model-00010-of-00031.safetensors", "model.layers.3.self_attn.v_proj.weight": "model-00010-of-00031.safetensors", "model.layers.30.input_layernorm.weight": "model-00010-of-00031.safetensors", "model.layers.30.mlp.down_proj.weight": "model-00010-of-00031.safetensors", "model.layers.30.mlp.gate_proj.weight": "model-00010-of-00031.safetensors", "model.layers.30.mlp.up_proj.weight": "model-00010-of-00031.safetensors", "model.layers.30.post_attention_layernorm.weight": "model-00010-of-00031.safetensors", "model.layers.30.self_attn.k_proj.bias": "model-00010-of-00031.safetensors", "model.layers.30.self_attn.k_proj.weight": "model-00010-of-00031.safetensors", "model.layers.30.self_attn.o_proj.weight": "model-00011-of-00031.safetensors", "model.layers.30.self_attn.q_proj.bias": "model-00011-of-00031.safetensors", "model.layers.30.self_attn.q_proj.weight": "model-00011-of-00031.safetensors", "model.layers.30.self_attn.v_proj.bias": "model-00011-of-00031.safetensors", "model.layers.30.self_attn.v_proj.weight": "model-00011-of-00031.safetensors", "model.layers.31.input_layernorm.weight": "model-00011-of-00031.safetensors", "model.layers.31.mlp.down_proj.weight": "model-00011-of-00031.safetensors", "model.layers.31.mlp.gate_proj.weight": "model-00011-of-00031.safetensors", "model.layers.31.mlp.up_proj.weight": "model-00011-of-00031.safetensors", "model.layers.31.post_attention_layernorm.weight": "model-00011-of-00031.safetensors", "model.layers.31.self_attn.k_proj.bias": "model-00011-of-00031.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00011-of-00031.safetensors", "model.layers.31.self_attn.o_proj.weight": "model-00011-of-00031.safetensors", "model.layers.31.self_attn.q_proj.bias": "model-00011-of-00031.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00011-of-00031.safetensors", "model.layers.31.self_attn.v_proj.bias": "model-00011-of-00031.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00011-of-00031.safetensors", "model.layers.32.input_layernorm.weight": "model-00011-of-00031.safetensors", "model.layers.32.mlp.down_proj.weight": "model-00011-of-00031.safetensors", "model.layers.32.mlp.gate_proj.weight": 
"model-00011-of-00031.safetensors", "model.layers.32.mlp.up_proj.weight": "model-00011-of-00031.safetensors", "model.layers.32.post_attention_layernorm.weight": "model-00011-of-00031.safetensors", "model.layers.32.self_attn.k_proj.bias": "model-00011-of-00031.safetensors", "model.layers.32.self_attn.k_proj.weight": "model-00011-of-00031.safetensors", "model.layers.32.self_attn.o_proj.weight": "model-00011-of-00031.safetensors", "model.layers.32.self_attn.q_proj.bias": "model-00011-of-00031.safetensors", "model.layers.32.self_attn.q_proj.weight": "model-00011-of-00031.safetensors", "model.layers.32.self_attn.v_proj.bias": "model-00011-of-00031.safetensors", "model.layers.32.self_attn.v_proj.weight": "model-00011-of-00031.safetensors", "model.layers.33.input_layernorm.weight": "model-00011-of-00031.safetensors", "model.layers.33.mlp.down_proj.weight": "model-00011-of-00031.safetensors", "model.layers.33.mlp.gate_proj.weight": "model-00011-of-00031.safetensors", "model.layers.33.mlp.up_proj.weight": "model-00012-of-00031.safetensors", "model.layers.33.post_attention_layernorm.weight": "model-00012-of-00031.safetensors", "model.layers.33.self_attn.k_proj.bias": "model-00012-of-00031.safetensors", "model.layers.33.self_attn.k_proj.weight": "model-00012-of-00031.safetensors", "model.layers.33.self_attn.o_proj.weight": "model-00012-of-00031.safetensors", "model.layers.33.self_attn.q_proj.bias": "model-00012-of-00031.safetensors", "model.layers.33.self_attn.q_proj.weight": "model-00012-of-00031.safetensors", "model.layers.33.self_attn.v_proj.bias": "model-00012-of-00031.safetensors", "model.layers.33.self_attn.v_proj.weight": "model-00012-of-00031.safetensors", "model.layers.34.input_layernorm.weight": "model-00012-of-00031.safetensors", "model.layers.34.mlp.down_proj.weight": "model-00012-of-00031.safetensors", "model.layers.34.mlp.gate_proj.weight": "model-00012-of-00031.safetensors", "model.layers.34.mlp.up_proj.weight": "model-00012-of-00031.safetensors", "model.layers.34.post_attention_layernorm.weight": "model-00012-of-00031.safetensors", "model.layers.34.self_attn.k_proj.bias": "model-00012-of-00031.safetensors", "model.layers.34.self_attn.k_proj.weight": "model-00012-of-00031.safetensors", "model.layers.34.self_attn.o_proj.weight": "model-00012-of-00031.safetensors", "model.layers.34.self_attn.q_proj.bias": "model-00012-of-00031.safetensors", "model.layers.34.self_attn.q_proj.weight": "model-00012-of-00031.safetensors", "model.layers.34.self_attn.v_proj.bias": "model-00012-of-00031.safetensors", "model.layers.34.self_attn.v_proj.weight": "model-00012-of-00031.safetensors", "model.layers.35.input_layernorm.weight": "model-00012-of-00031.safetensors", "model.layers.35.mlp.down_proj.weight": "model-00012-of-00031.safetensors", "model.layers.35.mlp.gate_proj.weight": "model-00012-of-00031.safetensors", "model.layers.35.mlp.up_proj.weight": "model-00012-of-00031.safetensors", "model.layers.35.post_attention_layernorm.weight": "model-00012-of-00031.safetensors", "model.layers.35.self_attn.k_proj.bias": "model-00012-of-00031.safetensors", "model.layers.35.self_attn.k_proj.weight": "model-00012-of-00031.safetensors", "model.layers.35.self_attn.o_proj.weight": "model-00012-of-00031.safetensors", "model.layers.35.self_attn.q_proj.bias": "model-00012-of-00031.safetensors", "model.layers.35.self_attn.q_proj.weight": "model-00012-of-00031.safetensors", "model.layers.35.self_attn.v_proj.bias": "model-00012-of-00031.safetensors", "model.layers.35.self_attn.v_proj.weight": 
"model-00012-of-00031.safetensors", "model.layers.36.input_layernorm.weight": "model-00012-of-00031.safetensors", "model.layers.36.mlp.down_proj.weight": "model-00012-of-00031.safetensors", "model.layers.36.mlp.gate_proj.weight": "model-00013-of-00031.safetensors", "model.layers.36.mlp.up_proj.weight": "model-00013-of-00031.safetensors", "model.layers.36.post_attention_layernorm.weight": "model-00013-of-00031.safetensors", "model.layers.36.self_attn.k_proj.bias": "model-00013-of-00031.safetensors", "model.layers.36.self_attn.k_proj.weight": "model-00013-of-00031.safetensors", "model.layers.36.self_attn.o_proj.weight": "model-00013-of-00031.safetensors", "model.layers.36.self_attn.q_proj.bias": "model-00013-of-00031.safetensors", "model.layers.36.self_attn.q_proj.weight": "model-00013-of-00031.safetensors", "model.layers.36.self_attn.v_proj.bias": "model-00013-of-00031.safetensors", "model.layers.36.self_attn.v_proj.weight": "model-00013-of-00031.safetensors", "model.layers.37.input_layernorm.weight": "model-00013-of-00031.safetensors", "model.layers.37.mlp.down_proj.weight": "model-00013-of-00031.safetensors", "model.layers.37.mlp.gate_proj.weight": "model-00013-of-00031.safetensors", "model.layers.37.mlp.up_proj.weight": "model-00013-of-00031.safetensors", "model.layers.37.post_attention_layernorm.weight": "model-00013-of-00031.safetensors", "model.layers.37.self_attn.k_proj.bias": "model-00013-of-00031.safetensors", "model.layers.37.self_attn.k_proj.weight": "model-00013-of-00031.safetensors", "model.layers.37.self_attn.o_proj.weight": "model-00013-of-00031.safetensors", "model.layers.37.self_attn.q_proj.bias": "model-00013-of-00031.safetensors", "model.layers.37.self_attn.q_proj.weight": "model-00013-of-00031.safetensors", "model.layers.37.self_attn.v_proj.bias": "model-00013-of-00031.safetensors", "model.layers.37.self_attn.v_proj.weight": "model-00013-of-00031.safetensors", "model.layers.38.input_layernorm.weight": "model-00013-of-00031.safetensors", "model.layers.38.mlp.down_proj.weight": "model-00013-of-00031.safetensors", "model.layers.38.mlp.gate_proj.weight": "model-00013-of-00031.safetensors", "model.layers.38.mlp.up_proj.weight": "model-00013-of-00031.safetensors", "model.layers.38.post_attention_layernorm.weight": "model-00013-of-00031.safetensors", "model.layers.38.self_attn.k_proj.bias": "model-00013-of-00031.safetensors", "model.layers.38.self_attn.k_proj.weight": "model-00013-of-00031.safetensors", "model.layers.38.self_attn.o_proj.weight": "model-00013-of-00031.safetensors", "model.layers.38.self_attn.q_proj.bias": "model-00013-of-00031.safetensors", "model.layers.38.self_attn.q_proj.weight": "model-00013-of-00031.safetensors", "model.layers.38.self_attn.v_proj.bias": "model-00013-of-00031.safetensors", "model.layers.38.self_attn.v_proj.weight": "model-00013-of-00031.safetensors", "model.layers.39.input_layernorm.weight": "model-00013-of-00031.safetensors", "model.layers.39.mlp.down_proj.weight": "model-00014-of-00031.safetensors", "model.layers.39.mlp.gate_proj.weight": "model-00014-of-00031.safetensors", "model.layers.39.mlp.up_proj.weight": "model-00014-of-00031.safetensors", "model.layers.39.post_attention_layernorm.weight": "model-00014-of-00031.safetensors", "model.layers.39.self_attn.k_proj.bias": "model-00014-of-00031.safetensors", "model.layers.39.self_attn.k_proj.weight": "model-00014-of-00031.safetensors", "model.layers.39.self_attn.o_proj.weight": "model-00014-of-00031.safetensors", "model.layers.39.self_attn.q_proj.bias": "model-00014-of-00031.safetensors", 
"model.layers.39.self_attn.q_proj.weight": "model-00014-of-00031.safetensors", "model.layers.39.self_attn.v_proj.bias": "model-00014-of-00031.safetensors", "model.layers.39.self_attn.v_proj.weight": "model-00014-of-00031.safetensors", "model.layers.4.input_layernorm.weight": "model-00014-of-00031.safetensors", "model.layers.4.mlp.down_proj.weight": "model-00014-of-00031.safetensors", "model.layers.4.mlp.gate_proj.weight": "model-00014-of-00031.safetensors", "model.layers.4.mlp.up_proj.weight": "model-00014-of-00031.safetensors", "model.layers.4.post_attention_layernorm.weight": "model-00014-of-00031.safetensors", "model.layers.4.self_attn.k_proj.bias": "model-00014-of-00031.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00014-of-00031.safetensors", "model.layers.4.self_attn.o_proj.weight": "model-00014-of-00031.safetensors", "model.layers.4.self_attn.q_proj.bias": "model-00014-of-00031.safetensors", "model.layers.4.self_attn.q_proj.weight": "model-00014-of-00031.safetensors", "model.layers.4.self_attn.v_proj.bias": "model-00014-of-00031.safetensors", "model.layers.4.self_attn.v_proj.weight": "model-00014-of-00031.safetensors", "model.layers.40.input_layernorm.weight": "model-00014-of-00031.safetensors", "model.layers.40.mlp.down_proj.weight": "model-00014-of-00031.safetensors", "model.layers.40.mlp.gate_proj.weight": "model-00014-of-00031.safetensors", "model.layers.40.mlp.up_proj.weight": "model-00014-of-00031.safetensors", "model.layers.40.post_attention_layernorm.weight": "model-00014-of-00031.safetensors", "model.layers.40.self_attn.k_proj.bias": "model-00014-of-00031.safetensors", "model.layers.40.self_attn.k_proj.weight": "model-00014-of-00031.safetensors", "model.layers.40.self_attn.o_proj.weight": "model-00015-of-00031.safetensors", "model.layers.40.self_attn.q_proj.bias": "model-00015-of-00031.safetensors", "model.layers.40.self_attn.q_proj.weight": "model-00015-of-00031.safetensors", "model.layers.40.self_attn.v_proj.bias": "model-00015-of-00031.safetensors", "model.layers.40.self_attn.v_proj.weight": "model-00015-of-00031.safetensors", "model.layers.41.input_layernorm.weight": "model-00015-of-00031.safetensors", "model.layers.41.mlp.down_proj.weight": "model-00015-of-00031.safetensors", "model.layers.41.mlp.gate_proj.weight": "model-00015-of-00031.safetensors", "model.layers.41.mlp.up_proj.weight": "model-00015-of-00031.safetensors", "model.layers.41.post_attention_layernorm.weight": "model-00015-of-00031.safetensors", "model.layers.41.self_attn.k_proj.bias": "model-00015-of-00031.safetensors", "model.layers.41.self_attn.k_proj.weight": "model-00015-of-00031.safetensors", "model.layers.41.self_attn.o_proj.weight": "model-00015-of-00031.safetensors", "model.layers.41.self_attn.q_proj.bias": "model-00015-of-00031.safetensors", "model.layers.41.self_attn.q_proj.weight": "model-00015-of-00031.safetensors", "model.layers.41.self_attn.v_proj.bias": "model-00015-of-00031.safetensors", "model.layers.41.self_attn.v_proj.weight": "model-00015-of-00031.safetensors", "model.layers.42.input_layernorm.weight": "model-00015-of-00031.safetensors", "model.layers.42.mlp.down_proj.weight": "model-00015-of-00031.safetensors", "model.layers.42.mlp.gate_proj.weight": "model-00015-of-00031.safetensors", "model.layers.42.mlp.up_proj.weight": "model-00015-of-00031.safetensors", "model.layers.42.post_attention_layernorm.weight": "model-00015-of-00031.safetensors", "model.layers.42.self_attn.k_proj.bias": "model-00015-of-00031.safetensors", "model.layers.42.self_attn.k_proj.weight": 
"model-00015-of-00031.safetensors", "model.layers.42.self_attn.o_proj.weight": "model-00015-of-00031.safetensors", "model.layers.42.self_attn.q_proj.bias": "model-00015-of-00031.safetensors", "model.layers.42.self_attn.q_proj.weight": "model-00015-of-00031.safetensors", "model.layers.42.self_attn.v_proj.bias": "model-00015-of-00031.safetensors", "model.layers.42.self_attn.v_proj.weight": "model-00015-of-00031.safetensors", "model.layers.43.input_layernorm.weight": "model-00015-of-00031.safetensors", "model.layers.43.mlp.down_proj.weight": "model-00015-of-00031.safetensors", "model.layers.43.mlp.gate_proj.weight": "model-00015-of-00031.safetensors", "model.layers.43.mlp.up_proj.weight": "model-00016-of-00031.safetensors", "model.layers.43.post_attention_layernorm.weight": "model-00016-of-00031.safetensors", "model.layers.43.self_attn.k_proj.bias": "model-00016-of-00031.safetensors", "model.layers.43.self_attn.k_proj.weight": "model-00016-of-00031.safetensors", "model.layers.43.self_attn.o_proj.weight": "model-00016-of-00031.safetensors", "model.layers.43.self_attn.q_proj.bias": "model-00016-of-00031.safetensors", "model.layers.43.self_attn.q_proj.weight": "model-00016-of-00031.safetensors", "model.layers.43.self_attn.v_proj.bias": "model-00016-of-00031.safetensors", "model.layers.43.self_attn.v_proj.weight": "model-00016-of-00031.safetensors", "model.layers.44.input_layernorm.weight": "model-00016-of-00031.safetensors", "model.layers.44.mlp.down_proj.weight": "model-00016-of-00031.safetensors", "model.layers.44.mlp.gate_proj.weight": "model-00016-of-00031.safetensors", "model.layers.44.mlp.up_proj.weight": "model-00016-of-00031.safetensors", "model.layers.44.post_attention_layernorm.weight": "model-00016-of-00031.safetensors", "model.layers.44.self_attn.k_proj.bias": "model-00016-of-00031.safetensors", "model.layers.44.self_attn.k_proj.weight": "model-00016-of-00031.safetensors", "model.layers.44.self_attn.o_proj.weight": "model-00016-of-00031.safetensors", "model.layers.44.self_attn.q_proj.bias": "model-00016-of-00031.safetensors", "model.layers.44.self_attn.q_proj.weight": "model-00016-of-00031.safetensors", "model.layers.44.self_attn.v_proj.bias": "model-00016-of-00031.safetensors", "model.layers.44.self_attn.v_proj.weight": "model-00016-of-00031.safetensors", "model.layers.45.input_layernorm.weight": "model-00016-of-00031.safetensors", "model.layers.45.mlp.down_proj.weight": "model-00016-of-00031.safetensors", "model.layers.45.mlp.gate_proj.weight": "model-00016-of-00031.safetensors", "model.layers.45.mlp.up_proj.weight": "model-00016-of-00031.safetensors", "model.layers.45.post_attention_layernorm.weight": "model-00016-of-00031.safetensors", "model.layers.45.self_attn.k_proj.bias": "model-00016-of-00031.safetensors", "model.layers.45.self_attn.k_proj.weight": "model-00016-of-00031.safetensors", "model.layers.45.self_attn.o_proj.weight": "model-00016-of-00031.safetensors", "model.layers.45.self_attn.q_proj.bias": "model-00016-of-00031.safetensors", "model.layers.45.self_attn.q_proj.weight": "model-00016-of-00031.safetensors", "model.layers.45.self_attn.v_proj.bias": "model-00016-of-00031.safetensors", "model.layers.45.self_attn.v_proj.weight": "model-00016-of-00031.safetensors", "model.layers.46.input_layernorm.weight": "model-00016-of-00031.safetensors", "model.layers.46.mlp.down_proj.weight": "model-00016-of-00031.safetensors", "model.layers.46.mlp.gate_proj.weight": "model-00017-of-00031.safetensors", "model.layers.46.mlp.up_proj.weight": "model-00017-of-00031.safetensors", 
"model.layers.46.post_attention_layernorm.weight": "model-00017-of-00031.safetensors", "model.layers.46.self_attn.k_proj.bias": "model-00017-of-00031.safetensors", "model.layers.46.self_attn.k_proj.weight": "model-00017-of-00031.safetensors", "model.layers.46.self_attn.o_proj.weight": "model-00017-of-00031.safetensors", "model.layers.46.self_attn.q_proj.bias": "model-00017-of-00031.safetensors", "model.layers.46.self_attn.q_proj.weight": "model-00017-of-00031.safetensors", "model.layers.46.self_attn.v_proj.bias": "model-00017-of-00031.safetensors", "model.layers.46.self_attn.v_proj.weight": "model-00017-of-00031.safetensors", "model.layers.47.input_layernorm.weight": "model-00017-of-00031.safetensors", "model.layers.47.mlp.down_proj.weight": "model-00017-of-00031.safetensors", "model.layers.47.mlp.gate_proj.weight": "model-00017-of-00031.safetensors", "model.layers.47.mlp.up_proj.weight": "model-00017-of-00031.safetensors", "model.layers.47.post_attention_layernorm.weight": "model-00017-of-00031.safetensors", "model.layers.47.self_attn.k_proj.bias": "model-00017-of-00031.safetensors", "model.layers.47.self_attn.k_proj.weight": "model-00017-of-00031.safetensors", "model.layers.47.self_attn.o_proj.weight": "model-00017-of-00031.safetensors", "model.layers.47.self_attn.q_proj.bias": "model-00017-of-00031.safetensors", "model.layers.47.self_attn.q_proj.weight": "model-00017-of-00031.safetensors", "model.layers.47.self_attn.v_proj.bias": "model-00017-of-00031.safetensors", "model.layers.47.self_attn.v_proj.weight": "model-00017-of-00031.safetensors", "model.layers.48.input_layernorm.weight": "model-00017-of-00031.safetensors", "model.layers.48.mlp.down_proj.weight": "model-00017-of-00031.safetensors", "model.layers.48.mlp.gate_proj.weight": "model-00017-of-00031.safetensors", "model.layers.48.mlp.up_proj.weight": "model-00017-of-00031.safetensors", "model.layers.48.post_attention_layernorm.weight": "model-00017-of-00031.safetensors", "model.layers.48.self_attn.k_proj.bias": "model-00017-of-00031.safetensors", "model.layers.48.self_attn.k_proj.weight": "model-00017-of-00031.safetensors", "model.layers.48.self_attn.o_proj.weight": "model-00017-of-00031.safetensors", "model.layers.48.self_attn.q_proj.bias": "model-00017-of-00031.safetensors", "model.layers.48.self_attn.q_proj.weight": "model-00017-of-00031.safetensors", "model.layers.48.self_attn.v_proj.bias": "model-00017-of-00031.safetensors", "model.layers.48.self_attn.v_proj.weight": "model-00017-of-00031.safetensors", "model.layers.49.input_layernorm.weight": "model-00017-of-00031.safetensors", "model.layers.49.mlp.down_proj.weight": "model-00018-of-00031.safetensors", "model.layers.49.mlp.gate_proj.weight": "model-00018-of-00031.safetensors", "model.layers.49.mlp.up_proj.weight": "model-00018-of-00031.safetensors", "model.layers.49.post_attention_layernorm.weight": "model-00018-of-00031.safetensors", "model.layers.49.self_attn.k_proj.bias": "model-00018-of-00031.safetensors", "model.layers.49.self_attn.k_proj.weight": "model-00018-of-00031.safetensors", "model.layers.49.self_attn.o_proj.weight": "model-00018-of-00031.safetensors", "model.layers.49.self_attn.q_proj.bias": "model-00018-of-00031.safetensors", "model.layers.49.self_attn.q_proj.weight": "model-00018-of-00031.safetensors", "model.layers.49.self_attn.v_proj.bias": "model-00018-of-00031.safetensors", "model.layers.49.self_attn.v_proj.weight": "model-00018-of-00031.safetensors", "model.layers.5.input_layernorm.weight": "model-00018-of-00031.safetensors", 
"model.layers.5.mlp.down_proj.weight": "model-00018-of-00031.safetensors", "model.layers.5.mlp.gate_proj.weight": "model-00018-of-00031.safetensors", "model.layers.5.mlp.up_proj.weight": "model-00018-of-00031.safetensors", "model.layers.5.post_attention_layernorm.weight": "model-00018-of-00031.safetensors", "model.layers.5.self_attn.k_proj.bias": "model-00018-of-00031.safetensors", "model.layers.5.self_attn.k_proj.weight": "model-00018-of-00031.safetensors", "model.layers.5.self_attn.o_proj.weight": "model-00018-of-00031.safetensors", "model.layers.5.self_attn.q_proj.bias": "model-00018-of-00031.safetensors", "model.layers.5.self_attn.q_proj.weight": "model-00018-of-00031.safetensors", "model.layers.5.self_attn.v_proj.bias": "model-00018-of-00031.safetensors", "model.layers.5.self_attn.v_proj.weight": "model-00018-of-00031.safetensors", "model.layers.50.input_layernorm.weight": "model-00018-of-00031.safetensors", "model.layers.50.mlp.down_proj.weight": "model-00018-of-00031.safetensors", "model.layers.50.mlp.gate_proj.weight": "model-00018-of-00031.safetensors", "model.layers.50.mlp.up_proj.weight": "model-00018-of-00031.safetensors", "model.layers.50.post_attention_layernorm.weight": "model-00018-of-00031.safetensors", "model.layers.50.self_attn.k_proj.bias": "model-00018-of-00031.safetensors", "model.layers.50.self_attn.k_proj.weight": "model-00018-of-00031.safetensors", "model.layers.50.self_attn.o_proj.weight": "model-00019-of-00031.safetensors", "model.layers.50.self_attn.q_proj.bias": "model-00019-of-00031.safetensors", "model.layers.50.self_attn.q_proj.weight": "model-00019-of-00031.safetensors", "model.layers.50.self_attn.v_proj.bias": "model-00019-of-00031.safetensors", "model.layers.50.self_attn.v_proj.weight": "model-00019-of-00031.safetensors", "model.layers.51.input_layernorm.weight": "model-00019-of-00031.safetensors", "model.layers.51.mlp.down_proj.weight": "model-00019-of-00031.safetensors", "model.layers.51.mlp.gate_proj.weight": "model-00019-of-00031.safetensors", "model.layers.51.mlp.up_proj.weight": "model-00019-of-00031.safetensors", "model.layers.51.post_attention_layernorm.weight": "model-00019-of-00031.safetensors", "model.layers.51.self_attn.k_proj.bias": "model-00019-of-00031.safetensors", "model.layers.51.self_attn.k_proj.weight": "model-00019-of-00031.safetensors", "model.layers.51.self_attn.o_proj.weight": "model-00019-of-00031.safetensors", "model.layers.51.self_attn.q_proj.bias": "model-00019-of-00031.safetensors", "model.layers.51.self_attn.q_proj.weight": "model-00019-of-00031.safetensors", "model.layers.51.self_attn.v_proj.bias": "model-00019-of-00031.safetensors", "model.layers.51.self_attn.v_proj.weight": "model-00019-of-00031.safetensors", "model.layers.52.input_layernorm.weight": "model-00019-of-00031.safetensors", "model.layers.52.mlp.down_proj.weight": "model-00019-of-00031.safetensors", "model.layers.52.mlp.gate_proj.weight": "model-00019-of-00031.safetensors", "model.layers.52.mlp.up_proj.weight": "model-00019-of-00031.safetensors", "model.layers.52.post_attention_layernorm.weight": "model-00019-of-00031.safetensors", "model.layers.52.self_attn.k_proj.bias": "model-00019-of-00031.safetensors", "model.layers.52.self_attn.k_proj.weight": "model-00019-of-00031.safetensors", "model.layers.52.self_attn.o_proj.weight": "model-00019-of-00031.safetensors", "model.layers.52.self_attn.q_proj.bias": "model-00019-of-00031.safetensors", "model.layers.52.self_attn.q_proj.weight": "model-00019-of-00031.safetensors", "model.layers.52.self_attn.v_proj.bias": 
"model-00019-of-00031.safetensors", "model.layers.52.self_attn.v_proj.weight": "model-00019-of-00031.safetensors", "model.layers.53.input_layernorm.weight": "model-00019-of-00031.safetensors", "model.layers.53.mlp.down_proj.weight": "model-00019-of-00031.safetensors", "model.layers.53.mlp.gate_proj.weight": "model-00019-of-00031.safetensors", "model.layers.53.mlp.up_proj.weight": "model-00020-of-00031.safetensors", "model.layers.53.post_attention_layernorm.weight": "model-00020-of-00031.safetensors", "model.layers.53.self_attn.k_proj.bias": "model-00020-of-00031.safetensors", "model.layers.53.self_attn.k_proj.weight": "model-00020-of-00031.safetensors", "model.layers.53.self_attn.o_proj.weight": "model-00020-of-00031.safetensors", "model.layers.53.self_attn.q_proj.bias": "model-00020-of-00031.safetensors", "model.layers.53.self_attn.q_proj.weight": "model-00020-of-00031.safetensors", "model.layers.53.self_attn.v_proj.bias": "model-00020-of-00031.safetensors", "model.layers.53.self_attn.v_proj.weight": "model-00020-of-00031.safetensors", "model.layers.54.input_layernorm.weight": "model-00020-of-00031.safetensors", "model.layers.54.mlp.down_proj.weight": "model-00020-of-00031.safetensors", "model.layers.54.mlp.gate_proj.weight": "model-00020-of-00031.safetensors", "model.layers.54.mlp.up_proj.weight": "model-00020-of-00031.safetensors", "model.layers.54.post_attention_layernorm.weight": "model-00020-of-00031.safetensors", "model.layers.54.self_attn.k_proj.bias": "model-00020-of-00031.safetensors", "model.layers.54.self_attn.k_proj.weight": "model-00020-of-00031.safetensors", "model.layers.54.self_attn.o_proj.weight": "model-00020-of-00031.safetensors", "model.layers.54.self_attn.q_proj.bias": "model-00020-of-00031.safetensors", "model.layers.54.self_attn.q_proj.weight": "model-00020-of-00031.safetensors", "model.layers.54.self_attn.v_proj.bias": "model-00020-of-00031.safetensors", "model.layers.54.self_attn.v_proj.weight": "model-00020-of-00031.safetensors", "model.layers.55.input_layernorm.weight": "model-00020-of-00031.safetensors", "model.layers.55.mlp.down_proj.weight": "model-00020-of-00031.safetensors", "model.layers.55.mlp.gate_proj.weight": "model-00020-of-00031.safetensors", "model.layers.55.mlp.up_proj.weight": "model-00020-of-00031.safetensors", "model.layers.55.post_attention_layernorm.weight": "model-00020-of-00031.safetensors", "model.layers.55.self_attn.k_proj.bias": "model-00020-of-00031.safetensors", "model.layers.55.self_attn.k_proj.weight": "model-00020-of-00031.safetensors", "model.layers.55.self_attn.o_proj.weight": "model-00020-of-00031.safetensors", "model.layers.55.self_attn.q_proj.bias": "model-00020-of-00031.safetensors", "model.layers.55.self_attn.q_proj.weight": "model-00020-of-00031.safetensors", "model.layers.55.self_attn.v_proj.bias": "model-00020-of-00031.safetensors", "model.layers.55.self_attn.v_proj.weight": "model-00020-of-00031.safetensors", "model.layers.56.input_layernorm.weight": "model-00020-of-00031.safetensors", "model.layers.56.mlp.down_proj.weight": "model-00020-of-00031.safetensors", "model.layers.56.mlp.gate_proj.weight": "model-00021-of-00031.safetensors", "model.layers.56.mlp.up_proj.weight": "model-00021-of-00031.safetensors", "model.layers.56.post_attention_layernorm.weight": "model-00021-of-00031.safetensors", "model.layers.56.self_attn.k_proj.bias": "model-00021-of-00031.safetensors", "model.layers.56.self_attn.k_proj.weight": "model-00021-of-00031.safetensors", "model.layers.56.self_attn.o_proj.weight": "model-00021-of-00031.safetensors", 
"model.layers.56.self_attn.q_proj.bias": "model-00021-of-00031.safetensors", "model.layers.56.self_attn.q_proj.weight": "model-00021-of-00031.safetensors", "model.layers.56.self_attn.v_proj.bias": "model-00021-of-00031.safetensors", "model.layers.56.self_attn.v_proj.weight": "model-00021-of-00031.safetensors", "model.layers.57.input_layernorm.weight": "model-00021-of-00031.safetensors", "model.layers.57.mlp.down_proj.weight": "model-00021-of-00031.safetensors", "model.layers.57.mlp.gate_proj.weight": "model-00021-of-00031.safetensors", "model.layers.57.mlp.up_proj.weight": "model-00021-of-00031.safetensors", "model.layers.57.post_attention_layernorm.weight": "model-00021-of-00031.safetensors", "model.layers.57.self_attn.k_proj.bias": "model-00021-of-00031.safetensors", "model.layers.57.self_attn.k_proj.weight": "model-00021-of-00031.safetensors", "model.layers.57.self_attn.o_proj.weight": "model-00021-of-00031.safetensors", "model.layers.57.self_attn.q_proj.bias": "model-00021-of-00031.safetensors", "model.layers.57.self_attn.q_proj.weight": "model-00021-of-00031.safetensors", "model.layers.57.self_attn.v_proj.bias": "model-00021-of-00031.safetensors", "model.layers.57.self_attn.v_proj.weight": "model-00021-of-00031.safetensors", "model.layers.58.input_layernorm.weight": "model-00021-of-00031.safetensors", "model.layers.58.mlp.down_proj.weight": "model-00021-of-00031.safetensors", "model.layers.58.mlp.gate_proj.weight": "model-00021-of-00031.safetensors", "model.layers.58.mlp.up_proj.weight": "model-00021-of-00031.safetensors", "model.layers.58.post_attention_layernorm.weight": "model-00021-of-00031.safetensors", "model.layers.58.self_attn.k_proj.bias": "model-00021-of-00031.safetensors", "model.layers.58.self_attn.k_proj.weight": "model-00021-of-00031.safetensors", "model.layers.58.self_attn.o_proj.weight": "model-00021-of-00031.safetensors", "model.layers.58.self_attn.q_proj.bias": "model-00021-of-00031.safetensors", "model.layers.58.self_attn.q_proj.weight": "model-00021-of-00031.safetensors", "model.layers.58.self_attn.v_proj.bias": "model-00021-of-00031.safetensors", "model.layers.58.self_attn.v_proj.weight": "model-00021-of-00031.safetensors", "model.layers.59.input_layernorm.weight": "model-00021-of-00031.safetensors", "model.layers.59.mlp.down_proj.weight": "model-00022-of-00031.safetensors", "model.layers.59.mlp.gate_proj.weight": "model-00022-of-00031.safetensors", "model.layers.59.mlp.up_proj.weight": "model-00022-of-00031.safetensors", "model.layers.59.post_attention_layernorm.weight": "model-00022-of-00031.safetensors", "model.layers.59.self_attn.k_proj.bias": "model-00022-of-00031.safetensors", "model.layers.59.self_attn.k_proj.weight": "model-00022-of-00031.safetensors", "model.layers.59.self_attn.o_proj.weight": "model-00022-of-00031.safetensors", "model.layers.59.self_attn.q_proj.bias": "model-00022-of-00031.safetensors", "model.layers.59.self_attn.q_proj.weight": "model-00022-of-00031.safetensors", "model.layers.59.self_attn.v_proj.bias": "model-00022-of-00031.safetensors", "model.layers.59.self_attn.v_proj.weight": "model-00022-of-00031.safetensors", "model.layers.6.input_layernorm.weight": "model-00022-of-00031.safetensors", "model.layers.6.mlp.down_proj.weight": "model-00022-of-00031.safetensors", "model.layers.6.mlp.gate_proj.weight": "model-00022-of-00031.safetensors", "model.layers.6.mlp.up_proj.weight": "model-00022-of-00031.safetensors", "model.layers.6.post_attention_layernorm.weight": "model-00022-of-00031.safetensors", "model.layers.6.self_attn.k_proj.bias": 
"model-00022-of-00031.safetensors", "model.layers.6.self_attn.k_proj.weight": "model-00022-of-00031.safetensors", "model.layers.6.self_attn.o_proj.weight": "model-00022-of-00031.safetensors", "model.layers.6.self_attn.q_proj.bias": "model-00022-of-00031.safetensors", "model.layers.6.self_attn.q_proj.weight": "model-00022-of-00031.safetensors", "model.layers.6.self_attn.v_proj.bias": "model-00022-of-00031.safetensors", "model.layers.6.self_attn.v_proj.weight": "model-00022-of-00031.safetensors", "model.layers.60.input_layernorm.weight": "model-00022-of-00031.safetensors", "model.layers.60.mlp.down_proj.weight": "model-00022-of-00031.safetensors", "model.layers.60.mlp.gate_proj.weight": "model-00022-of-00031.safetensors", "model.layers.60.mlp.up_proj.weight": "model-00022-of-00031.safetensors", "model.layers.60.post_attention_layernorm.weight": "model-00022-of-00031.safetensors", "model.layers.60.self_attn.k_proj.bias": "model-00022-of-00031.safetensors", "model.layers.60.self_attn.k_proj.weight": "model-00022-of-00031.safetensors", "model.layers.60.self_attn.o_proj.weight": "model-00023-of-00031.safetensors", "model.layers.60.self_attn.q_proj.bias": "model-00023-of-00031.safetensors", "model.layers.60.self_attn.q_proj.weight": "model-00023-of-00031.safetensors", "model.layers.60.self_attn.v_proj.bias": "model-00023-of-00031.safetensors", "model.layers.60.self_attn.v_proj.weight": "model-00023-of-00031.safetensors", "model.layers.61.input_layernorm.weight": "model-00023-of-00031.safetensors", "model.layers.61.mlp.down_proj.weight": "model-00023-of-00031.safetensors", "model.layers.61.mlp.gate_proj.weight": "model-00023-of-00031.safetensors", "model.layers.61.mlp.up_proj.weight": "model-00023-of-00031.safetensors", "model.layers.61.post_attention_layernorm.weight": "model-00023-of-00031.safetensors", "model.layers.61.self_attn.k_proj.bias": "model-00023-of-00031.safetensors", "model.layers.61.self_attn.k_proj.weight": "model-00023-of-00031.safetensors", "model.layers.61.self_attn.o_proj.weight": "model-00023-of-00031.safetensors", "model.layers.61.self_attn.q_proj.bias": "model-00023-of-00031.safetensors", "model.layers.61.self_attn.q_proj.weight": "model-00023-of-00031.safetensors", "model.layers.61.self_attn.v_proj.bias": "model-00023-of-00031.safetensors", "model.layers.61.self_attn.v_proj.weight": "model-00023-of-00031.safetensors", "model.layers.62.input_layernorm.weight": "model-00023-of-00031.safetensors", "model.layers.62.mlp.down_proj.weight": "model-00023-of-00031.safetensors", "model.layers.62.mlp.gate_proj.weight": "model-00023-of-00031.safetensors", "model.layers.62.mlp.up_proj.weight": "model-00023-of-00031.safetensors", "model.layers.62.post_attention_layernorm.weight": "model-00023-of-00031.safetensors", "model.layers.62.self_attn.k_proj.bias": "model-00023-of-00031.safetensors", "model.layers.62.self_attn.k_proj.weight": "model-00023-of-00031.safetensors", "model.layers.62.self_attn.o_proj.weight": "model-00023-of-00031.safetensors", "model.layers.62.self_attn.q_proj.bias": "model-00023-of-00031.safetensors", "model.layers.62.self_attn.q_proj.weight": "model-00023-of-00031.safetensors", "model.layers.62.self_attn.v_proj.bias": "model-00023-of-00031.safetensors", "model.layers.62.self_attn.v_proj.weight": "model-00023-of-00031.safetensors", "model.layers.63.input_layernorm.weight": "model-00023-of-00031.safetensors", "model.layers.63.mlp.down_proj.weight": "model-00023-of-00031.safetensors", "model.layers.63.mlp.gate_proj.weight": "model-00023-of-00031.safetensors", 
"model.layers.63.mlp.up_proj.weight": "model-00024-of-00031.safetensors", "model.layers.63.post_attention_layernorm.weight": "model-00024-of-00031.safetensors", "model.layers.63.self_attn.k_proj.bias": "model-00024-of-00031.safetensors", "model.layers.63.self_attn.k_proj.weight": "model-00024-of-00031.safetensors", "model.layers.63.self_attn.o_proj.weight": "model-00024-of-00031.safetensors", "model.layers.63.self_attn.q_proj.bias": "model-00024-of-00031.safetensors", "model.layers.63.self_attn.q_proj.weight": "model-00024-of-00031.safetensors", "model.layers.63.self_attn.v_proj.bias": "model-00024-of-00031.safetensors", "model.layers.63.self_attn.v_proj.weight": "model-00024-of-00031.safetensors", "model.layers.64.input_layernorm.weight": "model-00024-of-00031.safetensors", "model.layers.64.mlp.down_proj.weight": "model-00024-of-00031.safetensors", "model.layers.64.mlp.gate_proj.weight": "model-00024-of-00031.safetensors", "model.layers.64.mlp.up_proj.weight": "model-00024-of-00031.safetensors", "model.layers.64.post_attention_layernorm.weight": "model-00024-of-00031.safetensors", "model.layers.64.self_attn.k_proj.bias": "model-00024-of-00031.safetensors", "model.layers.64.self_attn.k_proj.weight": "model-00024-of-00031.safetensors", "model.layers.64.self_attn.o_proj.weight": "model-00024-of-00031.safetensors", "model.layers.64.self_attn.q_proj.bias": "model-00024-of-00031.safetensors", "model.layers.64.self_attn.q_proj.weight": "model-00024-of-00031.safetensors", "model.layers.64.self_attn.v_proj.bias": "model-00024-of-00031.safetensors", "model.layers.64.self_attn.v_proj.weight": "model-00024-of-00031.safetensors", "model.layers.65.input_layernorm.weight": "model-00024-of-00031.safetensors", "model.layers.65.mlp.down_proj.weight": "model-00024-of-00031.safetensors", "model.layers.65.mlp.gate_proj.weight": "model-00024-of-00031.safetensors", "model.layers.65.mlp.up_proj.weight": "model-00024-of-00031.safetensors", "model.layers.65.post_attention_layernorm.weight": "model-00024-of-00031.safetensors", "model.layers.65.self_attn.k_proj.bias": "model-00024-of-00031.safetensors", "model.layers.65.self_attn.k_proj.weight": "model-00024-of-00031.safetensors", "model.layers.65.self_attn.o_proj.weight": "model-00024-of-00031.safetensors", "model.layers.65.self_attn.q_proj.bias": "model-00024-of-00031.safetensors", "model.layers.65.self_attn.q_proj.weight": "model-00024-of-00031.safetensors", "model.layers.65.self_attn.v_proj.bias": "model-00024-of-00031.safetensors", "model.layers.65.self_attn.v_proj.weight": "model-00024-of-00031.safetensors", "model.layers.66.input_layernorm.weight": "model-00024-of-00031.safetensors", "model.layers.66.mlp.down_proj.weight": "model-00024-of-00031.safetensors", "model.layers.66.mlp.gate_proj.weight": "model-00025-of-00031.safetensors", "model.layers.66.mlp.up_proj.weight": "model-00025-of-00031.safetensors", "model.layers.66.post_attention_layernorm.weight": "model-00025-of-00031.safetensors", "model.layers.66.self_attn.k_proj.bias": "model-00025-of-00031.safetensors", "model.layers.66.self_attn.k_proj.weight": "model-00025-of-00031.safetensors", "model.layers.66.self_attn.o_proj.weight": "model-00025-of-00031.safetensors", "model.layers.66.self_attn.q_proj.bias": "model-00025-of-00031.safetensors", "model.layers.66.self_attn.q_proj.weight": "model-00025-of-00031.safetensors", "model.layers.66.self_attn.v_proj.bias": "model-00025-of-00031.safetensors", "model.layers.66.self_attn.v_proj.weight": "model-00025-of-00031.safetensors", 
"model.layers.67.input_layernorm.weight": "model-00025-of-00031.safetensors", "model.layers.67.mlp.down_proj.weight": "model-00025-of-00031.safetensors", "model.layers.67.mlp.gate_proj.weight": "model-00025-of-00031.safetensors", "model.layers.67.mlp.up_proj.weight": "model-00025-of-00031.safetensors", "model.layers.67.post_attention_layernorm.weight": "model-00025-of-00031.safetensors", "model.layers.67.self_attn.k_proj.bias": "model-00025-of-00031.safetensors", "model.layers.67.self_attn.k_proj.weight": "model-00025-of-00031.safetensors", "model.layers.67.self_attn.o_proj.weight": "model-00025-of-00031.safetensors", "model.layers.67.self_attn.q_proj.bias": "model-00025-of-00031.safetensors", "model.layers.67.self_attn.q_proj.weight": "model-00025-of-00031.safetensors", "model.layers.67.self_attn.v_proj.bias": "model-00025-of-00031.safetensors", "model.layers.67.self_attn.v_proj.weight": "model-00025-of-00031.safetensors", "model.layers.68.input_layernorm.weight": "model-00025-of-00031.safetensors", "model.layers.68.mlp.down_proj.weight": "model-00025-of-00031.safetensors", "model.layers.68.mlp.gate_proj.weight": "model-00025-of-00031.safetensors", "model.layers.68.mlp.up_proj.weight": "model-00025-of-00031.safetensors", "model.layers.68.post_attention_layernorm.weight": "model-00025-of-00031.safetensors", "model.layers.68.self_attn.k_proj.bias": "model-00025-of-00031.safetensors", "model.layers.68.self_attn.k_proj.weight": "model-00025-of-00031.safetensors", "model.layers.68.self_attn.o_proj.weight": "model-00025-of-00031.safetensors", "model.layers.68.self_attn.q_proj.bias": "model-00025-of-00031.safetensors", "model.layers.68.self_attn.q_proj.weight": "model-00025-of-00031.safetensors", "model.layers.68.self_attn.v_proj.bias": "model-00025-of-00031.safetensors", "model.layers.68.self_attn.v_proj.weight": "model-00025-of-00031.safetensors", "model.layers.69.input_layernorm.weight": "model-00025-of-00031.safetensors", "model.layers.69.mlp.down_proj.weight": "model-00026-of-00031.safetensors", "model.layers.69.mlp.gate_proj.weight": "model-00026-of-00031.safetensors", "model.layers.69.mlp.up_proj.weight": "model-00026-of-00031.safetensors", "model.layers.69.post_attention_layernorm.weight": "model-00026-of-00031.safetensors", "model.layers.69.self_attn.k_proj.bias": "model-00026-of-00031.safetensors", "model.layers.69.self_attn.k_proj.weight": "model-00026-of-00031.safetensors", "model.layers.69.self_attn.o_proj.weight": "model-00026-of-00031.safetensors", "model.layers.69.self_attn.q_proj.bias": "model-00026-of-00031.safetensors", "model.layers.69.self_attn.q_proj.weight": "model-00026-of-00031.safetensors", "model.layers.69.self_attn.v_proj.bias": "model-00026-of-00031.safetensors", "model.layers.69.self_attn.v_proj.weight": "model-00026-of-00031.safetensors", "model.layers.7.input_layernorm.weight": "model-00026-of-00031.safetensors", "model.layers.7.mlp.down_proj.weight": "model-00026-of-00031.safetensors", "model.layers.7.mlp.gate_proj.weight": "model-00026-of-00031.safetensors", "model.layers.7.mlp.up_proj.weight": "model-00026-of-00031.safetensors", "model.layers.7.post_attention_layernorm.weight": "model-00026-of-00031.safetensors", "model.layers.7.self_attn.k_proj.bias": "model-00026-of-00031.safetensors", "model.layers.7.self_attn.k_proj.weight": "model-00026-of-00031.safetensors", "model.layers.7.self_attn.o_proj.weight": "model-00026-of-00031.safetensors", "model.layers.7.self_attn.q_proj.bias": "model-00026-of-00031.safetensors", "model.layers.7.self_attn.q_proj.weight": 
"model-00026-of-00031.safetensors", "model.layers.7.self_attn.v_proj.bias": "model-00026-of-00031.safetensors", "model.layers.7.self_attn.v_proj.weight": "model-00026-of-00031.safetensors", "model.layers.70.input_layernorm.weight": "model-00026-of-00031.safetensors", "model.layers.70.mlp.down_proj.weight": "model-00026-of-00031.safetensors", "model.layers.70.mlp.gate_proj.weight": "model-00026-of-00031.safetensors", "model.layers.70.mlp.up_proj.weight": "model-00026-of-00031.safetensors", "model.layers.70.post_attention_layernorm.weight": "model-00026-of-00031.safetensors", "model.layers.70.self_attn.k_proj.bias": "model-00026-of-00031.safetensors", "model.layers.70.self_attn.k_proj.weight": "model-00026-of-00031.safetensors", "model.layers.70.self_attn.o_proj.weight": "model-00027-of-00031.safetensors", "model.layers.70.self_attn.q_proj.bias": "model-00027-of-00031.safetensors", "model.layers.70.self_attn.q_proj.weight": "model-00027-of-00031.safetensors", "model.layers.70.self_attn.v_proj.bias": "model-00027-of-00031.safetensors", "model.layers.70.self_attn.v_proj.weight": "model-00027-of-00031.safetensors", "model.layers.71.input_layernorm.weight": "model-00027-of-00031.safetensors", "model.layers.71.mlp.down_proj.weight": "model-00027-of-00031.safetensors", "model.layers.71.mlp.gate_proj.weight": "model-00027-of-00031.safetensors", "model.layers.71.mlp.up_proj.weight": "model-00027-of-00031.safetensors", "model.layers.71.post_attention_layernorm.weight": "model-00027-of-00031.safetensors", "model.layers.71.self_attn.k_proj.bias": "model-00027-of-00031.safetensors", "model.layers.71.self_attn.k_proj.weight": "model-00027-of-00031.safetensors", "model.layers.71.self_attn.o_proj.weight": "model-00027-of-00031.safetensors", "model.layers.71.self_attn.q_proj.bias": "model-00027-of-00031.safetensors", "model.layers.71.self_attn.q_proj.weight": "model-00027-of-00031.safetensors", "model.layers.71.self_attn.v_proj.bias": "model-00027-of-00031.safetensors", "model.layers.71.self_attn.v_proj.weight": "model-00027-of-00031.safetensors", "model.layers.72.input_layernorm.weight": "model-00027-of-00031.safetensors", "model.layers.72.mlp.down_proj.weight": "model-00027-of-00031.safetensors", "model.layers.72.mlp.gate_proj.weight": "model-00027-of-00031.safetensors", "model.layers.72.mlp.up_proj.weight": "model-00027-of-00031.safetensors", "model.layers.72.post_attention_layernorm.weight": "model-00027-of-00031.safetensors", "model.layers.72.self_attn.k_proj.bias": "model-00027-of-00031.safetensors", "model.layers.72.self_attn.k_proj.weight": "model-00027-of-00031.safetensors", "model.layers.72.self_attn.o_proj.weight": "model-00027-of-00031.safetensors", "model.layers.72.self_attn.q_proj.bias": "model-00027-of-00031.safetensors", "model.layers.72.self_attn.q_proj.weight": "model-00027-of-00031.safetensors", "model.layers.72.self_attn.v_proj.bias": "model-00027-of-00031.safetensors", "model.layers.72.self_attn.v_proj.weight": "model-00027-of-00031.safetensors", "model.layers.73.input_layernorm.weight": "model-00027-of-00031.safetensors", "model.layers.73.mlp.down_proj.weight": "model-00027-of-00031.safetensors", "model.layers.73.mlp.gate_proj.weight": "model-00027-of-00031.safetensors", "model.layers.73.mlp.up_proj.weight": "model-00028-of-00031.safetensors", "model.layers.73.post_attention_layernorm.weight": "model-00028-of-00031.safetensors", "model.layers.73.self_attn.k_proj.bias": "model-00028-of-00031.safetensors", "model.layers.73.self_attn.k_proj.weight": "model-00028-of-00031.safetensors", 
"model.layers.73.self_attn.o_proj.weight": "model-00028-of-00031.safetensors", "model.layers.73.self_attn.q_proj.bias": "model-00028-of-00031.safetensors", "model.layers.73.self_attn.q_proj.weight": "model-00028-of-00031.safetensors", "model.layers.73.self_attn.v_proj.bias": "model-00028-of-00031.safetensors", "model.layers.73.self_attn.v_proj.weight": "model-00028-of-00031.safetensors", "model.layers.74.input_layernorm.weight": "model-00028-of-00031.safetensors", "model.layers.74.mlp.down_proj.weight": "model-00028-of-00031.safetensors", "model.layers.74.mlp.gate_proj.weight": "model-00028-of-00031.safetensors", "model.layers.74.mlp.up_proj.weight": "model-00028-of-00031.safetensors", "model.layers.74.post_attention_layernorm.weight": "model-00028-of-00031.safetensors", "model.layers.74.self_attn.k_proj.bias": "model-00028-of-00031.safetensors", "model.layers.74.self_attn.k_proj.weight": "model-00028-of-00031.safetensors", "model.layers.74.self_attn.o_proj.weight": "model-00028-of-00031.safetensors", "model.layers.74.self_attn.q_proj.bias": "model-00028-of-00031.safetensors", "model.layers.74.self_attn.q_proj.weight": "model-00028-of-00031.safetensors", "model.layers.74.self_attn.v_proj.bias": "model-00028-of-00031.safetensors", "model.layers.74.self_attn.v_proj.weight": "model-00028-of-00031.safetensors", "model.layers.75.input_layernorm.weight": "model-00028-of-00031.safetensors", "model.layers.75.mlp.down_proj.weight": "model-00028-of-00031.safetensors", "model.layers.75.mlp.gate_proj.weight": "model-00028-of-00031.safetensors", "model.layers.75.mlp.up_proj.weight": "model-00028-of-00031.safetensors", "model.layers.75.post_attention_layernorm.weight": "model-00028-of-00031.safetensors", "model.layers.75.self_attn.k_proj.bias": "model-00028-of-00031.safetensors", "model.layers.75.self_attn.k_proj.weight": "model-00028-of-00031.safetensors", "model.layers.75.self_attn.o_proj.weight": "model-00028-of-00031.safetensors", "model.layers.75.self_attn.q_proj.bias": "model-00028-of-00031.safetensors", "model.layers.75.self_attn.q_proj.weight": "model-00028-of-00031.safetensors", "model.layers.75.self_attn.v_proj.bias": "model-00028-of-00031.safetensors", "model.layers.75.self_attn.v_proj.weight": "model-00028-of-00031.safetensors", "model.layers.76.input_layernorm.weight": "model-00028-of-00031.safetensors", "model.layers.76.mlp.down_proj.weight": "model-00028-of-00031.safetensors", "model.layers.76.mlp.gate_proj.weight": "model-00029-of-00031.safetensors", "model.layers.76.mlp.up_proj.weight": "model-00029-of-00031.safetensors", "model.layers.76.post_attention_layernorm.weight": "model-00029-of-00031.safetensors", "model.layers.76.self_attn.k_proj.bias": "model-00029-of-00031.safetensors", "model.layers.76.self_attn.k_proj.weight": "model-00029-of-00031.safetensors", "model.layers.76.self_attn.o_proj.weight": "model-00029-of-00031.safetensors", "model.layers.76.self_attn.q_proj.bias": "model-00029-of-00031.safetensors", "model.layers.76.self_attn.q_proj.weight": "model-00029-of-00031.safetensors", "model.layers.76.self_attn.v_proj.bias": "model-00029-of-00031.safetensors", "model.layers.76.self_attn.v_proj.weight": "model-00029-of-00031.safetensors", "model.layers.77.input_layernorm.weight": "model-00029-of-00031.safetensors", "model.layers.77.mlp.down_proj.weight": "model-00029-of-00031.safetensors", "model.layers.77.mlp.gate_proj.weight": "model-00029-of-00031.safetensors", "model.layers.77.mlp.up_proj.weight": "model-00029-of-00031.safetensors", 
"model.layers.77.post_attention_layernorm.weight": "model-00029-of-00031.safetensors", "model.layers.77.self_attn.k_proj.bias": "model-00029-of-00031.safetensors", "model.layers.77.self_attn.k_proj.weight": "model-00029-of-00031.safetensors", "model.layers.77.self_attn.o_proj.weight": "model-00029-of-00031.safetensors", "model.layers.77.self_attn.q_proj.bias": "model-00029-of-00031.safetensors", "model.layers.77.self_attn.q_proj.weight": "model-00029-of-00031.safetensors", "model.layers.77.self_attn.v_proj.bias": "model-00029-of-00031.safetensors", "model.layers.77.self_attn.v_proj.weight": "model-00029-of-00031.safetensors", "model.layers.78.input_layernorm.weight": "model-00029-of-00031.safetensors", "model.layers.78.mlp.down_proj.weight": "model-00029-of-00031.safetensors", "model.layers.78.mlp.gate_proj.weight": "model-00029-of-00031.safetensors", "model.layers.78.mlp.up_proj.weight": "model-00029-of-00031.safetensors", "model.layers.78.post_attention_layernorm.weight": "model-00029-of-00031.safetensors", "model.layers.78.self_attn.k_proj.bias": "model-00029-of-00031.safetensors", "model.layers.78.self_attn.k_proj.weight": "model-00029-of-00031.safetensors", "model.layers.78.self_attn.o_proj.weight": "model-00029-of-00031.safetensors", "model.layers.78.self_attn.q_proj.bias": "model-00029-of-00031.safetensors", "model.layers.78.self_attn.q_proj.weight": "model-00029-of-00031.safetensors", "model.layers.78.self_attn.v_proj.bias": "model-00029-of-00031.safetensors", "model.layers.78.self_attn.v_proj.weight": "model-00029-of-00031.safetensors", "model.layers.79.input_layernorm.weight": "model-00029-of-00031.safetensors", "model.layers.79.mlp.down_proj.weight": "model-00030-of-00031.safetensors", "model.layers.79.mlp.gate_proj.weight": "model-00030-of-00031.safetensors", "model.layers.79.mlp.up_proj.weight": "model-00030-of-00031.safetensors", "model.layers.79.post_attention_layernorm.weight": "model-00030-of-00031.safetensors", "model.layers.79.self_attn.k_proj.bias": "model-00030-of-00031.safetensors", "model.layers.79.self_attn.k_proj.weight": "model-00030-of-00031.safetensors", "model.layers.79.self_attn.o_proj.weight": "model-00030-of-00031.safetensors", "model.layers.79.self_attn.q_proj.bias": "model-00030-of-00031.safetensors", "model.layers.79.self_attn.q_proj.weight": "model-00030-of-00031.safetensors", "model.layers.79.self_attn.v_proj.bias": "model-00030-of-00031.safetensors", "model.layers.79.self_attn.v_proj.weight": "model-00030-of-00031.safetensors", "model.layers.8.input_layernorm.weight": "model-00030-of-00031.safetensors", "model.layers.8.mlp.down_proj.weight": "model-00030-of-00031.safetensors", "model.layers.8.mlp.gate_proj.weight": "model-00030-of-00031.safetensors", "model.layers.8.mlp.up_proj.weight": "model-00030-of-00031.safetensors", "model.layers.8.post_attention_layernorm.weight": "model-00030-of-00031.safetensors", "model.layers.8.self_attn.k_proj.bias": "model-00030-of-00031.safetensors", "model.layers.8.self_attn.k_proj.weight": "model-00030-of-00031.safetensors", "model.layers.8.self_attn.o_proj.weight": "model-00030-of-00031.safetensors", "model.layers.8.self_attn.q_proj.bias": "model-00030-of-00031.safetensors", "model.layers.8.self_attn.q_proj.weight": "model-00030-of-00031.safetensors", "model.layers.8.self_attn.v_proj.bias": "model-00030-of-00031.safetensors", "model.layers.8.self_attn.v_proj.weight": "model-00030-of-00031.safetensors", "model.layers.9.input_layernorm.weight": "model-00030-of-00031.safetensors", "model.layers.9.mlp.down_proj.weight": 
"model-00030-of-00031.safetensors", "model.layers.9.mlp.gate_proj.weight": "model-00030-of-00031.safetensors", "model.layers.9.mlp.up_proj.weight": "model-00030-of-00031.safetensors", "model.layers.9.post_attention_layernorm.weight": "model-00030-of-00031.safetensors", "model.layers.9.self_attn.k_proj.bias": "model-00030-of-00031.safetensors", "model.layers.9.self_attn.k_proj.weight": "model-00030-of-00031.safetensors", "model.layers.9.self_attn.o_proj.weight": "model-00031-of-00031.safetensors", "model.layers.9.self_attn.q_proj.bias": "model-00031-of-00031.safetensors", "model.layers.9.self_attn.q_proj.weight": "model-00031-of-00031.safetensors", "model.layers.9.self_attn.v_proj.bias": "model-00031-of-00031.safetensors", "model.layers.9.self_attn.v_proj.weight": "model-00031-of-00031.safetensors", "model.norm.weight": "model-00031-of-00031.safetensors"}}
output-00001-of-00008.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85ea21b622651a63c3ebd8b23d761912cda360fc61dc7773d46e9420c5df362e
size 8588328996
output-00002-of-00008.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1b162dd90631bf77f8ed97944ce03c7cc453462fe2726c67226b4076776833bf
size 8441542494
output-00003-of-00008.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4317dc4ce18f66531bc36edababcd8ca3e2ca50372d0495101ec7dc132298f50
size 8372838222
output-00004-of-00008.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:843ccc9484afd77b3efc9145ba3c1801948895015bdfa26a5a8f62ed25fb5164
size 8560510100
output-00005-of-00008.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89d4aa7eff93a5ee3ccc9e54c1a30b72916b862f5585584ccda3b5b14006028b
size 8460017676
output-00006-of-00008.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21ea5b717e833f2b5a14d76b9595e78d5cea240601e5728d10eb0d46635278d1
size 8493396500
output-00007-of-00008.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee044e57b7fc727841daeff1d0110db5adfd352a9be7c42b2490d07b9627ddf0
size 8575138356
output-00008-of-00008.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ccf06dd7ba024370fb038dd0b062ce575ad8a70fd1213f7a9b060973fca8174
size 6830837716
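
Each `output-*.safetensors` entry above is a Git LFS pointer: the three lines record the spec version, the blob's SHA-256 (`oid`), and its byte size. A minimal sketch for checking a downloaded shard against its pointer (the filename, hash, and size are copied from the first pointer above):

```python
# Hedged sketch: verify a downloaded shard against its Git LFS pointer.
import hashlib

def verify(path: str, expected_oid: str, expected_size: int) -> bool:
    h = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
            size += len(chunk)
    return h.hexdigest() == expected_oid and size == expected_size

print(verify(
    "output-00001-of-00008.safetensors",
    "85ea21b622651a63c3ebd8b23d761912cda360fc61dc7773d46e9420c5df362e",
    8588328996,
))
```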
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,207 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "chat_template": "{%- if tools %}\n    {{- '<|im_start|>system\\n' }}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- messages[0]['content'] }}\n    {%- else %}\n        {{- 'You are a helpful assistant.' }}\n    {%- endif %}\n    {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n    {%- for tool in tools %}\n        {{- \"\\n\" }}\n        {{- tool | tojson }}\n    {%- endfor %}\n    {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n    {%- else %}\n        {{- '<|im_start|>system\\nYou are a helpful assistant.<|im_end|>\\n' }}\n    {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n    {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n        {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n    {%- elif message.role == \"assistant\" %}\n        {{- '<|im_start|>' + message.role }}\n        {%- if message.content %}\n            {{- '\\n' + message.content }}\n        {%- endif %}\n        {%- for tool_call in message.tool_calls %}\n            {%- if tool_call.function is defined %}\n                {%- set tool_call = tool_call.function %}\n            {%- endif %}\n            {{- '\\n<tool_call>\\n{\"name\": \"' }}\n            {{- tool_call.name }}\n            {{- '\", \"arguments\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- '}\\n</tool_call>' }}\n        {%- endfor %}\n        {{- '<|im_end|>\\n' }}\n    {%- elif message.role == \"tool\" %}\n        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n            {{- '<|im_start|>user' }}\n        {%- endif %}\n        {{- '\\n<tool_response>\\n' }}\n        {{- message.content }}\n        {{- '\\n</tool_response>' }}\n        {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n            {{- '<|im_end|>\\n' }}\n        {%- endif %}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "errors": "replace",
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
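
The `chat_template` above is the standard ChatML-style Qwen2.5 template, including the `<tool_call>` branches. A minimal sketch of rendering it through `transformers` (the repo id is an assumption; `apply_chat_template` picks up whatever template ships in this `tokenizer_config.json`):

```python
# Hedged sketch: render the ChatML prompt defined by the chat_template above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("sophosympatheia/Evathene-v1.3")  # assumed repo id
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Roughly: <|im_start|>system\n...<|im_end|>\n<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n
```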