Pinaster committed
Commit 1c53d2d · verified · 1 parent: dbb8645

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,152 @@
+ ---
+ language:
+ - en
+ - zh
+ library_name: transformers
+ license: mit
+ pipeline_tag: text-generation
+ ---
+
+ # GLM-5
+
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/zai-org/GLM-5/refs/heads/main/resources/logo.svg" width="15%"/>
+ </div>
+ <p align="center">
+ 👋 Join our <a href="https://raw.githubusercontent.com/zai-org/GLM-5/refs/heads/main/resources/wechat.png" target="_blank">WeChat</a> or <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
+ <br>
+ 📖 Check out the GLM-5 <a href="https://z.ai/blog/glm-5" target="_blank">technical blog</a>.
+ <br>
+ 📍 Use GLM-5 API services on the <a href="https://docs.z.ai/guides/llm/glm-5">Z.ai API Platform</a>.
+ <br>
+ 👉 Try <a href="https://chat.z.ai">GLM-5</a> with one click.
+ </p>
+
+ ## Introduction
+
+ We are launching GLM-5, targeting complex systems engineering and long-horizon agentic tasks. Scaling remains one of the most important ways to improve the intelligence and efficiency of models on the path to Artificial General Intelligence (AGI). Compared to GLM-4.5, GLM-5 scales from 355B parameters (32B active) to 744B parameters (40B active) and increases pre-training data from 23T to 28.5T tokens. GLM-5 also integrates DeepSeek Sparse Attention (DSA), substantially reducing deployment cost while preserving long-context capability.
+
+ Reinforcement learning aims to bridge the gap between competence and excellence in pre-trained models, but deploying it at scale for LLMs is challenging because RL training is inefficient. To this end, we developed [slime](https://github.com/THUDM/slime), a novel **asynchronous RL infrastructure** that substantially improves training throughput and efficiency, enabling more fine-grained post-training iterations. With advances in both pre-training and post-training, GLM-5 delivers significant improvements over GLM-4.7 across a wide range of academic benchmarks and achieves best-in-class performance among open-source models on reasoning, coding, and agentic tasks, closing the gap with frontier models.
+
+ ## Benchmark
+
+ | Benchmark | GLM-5 | GLM-4.7 | DeepSeek-V3.2 | Kimi K2.5 | Claude Opus 4.5 | Gemini 3 Pro | GPT-5.2 (xhigh) |
+ | -------------------------------- | ------------- | --------- | ------------- | --------- | --------------- | ------------ | --------------- |
+ | HLE | 30.5 | 24.8 | 25.1 | 31.5 | 28.4 | 37.2 | 35.4 |
+ | HLE (w/ Tools) | 50.4 | 42.8 | 40.8 | 51.8 | 43.4* | 45.8* | 45.5* |
+ | AIME 2026 I | 92.7 | 92.9 | 92.7 | 92.5 | 93.3 | 90.6 | - |
+ | HMMT Nov. 2025 | 96.9 | 93.5 | 90.2 | 91.1 | 91.7 | 93.0 | 97.1 |
+ | IMOAnswerBench | 82.5 | 82.0 | 78.3 | 81.8 | 78.5 | 83.3 | 86.3 |
+ | GPQA-Diamond | 86.0 | 85.7 | 82.4 | 87.6 | 87.0 | 91.9 | 92.4 |
+ | SWE-bench Verified | 77.8 | 73.8 | 73.1 | 76.8 | 80.9 | 76.2 | 80.0 |
+ | SWE-bench Multilingual | 73.3 | 66.7 | 70.2 | 73.0 | 77.5 | 65.0 | 72.0 |
+ | Terminal-Bench 2.0 (Terminus 2) | 56.2 / 60.7 † | 41.0 | 39.3 | 50.8 | 59.3 | 54.2 | 54.0 |
+ | Terminal-Bench 2.0 (Claude Code) | 56.2 / 61.1 † | 32.8 | 46.4 | - | 57.9 | - | - |
+ | CyberGym | 43.2 | 23.5 | 17.3 | 41.3 | 50.6 | 39.9 | - |
+ | BrowseComp | 62.0 | 52.0 | 51.4 | 60.6 | 37.0 | 37.8 | - |
+ | BrowseComp (w/ Context Management) | 75.9 | 67.5 | 67.6 | 74.9 | 67.8 | 59.2 | 65.8 |
+ | BrowseComp-Zh | 72.7 | 66.6 | 65.0 | 62.3 | 62.4 | 66.8 | 76.1 |
+ | τ²-Bench | 89.7 | 87.4 | 85.3 | 80.2 | 91.6 | 90.7 | 85.5 |
+ | MCP-Atlas (Public Set) | 67.8 | 52.0 | 62.2 | 63.8 | 65.2 | 66.6 | 68.0 |
+ | Tool-Decathlon | 38.0 | 23.8 | 35.2 | 27.8 | 43.5 | 36.4 | 46.3 |
+ | Vending Bench 2 | $4,432.12 | $2,376.82 | $1,034.00 | $1,198.46 | $4,967.06 | $5,478.16 | $3,591.33 |
+
+ > *: refers to scores on the full set.
+ >
+ > †: a verified version of Terminal-Bench 2.0 that fixes some ambiguous instructions.
+ > See the footnotes below for more evaluation details.
+
+ ### Footnotes
+
+ * **Humanity’s Last Exam (HLE) & other reasoning tasks**: We evaluate with a maximum generation length of 131,072 tokens (`temperature=1.0, top_p=0.95, max_new_tokens=131072`). By default, we report the text-only subset; results marked with * are from the full set. We use GPT-5.2 (medium) as the judge model. For HLE with tools, we use a maximum context length of 202,752 tokens.
+ * **SWE-bench & SWE-bench Multilingual**: We run the SWE-bench suite with OpenHands using a tailored instruction prompt. Settings: `temperature=0.7, top_p=0.95, max_new_tokens=16384`, with a 200K context window.
+ * **BrowseComp**: Without context management, we retain details from the most recent 5 turns. With context management, we use the same discard-all strategy as DeepSeek-V3.2 and Kimi K2.5.
+ * **Terminal-Bench 2.0 (Terminus 2)**: We evaluate with the Terminus framework using `timeout=2h, temperature=0.7, top_p=1.0, max_new_tokens=8192`, with a 128K context window. Resource limits are capped at 16 CPUs and 32 GB RAM.
+ * **Terminal-Bench 2.0 (Claude Code)**: We evaluate in Claude Code 2.1.14 (think mode, default effort) with `temperature=1.0, top_p=0.95, max_new_tokens=65536`. We remove wall-clock time limits due to generation speed, while preserving per-task CPU and memory constraints. Scores are averaged over 5 runs. We fix environment issues introduced by Claude Code and also report results on a verified Terminal-Bench 2.0 dataset that resolves ambiguous instructions (see: [https://huggingface.co/datasets/zai-org/terminal-bench-2-verified](https://huggingface.co/datasets/zai-org/terminal-bench-2-verified)).
+ * **CyberGym**: We evaluate in Claude Code 2.1.18 (think mode, no web tools) with `temperature=1.0, top_p=1.0, max_new_tokens=32000` and a 250-minute timeout per task. Results are single-run Pass@1 over 1,507 tasks.
+ * **MCP-Atlas**: All models are evaluated in think mode on the 500-task public subset with a 10-minute timeout per task. We use Gemini 3 Pro as the judge model.
+ * **τ²-bench**: We add a small prompt adjustment in Retail and Telecom to avoid failures caused by premature user termination. For Airline, we apply the domain fixes proposed in the Claude Opus 4.5 system card.
+ * **Vending Bench 2**: Runs are conducted independently by [Andon Labs](https://andonlabs.com/evals/vending-bench-2).
+
+ ## Serve GLM-5 Locally
+
+ ### Prepare environment
+
+ vLLM, SGLang, KTransformers, and xLLM all support local deployment of GLM-5. A brief deployment guide for each is provided below.
+
+ + vLLM
+
+ Using Docker:
+
+ ```shell
+ docker pull vllm/vllm-openai:nightly
+ ```
+
+ or using pip:
+
+ ```shell
+ pip install -U vllm --pre --index-url https://pypi.org/simple --extra-index-url https://wheels.vllm.ai/nightly
+ ```
+
+ then upgrade transformers:
+
+ ```shell
+ pip install git+https://github.com/huggingface/transformers.git
+ ```
+
+ + SGLang
+
+ Using Docker:
+
+ ```bash
+ docker pull lmsysorg/sglang:glm5-hopper    # for Hopper GPUs
+ docker pull lmsysorg/sglang:glm5-blackwell # for Blackwell GPUs
+ ```
+
+ ### Deploy
+
+ + vLLM
+
+ ```shell
+ vllm serve zai-org/GLM-5-FP8 \
+     --tensor-parallel-size 8 \
+     --gpu-memory-utilization 0.85 \
+     --speculative-config.method mtp \
+     --speculative-config.num_speculative_tokens 1 \
+     --tool-call-parser glm47 \
+     --reasoning-parser glm45 \
+     --enable-auto-tool-choice \
+     --served-model-name glm-5-fp8
+ ```
+
+ Check the [recipes](https://github.com/vllm-project/recipes/blob/main/GLM/GLM5.md) for more details.
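+
+ Once the server is up, it exposes an OpenAI-compatible API. Below is a minimal sketch of a chat request, assuming the server runs locally on the default port 8000 and uses the served model name above; the same request works against the SGLang server, which also exposes an OpenAI-compatible endpoint:
+
+ ```python
+ from openai import OpenAI
+
+ # vLLM (and SGLang) serve an OpenAI-compatible endpoint under /v1.
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="glm-5-fp8",  # must match --served-model-name
+     messages=[{"role": "user", "content": "Briefly introduce GLM-5."}],
+     temperature=1.0,
+     top_p=0.95,  # sampling defaults from generation_config.json
+ )
+ print(response.choices[0].message.content)
+ ```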
+
+ + SGLang
+
+ ```shell
+ python3 -m sglang.launch_server \
+     --model-path zai-org/GLM-5-FP8 \
+     --tp-size 8 \
+     --tool-call-parser glm47 \
+     --reasoning-parser glm45 \
+     --speculative-algorithm EAGLE \
+     --speculative-num-steps 3 \
+     --speculative-eagle-topk 1 \
+     --speculative-num-draft-tokens 4 \
+     --mem-fraction-static 0.85 \
+     --served-model-name glm-5-fp8
+ ```
+
+ Check the [SGLang cookbook](https://cookbook.sglang.io/autoregressive/GLM/GLM-5) for more details.
+
+ + xLLM and other Ascend NPU deployments
+
+ Please check the deployment guide [here](https://github.com/zai-org/GLM-5/blob/main/example/ascend.md).
+
+ + KTransformers
+
+ Please check the deployment guide [here](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/kt-kernel/GLM-5-Tutorial.md).
+
+ ## Citation
+
+ Our technical report is coming soon.
chat_template.jinja ADDED
@@ -0,0 +1,86 @@
+ [gMASK]<sop>
+ {%- if tools -%}
+ <|system|>
+ # Tools
+
+ You may call one or more functions to assist with the user query.
+
+ You are provided with function signatures within <tools></tools> XML tags:
+ <tools>
+ {% for tool in tools %}
+ {{ tool | tojson(ensure_ascii=False) }}
+ {% endfor %}
+ </tools>
+
+ For each function call, output the function name and arguments within the following XML format:
+ <tool_call>{function-name}<arg_key>{arg-key-1}</arg_key><arg_value>{arg-value-1}</arg_value><arg_key>{arg-key-2}</arg_key><arg_value>{arg-value-2}</arg_value>...</tool_call>{%- endif -%}
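+ {#- Helper macro: extracts the visible text from a message, whether its content is a plain string or a list of parts (string items and {'type': 'text'} parts are kept). -#}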
+ {%- macro visible_text(content) -%}
+ {%- if content is string -%}
+ {{- content }}
+ {%- elif content is iterable and content is not mapping -%}
+ {%- for item in content -%}
+ {%- if item is mapping and item.type == 'text' -%}
+ {{- item.text }}
+ {%- elif item is string -%}
+ {{- item }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- else -%}
+ {{- content }}
+ {%- endif -%}
+ {%- endmacro -%}
+ {%- set ns = namespace(last_user_index=-1) %}
+ {%- for m in messages %}
+ {%- if m.role == 'user' %}
+ {% set ns.last_user_index = loop.index0 -%}
+ {%- endif %}
+ {%- endfor %}
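+ {#- Render the conversation. For assistant turns, <think> reasoning is replayed only after the last user message (or always when clear_thinking is false); tool calls are serialized in the <tool_call>/<arg_key>/<arg_value> XML format defined above. -#}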
+ {% for m in messages %}
+ {%- if m.role == 'user' -%}<|user|>{{ visible_text(m.content) }}
+ {%- elif m.role == 'assistant' -%}
+ <|assistant|>
+ {%- set reasoning_content = '' %}
+ {%- set content = visible_text(m.content) %}
+ {%- if m.reasoning_content is string %}
+ {%- set reasoning_content = m.reasoning_content %}
+ {%- else %}
+ {%- if '</think>' in content %}
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
+ {%- endif %}
+ {%- endif %}
+ {%- if ((clear_thinking is defined and not clear_thinking) or loop.index0 > ns.last_user_index) and reasoning_content -%}
+ {{ '<think>' + reasoning_content.strip() + '</think>'}}
+ {%- else -%}
+ {{ '</think>' }}
+ {%- endif -%}
+ {%- if content.strip() -%}
+ {{ content.strip() }}
+ {%- endif -%}
+ {% if m.tool_calls %}
+ {% for tc in m.tool_calls %}
+ {%- if tc.function %}
+ {%- set tc = tc.function %}
+ {%- endif %}
+ {{- '<tool_call>' + tc.name -}}
+ {% set _args = tc.arguments %}{% for k, v in _args.items() %}<arg_key>{{ k }}</arg_key><arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>{% endfor %}</tool_call>{% endfor %}
+ {% endif %}
+ {%- elif m.role == 'tool' -%}
+ {%- if m.content is string -%}
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+ {{- '<|observation|>' }}
+ {%- endif %}
+ {{- '<tool_response>' }}
+ {{- m.content }}
+ {{- '</tool_response>' }}
+ {%- else -%}
+ <|observation|>{% for tr in m.content %}
+ <tool_response>{{ tr.output if tr.output is defined else tr }}</tool_response>{% endfor -%}
+ {% endif -%}
+ {%- elif m.role == 'system' -%}
+ <|system|>{{ visible_text(m.content) }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- if add_generation_prompt -%}
+ <|assistant|>{{- '</think>' if (enable_thinking is defined and not enable_thinking) else '<think>' -}}
+ {%- endif -%}
config.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "architectures": [
+     "DeepseekV32ForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "dtype": "bfloat16",
+   "eos_token_id": [
+     154820,
+     154827,
+     154829
+   ],
+   "ep_size": 1,
+   "first_k_dense_replace": 3,
+   "hidden_act": "silu",
+   "head_dim": 64,
+   "hidden_size": 6144,
+   "index_head_dim": 128,
+   "index_n_heads": 32,
+   "index_topk": 2048,
+   "indexer_rope_interleave": true,
+   "initializer_range": 0.02,
+   "intermediate_size": 12288,
+   "kv_lora_rank": 512,
+   "max_position_embeddings": 202752,
+   "moe_intermediate_size": 2048,
+   "moe_layer_freq": 1,
+   "model_type": "deepseek_v32",
+   "n_group": 1,
+   "n_routed_experts": 256,
+   "n_shared_experts": 1,
+   "norm_topk_prob": true,
+   "num_attention_heads": 64,
+   "num_experts_per_tok": 8,
+   "num_hidden_layers": 4,
+   "num_key_value_heads": 64,
+   "num_nextn_predict_layers": 1,
+   "pad_token_id": 154820,
+   "pretraining_tp": 1,
+   "q_lora_rank": 2048,
+   "qk_head_dim": 256,
+   "qk_nope_head_dim": 192,
+   "qk_rope_head_dim": 64,
+   "rms_norm_eps": 1e-05,
+   "rope_interleave": true,
+   "rope_parameters": {
+     "rope_theta": 1000000,
+     "rope_type": "default"
+   },
+   "routed_scaling_factor": 2.5,
+   "scoring_func": "sigmoid",
+   "tie_word_embeddings": false,
+   "topk_group": 1,
+   "topk_method": "noaux_tc",
+   "transformers_version": "5.0.2.dev0",
+   "use_cache": true,
+   "v_head_dim": 256,
+   "vocab_size": 154880,
+   "auto_map": {
+     "AutoConfig": "configuration_deepseek_v32.DeepseekV32Config",
+     "AutoModelForCausalLM": "modeling_deepseek_v32.DeepseekV32ForCausalLM"
+   }
+ }
configuration_deepseek_v32.py ADDED
@@ -0,0 +1,61 @@
+ # coding=utf-8
+ # Copyright 2025 bzantium and the HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on the DeepSeekV3 implementations from the DeepSeek AI team. (https://huggingface.co/deepseek-ai/DeepSeek-V3)
+
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """DeepSeekV3.2 model configuration"""
+
+ from typing import Optional
+
+ from transformers.models.deepseek_v3.configuration_deepseek_v3 import DeepseekV3Config
+
+
+ DEEPSEEK_V32_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
+
+
+ class DeepseekV32Config(DeepseekV3Config):
+     r"""
+     This is the configuration class to store the configuration of a [`DeepseekV32Model`]. It is used to instantiate a
+     DeepSeek V3.2 model according to the specified arguments, defining the model architecture.
+
+     DeepSeek V3.2 extends DeepSeek V3 with a native sparse attention mechanism that uses an indexer for efficient
+     attention computation on long sequences.
+
+     Configuration objects inherit from [`DeepseekV3Config`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         index_topk (`int`, *optional*, defaults to 2048):
+             Number of top-k tokens to select for sparse attention. This enables the native sparse attention
+             mechanism in DeepSeek V3.2.
+         **kwargs:
+             All other arguments from DeepseekV3Config.
+
+     ```python
+     >>> from transformers import DeepseekV32Model, DeepseekV32Config
+     >>> # Initializing a DeepSeek-V3.2 style configuration
+     >>> configuration = DeepseekV32Config()
+     >>> # Initializing a model from the configuration
+     >>> model = DeepseekV32Model(configuration)
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "deepseek_v32"
+
+     def __init__(
+         self,
+         index_topk: Optional[int] = 2048,
+         **kwargs,
+     ):
+         super().__init__(**kwargs)
+         self.index_topk = index_topk
+
+
+ __all__ = ["DeepseekV32Config"]
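+
+ # Note: "auto_map" in config.json routes AutoConfig to this class, so loading the
+ # configuration from the Hub requires trust_remote_code=True (repo id illustrative):
+ #   from transformers import AutoConfig
+ #   config = AutoConfig.from_pretrained("zai-org/GLM-5", trust_remote_code=True)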
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "_from_model_config": true,
+   "eos_token_id": [
+     154820,
+     154827,
+     154829
+   ],
+   "pad_token_id": 154820,
+   "temperature": 1.0,
+   "top_p": 0.95,
+   "transformers_version": "5.0.2.dev0"
+ }
model-00001-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:198ef923a7ca4effc5ead8ebf799fee10beb8ce081352fb099636f805d1deda9
+ size 5342821416
model-00002-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd6c14c9bf4a9695408f1c8e62c03287c532a3759a7cd84952a13073d9dbe313
+ size 67108984
model-00038-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c648ea022d4fd7171598777d4e6d9b52c548eae99b1cdad74ddf7c74a7c6713d
+ size 709524696
model-00039-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2709fa9fee5c27e8e8a4093db5eaa577e8c612e442a4691054e8aada6f19c67b
+ size 92274928
model-00075-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61269b6447c93de8cbcb9350251a8443aaad6e818f758855e76f0a1c1b4fdb1a
+ size 679492952
model-00076-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:807bf5e2cd4beaa0594d692cb8939a5be25a404cdb188c23e9697791115b1e77
+ size 5360347104
model-00077-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:316499d677b61f3748fe237624c07ca84342fd79b84beb8dc7fe4ac64b2918c0
+ size 5360347104
model-00078-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b7faf14ea9f2eebd31f1c600031adece58e82e42cec6ad1ef57cc9a80ae4133
+ size 5360346968
model-00079-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d52dfd2e3d0e8c8a09c682bded9597256640f2c27549ff8b2d1b5f727df7034f
+ size 2994373544
model-00282-of-00282.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee2233ec70c39f22fa82951102c588c2fc0fb24802dad52fb14b285731eb118b
+ size 12376
model.safetensors.index.json ADDED
@@ -0,0 +1,848 @@
+ {
+   "metadata": {
+     "total_size": 25966545920
+   },
+   "weight_map": {
+     "lm_head.weight": "model-00001-of-00282.safetensors",
+     "model.embed_tokens.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.input_layernorm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.mlp.down_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.mlp.up_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.indexer.k_norm.bias": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.indexer.k_norm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.indexer.weights_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.indexer.wk.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.indexer.wq_b.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.kv_a_layernorm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.kv_a_proj_with_mqa.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.kv_b_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.q_a_layernorm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.q_a_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.0.self_attn.q_b_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.input_layernorm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.mlp.down_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.mlp.up_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.indexer.k_norm.bias": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.indexer.k_norm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.indexer.weights_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.indexer.wk.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.indexer.wq_b.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.kv_a_layernorm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.kv_a_proj_with_mqa.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.kv_b_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.q_a_layernorm.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.q_a_proj.weight": "model-00001-of-00282.safetensors",
+     "model.layers.1.self_attn.q_b_proj.weight": "model-00002-of-00282.safetensors",
+     "model.layers.2.input_layernorm.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.mlp.down_proj.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.mlp.gate_proj.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.mlp.up_proj.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.post_attention_layernorm.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.indexer.k_norm.bias": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.indexer.k_norm.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.indexer.weights_proj.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.indexer.wk.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.indexer.wq_b.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.kv_a_layernorm.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.kv_a_proj_with_mqa.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.kv_b_proj.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.o_proj.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.q_a_layernorm.weight": "model-00038-of-00282.safetensors",
+     "model.layers.2.self_attn.q_a_proj.weight": "model-00039-of-00282.safetensors",
+     "model.layers.2.self_attn.q_b_proj.weight": "model-00039-of-00282.safetensors",
+     "model.layers.3.input_layernorm.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.0.down_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.0.gate_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.0.up_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.1.down_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.1.gate_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.1.up_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.10.down_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.10.gate_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.10.up_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.100.down_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.100.gate_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.100.up_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.101.down_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.101.gate_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.101.up_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.102.down_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.102.gate_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.102.up_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.103.down_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.103.gate_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.103.up_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.104.down_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.104.gate_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.104.up_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.105.down_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.105.gate_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.105.up_proj.weight": "model-00075-of-00282.safetensors",
+     "model.layers.3.mlp.experts.106.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.106.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.106.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.107.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.107.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.107.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.108.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.108.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.108.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.109.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.109.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.109.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.11.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.11.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.11.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.110.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.110.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.110.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.111.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.111.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.111.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.112.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.112.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.112.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.113.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.113.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.113.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.114.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.114.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.114.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.115.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.115.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.115.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.116.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.116.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.116.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.117.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.117.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.117.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.118.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.118.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.118.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.119.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.119.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.119.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.12.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.12.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.12.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.120.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.120.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.120.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.121.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.121.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.121.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.122.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.122.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.122.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.123.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.123.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.123.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.124.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.124.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.124.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.125.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.125.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.125.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.126.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.126.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.126.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.127.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.127.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.127.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.128.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.128.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.128.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.129.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.129.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.129.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.13.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.13.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.13.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.130.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.130.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.130.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.131.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.131.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.131.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.132.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.132.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.132.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.133.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.133.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.133.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.134.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.134.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.134.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.135.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.135.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.135.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.136.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.136.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.136.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.137.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.137.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.137.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.138.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.138.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.138.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.139.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.139.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.139.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.14.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.14.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.14.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.140.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.140.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.140.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.141.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.141.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.141.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.142.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.142.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.142.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.143.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.143.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.143.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.144.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.144.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.144.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.145.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.145.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.145.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.146.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.146.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.146.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.147.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.147.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.147.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.148.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.148.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.148.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.149.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.149.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.149.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.15.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.15.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.15.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.150.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.150.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.150.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.151.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.151.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.151.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.152.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.152.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.152.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.153.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.153.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.153.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.154.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.154.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.154.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.155.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.155.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.155.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.156.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.156.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.156.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.157.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.157.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.157.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.158.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.158.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.158.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.159.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.159.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.159.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.16.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.16.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.16.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.160.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.160.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.160.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.161.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.161.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.161.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.162.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.162.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.162.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.163.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.163.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.163.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.164.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.164.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.164.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.165.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.165.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.165.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.166.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.166.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.166.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.167.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.167.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.167.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.168.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.168.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.168.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.169.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.169.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.169.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.17.down_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.17.gate_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.17.up_proj.weight": "model-00076-of-00282.safetensors",
+     "model.layers.3.mlp.experts.170.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.170.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.170.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.171.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.171.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.171.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.172.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.172.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.172.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.173.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.173.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.173.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.174.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.174.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.174.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.175.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.175.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.175.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.176.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.176.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.176.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.177.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.177.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.177.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.178.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.178.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.178.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.179.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.179.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.179.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.18.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.18.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.18.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.180.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.180.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.180.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.181.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.181.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.181.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.182.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.182.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.182.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.183.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.183.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.183.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.184.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.184.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.184.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.185.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.185.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.185.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.186.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.186.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.186.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.187.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.187.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.187.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.188.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.188.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.188.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.189.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.189.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.189.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.19.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.19.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.19.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.190.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.190.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.190.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.191.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.191.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.191.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.192.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.192.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.192.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.193.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.193.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.193.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.194.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.194.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.194.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.195.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.195.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.195.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.196.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.196.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.196.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.197.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.197.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.197.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.198.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.198.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.198.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.199.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.199.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.199.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.2.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.2.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.2.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.20.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.20.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.20.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.200.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.200.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.200.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.201.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.201.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.201.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.202.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.202.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.202.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.203.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.203.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.203.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.204.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.204.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.204.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.205.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.205.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.205.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.206.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.206.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.206.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.207.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.207.gate_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.207.up_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.208.down_proj.weight": "model-00077-of-00282.safetensors",
+     "model.layers.3.mlp.experts.208.gate_proj.weight": "model-00077-of-00282.safetensors",
428
+ "model.layers.3.mlp.experts.208.up_proj.weight": "model-00077-of-00282.safetensors",
429
+ "model.layers.3.mlp.experts.209.down_proj.weight": "model-00077-of-00282.safetensors",
430
+ "model.layers.3.mlp.experts.209.gate_proj.weight": "model-00077-of-00282.safetensors",
431
+ "model.layers.3.mlp.experts.209.up_proj.weight": "model-00077-of-00282.safetensors",
432
+ "model.layers.3.mlp.experts.21.down_proj.weight": "model-00077-of-00282.safetensors",
433
+ "model.layers.3.mlp.experts.21.gate_proj.weight": "model-00077-of-00282.safetensors",
434
+ "model.layers.3.mlp.experts.21.up_proj.weight": "model-00077-of-00282.safetensors",
435
+ "model.layers.3.mlp.experts.210.down_proj.weight": "model-00077-of-00282.safetensors",
436
+ "model.layers.3.mlp.experts.210.gate_proj.weight": "model-00077-of-00282.safetensors",
437
+ "model.layers.3.mlp.experts.210.up_proj.weight": "model-00077-of-00282.safetensors",
438
+ "model.layers.3.mlp.experts.211.down_proj.weight": "model-00077-of-00282.safetensors",
439
+ "model.layers.3.mlp.experts.211.gate_proj.weight": "model-00077-of-00282.safetensors",
440
+ "model.layers.3.mlp.experts.211.up_proj.weight": "model-00077-of-00282.safetensors",
441
+ "model.layers.3.mlp.experts.212.down_proj.weight": "model-00077-of-00282.safetensors",
442
+ "model.layers.3.mlp.experts.212.gate_proj.weight": "model-00077-of-00282.safetensors",
443
+ "model.layers.3.mlp.experts.212.up_proj.weight": "model-00077-of-00282.safetensors",
444
+ "model.layers.3.mlp.experts.213.down_proj.weight": "model-00077-of-00282.safetensors",
445
+ "model.layers.3.mlp.experts.213.gate_proj.weight": "model-00077-of-00282.safetensors",
446
+ "model.layers.3.mlp.experts.213.up_proj.weight": "model-00077-of-00282.safetensors",
447
+ "model.layers.3.mlp.experts.214.down_proj.weight": "model-00077-of-00282.safetensors",
448
+ "model.layers.3.mlp.experts.214.gate_proj.weight": "model-00077-of-00282.safetensors",
449
+ "model.layers.3.mlp.experts.214.up_proj.weight": "model-00077-of-00282.safetensors",
450
+ "model.layers.3.mlp.experts.215.down_proj.weight": "model-00077-of-00282.safetensors",
451
+ "model.layers.3.mlp.experts.215.gate_proj.weight": "model-00077-of-00282.safetensors",
452
+ "model.layers.3.mlp.experts.215.up_proj.weight": "model-00077-of-00282.safetensors",
453
+ "model.layers.3.mlp.experts.216.down_proj.weight": "model-00077-of-00282.safetensors",
454
+ "model.layers.3.mlp.experts.216.gate_proj.weight": "model-00077-of-00282.safetensors",
455
+ "model.layers.3.mlp.experts.216.up_proj.weight": "model-00077-of-00282.safetensors",
456
+ "model.layers.3.mlp.experts.217.down_proj.weight": "model-00077-of-00282.safetensors",
457
+ "model.layers.3.mlp.experts.217.gate_proj.weight": "model-00077-of-00282.safetensors",
458
+ "model.layers.3.mlp.experts.217.up_proj.weight": "model-00077-of-00282.safetensors",
459
+ "model.layers.3.mlp.experts.218.down_proj.weight": "model-00077-of-00282.safetensors",
460
+ "model.layers.3.mlp.experts.218.gate_proj.weight": "model-00077-of-00282.safetensors",
461
+ "model.layers.3.mlp.experts.218.up_proj.weight": "model-00077-of-00282.safetensors",
462
+ "model.layers.3.mlp.experts.219.down_proj.weight": "model-00077-of-00282.safetensors",
463
+ "model.layers.3.mlp.experts.219.gate_proj.weight": "model-00077-of-00282.safetensors",
464
+ "model.layers.3.mlp.experts.219.up_proj.weight": "model-00077-of-00282.safetensors",
465
+ "model.layers.3.mlp.experts.22.down_proj.weight": "model-00077-of-00282.safetensors",
466
+ "model.layers.3.mlp.experts.22.gate_proj.weight": "model-00077-of-00282.safetensors",
467
+ "model.layers.3.mlp.experts.22.up_proj.weight": "model-00077-of-00282.safetensors",
468
+ "model.layers.3.mlp.experts.220.down_proj.weight": "model-00077-of-00282.safetensors",
469
+ "model.layers.3.mlp.experts.220.gate_proj.weight": "model-00077-of-00282.safetensors",
470
+ "model.layers.3.mlp.experts.220.up_proj.weight": "model-00077-of-00282.safetensors",
471
+ "model.layers.3.mlp.experts.221.down_proj.weight": "model-00077-of-00282.safetensors",
472
+ "model.layers.3.mlp.experts.221.gate_proj.weight": "model-00077-of-00282.safetensors",
473
+ "model.layers.3.mlp.experts.221.up_proj.weight": "model-00077-of-00282.safetensors",
474
+ "model.layers.3.mlp.experts.222.down_proj.weight": "model-00077-of-00282.safetensors",
475
+ "model.layers.3.mlp.experts.222.gate_proj.weight": "model-00077-of-00282.safetensors",
476
+ "model.layers.3.mlp.experts.222.up_proj.weight": "model-00077-of-00282.safetensors",
477
+ "model.layers.3.mlp.experts.223.down_proj.weight": "model-00077-of-00282.safetensors",
478
+ "model.layers.3.mlp.experts.223.gate_proj.weight": "model-00077-of-00282.safetensors",
479
+ "model.layers.3.mlp.experts.223.up_proj.weight": "model-00077-of-00282.safetensors",
480
+ "model.layers.3.mlp.experts.224.down_proj.weight": "model-00077-of-00282.safetensors",
481
+ "model.layers.3.mlp.experts.224.gate_proj.weight": "model-00077-of-00282.safetensors",
482
+ "model.layers.3.mlp.experts.224.up_proj.weight": "model-00077-of-00282.safetensors",
483
+ "model.layers.3.mlp.experts.225.down_proj.weight": "model-00077-of-00282.safetensors",
484
+ "model.layers.3.mlp.experts.225.gate_proj.weight": "model-00077-of-00282.safetensors",
485
+ "model.layers.3.mlp.experts.225.up_proj.weight": "model-00077-of-00282.safetensors",
486
+ "model.layers.3.mlp.experts.226.down_proj.weight": "model-00077-of-00282.safetensors",
487
+ "model.layers.3.mlp.experts.226.gate_proj.weight": "model-00077-of-00282.safetensors",
488
+ "model.layers.3.mlp.experts.226.up_proj.weight": "model-00077-of-00282.safetensors",
489
+ "model.layers.3.mlp.experts.227.down_proj.weight": "model-00077-of-00282.safetensors",
490
+ "model.layers.3.mlp.experts.227.gate_proj.weight": "model-00077-of-00282.safetensors",
491
+ "model.layers.3.mlp.experts.227.up_proj.weight": "model-00077-of-00282.safetensors",
492
+ "model.layers.3.mlp.experts.228.down_proj.weight": "model-00077-of-00282.safetensors",
493
+ "model.layers.3.mlp.experts.228.gate_proj.weight": "model-00077-of-00282.safetensors",
494
+ "model.layers.3.mlp.experts.228.up_proj.weight": "model-00077-of-00282.safetensors",
495
+ "model.layers.3.mlp.experts.229.down_proj.weight": "model-00077-of-00282.safetensors",
496
+ "model.layers.3.mlp.experts.229.gate_proj.weight": "model-00077-of-00282.safetensors",
497
+ "model.layers.3.mlp.experts.229.up_proj.weight": "model-00077-of-00282.safetensors",
498
+ "model.layers.3.mlp.experts.23.down_proj.weight": "model-00077-of-00282.safetensors",
499
+ "model.layers.3.mlp.experts.23.gate_proj.weight": "model-00077-of-00282.safetensors",
500
+ "model.layers.3.mlp.experts.23.up_proj.weight": "model-00077-of-00282.safetensors",
501
+ "model.layers.3.mlp.experts.230.down_proj.weight": "model-00077-of-00282.safetensors",
502
+ "model.layers.3.mlp.experts.230.gate_proj.weight": "model-00077-of-00282.safetensors",
503
+ "model.layers.3.mlp.experts.230.up_proj.weight": "model-00077-of-00282.safetensors",
504
+ "model.layers.3.mlp.experts.231.down_proj.weight": "model-00077-of-00282.safetensors",
505
+ "model.layers.3.mlp.experts.231.gate_proj.weight": "model-00077-of-00282.safetensors",
506
+ "model.layers.3.mlp.experts.231.up_proj.weight": "model-00077-of-00282.safetensors",
507
+ "model.layers.3.mlp.experts.232.down_proj.weight": "model-00077-of-00282.safetensors",
508
+ "model.layers.3.mlp.experts.232.gate_proj.weight": "model-00077-of-00282.safetensors",
509
+ "model.layers.3.mlp.experts.232.up_proj.weight": "model-00077-of-00282.safetensors",
510
+ "model.layers.3.mlp.experts.233.down_proj.weight": "model-00077-of-00282.safetensors",
511
+ "model.layers.3.mlp.experts.233.gate_proj.weight": "model-00077-of-00282.safetensors",
512
+ "model.layers.3.mlp.experts.233.up_proj.weight": "model-00077-of-00282.safetensors",
513
+ "model.layers.3.mlp.experts.234.down_proj.weight": "model-00078-of-00282.safetensors",
514
+ "model.layers.3.mlp.experts.234.gate_proj.weight": "model-00078-of-00282.safetensors",
515
+ "model.layers.3.mlp.experts.234.up_proj.weight": "model-00078-of-00282.safetensors",
516
+ "model.layers.3.mlp.experts.235.down_proj.weight": "model-00078-of-00282.safetensors",
517
+ "model.layers.3.mlp.experts.235.gate_proj.weight": "model-00078-of-00282.safetensors",
518
+ "model.layers.3.mlp.experts.235.up_proj.weight": "model-00078-of-00282.safetensors",
519
+ "model.layers.3.mlp.experts.236.down_proj.weight": "model-00078-of-00282.safetensors",
520
+ "model.layers.3.mlp.experts.236.gate_proj.weight": "model-00078-of-00282.safetensors",
521
+ "model.layers.3.mlp.experts.236.up_proj.weight": "model-00078-of-00282.safetensors",
522
+ "model.layers.3.mlp.experts.237.down_proj.weight": "model-00078-of-00282.safetensors",
523
+ "model.layers.3.mlp.experts.237.gate_proj.weight": "model-00078-of-00282.safetensors",
524
+ "model.layers.3.mlp.experts.237.up_proj.weight": "model-00078-of-00282.safetensors",
525
+ "model.layers.3.mlp.experts.238.down_proj.weight": "model-00078-of-00282.safetensors",
526
+ "model.layers.3.mlp.experts.238.gate_proj.weight": "model-00078-of-00282.safetensors",
527
+ "model.layers.3.mlp.experts.238.up_proj.weight": "model-00078-of-00282.safetensors",
528
+ "model.layers.3.mlp.experts.239.down_proj.weight": "model-00078-of-00282.safetensors",
529
+ "model.layers.3.mlp.experts.239.gate_proj.weight": "model-00078-of-00282.safetensors",
530
+ "model.layers.3.mlp.experts.239.up_proj.weight": "model-00078-of-00282.safetensors",
531
+ "model.layers.3.mlp.experts.24.down_proj.weight": "model-00078-of-00282.safetensors",
532
+ "model.layers.3.mlp.experts.24.gate_proj.weight": "model-00078-of-00282.safetensors",
533
+ "model.layers.3.mlp.experts.24.up_proj.weight": "model-00078-of-00282.safetensors",
534
+ "model.layers.3.mlp.experts.240.down_proj.weight": "model-00078-of-00282.safetensors",
535
+ "model.layers.3.mlp.experts.240.gate_proj.weight": "model-00078-of-00282.safetensors",
536
+ "model.layers.3.mlp.experts.240.up_proj.weight": "model-00078-of-00282.safetensors",
537
+ "model.layers.3.mlp.experts.241.down_proj.weight": "model-00078-of-00282.safetensors",
538
+ "model.layers.3.mlp.experts.241.gate_proj.weight": "model-00078-of-00282.safetensors",
539
+ "model.layers.3.mlp.experts.241.up_proj.weight": "model-00078-of-00282.safetensors",
540
+ "model.layers.3.mlp.experts.242.down_proj.weight": "model-00078-of-00282.safetensors",
541
+ "model.layers.3.mlp.experts.242.gate_proj.weight": "model-00078-of-00282.safetensors",
542
+ "model.layers.3.mlp.experts.242.up_proj.weight": "model-00078-of-00282.safetensors",
543
+ "model.layers.3.mlp.experts.243.down_proj.weight": "model-00078-of-00282.safetensors",
544
+ "model.layers.3.mlp.experts.243.gate_proj.weight": "model-00078-of-00282.safetensors",
545
+ "model.layers.3.mlp.experts.243.up_proj.weight": "model-00078-of-00282.safetensors",
546
+ "model.layers.3.mlp.experts.244.down_proj.weight": "model-00078-of-00282.safetensors",
547
+ "model.layers.3.mlp.experts.244.gate_proj.weight": "model-00078-of-00282.safetensors",
548
+ "model.layers.3.mlp.experts.244.up_proj.weight": "model-00078-of-00282.safetensors",
549
+ "model.layers.3.mlp.experts.245.down_proj.weight": "model-00078-of-00282.safetensors",
550
+ "model.layers.3.mlp.experts.245.gate_proj.weight": "model-00078-of-00282.safetensors",
551
+ "model.layers.3.mlp.experts.245.up_proj.weight": "model-00078-of-00282.safetensors",
552
+ "model.layers.3.mlp.experts.246.down_proj.weight": "model-00078-of-00282.safetensors",
553
+ "model.layers.3.mlp.experts.246.gate_proj.weight": "model-00078-of-00282.safetensors",
554
+ "model.layers.3.mlp.experts.246.up_proj.weight": "model-00078-of-00282.safetensors",
555
+ "model.layers.3.mlp.experts.247.down_proj.weight": "model-00078-of-00282.safetensors",
556
+ "model.layers.3.mlp.experts.247.gate_proj.weight": "model-00078-of-00282.safetensors",
557
+ "model.layers.3.mlp.experts.247.up_proj.weight": "model-00078-of-00282.safetensors",
558
+ "model.layers.3.mlp.experts.248.down_proj.weight": "model-00078-of-00282.safetensors",
559
+ "model.layers.3.mlp.experts.248.gate_proj.weight": "model-00078-of-00282.safetensors",
560
+ "model.layers.3.mlp.experts.248.up_proj.weight": "model-00078-of-00282.safetensors",
561
+ "model.layers.3.mlp.experts.249.down_proj.weight": "model-00078-of-00282.safetensors",
562
+ "model.layers.3.mlp.experts.249.gate_proj.weight": "model-00078-of-00282.safetensors",
563
+ "model.layers.3.mlp.experts.249.up_proj.weight": "model-00078-of-00282.safetensors",
564
+ "model.layers.3.mlp.experts.25.down_proj.weight": "model-00078-of-00282.safetensors",
565
+ "model.layers.3.mlp.experts.25.gate_proj.weight": "model-00078-of-00282.safetensors",
566
+ "model.layers.3.mlp.experts.25.up_proj.weight": "model-00078-of-00282.safetensors",
567
+ "model.layers.3.mlp.experts.250.down_proj.weight": "model-00078-of-00282.safetensors",
568
+ "model.layers.3.mlp.experts.250.gate_proj.weight": "model-00078-of-00282.safetensors",
569
+ "model.layers.3.mlp.experts.250.up_proj.weight": "model-00078-of-00282.safetensors",
570
+ "model.layers.3.mlp.experts.251.down_proj.weight": "model-00078-of-00282.safetensors",
571
+ "model.layers.3.mlp.experts.251.gate_proj.weight": "model-00078-of-00282.safetensors",
572
+ "model.layers.3.mlp.experts.251.up_proj.weight": "model-00078-of-00282.safetensors",
573
+ "model.layers.3.mlp.experts.252.down_proj.weight": "model-00078-of-00282.safetensors",
574
+ "model.layers.3.mlp.experts.252.gate_proj.weight": "model-00078-of-00282.safetensors",
575
+ "model.layers.3.mlp.experts.252.up_proj.weight": "model-00078-of-00282.safetensors",
576
+ "model.layers.3.mlp.experts.253.down_proj.weight": "model-00078-of-00282.safetensors",
577
+ "model.layers.3.mlp.experts.253.gate_proj.weight": "model-00078-of-00282.safetensors",
578
+ "model.layers.3.mlp.experts.253.up_proj.weight": "model-00078-of-00282.safetensors",
579
+ "model.layers.3.mlp.experts.254.down_proj.weight": "model-00078-of-00282.safetensors",
580
+ "model.layers.3.mlp.experts.254.gate_proj.weight": "model-00078-of-00282.safetensors",
581
+ "model.layers.3.mlp.experts.254.up_proj.weight": "model-00078-of-00282.safetensors",
582
+ "model.layers.3.mlp.experts.255.down_proj.weight": "model-00078-of-00282.safetensors",
583
+ "model.layers.3.mlp.experts.255.gate_proj.weight": "model-00078-of-00282.safetensors",
584
+ "model.layers.3.mlp.experts.255.up_proj.weight": "model-00078-of-00282.safetensors",
585
+ "model.layers.3.mlp.experts.26.down_proj.weight": "model-00078-of-00282.safetensors",
586
+ "model.layers.3.mlp.experts.26.gate_proj.weight": "model-00078-of-00282.safetensors",
587
+ "model.layers.3.mlp.experts.26.up_proj.weight": "model-00078-of-00282.safetensors",
588
+ "model.layers.3.mlp.experts.27.down_proj.weight": "model-00078-of-00282.safetensors",
589
+ "model.layers.3.mlp.experts.27.gate_proj.weight": "model-00078-of-00282.safetensors",
590
+ "model.layers.3.mlp.experts.27.up_proj.weight": "model-00078-of-00282.safetensors",
591
+ "model.layers.3.mlp.experts.28.down_proj.weight": "model-00078-of-00282.safetensors",
592
+ "model.layers.3.mlp.experts.28.gate_proj.weight": "model-00078-of-00282.safetensors",
593
+ "model.layers.3.mlp.experts.28.up_proj.weight": "model-00078-of-00282.safetensors",
594
+ "model.layers.3.mlp.experts.29.down_proj.weight": "model-00078-of-00282.safetensors",
595
+ "model.layers.3.mlp.experts.29.gate_proj.weight": "model-00078-of-00282.safetensors",
596
+ "model.layers.3.mlp.experts.29.up_proj.weight": "model-00078-of-00282.safetensors",
597
+ "model.layers.3.mlp.experts.3.down_proj.weight": "model-00078-of-00282.safetensors",
598
+ "model.layers.3.mlp.experts.3.gate_proj.weight": "model-00078-of-00282.safetensors",
599
+ "model.layers.3.mlp.experts.3.up_proj.weight": "model-00078-of-00282.safetensors",
600
+ "model.layers.3.mlp.experts.30.down_proj.weight": "model-00078-of-00282.safetensors",
601
+ "model.layers.3.mlp.experts.30.gate_proj.weight": "model-00078-of-00282.safetensors",
602
+ "model.layers.3.mlp.experts.30.up_proj.weight": "model-00078-of-00282.safetensors",
603
+ "model.layers.3.mlp.experts.31.down_proj.weight": "model-00078-of-00282.safetensors",
604
+ "model.layers.3.mlp.experts.31.gate_proj.weight": "model-00078-of-00282.safetensors",
605
+ "model.layers.3.mlp.experts.31.up_proj.weight": "model-00078-of-00282.safetensors",
606
+ "model.layers.3.mlp.experts.32.down_proj.weight": "model-00078-of-00282.safetensors",
607
+ "model.layers.3.mlp.experts.32.gate_proj.weight": "model-00078-of-00282.safetensors",
608
+ "model.layers.3.mlp.experts.32.up_proj.weight": "model-00078-of-00282.safetensors",
609
+ "model.layers.3.mlp.experts.33.down_proj.weight": "model-00078-of-00282.safetensors",
610
+ "model.layers.3.mlp.experts.33.gate_proj.weight": "model-00078-of-00282.safetensors",
611
+ "model.layers.3.mlp.experts.33.up_proj.weight": "model-00078-of-00282.safetensors",
612
+ "model.layers.3.mlp.experts.34.down_proj.weight": "model-00078-of-00282.safetensors",
613
+ "model.layers.3.mlp.experts.34.gate_proj.weight": "model-00078-of-00282.safetensors",
614
+ "model.layers.3.mlp.experts.34.up_proj.weight": "model-00078-of-00282.safetensors",
615
+ "model.layers.3.mlp.experts.35.down_proj.weight": "model-00078-of-00282.safetensors",
616
+ "model.layers.3.mlp.experts.35.gate_proj.weight": "model-00078-of-00282.safetensors",
617
+ "model.layers.3.mlp.experts.35.up_proj.weight": "model-00078-of-00282.safetensors",
618
+ "model.layers.3.mlp.experts.36.down_proj.weight": "model-00078-of-00282.safetensors",
619
+ "model.layers.3.mlp.experts.36.gate_proj.weight": "model-00078-of-00282.safetensors",
620
+ "model.layers.3.mlp.experts.36.up_proj.weight": "model-00078-of-00282.safetensors",
621
+ "model.layers.3.mlp.experts.37.down_proj.weight": "model-00078-of-00282.safetensors",
622
+ "model.layers.3.mlp.experts.37.gate_proj.weight": "model-00078-of-00282.safetensors",
623
+ "model.layers.3.mlp.experts.37.up_proj.weight": "model-00078-of-00282.safetensors",
624
+ "model.layers.3.mlp.experts.38.down_proj.weight": "model-00078-of-00282.safetensors",
625
+ "model.layers.3.mlp.experts.38.gate_proj.weight": "model-00078-of-00282.safetensors",
626
+ "model.layers.3.mlp.experts.38.up_proj.weight": "model-00078-of-00282.safetensors",
627
+ "model.layers.3.mlp.experts.39.down_proj.weight": "model-00078-of-00282.safetensors",
628
+ "model.layers.3.mlp.experts.39.gate_proj.weight": "model-00078-of-00282.safetensors",
629
+ "model.layers.3.mlp.experts.39.up_proj.weight": "model-00078-of-00282.safetensors",
630
+ "model.layers.3.mlp.experts.4.down_proj.weight": "model-00078-of-00282.safetensors",
631
+ "model.layers.3.mlp.experts.4.gate_proj.weight": "model-00078-of-00282.safetensors",
632
+ "model.layers.3.mlp.experts.4.up_proj.weight": "model-00078-of-00282.safetensors",
633
+ "model.layers.3.mlp.experts.40.down_proj.weight": "model-00078-of-00282.safetensors",
634
+ "model.layers.3.mlp.experts.40.gate_proj.weight": "model-00078-of-00282.safetensors",
635
+ "model.layers.3.mlp.experts.40.up_proj.weight": "model-00078-of-00282.safetensors",
636
+ "model.layers.3.mlp.experts.41.down_proj.weight": "model-00078-of-00282.safetensors",
637
+ "model.layers.3.mlp.experts.41.gate_proj.weight": "model-00078-of-00282.safetensors",
638
+ "model.layers.3.mlp.experts.41.up_proj.weight": "model-00078-of-00282.safetensors",
639
+ "model.layers.3.mlp.experts.42.down_proj.weight": "model-00078-of-00282.safetensors",
640
+ "model.layers.3.mlp.experts.42.gate_proj.weight": "model-00078-of-00282.safetensors",
641
+ "model.layers.3.mlp.experts.42.up_proj.weight": "model-00078-of-00282.safetensors",
642
+ "model.layers.3.mlp.experts.43.down_proj.weight": "model-00078-of-00282.safetensors",
643
+ "model.layers.3.mlp.experts.43.gate_proj.weight": "model-00078-of-00282.safetensors",
644
+ "model.layers.3.mlp.experts.43.up_proj.weight": "model-00078-of-00282.safetensors",
645
+ "model.layers.3.mlp.experts.44.down_proj.weight": "model-00078-of-00282.safetensors",
646
+ "model.layers.3.mlp.experts.44.gate_proj.weight": "model-00078-of-00282.safetensors",
647
+ "model.layers.3.mlp.experts.44.up_proj.weight": "model-00078-of-00282.safetensors",
648
+ "model.layers.3.mlp.experts.45.down_proj.weight": "model-00078-of-00282.safetensors",
649
+ "model.layers.3.mlp.experts.45.gate_proj.weight": "model-00078-of-00282.safetensors",
650
+ "model.layers.3.mlp.experts.45.up_proj.weight": "model-00078-of-00282.safetensors",
651
+ "model.layers.3.mlp.experts.46.down_proj.weight": "model-00078-of-00282.safetensors",
652
+ "model.layers.3.mlp.experts.46.gate_proj.weight": "model-00078-of-00282.safetensors",
653
+ "model.layers.3.mlp.experts.46.up_proj.weight": "model-00078-of-00282.safetensors",
654
+ "model.layers.3.mlp.experts.47.down_proj.weight": "model-00078-of-00282.safetensors",
655
+ "model.layers.3.mlp.experts.47.gate_proj.weight": "model-00078-of-00282.safetensors",
656
+ "model.layers.3.mlp.experts.47.up_proj.weight": "model-00078-of-00282.safetensors",
657
+ "model.layers.3.mlp.experts.48.down_proj.weight": "model-00078-of-00282.safetensors",
658
+ "model.layers.3.mlp.experts.48.gate_proj.weight": "model-00078-of-00282.safetensors",
659
+ "model.layers.3.mlp.experts.48.up_proj.weight": "model-00078-of-00282.safetensors",
660
+ "model.layers.3.mlp.experts.49.down_proj.weight": "model-00078-of-00282.safetensors",
661
+ "model.layers.3.mlp.experts.49.gate_proj.weight": "model-00078-of-00282.safetensors",
662
+ "model.layers.3.mlp.experts.49.up_proj.weight": "model-00078-of-00282.safetensors",
663
+ "model.layers.3.mlp.experts.5.down_proj.weight": "model-00078-of-00282.safetensors",
664
+ "model.layers.3.mlp.experts.5.gate_proj.weight": "model-00078-of-00282.safetensors",
665
+ "model.layers.3.mlp.experts.5.up_proj.weight": "model-00078-of-00282.safetensors",
666
+ "model.layers.3.mlp.experts.50.down_proj.weight": "model-00078-of-00282.safetensors",
667
+ "model.layers.3.mlp.experts.50.gate_proj.weight": "model-00078-of-00282.safetensors",
668
+ "model.layers.3.mlp.experts.50.up_proj.weight": "model-00078-of-00282.safetensors",
669
+ "model.layers.3.mlp.experts.51.down_proj.weight": "model-00078-of-00282.safetensors",
670
+ "model.layers.3.mlp.experts.51.gate_proj.weight": "model-00078-of-00282.safetensors",
671
+ "model.layers.3.mlp.experts.51.up_proj.weight": "model-00078-of-00282.safetensors",
672
+ "model.layers.3.mlp.experts.52.down_proj.weight": "model-00078-of-00282.safetensors",
673
+ "model.layers.3.mlp.experts.52.gate_proj.weight": "model-00078-of-00282.safetensors",
674
+ "model.layers.3.mlp.experts.52.up_proj.weight": "model-00078-of-00282.safetensors",
675
+ "model.layers.3.mlp.experts.53.down_proj.weight": "model-00078-of-00282.safetensors",
676
+ "model.layers.3.mlp.experts.53.gate_proj.weight": "model-00078-of-00282.safetensors",
677
+ "model.layers.3.mlp.experts.53.up_proj.weight": "model-00078-of-00282.safetensors",
678
+ "model.layers.3.mlp.experts.54.down_proj.weight": "model-00078-of-00282.safetensors",
679
+ "model.layers.3.mlp.experts.54.gate_proj.weight": "model-00078-of-00282.safetensors",
680
+ "model.layers.3.mlp.experts.54.up_proj.weight": "model-00078-of-00282.safetensors",
681
+ "model.layers.3.mlp.experts.55.down_proj.weight": "model-00078-of-00282.safetensors",
682
+ "model.layers.3.mlp.experts.55.gate_proj.weight": "model-00078-of-00282.safetensors",
683
+ "model.layers.3.mlp.experts.55.up_proj.weight": "model-00078-of-00282.safetensors",
684
+ "model.layers.3.mlp.experts.56.down_proj.weight": "model-00078-of-00282.safetensors",
685
+ "model.layers.3.mlp.experts.56.gate_proj.weight": "model-00078-of-00282.safetensors",
686
+ "model.layers.3.mlp.experts.56.up_proj.weight": "model-00078-of-00282.safetensors",
687
+ "model.layers.3.mlp.experts.57.down_proj.weight": "model-00078-of-00282.safetensors",
688
+ "model.layers.3.mlp.experts.57.gate_proj.weight": "model-00078-of-00282.safetensors",
689
+ "model.layers.3.mlp.experts.57.up_proj.weight": "model-00078-of-00282.safetensors",
690
+ "model.layers.3.mlp.experts.58.down_proj.weight": "model-00078-of-00282.safetensors",
691
+ "model.layers.3.mlp.experts.58.gate_proj.weight": "model-00078-of-00282.safetensors",
692
+ "model.layers.3.mlp.experts.58.up_proj.weight": "model-00078-of-00282.safetensors",
693
+ "model.layers.3.mlp.experts.59.down_proj.weight": "model-00078-of-00282.safetensors",
694
+ "model.layers.3.mlp.experts.59.gate_proj.weight": "model-00078-of-00282.safetensors",
695
+ "model.layers.3.mlp.experts.59.up_proj.weight": "model-00078-of-00282.safetensors",
696
+ "model.layers.3.mlp.experts.6.down_proj.weight": "model-00078-of-00282.safetensors",
697
+ "model.layers.3.mlp.experts.6.gate_proj.weight": "model-00078-of-00282.safetensors",
698
+ "model.layers.3.mlp.experts.6.up_proj.weight": "model-00078-of-00282.safetensors",
699
+ "model.layers.3.mlp.experts.60.down_proj.weight": "model-00078-of-00282.safetensors",
700
+ "model.layers.3.mlp.experts.60.gate_proj.weight": "model-00078-of-00282.safetensors",
701
+ "model.layers.3.mlp.experts.60.up_proj.weight": "model-00078-of-00282.safetensors",
702
+ "model.layers.3.mlp.experts.61.down_proj.weight": "model-00078-of-00282.safetensors",
703
+ "model.layers.3.mlp.experts.61.gate_proj.weight": "model-00078-of-00282.safetensors",
704
+ "model.layers.3.mlp.experts.61.up_proj.weight": "model-00078-of-00282.safetensors",
705
+ "model.layers.3.mlp.experts.62.down_proj.weight": "model-00078-of-00282.safetensors",
706
+ "model.layers.3.mlp.experts.62.gate_proj.weight": "model-00078-of-00282.safetensors",
707
+ "model.layers.3.mlp.experts.62.up_proj.weight": "model-00078-of-00282.safetensors",
708
+ "model.layers.3.mlp.experts.63.down_proj.weight": "model-00078-of-00282.safetensors",
709
+ "model.layers.3.mlp.experts.63.gate_proj.weight": "model-00078-of-00282.safetensors",
710
+ "model.layers.3.mlp.experts.63.up_proj.weight": "model-00078-of-00282.safetensors",
711
+ "model.layers.3.mlp.experts.64.down_proj.weight": "model-00078-of-00282.safetensors",
712
+ "model.layers.3.mlp.experts.64.gate_proj.weight": "model-00078-of-00282.safetensors",
713
+ "model.layers.3.mlp.experts.64.up_proj.weight": "model-00078-of-00282.safetensors",
714
+ "model.layers.3.mlp.experts.65.down_proj.weight": "model-00078-of-00282.safetensors",
715
+ "model.layers.3.mlp.experts.65.gate_proj.weight": "model-00078-of-00282.safetensors",
716
+ "model.layers.3.mlp.experts.65.up_proj.weight": "model-00078-of-00282.safetensors",
717
+ "model.layers.3.mlp.experts.66.down_proj.weight": "model-00078-of-00282.safetensors",
718
+ "model.layers.3.mlp.experts.66.gate_proj.weight": "model-00078-of-00282.safetensors",
719
+ "model.layers.3.mlp.experts.66.up_proj.weight": "model-00078-of-00282.safetensors",
720
+ "model.layers.3.mlp.experts.67.down_proj.weight": "model-00078-of-00282.safetensors",
721
+ "model.layers.3.mlp.experts.67.gate_proj.weight": "model-00078-of-00282.safetensors",
722
+ "model.layers.3.mlp.experts.67.up_proj.weight": "model-00078-of-00282.safetensors",
723
+ "model.layers.3.mlp.experts.68.down_proj.weight": "model-00078-of-00282.safetensors",
724
+ "model.layers.3.mlp.experts.68.gate_proj.weight": "model-00078-of-00282.safetensors",
725
+ "model.layers.3.mlp.experts.68.up_proj.weight": "model-00078-of-00282.safetensors",
726
+ "model.layers.3.mlp.gate.e_score_correction_bias": "model-00079-of-00282.safetensors",
727
+ "model.layers.3.mlp.experts.69.down_proj.weight": "model-00079-of-00282.safetensors",
728
+ "model.layers.3.mlp.experts.69.gate_proj.weight": "model-00079-of-00282.safetensors",
729
+ "model.layers.3.mlp.experts.69.up_proj.weight": "model-00079-of-00282.safetensors",
730
+ "model.layers.3.mlp.experts.7.down_proj.weight": "model-00079-of-00282.safetensors",
731
+ "model.layers.3.mlp.experts.7.gate_proj.weight": "model-00079-of-00282.safetensors",
732
+ "model.layers.3.mlp.experts.7.up_proj.weight": "model-00079-of-00282.safetensors",
733
+ "model.layers.3.mlp.experts.70.down_proj.weight": "model-00079-of-00282.safetensors",
734
+ "model.layers.3.mlp.experts.70.gate_proj.weight": "model-00079-of-00282.safetensors",
735
+ "model.layers.3.mlp.experts.70.up_proj.weight": "model-00079-of-00282.safetensors",
736
+ "model.layers.3.mlp.experts.71.down_proj.weight": "model-00079-of-00282.safetensors",
737
+ "model.layers.3.mlp.experts.71.gate_proj.weight": "model-00079-of-00282.safetensors",
738
+ "model.layers.3.mlp.experts.71.up_proj.weight": "model-00079-of-00282.safetensors",
739
+ "model.layers.3.mlp.experts.72.down_proj.weight": "model-00079-of-00282.safetensors",
740
+ "model.layers.3.mlp.experts.72.gate_proj.weight": "model-00079-of-00282.safetensors",
741
+ "model.layers.3.mlp.experts.72.up_proj.weight": "model-00079-of-00282.safetensors",
742
+ "model.layers.3.mlp.experts.73.down_proj.weight": "model-00079-of-00282.safetensors",
743
+ "model.layers.3.mlp.experts.73.gate_proj.weight": "model-00079-of-00282.safetensors",
744
+ "model.layers.3.mlp.experts.73.up_proj.weight": "model-00079-of-00282.safetensors",
745
+ "model.layers.3.mlp.experts.74.down_proj.weight": "model-00079-of-00282.safetensors",
746
+ "model.layers.3.mlp.experts.74.gate_proj.weight": "model-00079-of-00282.safetensors",
747
+ "model.layers.3.mlp.experts.74.up_proj.weight": "model-00079-of-00282.safetensors",
748
+ "model.layers.3.mlp.experts.75.down_proj.weight": "model-00079-of-00282.safetensors",
749
+ "model.layers.3.mlp.experts.75.gate_proj.weight": "model-00079-of-00282.safetensors",
750
+ "model.layers.3.mlp.experts.75.up_proj.weight": "model-00079-of-00282.safetensors",
751
+ "model.layers.3.mlp.experts.76.down_proj.weight": "model-00079-of-00282.safetensors",
752
+ "model.layers.3.mlp.experts.76.gate_proj.weight": "model-00079-of-00282.safetensors",
753
+ "model.layers.3.mlp.experts.76.up_proj.weight": "model-00079-of-00282.safetensors",
754
+ "model.layers.3.mlp.experts.77.down_proj.weight": "model-00079-of-00282.safetensors",
755
+ "model.layers.3.mlp.experts.77.gate_proj.weight": "model-00079-of-00282.safetensors",
756
+ "model.layers.3.mlp.experts.77.up_proj.weight": "model-00079-of-00282.safetensors",
757
+ "model.layers.3.mlp.experts.78.down_proj.weight": "model-00079-of-00282.safetensors",
758
+ "model.layers.3.mlp.experts.78.gate_proj.weight": "model-00079-of-00282.safetensors",
759
+ "model.layers.3.mlp.experts.78.up_proj.weight": "model-00079-of-00282.safetensors",
760
+ "model.layers.3.mlp.experts.79.down_proj.weight": "model-00079-of-00282.safetensors",
761
+ "model.layers.3.mlp.experts.79.gate_proj.weight": "model-00079-of-00282.safetensors",
762
+ "model.layers.3.mlp.experts.79.up_proj.weight": "model-00079-of-00282.safetensors",
763
+ "model.layers.3.mlp.experts.8.down_proj.weight": "model-00079-of-00282.safetensors",
764
+ "model.layers.3.mlp.experts.8.gate_proj.weight": "model-00079-of-00282.safetensors",
765
+ "model.layers.3.mlp.experts.8.up_proj.weight": "model-00079-of-00282.safetensors",
766
+ "model.layers.3.mlp.experts.80.down_proj.weight": "model-00079-of-00282.safetensors",
767
+ "model.layers.3.mlp.experts.80.gate_proj.weight": "model-00079-of-00282.safetensors",
768
+ "model.layers.3.mlp.experts.80.up_proj.weight": "model-00079-of-00282.safetensors",
769
+ "model.layers.3.mlp.experts.81.down_proj.weight": "model-00079-of-00282.safetensors",
770
+ "model.layers.3.mlp.experts.81.gate_proj.weight": "model-00079-of-00282.safetensors",
771
+ "model.layers.3.mlp.experts.81.up_proj.weight": "model-00079-of-00282.safetensors",
772
+ "model.layers.3.mlp.experts.82.down_proj.weight": "model-00079-of-00282.safetensors",
773
+ "model.layers.3.mlp.experts.82.gate_proj.weight": "model-00079-of-00282.safetensors",
774
+ "model.layers.3.mlp.experts.82.up_proj.weight": "model-00079-of-00282.safetensors",
775
+ "model.layers.3.mlp.experts.83.down_proj.weight": "model-00079-of-00282.safetensors",
776
+ "model.layers.3.mlp.experts.83.gate_proj.weight": "model-00079-of-00282.safetensors",
777
+ "model.layers.3.mlp.experts.83.up_proj.weight": "model-00079-of-00282.safetensors",
778
+ "model.layers.3.mlp.experts.84.down_proj.weight": "model-00079-of-00282.safetensors",
779
+ "model.layers.3.mlp.experts.84.gate_proj.weight": "model-00079-of-00282.safetensors",
780
+ "model.layers.3.mlp.experts.84.up_proj.weight": "model-00079-of-00282.safetensors",
781
+ "model.layers.3.mlp.experts.85.down_proj.weight": "model-00079-of-00282.safetensors",
782
+ "model.layers.3.mlp.experts.85.gate_proj.weight": "model-00079-of-00282.safetensors",
783
+ "model.layers.3.mlp.experts.85.up_proj.weight": "model-00079-of-00282.safetensors",
784
+ "model.layers.3.mlp.experts.86.down_proj.weight": "model-00079-of-00282.safetensors",
785
+ "model.layers.3.mlp.experts.86.gate_proj.weight": "model-00079-of-00282.safetensors",
786
+ "model.layers.3.mlp.experts.86.up_proj.weight": "model-00079-of-00282.safetensors",
787
+ "model.layers.3.mlp.experts.87.down_proj.weight": "model-00079-of-00282.safetensors",
788
+ "model.layers.3.mlp.experts.87.gate_proj.weight": "model-00079-of-00282.safetensors",
789
+ "model.layers.3.mlp.experts.87.up_proj.weight": "model-00079-of-00282.safetensors",
790
+ "model.layers.3.mlp.experts.88.down_proj.weight": "model-00079-of-00282.safetensors",
791
+ "model.layers.3.mlp.experts.88.gate_proj.weight": "model-00079-of-00282.safetensors",
792
+ "model.layers.3.mlp.experts.88.up_proj.weight": "model-00079-of-00282.safetensors",
793
+ "model.layers.3.mlp.experts.89.down_proj.weight": "model-00079-of-00282.safetensors",
794
+ "model.layers.3.mlp.experts.89.gate_proj.weight": "model-00079-of-00282.safetensors",
795
+ "model.layers.3.mlp.experts.89.up_proj.weight": "model-00079-of-00282.safetensors",
796
+ "model.layers.3.mlp.experts.9.down_proj.weight": "model-00079-of-00282.safetensors",
797
+ "model.layers.3.mlp.experts.9.gate_proj.weight": "model-00079-of-00282.safetensors",
798
+ "model.layers.3.mlp.experts.9.up_proj.weight": "model-00079-of-00282.safetensors",
799
+ "model.layers.3.mlp.experts.90.down_proj.weight": "model-00079-of-00282.safetensors",
800
+ "model.layers.3.mlp.experts.90.gate_proj.weight": "model-00079-of-00282.safetensors",
801
+ "model.layers.3.mlp.experts.90.up_proj.weight": "model-00079-of-00282.safetensors",
802
+ "model.layers.3.mlp.experts.91.down_proj.weight": "model-00079-of-00282.safetensors",
803
+ "model.layers.3.mlp.experts.91.gate_proj.weight": "model-00079-of-00282.safetensors",
804
+ "model.layers.3.mlp.experts.91.up_proj.weight": "model-00079-of-00282.safetensors",
805
+ "model.layers.3.mlp.experts.92.down_proj.weight": "model-00079-of-00282.safetensors",
806
+ "model.layers.3.mlp.experts.92.gate_proj.weight": "model-00079-of-00282.safetensors",
807
+ "model.layers.3.mlp.experts.92.up_proj.weight": "model-00079-of-00282.safetensors",
808
+ "model.layers.3.mlp.experts.93.down_proj.weight": "model-00079-of-00282.safetensors",
809
+ "model.layers.3.mlp.experts.93.gate_proj.weight": "model-00079-of-00282.safetensors",
810
+ "model.layers.3.mlp.experts.93.up_proj.weight": "model-00079-of-00282.safetensors",
811
+ "model.layers.3.mlp.experts.94.down_proj.weight": "model-00079-of-00282.safetensors",
812
+ "model.layers.3.mlp.experts.94.gate_proj.weight": "model-00079-of-00282.safetensors",
813
+ "model.layers.3.mlp.experts.94.up_proj.weight": "model-00079-of-00282.safetensors",
814
+ "model.layers.3.mlp.experts.95.down_proj.weight": "model-00079-of-00282.safetensors",
815
+ "model.layers.3.mlp.experts.95.gate_proj.weight": "model-00079-of-00282.safetensors",
816
+ "model.layers.3.mlp.experts.95.up_proj.weight": "model-00079-of-00282.safetensors",
817
+ "model.layers.3.mlp.experts.96.down_proj.weight": "model-00079-of-00282.safetensors",
818
+ "model.layers.3.mlp.experts.96.gate_proj.weight": "model-00079-of-00282.safetensors",
819
+ "model.layers.3.mlp.experts.96.up_proj.weight": "model-00079-of-00282.safetensors",
820
+ "model.layers.3.mlp.experts.97.down_proj.weight": "model-00079-of-00282.safetensors",
821
+ "model.layers.3.mlp.experts.97.gate_proj.weight": "model-00079-of-00282.safetensors",
822
+ "model.layers.3.mlp.experts.97.up_proj.weight": "model-00079-of-00282.safetensors",
823
+ "model.layers.3.mlp.experts.98.down_proj.weight": "model-00079-of-00282.safetensors",
824
+ "model.layers.3.mlp.experts.98.gate_proj.weight": "model-00079-of-00282.safetensors",
825
+ "model.layers.3.mlp.experts.98.up_proj.weight": "model-00079-of-00282.safetensors",
826
+ "model.layers.3.mlp.experts.99.down_proj.weight": "model-00079-of-00282.safetensors",
827
+ "model.layers.3.mlp.experts.99.gate_proj.weight": "model-00079-of-00282.safetensors",
828
+ "model.layers.3.mlp.experts.99.up_proj.weight": "model-00079-of-00282.safetensors",
829
+ "model.layers.3.mlp.gate.weight": "model-00079-of-00282.safetensors",
830
+ "model.layers.3.mlp.shared_experts.down_proj.weight": "model-00079-of-00282.safetensors",
831
+ "model.layers.3.mlp.shared_experts.gate_proj.weight": "model-00079-of-00282.safetensors",
832
+ "model.layers.3.mlp.shared_experts.up_proj.weight": "model-00079-of-00282.safetensors",
833
+ "model.layers.3.post_attention_layernorm.weight": "model-00079-of-00282.safetensors",
834
+ "model.layers.3.self_attn.indexer.k_norm.bias": "model-00079-of-00282.safetensors",
835
+ "model.layers.3.self_attn.indexer.k_norm.weight": "model-00079-of-00282.safetensors",
836
+ "model.layers.3.self_attn.indexer.weights_proj.weight": "model-00079-of-00282.safetensors",
837
+ "model.layers.3.self_attn.indexer.wk.weight": "model-00079-of-00282.safetensors",
838
+ "model.layers.3.self_attn.indexer.wq_b.weight": "model-00079-of-00282.safetensors",
839
+ "model.layers.3.self_attn.kv_a_layernorm.weight": "model-00079-of-00282.safetensors",
840
+ "model.layers.3.self_attn.kv_a_proj_with_mqa.weight": "model-00079-of-00282.safetensors",
841
+ "model.layers.3.self_attn.kv_b_proj.weight": "model-00079-of-00282.safetensors",
842
+ "model.layers.3.self_attn.o_proj.weight": "model-00079-of-00282.safetensors",
843
+ "model.layers.3.self_attn.q_a_layernorm.weight": "model-00079-of-00282.safetensors",
844
+ "model.layers.3.self_attn.q_a_proj.weight": "model-00079-of-00282.safetensors",
845
+ "model.layers.3.self_attn.q_b_proj.weight": "model-00079-of-00282.safetensors",
846
+ "model.norm.weight": "model-00282-of-00282.safetensors"
847
+ }
848
+ }
modeling_deepseek_v32.py ADDED
@@ -0,0 +1,339 @@
+ import math
+ import warnings
+ from collections.abc import Callable
+ from typing import Optional
+
+ import torch
+ import torch.nn.functional as F
+ from torch import nn
+
+ from transformers import initialization as init
+ from transformers.cache_utils import Cache
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
+ from transformers.modeling_layers import GenericForSequenceClassification, GenericForTokenClassification
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
+ from transformers.processing_utils import Unpack
+ from transformers.utils import logging
+ from transformers.models.deepseek_v3.modeling_deepseek_v3 import (
+     DeepseekV3Attention,
+     DeepseekV3DecoderLayer,
+     DeepseekV3ForCausalLM,
+     DeepseekV3MLP,
+     DeepseekV3Model,
+     DeepseekV3MoE,
+     DeepseekV3PreTrainedModel,
+     DeepseekV3RMSNorm,
+     DeepseekV3RotaryEmbedding,
+     apply_rotary_pos_emb_interleave,
+     yarn_get_mscale,
+ )
+ from transformers.models.llama.modeling_llama import (
+     apply_rotary_pos_emb,
+     eager_attention_forward,
+ )
+ from configuration_deepseek_v32 import DeepseekV32Config
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class DeepseekV32RMSNorm(DeepseekV3RMSNorm):
+     pass
+
+
+ class DeepseekV32RotaryEmbedding(DeepseekV3RotaryEmbedding):
+     pass
+
+
+ class DeepseekV32MLP(DeepseekV3MLP):
+     pass
+
+
+ class DeepseekV32MoE(DeepseekV3MoE):
+     pass
+
+
+ class DeepseekV32Indexer(nn.Module):
+     """Lightning indexer used by DeepSeek V3.2 sparse attention.
+
+     The parameters are nested in a submodule so their names match the
+     checkpoint layout in model.safetensors.index.json
+     (`self_attn.indexer.{wq_b,wk,k_norm,weights_proj}`). The checkpoint
+     stores both `k_norm.weight` and `k_norm.bias`, so `nn.LayerNorm`
+     (which has a bias) is used here rather than an RMSNorm.
+     """
+
+     def __init__(self, config: DeepseekV32Config):
+         super().__init__()
+         self.wq_b = nn.Linear(config.q_lora_rank, config.num_attention_heads * config.qk_head_dim, bias=False)
+         self.wk = nn.Linear(config.hidden_size, config.qk_head_dim, bias=config.attention_bias)
+         self.k_norm = nn.LayerNorm(config.qk_head_dim)
+         self.weights_proj = nn.Linear(config.hidden_size, config.num_attention_heads, bias=False)
+
+
+ class DeepseekV32SparseAttention(nn.Module):
+     """
+     DeepSeek V3.2 sparse attention mechanism with indexer.
+
+     This implements the native sparse attention from DeepSeek V3.2, which uses
+     an indexer to select the top-k tokens for attention computation, making it
+     more efficient for long sequences.
+     """
+
+     def __init__(self, config: DeepseekV32Config, layer_idx: int):
+         super().__init__()
+         self.config = config
+         self.layer_idx = layer_idx
+         self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
+         self.attention_dropout = config.attention_dropout
+         self.num_heads = config.num_attention_heads
+
+         self.q_lora_rank = config.q_lora_rank
+         self.qk_rope_head_dim = config.qk_rope_head_dim
+         self.kv_lora_rank = config.kv_lora_rank
+         self.v_head_dim = config.v_head_dim
+         self.qk_nope_head_dim = config.qk_nope_head_dim
+         self.qk_head_dim = config.qk_head_dim
+         self.index_topk = config.index_topk
+
+         self.is_causal = True
+
+         # Query projection
+         if self.q_lora_rank is None:
+             self.q_proj = nn.Linear(config.hidden_size, self.num_heads * self.qk_head_dim, bias=False)
+         else:
+             self.q_a_proj = nn.Linear(config.hidden_size, config.q_lora_rank, bias=config.attention_bias)
+             self.q_a_layernorm = DeepseekV32RMSNorm(config.q_lora_rank)
+             self.q_b_proj = nn.Linear(config.q_lora_rank, self.num_heads * self.qk_head_dim, bias=False)
+
+         # Key-value projections
+         self.kv_a_proj_with_mqa = nn.Linear(
+             config.hidden_size,
+             self.kv_lora_rank + self.qk_rope_head_dim,
+             bias=config.attention_bias,
+         )
+         self.kv_a_layernorm = DeepseekV32RMSNorm(self.kv_lora_rank)
+         self.kv_b_proj = nn.Linear(
+             self.kv_lora_rank,
+             self.num_heads * (self.qk_nope_head_dim + self.v_head_dim),
+             bias=False,
+         )
+
+         # Output projection
+         self.o_proj = nn.Linear(
+             self.num_heads * self.v_head_dim,
+             config.hidden_size,
+             bias=config.attention_bias,
+         )
+
+         # Indexer for sparse top-k token selection, nested so that its
+         # parameters load from the `self_attn.indexer.*` checkpoint keys.
+         self.indexer = DeepseekV32Indexer(config)
+
+         self.scaling = self.qk_head_dim ** (-0.5)
+         if self.config.rope_parameters.get("rope_type", "default") != "default":
+             mscale_all_dim = self.config.rope_parameters.get("mscale_all_dim", 0)
+             scaling_factor = self.config.rope_parameters["factor"]
+             if mscale_all_dim:
+                 mscale = yarn_get_mscale(scaling_factor, mscale_all_dim)
+                 self.scaling = self.scaling * mscale * mscale
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         position_embeddings: tuple[torch.Tensor, torch.Tensor],
+         attention_mask: Optional[torch.Tensor],
+         past_key_values: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
+         batch_size, seq_length = hidden_states.shape[:-1]
+
+         # A fused sparse kernel (top-k indexing plus sparse attention) would go
+         # here; it requires custom CUDA kernels for efficient top-k selection.
+         # This reference implementation always falls back to dense attention,
+         # so warn only when sparsity would actually matter: inference over
+         # sequences longer than `index_topk`.
+         if not self.training and seq_length > self.index_topk:
+             warnings.warn(
+                 "DeepSeek V3.2 sparse attention is not fully implemented in this version. "
+                 "Falling back to standard attention. For production use, please use vLLM or "
+                 "other optimized inference engines.",
+                 UserWarning,
+             )
+         return self._standard_attention(
+             hidden_states, position_embeddings, attention_mask, past_key_values, cache_position, **kwargs
+         )
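+
+     def _naive_index_topk(self, hidden_states: torch.Tensor, q_lowrank: torch.Tensor) -> torch.Tensor:
+         """Illustrative top-k selection sketch for the indexer (an assumption, not the shipped kernel).
+
+         Documents what the indexer weights are for: each query token i scores
+         every key token j as sum_h w_h(i) * <q_h(i), k(j)> and keeps the
+         `index_topk` highest-scoring causal positions. A production
+         implementation fuses this with sparse attention in a custom CUDA
+         kernel; this O(n^2) reference exists only to show the mechanism and is
+         not called by `forward`.
+         """
+         batch_size, seq_length = hidden_states.shape[:-1]
+         # q_lowrank is the low-rank query residual, i.e. q_a_layernorm(q_a_proj(hidden_states)).
+         q = self.indexer.wq_b(q_lowrank).view(batch_size, seq_length, self.num_heads, self.qk_head_dim)
+         k = self.indexer.k_norm(self.indexer.wk(hidden_states))  # (batch, seq, head_dim)
+         w = self.indexer.weights_proj(hidden_states)  # (batch, seq, num_heads)
+         scores = torch.einsum("bihd,bjd->bihj", q, k)  # per-head query/key scores
+         scores = (w.unsqueeze(-1) * scores).sum(dim=2)  # head-weighted sum -> (batch, i, j)
+         causal = torch.tril(torch.ones(seq_length, seq_length, dtype=torch.bool, device=scores.device))
+         scores = scores.masked_fill(~causal, torch.finfo(scores.dtype).min)
+         return scores.topk(min(self.index_topk, seq_length), dim=-1).indices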
+
+     def _standard_attention(
+         self,
+         hidden_states: torch.Tensor,
+         position_embeddings: tuple[torch.Tensor, torch.Tensor],
+         attention_mask: Optional[torch.Tensor],
+         past_key_values: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
+         """Standard (dense) attention fallback, identical to DeepSeek V3."""
+         batch_size, seq_length = hidden_states.shape[:-1]
+         query_shape = (batch_size, seq_length, -1, self.qk_head_dim)
+         key_shape = (batch_size, seq_length, -1, self.qk_nope_head_dim + self.v_head_dim)
+
+         if self.q_lora_rank is None:
+             q_states = self.q_proj(hidden_states)
+         else:
+             q_states = self.q_b_proj(self.q_a_layernorm(self.q_a_proj(hidden_states)))
+         q_states = q_states.view(query_shape).transpose(1, 2)
+         q_pass, q_rot = torch.split(q_states, [self.qk_nope_head_dim, self.qk_rope_head_dim], dim=-1)
+
+         compressed_kv = self.kv_a_proj_with_mqa(hidden_states)
+         k_pass, k_rot = torch.split(compressed_kv, [self.kv_lora_rank, self.qk_rope_head_dim], dim=-1)
+
+         k_pass = self.kv_b_proj(self.kv_a_layernorm(k_pass)).view(key_shape).transpose(1, 2)
+         k_pass, value_states = torch.split(k_pass, [self.qk_nope_head_dim, self.v_head_dim], dim=-1)
+
+         k_rot = k_rot.view(batch_size, 1, seq_length, self.qk_rope_head_dim)
+
+         cos, sin = position_embeddings
+         if self.config.rope_interleave:
+             q_rot, k_rot = apply_rotary_pos_emb_interleave(q_rot, k_rot, cos, sin)
+         else:
+             q_rot, k_rot = apply_rotary_pos_emb(q_rot, k_rot, cos, sin)
+         k_rot = k_rot.expand(*k_pass.shape[:-1], -1)
+
+         query_states = torch.cat((q_pass, q_rot), dim=-1)
+         key_states = torch.cat((k_pass, k_rot), dim=-1)
+
+         if past_key_values is not None:
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             key_states, value_states = past_key_values.update(key_states, value_states, self.layer_idx, cache_kwargs)
+
+         if self.config._attn_implementation == "flash_attention_2" and self.qk_head_dim != self.v_head_dim:
+             value_states = F.pad(value_states, [0, self.qk_head_dim - self.v_head_dim])
+
+         attention_interface: Callable = eager_attention_forward
+         if self.config._attn_implementation != "eager":
+             attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
+
+         attn_output, attn_weights = attention_interface(
+             self,
+             query_states,
+             key_states,
+             value_states,
+             attention_mask,
+             dropout=0.0 if not self.training else self.attention_dropout,
+             scaling=self.scaling,
+             **kwargs,
+         )
+
+         if self.config._attn_implementation == "flash_attention_2" and self.qk_head_dim != self.v_head_dim:
+             attn_output = attn_output[:, :, :, : self.v_head_dim]
+
+         attn_output = attn_output.reshape(batch_size, seq_length, -1).contiguous()
+         attn_output = self.o_proj(attn_output)
+         return attn_output, attn_weights
+
+
+ class DeepseekV32DecoderLayer(nn.Module):
+     def __init__(self, config: DeepseekV32Config, layer_idx: int):
+         super().__init__()
+         self.hidden_size = config.hidden_size
+
+         # Use sparse attention for V3.2
+         self.self_attn = DeepseekV32SparseAttention(config=config, layer_idx=layer_idx)
+
+         if layer_idx >= config.first_k_dense_replace:
+             self.mlp = DeepseekV32MoE(config)
+         else:
+             self.mlp = DeepseekV32MLP(config)
+
+         self.input_layernorm = DeepseekV32RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.post_attention_layernorm = DeepseekV32RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         position_embeddings: Optional[tuple[torch.Tensor, torch.Tensor]] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         past_key_values: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> torch.Tensor:
+         residual = hidden_states
+
+         hidden_states = self.input_layernorm(hidden_states)
+
+         # Self attention
+         hidden_states, self_attn_weights = self.self_attn(
+             hidden_states=hidden_states,
+             position_embeddings=position_embeddings,
+             attention_mask=attention_mask,
+             past_key_values=past_key_values,
+             cache_position=cache_position,
+             **kwargs,
+         )
+         hidden_states = residual + hidden_states
+
+         # Fully connected
+         residual = hidden_states
+         hidden_states = self.post_attention_layernorm(hidden_states)
+         hidden_states = self.mlp(hidden_states)
+         hidden_states = residual + hidden_states
+
+         return hidden_states
+
+
+ class DeepseekV32PreTrainedModel(DeepseekV3PreTrainedModel):
+     config_class = DeepseekV32Config
+     _can_compile_fullgraph = False
+     _keep_in_fp32_modules_strict = ["e_score_correction_bias"]
+
+
+ class DeepseekV32Model(DeepseekV3Model):
+     """
+     DeepSeek V3.2 model with native sparse attention.
+
+     This model extends DeepSeek V3 with an efficient sparse attention mechanism
+     that uses an indexer to select the top-k tokens for attention computation.
+     """
+
+     config_class = DeepseekV32Config
+     _keys_to_ignore_on_load_unexpected = [r"model\.layers\.61.*"]
+
+     def __init__(self, config: DeepseekV32Config):
+         # Skip DeepseekV3Model.__init__ and go directly to the pretrained base
+         # class, so the V3.2-specific decoder layers below replace the V3 ones.
+         DeepseekV3PreTrainedModel.__init__(self, config)
+         self.padding_idx = config.pad_token_id
+         self.vocab_size = config.vocab_size
+
+         self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+         # Use V3.2-specific decoder layers
+         self.layers = nn.ModuleList(
+             [DeepseekV32DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+         )
+         self.norm = DeepseekV32RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.rotary_emb = DeepseekV32RotaryEmbedding(config=config)
+         self.gradient_checkpointing = False
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+
+ class DeepseekV32ForCausalLM(DeepseekV3ForCausalLM):
+     """
+     DeepSeek V3.2 model for causal language modeling with sparse attention.
+     """
+
+     config_class = DeepseekV32Config
+     _tied_weights_keys = ["lm_head.weight"]
+
+     def __init__(self, config):
+         # Bypass DeepseekV3ForCausalLM.__init__ so the backbone is built as a
+         # DeepseekV32Model rather than a DeepseekV3Model.
+         super(DeepseekV3ForCausalLM, self).__init__(config)
+         self.model = DeepseekV32Model(config)
+         self.vocab_size = config.vocab_size
+         self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+
+ class DeepseekV32ForSequenceClassification(GenericForSequenceClassification, DeepseekV32PreTrainedModel):
+     pass
+
+
+ class DeepseekV32ForTokenClassification(GenericForTokenClassification, DeepseekV32PreTrainedModel):
+     pass
+
+
+ __all__ = [
+     "DeepseekV32PreTrainedModel",
+     "DeepseekV32Model",
+     "DeepseekV32ForCausalLM",
+     "DeepseekV32ForSequenceClassification",
+     "DeepseekV32ForTokenClassification",
+ ]
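+
+
+ # Minimal usage sketch (an assumption about the packaging, not part of the
+ # shipped API): this file and configuration_deepseek_v32.py are expected to be
+ # resolved through the repo's `auto_map`, so loading goes through the Auto
+ # classes with trust_remote_code enabled. The checkpoint path is a placeholder.
+ #
+ #     from transformers import AutoModelForCausalLM, AutoTokenizer
+ #
+ #     tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint")
+ #     model = AutoModelForCausalLM.from_pretrained(
+ #         "path/to/checkpoint", trust_remote_code=True, torch_dtype="auto"
+ #     )
+ #     inputs = tokenizer("Hello", return_tensors="pt")
+ #     print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))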
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19e773648cb4e65de8660ea6365e10acca112d42a854923df93db4a6f333a82d
+ size 20217442
tokenizer_config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "backend": "tokenizers",
+   "clean_up_tokenization_spaces": false,
+   "do_lower_case": false,
+   "eos_token": "<|endoftext|>",
+   "extra_special_tokens": [
+     "<|endoftext|>",
+     "[MASK]",
+     "[gMASK]",
+     "[sMASK]",
+     "<sop>",
+     "<eop>",
+     "<|system|>",
+     "<|user|>",
+     "<|assistant|>",
+     "<|observation|>",
+     "<|begin_of_image|>",
+     "<|end_of_image|>",
+     "<|begin_of_video|>",
+     "<|end_of_video|>",
+     "<|begin_of_audio|>",
+     "<|end_of_audio|>",
+     "<|begin_of_transcription|>",
+     "<|end_of_transcription|>"
+   ],
+   "is_local": true,
+   "model_max_length": 202752,
+   "model_specific_special_tokens": {},
+   "pad_token": "<|endoftext|>",
+   "padding_side": "left",
+   "remove_space": false,
+   "tokenizer_class": "TokenizersBackend"
+ }