bartowski committed on
Commit
8299609
1 Parent(s): 0602dfa

Converted using Charles Goddard's script

README.md ADDED
@@ -0,0 +1,145 @@
1
+ ---
2
+ pipeline_tag: text-generation
3
+ license: other
4
+ language:
5
+ - en
6
+ - zh
7
+ tags:
8
+ - math
9
+ ---
10
+
11
+ # InternLM-Math
12
+
13
+ <div align="center">
14
+
15
+ <img src="https://raw.githubusercontent.com/InternLM/InternLM/main/assets/logo.svg" width="200"/>
16
+ <div> </div>
17
+ <div align="center">
18
+ <b><font size="5">InternLM-Math</font></b>
19
+ <sup>
20
+ <a href="https://internlm.intern-ai.org.cn/">
21
+ <i><font size="4">HOT</font></i>
22
+ </a>
23
+ </sup>
24
+ <div> </div>
25
+ </div>
26
+
27
+ State-of-the-art bilingual open-source math reasoning LLMs.
28
+ </div>
29
+
30
+ # Introduction
31
+ - **7B and 20B Chinese and English math LMs that outperform ChatGPT.** The InternLM2-Math models are continually pretrained from InternLM2-Base on ~100B high-quality math-related tokens and supervised fine-tuned with ~2M bilingual math instruction samples. We apply MinHash and exact-number matching to decontaminate possible test-set leakage.
32
+ - **Lean is added as a supported language for both math problem solving and theorem proving.** We are exploring the combination of Lean 3 with InternLM-Math for verifiable math reasoning. InternLM-Math can generate Lean code for simple math reasoning tasks such as GSM8K, or suggest proof tactics based on Lean states.
33
+ - **Can also be used as a reward model, supporting outcome, process, and Lean reward modeling.** We supervise InternLM2-Math with various types of reward-modeling data so that it can also verify chain-of-thought processes. We also add the ability to convert a chain-of-thought process into Lean 3 code.
34
+ - **A math LM augmentation helper** and **code interpreter**. InternLM2-Math can help augment math reasoning problems and solve them with the code interpreter, which lets you generate synthetic data faster!
35
+
36
+ # Models
37
+ | Model | Transformers (HF) | Release Date |
38
+ |---|---|---|
39
+ | **InternLM2-Math-Base-7B** | [🤗internlm/internlm2-math-base-7b](https://huggingface.co/internlm/internlm2-math-base-7b) | 2024-01-23|
40
+ | **InternLM2-Math-Base-20B** | [🤗internlm/internlm2-math-base-20b](https://huggingface.co/internlm/internlm2-math-base-20b) | 2024-01-23|
41
+ | **InternLM2-Math-7B** | [🤗internlm/internlm2-math-7b](https://huggingface.co/internlm/internlm2-math-7b) | 2024-01-23|
42
+ | **InternLM2-Math-20B** | [🤗internlm/internlm2-math-20b](https://huggingface.co/internlm/internlm2-math-20b) | 2024-01-23|
43
+
44
+
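+ Any of the checkpoints in the table can be downloaded ahead of time with `huggingface_hub`; a minimal sketch (pick the repo id you need):
+ ```python
+ from huggingface_hub import snapshot_download
+ 
+ # Downloads all files of the chosen repository into the local HF cache.
+ snapshot_download(repo_id="internlm/internlm2-math-7b")
+ ```
+ 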
45
+ # Performance
46
+
47
+ ## Pretrain Performance
48
+ We evaluate pretraining checkpoints using greedy decoding with few-shot CoT. Details of pretraining will be introduced in the tech report.
49
+ | Model | GSM8K | MATH |
50
+ |------------------------|---------|--------|
51
+ | Llama2-7B | 11.8 | 3.2 |
52
+ | Llemma-7B | 36.4 | 18.0 |
53
+ | InternLM2-Base-7B | 36.5 | 8.6 |
54
+ | **InternLM2-Math-Base-7B** | **49.2** | **21.5** |
55
+ | Minerva-8B | 16.2 | 14.1 |
56
+ | InternLM2-Base-20B | 54.6 | 13.7 |
57
+ | **InternLM2-Math-Base-20B** | **63.7** | **27.3** |
58
+ | Llemma-34B | 51.5 | 25.0 |
59
+ | Minerva-62B | 52.4 | 27.6 |
60
+ | Minerva-540B | 58.8 | 33.6 |
61
+
62
+
63
+ ## SFT Performance
64
+ All results are based on greedy decoding with CoT. We notice that performance on the Hungarian exam has a large variance across our checkpoints, while performance on the other benchmarks is very stable. This may be due to the small number of problems in the Hungarian exam.
65
+ | Model | Model Type | GSM8K | MATH | Hungarian Exam |
66
+ |------------------------|----------------------|--------|--------|---------|
67
+ | Qwen-7B-Chat | General | 51.7 | 11.6 | - |
68
+ | DeepSeek-7B-Chat | General | 63.0 | 15.8 | 28.5 |
69
+ | InternLM2-Chat-7B | General | 70.7 | 23.0 | - |
70
+ | ChatGLM3-6B | General | 53.8 | 20.4 | 32 |
71
+ | MetaMath-Mistral-7B | Mathematics | 77.7 | 28.2 | 29 |
72
+ | MetaMath-Llemma-7B | Mathematics | 69.2 | 30.0 | - |
73
+ | **InternLM2-Math-7B** | Mathematics | **78.1** | **34.6** | **55** |
74
+ | InternLM2-Chat-20B | General | 79.6 | 31.9 | - |
75
+ | MetaMath-Llemma-34B | Mathematics | 75.8 | 34.8 | - |
76
+ | **InternLM2-Math-20B** | Mathematics | **82.6** | **37.7** | **66** |
77
+ | Qwen-72B | General | 78.9 | 35.2 | 52 |
78
+ | DeepSeek-67B | General | 84.1 | 32.6 | 58 |
79
+ | ChatGPT (GPT-3.5) | General | 80.8 | 34.1 | 41 |
80
+ | GPT4 (First version) | General | 92.0 | 42.5 | 68 |
81
+
82
+ # Inference
83
+
84
+ ## LMDeploy
85
+ We suggest using [LMDeploy](https://github.com/InternLM/LMDeploy) (>= 0.2.1) for inference.
86
+ ```python
87
+ from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
88
+
89
+ backend_config = TurbomindEngineConfig(model_name='internlm2-chat-7b', tp=1, cache_max_entry_count=0.3)
90
+ chat_template = ChatTemplateConfig(model_name='internlm2-chat-7b', system='', eosys='', meta_instruction='')
91
+ pipe = pipeline(model_path='internlm/internlm2-math-7b', chat_template_config=chat_template, backend_config=backend_config)
92
+
93
+ problem = '1+1='
94
+ result = pipe([problem], request_output_len=1024, top_k=1)
95
+ ```
96
+
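+ The pipeline returns one response object per prompt. In recent LMDeploy versions the generated text is exposed as a `text` attribute; a minimal sketch, assuming that interface:
+ ```python
+ # `result` comes from the pipeline call above; each element corresponds to one prompt.
+ # The `.text` attribute is assumed to hold the generated completion.
+ for res in result:
+     print(res.text)
+ ```
+ 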
97
+ ## Huggingface
98
+ ```python
99
+ import torch
100
+ from transformers import AutoTokenizer, AutoModelForCausalLM
101
+ tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-math-7b", trust_remote_code=True)
102
+ # Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
103
+ model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-math-7b", trust_remote_code=True, torch_dtype=torch.float16).cuda()
104
+ model = model.eval()
105
+ response, history = model.chat(tokenizer, "1+1=", history=[], meta_instruction="")
106
+ print(response)
107
+ ```
108
+
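+ This repository contains the weights converted to the standard Llama layout (see `config.json`), so it may also be loadable without remote model code, using the bundled chat template. A hedged sketch, where `"path/to/this-converted-repo"` is a placeholder for this repository's id or a local directory:
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ 
+ # Placeholder path: replace with this converted repository's id or a local directory.
+ repo = "path/to/this-converted-repo"
+ tokenizer = AutoTokenizer.from_pretrained(repo)
+ model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")
+ 
+ # The tokenizer ships a chat template in the <|im_start|>/<|im_end|> format.
+ messages = [{"role": "user", "content": "1+1="}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ output = model.generate(inputs, max_new_tokens=512, do_sample=False)
+ print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+ 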
109
+ # Special usages
110
+ We list some of the instructions used in our SFT below. You can also prompt the model in other ways, but the following formats are recommended. InternLM2-Math may combine several of these abilities, but this is not guaranteed. An example of sending one of these queries is shown after the table.
111
+
112
+ | Description | Query |
113
+ | --- | --- |
114
+ | Solving question via chain-of-thought | {Question} |
115
+ | Solving question via Lean 3 | {Question}\nSolve this via Lean 3 |
116
+ | Outcome reward model | Given a question and an answer, check is it correct?\nQuestion:{Question}\nAnswer:{COT} |
117
+ | Process reward model | Given a question and an answer, check correctness of each step.\nQuestion:{Question}\nAnswer:{COT} |
118
+ | Reward model | Given a question and two answers, which one is better? \nQuestion:{Question}\nAnswer 1:{COT}\nAnswer 2:{COT} |
119
+ | Convert chain-of-thought to Lean 3 | Convert this answer into Lean3. Question:{Question}\nAnswer:{COT} |
120
+ | Convert Lean 3 to chain-of-thought | Convert this lean 3 code into a natural language problem with answers:\n{LEAN} |
121
+ | Translate question and chain-of-thought answer to a proof statement | Convert this question and answer into a proof format.\nQuestion:{Question}\nAnswer:{COT} |
122
+ | Translate proof problem to Lean 3 | Convert this natural langauge statement into a Lean 3 theorem statement:{Theorem} |
123
+ | Translate Lean 3 to proof problem | Convert this Lean 3 theorem statement into natural language:{STATEMENT} |
124
+ | Suggest a tactic based on Lean state | Given the Lean 3 tactic state, suggest a next tactic:\n{State} |
125
+ | Rephrase Problem | Describe this problem in another way. {STATEMENT} |
126
+ | Augment Problem | Please augment a new problem based on: {Question} |
127
+ | Augment a harder Problem | Increase the complexity of the problem: {Question} |
128
+ | Change specific numbers | Change specific numbers: {Question}|
129
+ | Introduce fractions or percentages | Introduce fractions or percentages: {Question}|
130
+ | Code Interpreter | [lagent](https://github.com/InternLM/InternLM/blob/main/agent/lagent.md) |
131
+ | In-context Learning | Question:{Question}\nAnswer:{COT}\n...Question:{Question}\nAnswer:{COT}|
132
+
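+ For example, the outcome reward model query from the table can be sent through the same `model.chat` interface used in the Huggingface example. A minimal sketch; the question and chain-of-thought answer strings are only illustrative:
+ ```python
+ # `model` and `tokenizer` are loaded as in the Huggingface inference example above.
+ # The question and answer below are illustrative placeholders.
+ question = "What is 2 + 3?"
+ cot_answer = "2 + 3 = 5. The answer is 5."
+ query = f"Given a question and an answer, check is it correct?\nQuestion:{question}\nAnswer:{cot_answer}"
+ response, history = model.chat(tokenizer, query, history=[], meta_instruction="")
+ print(response)
+ ```
+ 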
133
+ # Fine-tune and others
134
+ Please refer to [InternLM](https://github.com/InternLM/InternLM/tree/main).
135
+
136
+ # Known issues
137
+ Our model is still under development and will be upgraded. Some known issues of InternLM-Math:
138
+ - It may skip calculation steps.
139
+ - It performs poorly on Chinese fill-in-the-blank problems and English multiple-choice problems due to the SFT data composition.
140
+ - The reward model mode can be better leveraged with assigned token probabilities.
141
+ - It may code-switch between languages due to the SFT data composition.
142
+ - Some Lean-related abilities only apply to GSM8K-like problems (e.g., converting chain-of-thought to Lean 3), and Lean-related performance is not guaranteed.
143
+
144
+ # Citation and Tech Report
145
+ To be appended.
config.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "_name_or_path": "/models/internlm2-math-20b",
3
+ "architectures": [
4
+ "LlamaForCausalLM"
5
+ ],
6
+ "attn_implementation": "eager",
7
+ "bias": false,
8
+ "bos_token_id": 1,
9
+ "eos_token_id": 2,
10
+ "hidden_act": "silu",
11
+ "hidden_size": 6144,
12
+ "initializer_range": 0.02,
13
+ "intermediate_size": 16384,
14
+ "max_position_embeddings": 8192,
15
+ "model_type": "llama",
16
+ "num_attention_heads": 48,
17
+ "num_hidden_layers": 48,
18
+ "num_key_value_heads": 8,
19
+ "pad_token_id": 2,
20
+ "rms_norm_eps": 1e-05,
21
+ "rope_theta": 1000000,
22
+ "tie_word_embeddings": false,
23
+ "torch_dtype": "bfloat16",
24
+ "transformers_version": "4.36.2",
25
+ "use_cache": true,
26
+ "vocab_size": 92544
27
+ }
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:60f7979f7e61230e9e96d37d7b7297a00a3e420b13230d3b77f96bfbf1096e16
3
+ size 9940821088
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:43aa7d527b5b57701b23aad090240f154e7051dddcc42e0fe5bedc7a326761e4
3
+ size 9940833512
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c883ca8ba69382d1f7143d482374426910a1385f82973449656f43cbd1466060
3
+ size 9940833528
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85ddd6234d95016d2687e3fd2049bc4b8700752aef2b7ce0516f5331243aaac8
3
+ size 9899861984
model.safetensors.index.json ADDED
@@ -0,0 +1 @@
1
+ {"metadata": {"mergekit_version": "0.0.3.2"}, "weight_map": {"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.post_attention_layernorm.weight": 
"model-00001-of-00004.safetensors", "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.post_attention_layernorm.weight": 
"model-00001-of-00004.safetensors", "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.2.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.2.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.post_attention_layernorm.weight": 
"model-00002-of-00004.safetensors", "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.post_attention_layernorm.weight": 
"model-00002-of-00004.safetensors", "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.28.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.29.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.3.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.3.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.30.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.30.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.30.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.30.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.30.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.30.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.30.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.30.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.30.post_attention_layernorm.weight": 
"model-00002-of-00004.safetensors", "model.layers.31.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.31.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.31.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.31.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.31.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.32.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.32.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.33.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.33.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.34.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.34.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.35.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.35.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.post_attention_layernorm.weight": 
"model-00003-of-00004.safetensors", "model.layers.36.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.36.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.37.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.37.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.38.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.38.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.39.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.39.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.4.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.4.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.post_attention_layernorm.weight": 
"model-00003-of-00004.safetensors", "model.layers.40.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.40.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.40.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.40.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.40.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.40.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.40.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.40.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.40.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.41.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.41.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.41.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.41.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.41.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.41.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.41.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.41.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.41.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.42.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.42.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.42.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.42.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.42.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.42.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.42.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.42.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.42.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.43.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.43.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.43.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.43.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.43.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.43.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.43.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.43.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.43.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.44.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.44.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.44.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.44.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.44.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.44.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.44.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.44.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.44.post_attention_layernorm.weight": 
"model-00004-of-00004.safetensors", "model.layers.45.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.45.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.45.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.45.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.45.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.45.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.45.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.45.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.45.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.46.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.46.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.46.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.46.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.46.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.46.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.46.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.46.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.46.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.47.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.47.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.47.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.47.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.47.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.47.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.47.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.47.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.47.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.5.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.5.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.5.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.5.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.5.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.5.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.5.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.5.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.5.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.6.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.6.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", 
"model.layers.7.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.7.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.8.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.8.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.9.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.9.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.norm.weight": "model-00004-of-00004.safetensors", "model.embed_tokens.weight": "model-00004-of-00004.safetensors", "lm_head.weight": "model-00004-of-00004.safetensors"}}
original_repo_url.txt ADDED
@@ -0,0 +1 @@
1
+ https://huggingface.co/internlm/internlm2-math-20b
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<s>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "</s>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "</s>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "unk_token": {
24
+ "content": "<unk>",
25
+ "lstrip": false,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ }
30
+ }
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f868398fc4e05ee1e8aeba95ddf18ddcc45b8bce55d5093bead5bbf80429b48b
3
+ size 1477754
tokenizer_config.json ADDED
@@ -0,0 +1,97 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "added_tokens_decoder": {
5
+ "0": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "1": {
14
+ "content": "<s>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "2": {
22
+ "content": "</s>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "92538": {
30
+ "content": "<|plugin|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "92539": {
38
+ "content": "<|interpreter|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "92540": {
46
+ "content": "<|action_end|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "92541": {
54
+ "content": "<|action_start|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "92542": {
62
+ "content": "<|im_end|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "92543": {
70
+ "content": "<|im_start|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ }
77
+ },
78
+ "auto_map": {
79
+ "AutoTokenizer": [
80
+ "tokenization_internlm.InternLMTokenizer",
81
+ null
82
+ ]
83
+ },
84
+ "bos_token": "<s>",
85
+ "chat_template": "{{ bos_token }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
86
+ "clean_up_tokenization_spaces": true,
87
+ "eos_token": "</s>",
88
+ "legacy": true,
89
+ "model_max_length": 1000000000000000019884624838656,
90
+ "pad_token": "</s>",
91
+ "sp_model_kwargs": {},
92
+ "spaces_between_special_tokens": false,
93
+ "tokenizer_class": "LlamaTokenizer",
94
+ "trust_remote_code": false,
95
+ "unk_token": "<unk>",
96
+ "use_default_system_prompt": false
97
+ }