huwenxing committed
Commit 1c290a9 • 1 Parent(s): daf8de1
first update

Files changed:
- README.md +194 -0
- config.json +36 -0
- configuration_internlm.py +121 -0
- generation_config.json +6 -0
- modeling_internlm.py +1086 -0
- pytorch_model-00001-of-00005.bin +3 -0
- pytorch_model-00002-of-00005.bin +3 -0
- pytorch_model-00003-of-00005.bin +3 -0
- pytorch_model-00004-of-00005.bin +3 -0
- pytorch_model-00005-of-00005.bin +3 -0
- pytorch_model.bin.index.json +550 -0
- special_tokens_map.json +6 -0
- tokenization_internlm.py +242 -0
- tokenizer.model +3 -0
- tokenizer_config.json +15 -0
README.md
ADDED
@@ -0,0 +1,194 @@
---
license: apache-2.0
pipeline_tag: text-generation
---

**InternLM**

<div align="center">

<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>

[![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/)

[💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new)

</div>

## Introduction

The Shanghai Artificial Intelligence Laboratory, in collaboration with SenseTime Technology, the Chinese University of Hong Kong, and Fudan University, has officially released the 20 billion parameter pretrained model InternLM-20B. InternLM-20B was pre-trained on over **2.3T** tokens of high-quality English, Chinese, and code data. Additionally, the Chat version has undergone SFT and RLHF training, enabling it to better and more safely meet users' needs.

In terms of model structure, InternLM-20B opts for a deeper architecture, with a depth of 60 layers, surpassing the conventional 7B and 13B models that use 32 or 40 layers. When the parameter budget is limited, increasing the number of layers can enhance the model's overall capability. Furthermore, compared to InternLM-7B, the pre-training data used for InternLM-20B underwent higher-quality cleansing and was supplemented with data rich in knowledge and designed to reinforce understanding and reasoning capabilities. As a result, it exhibits significant improvements in understanding, reasoning, mathematical, and programming abilities, all of which test the technical proficiency of language models. Overall, InternLM-20B has the following characteristics:
- Outstanding overall performance
- Strong tool invocation capability
- Supports a 16k context length (through inference-time extrapolation)
- Better value alignment
## Performance Evaluation

On the 5 capability dimensions proposed by OpenCompass, InternLM-20B has achieved excellent results (the bolded scores represent the best performances within the 13B-33B parameter range).

| Capability | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B |
|----------|-----------|------------|---------------|--------------|-----------|-----------|------------|
| Language | 42.5 | 47 | 47.5 | **55** | 44.6 | 47.1 | 51.6 |
| Knowledge | 58.2 | 58.3 | 48.9 | 60.1 | **64** | 66 | 67.7 |
| Understanding | 45.5 | 50.9 | 58.1 | **67.3** | 50.6 | 54.2 | 60.8 |
| Reasoning | 42.7 | 43.6 | 44.2 | **54.9** | 46.4 | 49.8 | 55 |
| Examination | 37.3 | 45.2 | 51.8 | **62.5** | 47.4 | 49.7 | 57.3 |
| Overall | 43.8 | 47.3 | 49.4 | **59.2** | 48.9 | 51.9 | 57.4 |

The table below compares the performance of mainstream open-source models on some influential and typical datasets.

| | Benchmarks | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B |
|------|------------------|-----------|------------|---------------|--------------|-----------|-----------|------------|
| Examination | MMLU | 47.73 | 54.99 | 59.55 | **62.05** | 58.73 | 63.71 | 69.75 |
| | C-Eval (val) | 31.83 | 41.4 | **59.01** | 58.8 | 37.47 | 40.36 | 50.13 |
| | AGI-Eval | 22.03 | 30.93 | 37.37 | **44.58** | 33.53 | 33.92 | 40.02 |
| Knowledge | BoolQ | 78.75 | 82.42 | 67 | **87.46** | 84.43 | 86.61 | 87.74 |
| | TriviaQA | 52.47 | 59.36 | 46.61 | 57.26 | **66.24** | 69.79 | 70.71 |
| | NaturalQuestions | 20.17 | 24.85 | 16.32 | 25.15 | **30.89** | 33.41 | 34.16 |
| Understanding | CMRC | 9.26 | 31.59 | 29.85 | **68.78** | 14.17 | 34.73 | 43.74 |
| | CSL | 55 | 58.75 | 63.12 | **65.62** | 57.5 | 59.38 | 60 |
| | RACE (middle) | 53.41 | 63.02 | 68.94 | **86.35** | 64.55 | 72.35 | 81.55 |
| | RACE (high) | 47.63 | 58.86 | 67.18 | **83.28** | 62.61 | 68.01 | 79.93 |
| | XSum | 20.37 | 23.37 | 25.23 | **35.54** | 20.55 | 19.91 | 25.38 |
| Reasoning | WinoGrande | 64.64 | 64.01 | 67.32 | **69.38** | 66.85 | 69.38 | 69.77 |
| | BBH | 37.93 | 45.62 | 48.98 | **52.51** | 49.98 | 58.38 | 64.91 |
| | GSM8K | 20.32 | 29.57 | **52.62** | **52.62** | 42.3 | 54.44 | 63.31 |
| | PIQA | 79.71 | 79.76 | 78.07 | 80.25 | **81.34** | 82.15 | 82.54 |
| Programming | HumanEval | 14.02 | 18.9 | 17.07 | **25.61** | 17.68 | 18.9 | 26.22 |
| | MBPP | 20.6 | 26.8 | 30.8 | **35.6** | 28.4 | 33.6 | 39.6 |

Overall, InternLM-20B comprehensively outperforms open-source models in the 13B parameter range in terms of overall capabilities, and on reasoning benchmarks it approaches or even surpasses the performance of Llama-65B.

## Import from Transformers

To load the InternLM-20B model using Transformers, use the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True)
# Set `torch_dtype=torch.bfloat16` to load the model in bfloat16; otherwise it will be loaded as float32 and may cause an OOM error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-20b", torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
model = model.eval()
output, history = model.chat(tokenizer, "Hello! Today is sunny, it is time to go out")
print(output)
# Hello! Today is sunny, and it sounds like a great day to go out and enjoy the weather. What would you like to do?
```

The responses can be streamed using `stream_chat`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "internlm/internlm-chat-20b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
```
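
The `chat` and `stream_chat` helpers above come from the custom modeling code loaded via `trust_remote_code=True`. If you prefer the stock Transformers interface, the checkpoint can also be driven through the standard `generate` API; a minimal sketch (the prompt and sampling settings below are only illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "internlm/internlm-chat-20b"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
model = model.eval()

# Plain causal-LM generation, bypassing the chat helpers.
inputs = tokenizer("A quick summary of the InternLM-20B model:", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```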

**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.


## Open Source License

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>.

## Introduction

The Shanghai Artificial Intelligence Laboratory, together with SenseTime Technology, the Chinese University of Hong Kong, and Fudan University, has officially released InternLM-20B, the 20-billion-parameter version of the InternLM (书生·浦语) model. InternLM-20B was pre-trained on over **2.3T** tokens of high-quality English, Chinese, and code data, and the Chat version has additionally undergone SFT and RLHF training, enabling it to better and more safely meet users' needs.

InternLM-20B adopts a deep structure with 60 layers, exceeding the 32 or 40 layers used by conventional 7B and 13B models. With a limited parameter budget, increasing the number of layers helps improve the model's overall capability. In addition, compared with InternLM-7B, the pre-training data used for InternLM-20B underwent higher-quality cleansing and was supplemented with training data that is dense in knowledge and aimed at strengthening understanding and reasoning. As a result, it improves significantly on understanding, reasoning, mathematics, and programming, all of which test the technical level of a language model. Overall, InternLM-20B has the following characteristics:
- Outstanding overall performance
- Strong tool invocation capability
- Supports a 16k context length (through inference-time extrapolation)
- Better value alignment
## Performance Evaluation

On the 5 capability dimensions proposed by OpenCompass, InternLM-20B achieves excellent results (bolded scores are the best within the 13B-33B parameter range).

| Capability | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B |
|----------|-----------|------------|---------------|--------------|-----------|-----------|------------|
| Language | 42.5 | 47 | 47.5 | **55** | 44.6 | 47.1 | 51.6 |
| Knowledge | 58.2 | 58.3 | 48.9 | 60.1 | **64** | 66 | 67.7 |
| Understanding | 45.5 | 50.9 | 58.1 | **67.3** | 50.6 | 54.2 | 60.8 |
| Reasoning | 42.7 | 43.6 | 44.2 | **54.9** | 46.4 | 49.8 | 55 |
| Examination | 37.3 | 45.2 | 51.8 | **62.5** | 47.4 | 49.7 | 57.3 |
| Overall | 43.8 | 47.3 | 49.4 | **59.2** | 48.9 | 51.9 | 57.4 |

The table below shows the performance of InternLM-20B and mainstream open-source models on several classic datasets.

| | Benchmarks | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B |
|------|------------------|-----------|------------|---------------|--------------|-----------|-----------|------------|
| Examination | MMLU | 47.73 | 54.99 | 59.55 | **62.05** | 58.73 | 63.71 | 69.75 |
| | C-Eval (val) | 31.83 | 41.4 | **59.01** | 58.8 | 37.47 | 40.36 | 50.13 |
| | AGI-Eval | 22.03 | 30.93 | 37.37 | **44.58** | 33.53 | 33.92 | 40.02 |
| Knowledge | BoolQ | 78.75 | 82.42 | 67 | **87.46** | 84.43 | 86.61 | 87.74 |
| | TriviaQA | 52.47 | 59.36 | 46.61 | 57.26 | **66.24** | 69.79 | 70.71 |
| | NaturalQuestions | 20.17 | 24.85 | 16.32 | 25.15 | **30.89** | 33.41 | 34.16 |
| Understanding | CMRC | 9.26 | 31.59 | 29.85 | **68.78** | 14.17 | 34.73 | 43.74 |
| | CSL | 55 | 58.75 | 63.12 | **65.62** | 57.5 | 59.38 | 60 |
| | RACE (middle) | 53.41 | 63.02 | 68.94 | **86.35** | 64.55 | 72.35 | 81.55 |
| | RACE (high) | 47.63 | 58.86 | 67.18 | **83.28** | 62.61 | 68.01 | 79.93 |
| | XSum | 20.37 | 23.37 | 25.23 | **35.54** | 20.55 | 19.91 | 25.38 |
| Reasoning | WinoGrande | 64.64 | 64.01 | 67.32 | **69.38** | 66.85 | 69.38 | 69.77 |
| | BBH | 37.93 | 45.62 | 48.98 | **52.51** | 49.98 | 58.38 | 64.91 |
| | GSM8K | 20.32 | 29.57 | **52.62** | **52.62** | 42.3 | 54.44 | 63.31 |
| | PIQA | 79.71 | 79.76 | 78.07 | 80.25 | **81.34** | 82.15 | 82.54 |
| Programming | HumanEval | 14.02 | 18.9 | 17.07 | **25.61** | 17.68 | 18.9 | 26.22 |
| | MBPP | 20.6 | 26.8 | 30.8 | **35.6** | 28.4 | 33.6 | 39.6 |

Overall, InternLM-20B comprehensively leads open-source models at the 13B scale in overall capability, and on reasoning benchmarks it approaches or even surpasses the performance of Llama-65B.

## Load with Transformers

Load the InternLM-20B model with the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True)
# `torch_dtype=torch.bfloat16` loads the model in bfloat16; otherwise transformers loads it as float32, which can run out of GPU memory.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-20b", torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
model = model.eval()
output, history = model.chat(tokenizer, "你好呀!今天天气真好")
print(output)
# 你好!是的,今天的天气非常晴朗,非常适合户外活动。
```

For streaming generation, use the `stream_chat` interface:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "internlm/internlm-chat-20b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "你好", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
```

**Limitations:** Although we paid close attention to model safety during training and tried to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and its probabilistic generation paradigm; for example, responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We take no responsibility for any consequences caused by the spread of harmful information.

## Open Source License

The code in this repository is open-sourced under the Apache-2.0 license. The model weights are fully open for academic research, and free commercial use can also be applied for ([application form, 申请表](https://wj.qq.com/s2/12725412/f7c1/)). For other questions or collaborations, please contact <internlm@pjlab.org.cn>.
config.json
ADDED
@@ -0,0 +1,36 @@
{
  "architectures": [
    "InternLMForCausalLM"
  ],
  "auto_map": {
    "AutoConfig": "configuration_internlm.InternLMConfig",
    "AutoModel": "modeling_internlm.InternLMForCausalLM",
    "AutoModelForCausalLM": "modeling_internlm.InternLMForCausalLM"
  },
  "bias": false,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 4096,
  "model_type": "internlm",
  "num_attention_heads": 40,
  "num_hidden_layers": 60,
  "num_key_value_heads": 40,
  "pad_token_id": 2,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.33.1",
  "use_cache": true,
  "vocab_size": 103168,
  "rotary": {
    "base": 10000,
    "type": "dynamic"
  }
}
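
A minimal sketch of inspecting this configuration before downloading the full weights; it assumes the repo id `internlm/internlm-chat-20b` and relies on `trust_remote_code=True` so that `InternLMConfig` is resolved through the `auto_map` entries above:

```python
from transformers import AutoConfig

# Resolves configuration_internlm.InternLMConfig via the "auto_map" entry in config.json.
config = AutoConfig.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True)

print(config.num_hidden_layers)  # 60 -- the deeper 60-layer architecture described in the README
print(config.hidden_size)        # 5120
print(config.rotary)             # {'base': 10000, 'type': 'dynamic'} -> dynamic NTK rotary scaling
```
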
configuration_internlm.py
ADDED
@@ -0,0 +1,121 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" InternLM model configuration"""

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)

INTERNLM_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class InternLMConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`InternLMModel`]. It is used to instantiate
    an InternLM model according to the specified arguments, defining the model architecture. Instantiating a
    configuration with the defaults will yield a similar configuration to that of the InternLM-7B.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 103168):
            Vocabulary size of the InternLM model. Defines the number of different tokens that can be represented by
            the `inputs_ids` passed when calling [`InternLMModel`]
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 11008):
            Dimension of the MLP representations.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer encoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer encoder.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048).
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-6):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        tie_word_embeddings(`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings
    Example:

    ```python
    >>> from transformers import InternLMModel, InternLMConfig

    >>> # Initializing a InternLM internlm-7b style configuration
    >>> configuration = InternLMConfig()

    >>> # Initializing a model from the internlm-7b style configuration
    >>> model = InternLMModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""
    model_type = "internlm"
    _auto_class = "AutoConfig"

    def __init__(  # pylint: disable=W0102
        self,
        vocab_size=103168,
        hidden_size=4096,
        intermediate_size=11008,
        num_hidden_layers=32,
        num_attention_heads=32,
        hidden_act="silu",
        max_position_embeddings=2048,
        initializer_range=0.02,
        rms_norm_eps=1e-6,
        use_cache=True,
        pad_token_id=0,
        bos_token_id=1,
        eos_token_id=2,
        tie_word_embeddings=False,
        bias=True,
        rotary={"base": 10000, "type": "dynamic"},  # pylint: disable=W0102
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.use_cache = use_cache
        self.bias = bias
        self.rotary = rotary
        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
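
As a rough, hypothetical illustration (not a file in this repository), the InternLM-20B geometry from config.json above maps onto this class as follows; the values are copied from config.json and the import assumes `configuration_internlm.py` is on your local path:

```python
from configuration_internlm import InternLMConfig

# Reproduce the InternLM-20B shape defined in config.json above.
config_20b = InternLMConfig(
    vocab_size=103168,
    hidden_size=5120,
    intermediate_size=13824,
    num_hidden_layers=60,
    num_attention_heads=40,
    max_position_embeddings=4096,
    rms_norm_eps=1e-6,
    bias=False,
    rotary={"base": 10000, "type": "dynamic"},
)
print(config_20b.num_hidden_layers)  # 60
```
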
generation_config.json
ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.33.1"
}
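
These defaults can be read back with the standard `GenerationConfig` API; a small sketch, assuming the repo id `internlm/internlm-chat-20b`:

```python
from transformers import GenerationConfig

# Generation defaults shipped with the checkpoint (token ids match generation_config.json above).
gen_config = GenerationConfig.from_pretrained("internlm/internlm-chat-20b")
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 1 2
```
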
modeling_internlm.py
ADDED
@@ -0,0 +1,1086 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" PyTorch InternLM model."""
import math
import queue
import threading
from typing import List, Optional, Tuple, Union

import torch
import torch.utils.checkpoint
from torch import nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
from transformers.activations import ACT2FN
from transformers.generation.streamers import BaseStreamer
from transformers.modeling_outputs import (
    BaseModelOutputWithPast,
    CausalLMOutputWithPast,
    SequenceClassifierOutputWithPast,
)
from transformers.modeling_utils import PreTrainedModel
from transformers.utils import (
    add_start_docstrings,
    add_start_docstrings_to_model_forward,
    logging,
    replace_return_docstrings,
)

from .configuration_internlm import InternLMConfig

logger = logging.get_logger(__name__)

_CONFIG_FOR_DOC = "InternLMConfig"


# Copied from transformers.models.bart.modeling_bart._make_causal_mask
def _make_causal_mask(
    input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
):
    """
    Make causal mask used for bi-directional self-attention.
    """
    bsz, tgt_len = input_ids_shape
    mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
    mask_cond = torch.arange(mask.size(-1), device=device)
    mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
    mask = mask.to(dtype)

    if past_key_values_length > 0:
        mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
    return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)


# Copied from transformers.models.bart.modeling_bart._expand_mask
def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
    """
    Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
    """
    bsz, src_len = mask.size()
    tgt_len = tgt_len if tgt_len is not None else src_len

    expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)

    inverted_mask = 1.0 - expanded_mask

    return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)


class InternLMRMSNorm(nn.Module):
    """RMSNorm implementation."""

    def __init__(self, hidden_size, eps=1e-6):
        """
        InternLMRMSNorm is equivalent to T5LayerNorm
        """
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)

        # convert into half-precision if necessary
        if self.weight.dtype in [torch.float16, torch.bfloat16]:
            hidden_states = hidden_states.to(self.weight.dtype)

        return self.weight * hidden_states


class InternLMRotaryEmbedding(torch.nn.Module):
    """Implement InternLM's rotary embedding.

    Args:
        dim (int): Characteristic dimension of each self-attention head.
        max_position_embeddings (int, optional): Model's training length. Defaults to 2048.
        base (int, optional): Base of the rotation angle used by the rotary position encoding. Defaults to 10000.
        device (Any, optional): Running device. Defaults to None.
    """

    def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

        # Build here to make `torch.jit.trace` work.
        self.max_seq_len_cached = max_position_embeddings
        t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        # Different from paper, but it uses a different permutation in order to obtain the same calculation
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)

    def forward(self, x, seq_len=None):
        # x: [bs, num_attention_heads, seq_len, head_size]
        # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case.
        if seq_len > self.max_seq_len_cached:
            self.max_seq_len_cached = seq_len
            t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype)
            freqs = torch.einsum("i,j->ij", t, self.inv_freq)
            # Different from paper, but it uses a different permutation in order to obtain the same calculation
            emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
            self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
            self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )


class InternLMDynamicNTKScalingRotaryEmbedding(torch.nn.Module):
    """Implement InternLM's dynamic NTK extrapolation method, thereby broadening the supported context to 16K.

    Args:
        dim (int): Characteristic dimension of each self-attention head.
        max_position_embeddings (int, optional): Model's training length. Defaults to 2048.
        base (int, optional): Base of the rotation angle used by the rotary position encoding. Defaults to 10000.
        device (Any, optional): Running device. Defaults to None.
        scaling_factor (float, optional): NTK method extrapolation coefficient. Defaults to 1.0.
    """

    def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
        self.register_buffer("inv_freq", inv_freq)
        self.dim = dim
        self.base = base
        self.scaling_factor = scaling_factor

        # Build here to make `torch.jit.trace` work.
        self.max_position_embeddings = max_position_embeddings
        self.max_seq_len_cached = max_position_embeddings
        t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        # Different from paper, but it uses a different permutation in order to obtain the same calculation
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
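
    # Note: when a forward pass sees a sequence longer than `max_position_embeddings`, `_update_cached`
    # below rescales the rotary base as
    #     base' = base * ((scaling_factor * seq_len / max_position_embeddings) - (scaling_factor - 1)) ** (dim / (dim - 2))
    # before rebuilding the cos/sin tables. Enlarging the base stretches the rotary wavelengths, which is how
    # the "dynamic" rotary type extends the usable context (to roughly 16K tokens, per the class docstring)
    # without retraining.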
    def _update_cached(self, x, seq_len=None):
        self.max_seq_len_cached = max(seq_len, self.max_position_embeddings)
        if seq_len > self.max_position_embeddings:
            base = self.base * (
                (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
            ) ** (self.dim / (self.dim - 2))
            inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(x.device) / self.dim))
        else:
            inv_freq = self.inv_freq
        t = torch.arange(self.max_seq_len_cached, device=inv_freq.device, dtype=inv_freq.dtype)
        freqs = torch.einsum("i,j->ij", t, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)

    def forward(self, x, seq_len=None):
        # x: [bs, num_attention_heads, seq_len, head_size]
        # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case.
        if seq_len <= self.max_position_embeddings:
            # Reset the tables if the sequence length has changed
            if self.max_seq_len_cached > self.max_position_embeddings:
                self._update_cached(x, seq_len)
        else:
            self._update_cached(x, seq_len)

        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )


def rotate_half(x):
    """Rotates half the hidden dims of the input."""
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
    # The first two dimensions of cos and sin are always 1, so we can `squeeze` them.
    cos = cos.squeeze(1).squeeze(0)  # [seq_len, dim]
    sin = sin.squeeze(1).squeeze(0)  # [seq_len, dim]
    cos = cos.unsqueeze(0).unsqueeze(0).expand(len(position_ids), -1, -1, -1)
    sin = sin.unsqueeze(0).unsqueeze(0).expand(len(position_ids), -1, -1, -1)
    if q.size(2) == 1:
        q_embed = (q * cos[:, :, -1, :]) + (rotate_half(q) * sin[:, :, -1, :])
    else:
        q_embed = (q * cos) + (rotate_half(q) * sin)

    if k.size(2) == 1:
        k_embed = (k * cos[:, :, -1, :]) + (rotate_half(k) * sin[:, :, -1, :])
    else:
        k_embed = (k * cos) + (rotate_half(k) * sin)

    return q_embed, k_embed
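
# Note on `apply_rotary_pos_emb`: the cached `cos`/`sin` arrive as `[1, 1, seq_len, head_dim]`, are squeezed to
# `[seq_len, head_dim]`, then re-expanded so they broadcast over the batch and head dimensions of `q` and `k`.
# The `q.size(2) == 1` / `k.size(2) == 1` branches cover single-token decoding steps, where only the last cached
# position's cos/sin row is applied.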


class InternLMMLP(nn.Module):
    def __init__(
        self,
        hidden_size: int,
        intermediate_size: int,
        hidden_act: str,
    ):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.act_fn = ACT2FN[hidden_act]

    def forward(self, x):
        return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))


class InternLMAttention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need' paper"""

    def __init__(self, config: InternLMConfig):
        super().__init__()
        self.config = config
        self.hidden_size = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.head_dim = self.hidden_size // self.num_heads
        self.max_position_embeddings = config.max_position_embeddings

        if (self.head_dim * self.num_heads) != self.hidden_size:
            raise ValueError(
                f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
                f" and `num_heads`: {self.num_heads})."
            )
        self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.bias)
        self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.bias)
        self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.bias)
        self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.bias)
        self.rotary_emb = self._init_rope()

    def _init_rope(self):
        if self.config.rotary["type"] == "origin":
            self.rotary_emb = InternLMRotaryEmbedding(
                self.head_dim,
                max_position_embeddings=self.max_position_embeddings,
                base=self.config.rotary["base"],
            )
        elif self.config.rotary["type"] == "dynamic":
            self.rotary_emb = InternLMDynamicNTKScalingRotaryEmbedding(
                self.head_dim,
                max_position_embeddings=self.max_position_embeddings,
                base=self.config.rotary["base"],
                scaling_factor=self.config.rotary.get("scaling_factor", 1.0),
            )
        else:
            raise ValueError("Currently we only support rotary embedding's type being one of ('origin', 'dynamic').")
        return self.rotary_emb

    def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
        return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: bool = False,
        use_cache: bool = False,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
        bsz, q_len, _ = hidden_states.size()

        query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)

        if past_key_value is not None:
            # reuse k, v, self_attention
            key_states = torch.cat([past_key_value[0], key_states], dim=2)
            value_states = torch.cat([past_key_value[1], value_states], dim=2)

        # print(use_cache)
        past_key_value = (key_states, value_states) if use_cache else None

        kv_seq_len = key_states.shape[-2]
        cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)

        attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)

        if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
            raise ValueError(
                f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
                f" {attn_weights.size()}"
            )

        if attention_mask is not None:
            if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
                raise ValueError(
                    f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
                )
            attn_weights = attn_weights + attention_mask
            attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))

        # upcast attention to fp32
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
        attn_output = torch.matmul(attn_weights, value_states)

        if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
            raise ValueError(
                f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
                f" {attn_output.size()}"
            )

        attn_output = attn_output.transpose(1, 2)
        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)

        attn_output = self.o_proj(attn_output)

        if not output_attentions:
            attn_weights = None

        return attn_output, attn_weights, past_key_value


class InternLMDecoderLayer(nn.Module):
    def __init__(self, config: InternLMConfig):
        super().__init__()
        self.hidden_size = config.hidden_size
        self.self_attn = InternLMAttention(config=config)
        self.mlp = InternLMMLP(
            hidden_size=self.hidden_size,
            intermediate_size=config.intermediate_size,
            hidden_act=config.hidden_act,
        )
        self.input_layernorm = InternLMRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.post_attention_layernorm = InternLMRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: Optional[bool] = False,
        use_cache: Optional[bool] = False,
    ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
        """
        Args:
            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
            attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
                `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
            output_attentions (`bool`, *optional*):
                Whether or not to return the attentions tensors of all attention layers. See `attentions` under
                returned tensors for more detail.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
                (see `past_key_values`).
            past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
        """

        residual = hidden_states

        hidden_states = self.input_layernorm(hidden_states)

        # Self Attention
        hidden_states, self_attn_weights, present_key_value = self.self_attn(
            hidden_states=hidden_states,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_value=past_key_value,
            output_attentions=output_attentions,
            use_cache=use_cache,
        )
        hidden_states = residual + hidden_states

        # Fully Connected
        residual = hidden_states
        hidden_states = self.post_attention_layernorm(hidden_states)
        hidden_states = self.mlp(hidden_states)
        hidden_states = residual + hidden_states

        outputs = (hidden_states,)

        if output_attentions:
            outputs += (self_attn_weights,)

        if use_cache:
            outputs += (present_key_value,)

        return outputs


INTERNLM_START_DOCSTRING = r"""
    This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
    library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
    etc.)
    This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
    Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
    and behavior.
    Parameters:
        config ([`InternLMConfig`]):
            Model configuration class with all the parameters of the model. Initializing with a config file does not
            load the weights associated with the model, only the configuration. Check out the
            [`~PreTrainedModel.from_pretrained`] method to load the model weights.
"""


@add_start_docstrings(
    "The bare InternLM Model outputting raw hidden-states without any specific head on top.",
    INTERNLM_START_DOCSTRING,
)
class InternLMPreTrainedModel(PreTrainedModel):
    config_class = InternLMConfig
    base_model_prefix = "model"
    supports_gradient_checkpointing = True
    _no_split_modules = ["InternLMDecoderLayer"]
    _keys_to_ignore_on_load_unexpected = [r"decoder\.version"]

    def _init_weights(self, module):
        std = self.config.initializer_range
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()

    def _set_gradient_checkpointing(self, module, value=False):
        if isinstance(module, InternLMModel):
            module.gradient_checkpointing = value


INTERNLM_INPUTS_DOCSTRING = r"""
    Args:
        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
            it.
            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.
            [What are input IDs?](../glossary#input-ids)
        attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.
            [What are attention masks?](../glossary#attention-mask)
            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.
            If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
            `past_key_values`).
            If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
            and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
            information on the default strategy.
            - 1 indicates the head is **not masked**,
            - 0 indicates the head is **masked**.
        position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
            config.n_positions - 1]`.
            [What are position IDs?](../glossary#position-ids)
        past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or
            when `config.use_cache=True`):
            Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
            `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
            `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
            Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
            blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
            don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
            `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
            is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
            model's internal embedding lookup matrix.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).
        output_attentions (`bool`, *optional*):
            Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
            tensors for more detail.
        output_hidden_states (`bool`, *optional*):
            Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
            more detail.
        return_dict (`bool`, *optional*):
            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
"""


@add_start_docstrings(
    "The bare InternLM Model outputting raw hidden-states without any specific head on top.",
    INTERNLM_START_DOCSTRING,
)
class InternLMModel(InternLMPreTrainedModel):
    """
    Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`InternLMDecoderLayer`]
    Args:
        config: InternLMConfig
    """

    _auto_class = "AutoModel"

    def __init__(self, config: InternLMConfig):
        super().__init__(config)
        self.padding_idx = config.pad_token_id
        self.vocab_size = config.vocab_size

        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
        self.layers = nn.ModuleList([InternLMDecoderLayer(config) for _ in range(config.num_hidden_layers)])
        self.norm = InternLMRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        self.gradient_checkpointing = False
        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.embed_tokens

    def set_input_embeddings(self, value):
        self.embed_tokens = value

    # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
    def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
        # create causal mask
        # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
        combined_attention_mask = None
        if input_shape[-1] > 1:
            combined_attention_mask = _make_causal_mask(
                input_shape,
                inputs_embeds.dtype,
                device=inputs_embeds.device,
                past_key_values_length=past_key_values_length,
            )

        if attention_mask is not None:
            # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
            expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
                inputs_embeds.device
            )
            combined_attention_mask = (
                expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
            )

        return combined_attention_mask

    @add_start_docstrings_to_model_forward(INTERNLM_INPUTS_DOCSTRING)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, BaseModelOutputWithPast]:
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # retrieve input_ids and inputs_embeds
        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
        elif input_ids is not None:
            batch_size, seq_length = input_ids.shape
        elif inputs_embeds is not None:
            batch_size, seq_length, _ = inputs_embeds.shape
        else:
            raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")

        seq_length_with_past = seq_length
        past_key_values_length = 0

        if past_key_values is not None:
            past_key_values_length = past_key_values[0][0].shape[2]
            seq_length_with_past = seq_length_with_past + past_key_values_length

        if position_ids is None:
            device = input_ids.device if input_ids is not None else inputs_embeds.device
            position_ids = torch.arange(
                past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
            )
            position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
        else:
            position_ids = position_ids.view(-1, seq_length).long()

        if inputs_embeds is None:
            inputs_embeds = self.embed_tokens(input_ids)
        # embed positions
        if attention_mask is None:
            attention_mask = torch.ones(
                (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
            )
        attention_mask = self._prepare_decoder_attention_mask(
            attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
        )

        hidden_states = inputs_embeds

        if self.gradient_checkpointing and self.training:
            if use_cache:
                logger.warning_once(
                    "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
                )
                use_cache = False

        # decoder layers
        all_hidden_states = () if output_hidden_states else None
        all_self_attns = () if output_attentions else None
        next_decoder_cache = () if use_cache else None

        for idx, decoder_layer in enumerate(self.layers):
            if output_hidden_states:
                all_hidden_states += (hidden_states,)

            past_key_value = past_key_values[idx] if past_key_values is not None else None

            if self.gradient_checkpointing and self.training:

                def create_custom_forward(module):
                    def custom_forward(*inputs):
                        # None for past_key_value
                        return module(*inputs, output_attentions, None)

                    return custom_forward

                layer_outputs = torch.utils.checkpoint.checkpoint(
                    create_custom_forward(decoder_layer),
                    hidden_states,
                    attention_mask,
                    position_ids,
                    None,
                )
            else:
                layer_outputs = decoder_layer(
                    hidden_states,
                    attention_mask=attention_mask,
                    position_ids=position_ids,
                    past_key_value=past_key_value,
                    output_attentions=output_attentions,
                    use_cache=use_cache,
                )

            hidden_states = layer_outputs[0]

            if use_cache:
                next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)

            if output_attentions:
                all_self_attns += (layer_outputs[1],)

        hidden_states = self.norm(hidden_states)

        # add hidden states from the last decoder layer
        if output_hidden_states:
            all_hidden_states += (hidden_states,)
|
694 |
+
|
695 |
+
next_cache = next_decoder_cache if use_cache else None
|
696 |
+
if not return_dict:
|
697 |
+
return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
|
698 |
+
return BaseModelOutputWithPast(
|
699 |
+
last_hidden_state=hidden_states,
|
700 |
+
past_key_values=next_cache,
|
701 |
+
hidden_states=all_hidden_states,
|
702 |
+
attentions=all_self_attns,
|
703 |
+
)
|
704 |
+
|
705 |
+
|
706 |
+
class InternLMForCausalLM(InternLMPreTrainedModel):
    _auto_class = "AutoModelForCausalLM"

    def __init__(self, config):
        super().__init__(config)
        self.model = InternLMModel(config)

        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.embed_tokens

    def set_input_embeddings(self, value):
        self.model.embed_tokens = value

    def get_output_embeddings(self):
        return self.lm_head

    def set_output_embeddings(self, new_embeddings):
        self.lm_head = new_embeddings

    def set_decoder(self, decoder):
        self.model = decoder

    def get_decoder(self):
        return self.model

    @add_start_docstrings_to_model_forward(INTERNLM_INPUTS_DOCSTRING)
    @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, CausalLMOutputWithPast]:
        r"""
        Args:
            labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
                config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
                (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
        Returns:
        Example:
        ```python
        >>> from transformers import AutoTokenizer, InternLMForCausalLM
        >>> model = InternLMForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
        >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
        >>> prompt = "Hey, are you conscious? Can you talk to me?"
        >>> inputs = tokenizer(prompt, return_tensors="pt")
        >>> # Generate
        >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
        >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
        "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
        ```"""

        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        hidden_states = outputs[0]
        logits = self.lm_head(hidden_states)

        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss()
            shift_logits = shift_logits.view(-1, self.config.vocab_size)
            shift_labels = shift_labels.view(-1)
            # Enable model parallelism
            shift_labels = shift_labels.to(shift_logits.device)
            loss = loss_fct(shift_logits, shift_labels)

        if not return_dict:
            output = (logits,) + outputs[1:]
            return (loss,) + output if loss is not None else output

        return CausalLMOutputWithPast(
            loss=loss,
            logits=logits,
            past_key_values=outputs.past_key_values,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

    def prepare_inputs_for_generation(
        self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
    ):
        if past_key_values:
            input_ids = input_ids[:, -1:]

        position_ids = kwargs.get("position_ids", None)
        if attention_mask is not None and position_ids is None:
            # create position_ids on the fly for batch generation
            position_ids = attention_mask.long().cumsum(-1) - 1
            position_ids.masked_fill_(attention_mask == 0, 1)
            if past_key_values:
                position_ids = position_ids[:, -1].unsqueeze(-1)

        # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
        if inputs_embeds is not None and past_key_values is None:
            model_inputs = {"inputs_embeds": inputs_embeds}
        else:
            model_inputs = {"input_ids": input_ids}

        model_inputs.update(
            {
                "position_ids": position_ids,
                "past_key_values": past_key_values,
                "use_cache": kwargs.get("use_cache"),
                "attention_mask": attention_mask,
            }
        )
        return model_inputs

    @staticmethod
    def _reorder_cache(past_key_values, beam_idx):
        reordered_past = ()
        for layer_past in past_key_values:
            reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),)
        return reordered_past

    def build_inputs(self, tokenizer, query: str, history: List[Tuple[str, str]] = []):
        prompt = ""
        for record in history:
            prompt += f"""<|User|>:{record[0]}<eoh>\n<|Bot|>:{record[1]}<eoa>\n"""
        prompt += f"""<|User|>:{query}<eoh>\n<|Bot|>:"""
        return tokenizer([prompt], return_tensors="pt")

    @torch.no_grad()
    def chat(
        self,
        tokenizer,
        query: str,
        history: List[Tuple[str, str]] = [],
        streamer: Optional[BaseStreamer] = None,
        max_new_tokens: int = 1024,
        do_sample: bool = True,
        temperature: float = 0.8,
        top_p: float = 0.8,
        **kwargs,
    ):
        inputs = self.build_inputs(tokenizer, query, history)
        inputs = {k: v.to(self.device) for k, v in inputs.items() if torch.is_tensor(v)}
        outputs = self.generate(
            **inputs,
            streamer=streamer,
            max_new_tokens=max_new_tokens,
            do_sample=do_sample,
            temperature=temperature,
            top_p=top_p,
            **kwargs,
        )
        outputs = outputs[0].cpu().tolist()[len(inputs["input_ids"][0]) :]
        response = tokenizer.decode(outputs, skip_special_tokens=True)
        response = response.split("<eoa>")[0]
        history = history + [(query, response)]
        return response, history

    @torch.no_grad()
    def stream_chat(
        self,
        tokenizer,
        query: str,
        history: List[Tuple[str, str]] = [],
        max_new_tokens: int = 1024,
        do_sample: bool = True,
        temperature: float = 0.8,
        top_p: float = 0.8,
        **kwargs,
    ):
        """
        Return a generator in format: (response, history)
        E.g.
        ('你好，有什么可以帮助您的吗', [('你好', '你好，有什么可以帮助您的吗')])
        ('你好，有什么可以帮助您的吗？', [('你好', '你好，有什么可以帮助您的吗？')])
        """

        response_queue = queue.Queue(maxsize=20)

        class ChatStreamer(BaseStreamer):
            def __init__(self, tokenizer) -> None:
                super().__init__()
                self.tokenizer = tokenizer
                self.queue = response_queue
                self.query = query
                self.history = history
                self.response = ""
                self.received_inputs = False
                self.queue.put((self.response, history + [(self.query, self.response)]))

            def put(self, value):
                if len(value.shape) > 1 and value.shape[0] > 1:
                    raise ValueError("ChatStreamer only supports batch size 1")
                elif len(value.shape) > 1:
                    value = value[0]

                if not self.received_inputs:
                    # The first received value is input_ids, ignore here
                    self.received_inputs = True
                    return

                token = self.tokenizer.decode([value[-1]], skip_special_tokens=True)
                if token.strip() != "<eoa>":
                    self.response = self.response + token
                    history = self.history + [(self.query, self.response)]
                    self.queue.put((self.response, history))

            def end(self):
                self.queue.put(None)

        def stream_producer():
            return self.chat(
                tokenizer=tokenizer,
                query=query,
                streamer=ChatStreamer(tokenizer=tokenizer),
                history=history,
                max_new_tokens=max_new_tokens,
                do_sample=do_sample,
                temperature=temperature,
                top_p=top_p,
                **kwargs,
            )

        def consumer():
            producer = threading.Thread(target=stream_producer)
            producer.start()
            while True:
                res = response_queue.get()
                if res is None:
                    return
                yield res

        return consumer()


@add_start_docstrings(
    """
    The InternLM Model transformer with a sequence classification head on top (linear layer).
    [`InternLMForSequenceClassification`] uses the last token in order to do the classification, as other causal models
    (e.g. GPT-2) do.
    Since it does classification on the last token, it requires to know the position of the last token. If a
    `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
    no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
    padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
    each row of the batch).
    """,
    INTERNLM_START_DOCSTRING,
)
class InternLMForSequenceClassification(InternLMPreTrainedModel):
    _keys_to_ignore_on_load_missing = [r"lm_head.weight"]

    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.model = InternLMModel(config)
        self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.embed_tokens

    def set_input_embeddings(self, value):
        self.model.embed_tokens = value

    @add_start_docstrings_to_model_forward(INTERNLM_INPUTS_DOCSTRING)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
        r"""
        labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        transformer_outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        hidden_states = transformer_outputs[0]
        logits = self.score(hidden_states)

        if input_ids is not None:
            batch_size = input_ids.shape[0]
        else:
            batch_size = inputs_embeds.shape[0]

        if self.config.pad_token_id is None and batch_size != 1:
            raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
        if self.config.pad_token_id is None:
            sequence_lengths = -1
        else:
            if input_ids is not None:
                sequence_lengths = (torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1).to(logits.device)
            else:
                sequence_lengths = -1

        pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]

        loss = None
        if labels is not None:
            labels = labels.to(logits.device)
            if self.config.problem_type is None:
                if self.num_labels == 1:
                    self.config.problem_type = "regression"
                elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
                    self.config.problem_type = "single_label_classification"
                else:
                    self.config.problem_type = "multi_label_classification"

            if self.config.problem_type == "regression":
                loss_fct = MSELoss()
                if self.num_labels == 1:
                    loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
                else:
                    loss = loss_fct(pooled_logits, labels)
            elif self.config.problem_type == "single_label_classification":
                loss_fct = CrossEntropyLoss()
                loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
            elif self.config.problem_type == "multi_label_classification":
                loss_fct = BCEWithLogitsLoss()
                loss = loss_fct(pooled_logits, labels)
        if not return_dict:
            output = (pooled_logits,) + transformer_outputs[1:]
            return ((loss,) + output) if loss is not None else output

        return SequenceClassifierOutputWithPast(
            loss=loss,
            logits=pooled_logits,
            past_key_values=transformer_outputs.past_key_values,
            hidden_states=transformer_outputs.hidden_states,
            attentions=transformer_outputs.attentions,
        )
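
Note on the `chat` and `stream_chat` helpers defined above: they wrap `generate` with the `<|User|>`/`<|Bot|>` prompt template built by `build_inputs` and cut the reply at the first `<eoa>` tag. Below is a minimal usage sketch; the repo id, `.cuda()` placement, and fp16 dtype are illustrative assumptions, not part of this commit.

```python
# Minimal sketch of calling the chat helpers from modeling_internlm.py.
# Assumptions: a CUDA GPU with enough memory and the repo id "internlm/internlm-20b".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "internlm/internlm-20b"  # illustrative; point this at the repository you cloned
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()

# Single-turn call: builds the <|User|>/<|Bot|> prompt, generates, strips <eoa>.
response, history = model.chat(tokenizer, "hello", history=[])
print(response)

# Streaming call: yields (partial_response, history) tuples as tokens arrive.
for partial_response, _ in model.stream_chat(tokenizer, "hello", history=history):
    print(partial_response)
```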
pytorch_model-00001-of-00005.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeba743507872c45e7cf951d7996bce448d8deada841d055d2ac03948af0c2b7
size 9990647029

pytorch_model-00002-of-00005.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a11c8737fce8d6be9a8f6eb0faa44016c94813aed1d50a757ca32abece4ed461
size 9956594199

pytorch_model-00003-of-00005.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c64167ce104e9a576da50f89a398ac2124734621c45e12ea0addbac99ad87ac
size 9867486361

pytorch_model-00004-of-00005.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40e22421695e3206bc85f0a4839641370bc8277ab689ff0e5d75e708d51f8691
size 9306483281

pytorch_model-00005-of-00005.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:263f29c6331d8951fd454d4bbd2991d422bbcfb5b07d4acbb0e75aaf53b1a76c
size 1056441258
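
The five `.bin` entries above are Git LFS pointer files: only the sha256 and byte size are versioned in the commit, while the shard data itself lives in LFS storage. The `pytorch_model.bin.index.json` added next tells `from_pretrained` which shard holds each parameter. A rough sketch of the lookup it enables follows; the file names and the sample key come from the index below, but the loading loop is illustrative, not the exact transformers implementation.

```python
# Illustrative sketch of consuming a sharded-checkpoint index (weight_map).
# Not the exact transformers loading code; it only shows what the index encodes.
import json
import torch

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]                 # parameter name -> shard file name
print(index["metadata"]["total_size"])           # 40177428480 bytes in this commit
print(weight_map["model.embed_tokens.weight"])   # -> pytorch_model-00001-of-00005.bin

# Load only the shards that are referenced, then merge their tensors.
state_dict = {}
for shard_file in sorted(set(weight_map.values())):
    shard = torch.load(shard_file, map_location="cpu")
    state_dict.update(shard)
```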
pytorch_model.bin.index.json
ADDED
@@ -0,0 +1,550 @@
1 |
+
{
|
2 |
+
"metadata": {
|
3 |
+
"total_size": 40177428480
|
4 |
+
},
|
5 |
+
"weight_map": {
|
6 |
+
"lm_head.weight": "pytorch_model-00005-of-00005.bin",
|
7 |
+
"model.embed_tokens.weight": "pytorch_model-00001-of-00005.bin",
|
8 |
+
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
9 |
+
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
10 |
+
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
11 |
+
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
12 |
+
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
13 |
+
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
14 |
+
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
15 |
+
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
16 |
+
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
17 |
+
"model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
18 |
+
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
19 |
+
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
20 |
+
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
21 |
+
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
22 |
+
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
23 |
+
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
24 |
+
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
25 |
+
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
26 |
+
"model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
27 |
+
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
28 |
+
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
29 |
+
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
30 |
+
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
31 |
+
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
32 |
+
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
33 |
+
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
34 |
+
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
35 |
+
"model.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
36 |
+
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
37 |
+
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
38 |
+
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
39 |
+
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
40 |
+
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
41 |
+
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
42 |
+
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
43 |
+
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
44 |
+
"model.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
45 |
+
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
46 |
+
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
47 |
+
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
48 |
+
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
49 |
+
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
50 |
+
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
51 |
+
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
52 |
+
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
53 |
+
"model.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
54 |
+
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
55 |
+
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
56 |
+
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
57 |
+
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
58 |
+
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
59 |
+
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
60 |
+
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
61 |
+
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
62 |
+
"model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
63 |
+
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
64 |
+
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
65 |
+
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
66 |
+
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
67 |
+
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
68 |
+
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
69 |
+
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
70 |
+
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
71 |
+
"model.layers.15.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
72 |
+
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
73 |
+
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
74 |
+
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
75 |
+
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
76 |
+
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
77 |
+
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
78 |
+
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
79 |
+
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
80 |
+
"model.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
81 |
+
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
82 |
+
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
83 |
+
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
84 |
+
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
85 |
+
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
86 |
+
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
87 |
+
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
88 |
+
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
89 |
+
"model.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
90 |
+
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
91 |
+
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
92 |
+
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
93 |
+
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
94 |
+
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
95 |
+
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
96 |
+
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
97 |
+
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
98 |
+
"model.layers.18.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
99 |
+
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
100 |
+
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
101 |
+
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
102 |
+
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
103 |
+
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
104 |
+
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
105 |
+
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
106 |
+
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
107 |
+
"model.layers.19.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
108 |
+
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
109 |
+
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
110 |
+
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
111 |
+
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
112 |
+
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
113 |
+
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
114 |
+
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
115 |
+
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
116 |
+
"model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
117 |
+
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
118 |
+
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
119 |
+
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
120 |
+
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
121 |
+
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
122 |
+
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
123 |
+
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
124 |
+
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
125 |
+
"model.layers.20.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
126 |
+
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
127 |
+
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
128 |
+
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
129 |
+
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
130 |
+
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
131 |
+
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
132 |
+
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
133 |
+
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
134 |
+
"model.layers.21.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
135 |
+
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
136 |
+
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
137 |
+
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
138 |
+
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
139 |
+
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
140 |
+
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
141 |
+
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
142 |
+
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
143 |
+
"model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
144 |
+
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
145 |
+
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
146 |
+
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
147 |
+
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
148 |
+
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
149 |
+
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
150 |
+
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
151 |
+
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
152 |
+
"model.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
153 |
+
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
154 |
+
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
155 |
+
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
156 |
+
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
157 |
+
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
158 |
+
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
159 |
+
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
160 |
+
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
161 |
+
"model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
162 |
+
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
163 |
+
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
164 |
+
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
165 |
+
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
166 |
+
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
167 |
+
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
168 |
+
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
169 |
+
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
170 |
+
"model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
171 |
+
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
172 |
+
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
173 |
+
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
174 |
+
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
175 |
+
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
176 |
+
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
177 |
+
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
178 |
+
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
179 |
+
"model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
180 |
+
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
181 |
+
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
182 |
+
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
183 |
+
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
184 |
+
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
185 |
+
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
186 |
+
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
187 |
+
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
188 |
+
"model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
189 |
+
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
190 |
+
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
191 |
+
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
192 |
+
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
193 |
+
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
194 |
+
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
195 |
+
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
196 |
+
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
197 |
+
"model.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
198 |
+
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00002-of-00005.bin",
|
199 |
+
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
200 |
+
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
201 |
+
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00005.bin",
|
202 |
+
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
203 |
+
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
204 |
+
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
205 |
+
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
206 |
+
"model.layers.29.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
207 |
+
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
208 |
+
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00002-of-00005.bin",
|
209 |
+
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00002-of-00005.bin",
|
210 |
+
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
211 |
+
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00005.bin",
|
212 |
+
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00002-of-00005.bin",
|
213 |
+
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00005.bin",
|
214 |
+
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00005.bin",
|
215 |
+
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
216 |
+
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
217 |
+
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
218 |
+
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
219 |
+
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
220 |
+
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
221 |
+
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
222 |
+
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
223 |
+
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
224 |
+
"model.layers.30.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
225 |
+
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
226 |
+
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
227 |
+
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
228 |
+
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
229 |
+
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
230 |
+
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
231 |
+
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
232 |
+
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
233 |
+
"model.layers.31.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
234 |
+
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
235 |
+
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
236 |
+
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
237 |
+
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
238 |
+
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
239 |
+
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
240 |
+
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
241 |
+
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
242 |
+
"model.layers.32.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
243 |
+
"model.layers.32.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
244 |
+
"model.layers.32.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
245 |
+
"model.layers.32.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
246 |
+
"model.layers.32.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
247 |
+
"model.layers.32.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
248 |
+
"model.layers.32.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
249 |
+
"model.layers.32.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
250 |
+
"model.layers.32.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
251 |
+
"model.layers.33.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
252 |
+
"model.layers.33.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
253 |
+
"model.layers.33.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
254 |
+
"model.layers.33.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
255 |
+
"model.layers.33.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
256 |
+
"model.layers.33.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
257 |
+
"model.layers.33.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
258 |
+
"model.layers.33.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
259 |
+
"model.layers.33.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
260 |
+
"model.layers.34.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
261 |
+
"model.layers.34.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
262 |
+
"model.layers.34.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
263 |
+
"model.layers.34.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
264 |
+
"model.layers.34.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
265 |
+
"model.layers.34.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
266 |
+
"model.layers.34.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
267 |
+
"model.layers.34.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
268 |
+
"model.layers.34.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
269 |
+
"model.layers.35.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
270 |
+
"model.layers.35.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
271 |
+
"model.layers.35.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
272 |
+
"model.layers.35.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
273 |
+
"model.layers.35.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
274 |
+
"model.layers.35.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
275 |
+
"model.layers.35.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
276 |
+
"model.layers.35.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
277 |
+
"model.layers.35.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
278 |
+
"model.layers.36.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
279 |
+
"model.layers.36.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
280 |
+
"model.layers.36.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
281 |
+
"model.layers.36.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
282 |
+
"model.layers.36.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
283 |
+
"model.layers.36.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
284 |
+
"model.layers.36.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
285 |
+
"model.layers.36.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
286 |
+
"model.layers.36.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
287 |
+
"model.layers.37.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
288 |
+
"model.layers.37.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
289 |
+
"model.layers.37.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
290 |
+
"model.layers.37.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
291 |
+
"model.layers.37.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
292 |
+
"model.layers.37.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
293 |
+
"model.layers.37.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
294 |
+
"model.layers.37.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
295 |
+
"model.layers.37.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
296 |
+
"model.layers.38.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
297 |
+
"model.layers.38.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
298 |
+
"model.layers.38.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
299 |
+
"model.layers.38.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
300 |
+
"model.layers.38.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
301 |
+
"model.layers.38.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
302 |
+
"model.layers.38.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
303 |
+
"model.layers.38.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
304 |
+
"model.layers.38.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
305 |
+
"model.layers.39.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
306 |
+
"model.layers.39.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
307 |
+
"model.layers.39.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
308 |
+
"model.layers.39.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
309 |
+
"model.layers.39.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
310 |
+
"model.layers.39.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
311 |
+
"model.layers.39.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
312 |
+
"model.layers.39.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
313 |
+
"model.layers.39.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
314 |
+
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
315 |
+
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
316 |
+
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
317 |
+
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
318 |
+
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
319 |
+
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
320 |
+
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
321 |
+
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
322 |
+
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
323 |
+
"model.layers.40.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
324 |
+
"model.layers.40.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
325 |
+
"model.layers.40.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
326 |
+
"model.layers.40.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
327 |
+
"model.layers.40.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
328 |
+
"model.layers.40.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
329 |
+
"model.layers.40.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
330 |
+
"model.layers.40.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
331 |
+
"model.layers.40.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
332 |
+
"model.layers.41.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
333 |
+
"model.layers.41.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
334 |
+
"model.layers.41.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
335 |
+
"model.layers.41.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
336 |
+
"model.layers.41.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
337 |
+
"model.layers.41.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
338 |
+
"model.layers.41.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
339 |
+
"model.layers.41.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
340 |
+
"model.layers.41.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
341 |
+
"model.layers.42.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
342 |
+
"model.layers.42.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
343 |
+
"model.layers.42.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
344 |
+
"model.layers.42.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
345 |
+
"model.layers.42.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
346 |
+
"model.layers.42.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
347 |
+
"model.layers.42.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
348 |
+
"model.layers.42.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
349 |
+
"model.layers.42.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
350 |
+
"model.layers.43.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
351 |
+
"model.layers.43.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
352 |
+
"model.layers.43.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
353 |
+
"model.layers.43.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
354 |
+
"model.layers.43.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
355 |
+
"model.layers.43.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
356 |
+
"model.layers.43.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
357 |
+
"model.layers.43.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
358 |
+
"model.layers.43.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
359 |
+
"model.layers.44.input_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
360 |
+
"model.layers.44.mlp.down_proj.weight": "pytorch_model-00003-of-00005.bin",
|
361 |
+
"model.layers.44.mlp.gate_proj.weight": "pytorch_model-00003-of-00005.bin",
|
362 |
+
"model.layers.44.mlp.up_proj.weight": "pytorch_model-00003-of-00005.bin",
|
363 |
+
"model.layers.44.post_attention_layernorm.weight": "pytorch_model-00003-of-00005.bin",
|
364 |
+
"model.layers.44.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
365 |
+
"model.layers.44.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
366 |
+
"model.layers.44.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
367 |
+
"model.layers.44.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
368 |
+
"model.layers.45.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
369 |
+
"model.layers.45.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
370 |
+
"model.layers.45.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
371 |
+
"model.layers.45.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
372 |
+
"model.layers.45.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
373 |
+
"model.layers.45.self_attn.k_proj.weight": "pytorch_model-00003-of-00005.bin",
|
374 |
+
"model.layers.45.self_attn.o_proj.weight": "pytorch_model-00003-of-00005.bin",
|
375 |
+
"model.layers.45.self_attn.q_proj.weight": "pytorch_model-00003-of-00005.bin",
|
376 |
+
"model.layers.45.self_attn.v_proj.weight": "pytorch_model-00003-of-00005.bin",
|
377 |
+
"model.layers.46.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
378 |
+
"model.layers.46.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
379 |
+
"model.layers.46.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
380 |
+
"model.layers.46.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
381 |
+
"model.layers.46.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
382 |
+
"model.layers.46.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
383 |
+
"model.layers.46.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
384 |
+
"model.layers.46.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
385 |
+
"model.layers.46.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
386 |
+
"model.layers.47.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
387 |
+
"model.layers.47.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
388 |
+
"model.layers.47.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
389 |
+
"model.layers.47.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
390 |
+
"model.layers.47.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
391 |
+
"model.layers.47.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
392 |
+
"model.layers.47.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
393 |
+
"model.layers.47.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
394 |
+
"model.layers.47.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
395 |
+
"model.layers.48.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
396 |
+
"model.layers.48.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
397 |
+
"model.layers.48.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
398 |
+
"model.layers.48.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
399 |
+
"model.layers.48.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
400 |
+
"model.layers.48.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
401 |
+
"model.layers.48.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
402 |
+
"model.layers.48.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
403 |
+
"model.layers.48.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
404 |
+
"model.layers.49.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
405 |
+
"model.layers.49.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
406 |
+
"model.layers.49.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
407 |
+
"model.layers.49.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
408 |
+
"model.layers.49.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
409 |
+
"model.layers.49.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
410 |
+
"model.layers.49.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
411 |
+
"model.layers.49.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
412 |
+
"model.layers.49.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
413 |
+
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
414 |
+
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
415 |
+
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
416 |
+
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
417 |
+
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
418 |
+
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
419 |
+
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
420 |
+
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
421 |
+
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
422 |
+
"model.layers.50.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
423 |
+
"model.layers.50.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
424 |
+
"model.layers.50.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
425 |
+
"model.layers.50.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
426 |
+
"model.layers.50.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
427 |
+
"model.layers.50.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
428 |
+
"model.layers.50.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
429 |
+
"model.layers.50.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
430 |
+
"model.layers.50.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
431 |
+
"model.layers.51.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
432 |
+
"model.layers.51.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
433 |
+
"model.layers.51.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
434 |
+
"model.layers.51.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
435 |
+
"model.layers.51.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
436 |
+
"model.layers.51.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
437 |
+
"model.layers.51.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
438 |
+
"model.layers.51.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
439 |
+
"model.layers.51.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
440 |
+
"model.layers.52.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
441 |
+
"model.layers.52.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
442 |
+
"model.layers.52.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
443 |
+
"model.layers.52.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
444 |
+
"model.layers.52.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
445 |
+
"model.layers.52.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
446 |
+
"model.layers.52.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
447 |
+
"model.layers.52.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
448 |
+
"model.layers.52.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
449 |
+
"model.layers.53.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
450 |
+
"model.layers.53.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
451 |
+
"model.layers.53.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
452 |
+
"model.layers.53.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
453 |
+
"model.layers.53.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
454 |
+
"model.layers.53.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
455 |
+
"model.layers.53.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
456 |
+
"model.layers.53.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
457 |
+
"model.layers.53.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
458 |
+
"model.layers.54.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
459 |
+
"model.layers.54.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
460 |
+
"model.layers.54.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
461 |
+
"model.layers.54.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
462 |
+
"model.layers.54.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
463 |
+
"model.layers.54.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
464 |
+
"model.layers.54.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
465 |
+
"model.layers.54.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
466 |
+
"model.layers.54.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
467 |
+
"model.layers.55.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
468 |
+
"model.layers.55.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
469 |
+
"model.layers.55.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
470 |
+
"model.layers.55.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
471 |
+
"model.layers.55.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
472 |
+
"model.layers.55.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
473 |
+
"model.layers.55.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
474 |
+
"model.layers.55.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
475 |
+
"model.layers.55.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
476 |
+
"model.layers.56.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
477 |
+
"model.layers.56.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
478 |
+
"model.layers.56.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
479 |
+
"model.layers.56.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
480 |
+
"model.layers.56.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
481 |
+
"model.layers.56.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
482 |
+
"model.layers.56.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
483 |
+
"model.layers.56.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
484 |
+
"model.layers.56.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
485 |
+
"model.layers.57.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
486 |
+
"model.layers.57.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
487 |
+
"model.layers.57.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
488 |
+
"model.layers.57.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
489 |
+
"model.layers.57.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
490 |
+
"model.layers.57.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
491 |
+
"model.layers.57.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
492 |
+
"model.layers.57.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
493 |
+
"model.layers.57.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
494 |
+
"model.layers.58.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
495 |
+
"model.layers.58.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
496 |
+
"model.layers.58.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
497 |
+
"model.layers.58.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
498 |
+
"model.layers.58.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
499 |
+
"model.layers.58.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
500 |
+
"model.layers.58.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
501 |
+
"model.layers.58.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
502 |
+
"model.layers.58.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
503 |
+
"model.layers.59.input_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
504 |
+
"model.layers.59.mlp.down_proj.weight": "pytorch_model-00004-of-00005.bin",
|
505 |
+
"model.layers.59.mlp.gate_proj.weight": "pytorch_model-00004-of-00005.bin",
|
506 |
+
"model.layers.59.mlp.up_proj.weight": "pytorch_model-00004-of-00005.bin",
|
507 |
+
"model.layers.59.post_attention_layernorm.weight": "pytorch_model-00004-of-00005.bin",
|
508 |
+
"model.layers.59.self_attn.k_proj.weight": "pytorch_model-00004-of-00005.bin",
|
509 |
+
"model.layers.59.self_attn.o_proj.weight": "pytorch_model-00004-of-00005.bin",
|
510 |
+
"model.layers.59.self_attn.q_proj.weight": "pytorch_model-00004-of-00005.bin",
|
511 |
+
"model.layers.59.self_attn.v_proj.weight": "pytorch_model-00004-of-00005.bin",
|
512 |
+
"model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
513 |
+
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
514 |
+
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
515 |
+
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
516 |
+
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
517 |
+
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
518 |
+
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
519 |
+
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
520 |
+
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
521 |
+
"model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
522 |
+
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
523 |
+
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
524 |
+
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
525 |
+
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
526 |
+
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
527 |
+
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
528 |
+
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
529 |
+
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
530 |
+
"model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
531 |
+
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
532 |
+
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
533 |
+
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
534 |
+
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
535 |
+
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
536 |
+
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
537 |
+
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
538 |
+
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
539 |
+
"model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
540 |
+
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00005.bin",
|
541 |
+
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00005.bin",
|
542 |
+
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00005.bin",
|
543 |
+
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00005.bin",
|
544 |
+
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00005.bin",
|
545 |
+
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00005.bin",
|
546 |
+
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00005.bin",
|
547 |
+
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00005.bin",
|
548 |
+
"model.norm.weight": "pytorch_model-00004-of-00005.bin"
|
549 |
+
}
|
550 |
+
}
|
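The weight map above is the standard Hugging Face sharded-checkpoint index: each parameter name points at the shard file that stores it. As a minimal illustrative sketch (not part of this commit), the index can be read directly to locate and load a single tensor; the relative paths assume a local checkout of this repository:

```python
import json

import torch

# Sketch: resolve which shard holds a given weight and load only that shard.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

name = "model.layers.45.self_attn.k_proj.weight"
shard_file = index["weight_map"][name]            # e.g. "pytorch_model-00003-of-00005.bin"
state_dict = torch.load(shard_file, map_location="cpu")
print(name, tuple(state_dict[name].shape))
```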
special_tokens_map.json
ADDED
@@ -0,0 +1,6 @@
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "pad_token": "</s>",
  "unk_token": "<unk>"
}
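Note that `pad_token` is mapped to the same string as `eos_token` (`</s>`), which matters when padding batches for generation. A small check, illustrative only (the hosted repo id `internlm/internlm-20b` is an assumption):

```python
from transformers import AutoTokenizer

# Illustrative: padding id coincides with the end-of-sequence id for this tokenizer.
tok = AutoTokenizer.from_pretrained("internlm/internlm-20b", trust_remote_code=True)
assert tok.pad_token == tok.eos_token == "</s>"
assert tok.pad_token_id == tok.eos_token_id
```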
tokenization_internlm.py
ADDED
@@ -0,0 +1,242 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Tokenization classes for InternLM."""
import os
from shutil import copyfile
from typing import Any, Dict, List, Optional, Tuple

import sentencepiece as spm

from transformers.tokenization_utils import PreTrainedTokenizer
from transformers.utils import logging


logger = logging.get_logger(__name__)

VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}

PRETRAINED_VOCAB_FILES_MAP = {}


class InternLMTokenizer(PreTrainedTokenizer):
    """
    Construct an InternLM tokenizer. Based on byte-level Byte-Pair-Encoding.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
    """

    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    model_input_names = ["input_ids", "attention_mask"]
    _auto_class = "AutoTokenizer"

    def __init__(
        self,
        vocab_file,
        unk_token="<unk>",
        bos_token="<s>",
        eos_token="</s>",
        pad_token="</s>",
        sp_model_kwargs: Optional[Dict[str, Any]] = None,
        add_bos_token=True,
        add_eos_token=False,
        decode_with_prefix_space=False,
        clean_up_tokenization_spaces=False,
        **kwargs,
    ):
        self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
        self.vocab_file = vocab_file
        self.add_bos_token = add_bos_token
        self.add_eos_token = add_eos_token
        self.decode_with_prefix_space = decode_with_prefix_space
        # Load the SentencePiece model that defines the vocabulary.
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)
        self._no_prefix_space_tokens = None
        super().__init__(
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            pad_token=pad_token,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            **kwargs,
        )

        """ Initialization"""

    @property
    def no_prefix_space_tokens(self):
        # Lazily cache the tokens that do not start with the SentencePiece word-boundary marker.
        if self._no_prefix_space_tokens is None:
            vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
            self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
        return self._no_prefix_space_tokens

    @property
    def vocab_size(self):
        """Returns vocab size"""
        return self.sp_model.get_piece_size()

    @property
    def bos_token_id(self) -> Optional[int]:
        return self.sp_model.bos_id()

    @property
    def eos_token_id(self) -> Optional[int]:
        return self.sp_model.eos_id()

    def get_vocab(self):
        """Returns vocab as a dict"""
        vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
        vocab.update(self.added_tokens_encoder)
        return vocab

    def _tokenize(self, text):
        """Returns a tokenized string."""
        return self.sp_model.encode(text, out_type=str)

    def _convert_token_to_id(self, token):
        """Converts a token (str) to an id using the vocab."""
        return self.sp_model.piece_to_id(token)

    def _convert_id_to_token(self, index):
        """Converts an index (integer) to a token (str) using the vocab."""
        token = self.sp_model.IdToPiece(index)
        return token

    def _maybe_add_prefix_space(self, tokens, decoded):
        if tokens and tokens[0] not in self.no_prefix_space_tokens:
            return " " + decoded
        else:
            return decoded

    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (string) into a single string."""
        current_sub_tokens = []
        out_string = ""
        prev_is_special = False
        for token in tokens:
            # make sure that special tokens are not decoded using sentencepiece model
            if token in self.all_special_tokens:
                if not prev_is_special:
                    out_string += " "
                out_string += self.sp_model.decode(current_sub_tokens) + token
                prev_is_special = True
                current_sub_tokens = []
            else:
                current_sub_tokens.append(token)
                prev_is_special = False
        out_string += self.sp_model.decode(current_sub_tokens)
        out_string = self.clean_up_tokenization(out_string)
        out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
        return out_string[1:]

    def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Save the vocabulary and special tokens file to a directory.

        Args:
            save_directory (`str`):
                The directory in which to save the vocabulary.

        Returns:
            `Tuple(str)`: Paths to the files saved.
        """
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        out_vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )

        if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
            copyfile(self.vocab_file, out_vocab_file)
        elif not os.path.isfile(self.vocab_file):
            with open(out_vocab_file, "wb") as fi:
                content_spiece_model = self.sp_model.serialized_model_proto()
                fi.write(content_spiece_model)

        return (out_vocab_file,)

    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
        if self.add_bos_token:
            bos_token_ids = [self.bos_token_id]
        else:
            bos_token_ids = []

        output = bos_token_ids + token_ids_0

        if token_ids_1 is not None:
            output = output + token_ids_1

        if self.add_eos_token:
            output = output + [self.eos_token_id]

        return output

    def get_special_tokens_mask(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
    ) -> List[int]:
        """
        Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
        special tokens using the tokenizer `prepare_for_model` method.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.
            already_has_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not the token list is already formatted with special tokens for the model.

        Returns:
            `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
        """
        if already_has_special_tokens:
            return super().get_special_tokens_mask(
                token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
            )

        if token_ids_1 is None:
            return [1] + ([0] * len(token_ids_0)) + [1]
        return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]

    def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Create a mask from the two sequences passed to be used in a sequence-pair classification task. InternLM does
        not make use of token type ids, therefore a list of zeros is returned.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of zeros.
        """
        eos = [self.eos_token_id]

        if token_ids_1 is None:
            return len(token_ids_0 + eos) * [0]
        return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
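With the defaults `add_bos_token=True` and `add_eos_token=False`, single sequences are prefixed with `<s>` and left without a trailing `</s>`. A short usage sketch (assuming the repository files, including the real `tokenizer.model` rather than its LFS pointer, sit in the current working directory, and that `sentencepiece` is installed):

```python
# Sketch: instantiate the tokenizer class defined above from the local vocab file.
from tokenization_internlm import InternLMTokenizer

tokenizer = InternLMTokenizer(vocab_file="./tokenizer.model")

ids = tokenizer("Hello, InternLM!")["input_ids"]
print(ids[0] == tokenizer.bos_token_id)                  # True: <s> is prepended by default
print(tokenizer.decode(ids, skip_special_tokens=True))   # round-trips the input text
```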
tokenizer.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aab622d98c98677a1a51f969e25765154487bf3e85c7819db105db2fcacba83f
size 1658691
tokenizer_config.json
ADDED
@@ -0,0 +1,15 @@
{
  "auto_map": {
    "AutoTokenizer": [
      "tokenization_internlm.InternLMTokenizer",
      null
    ]
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "</s>",
  "tokenizer_class": "InternLMTokenizer",
  "unk_token": "<unk>"
}
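Because `tokenizer_config.json` registers `InternLMTokenizer` through `auto_map`, the whole stack can be loaded with the `Auto*` classes and `trust_remote_code=True`. A final end-to-end sketch; the repo id `internlm/internlm-20b` is an assumption, and hardware able to hold a 20B model (with `accelerate` installed for `device_map="auto"`) is presumed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative end-to-end load of the tokenizer and sharded weights in this commit.
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-20b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm-20b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("A beautiful flower", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```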