Ubuntu committed on
Commit
aeb03e3
1 Parent(s): c7f4438

first commit

LICENSE ADDED
@@ -0,0 +1,84 @@
+ The glm-4-9b License
+
+ 1. Definitions
+
+ "Licensor" means the glm-4-9b Model Team that distributes its Software.
+ "Software" means the glm-4-9b model parameters made available under this license.
+
+ 2. License Grant
+
+ Subject to the terms and conditions of this License, the Licensor hereby grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty-free copyright license.
+ This license allows you to use all open-source models in this repository free of charge for academic research. Users who wish to use the models for commercial purposes must complete registration [here](https://open.bigmodel.cn/mla/form). Registered users may use this model for commercial activities free of charge, but must comply with all terms and conditions of this license.
+ The above copyright notice and this license notice shall be included in all copies or substantial portions of the Software.
+ If you distribute or provide THUDM / Zhipu AI materials on the glm-4 open-source models (or any derivative works thereof), or products or services that use any of those materials (including all open-source models of the glm-4 series), you shall:
+
+ (A) provide a copy of this Agreement with any such THUDM / Zhipu AI materials;
+ (B) prominently display "Built with glm-4" on the relevant website, user interface, blog post, about page, or product documentation.
+ If you use materials from THUDM / Zhipu AI's glm-4 open-source models to create, train, fine-tune, or otherwise improve an AI model that is distributed or made available, you shall also add "glm-4" at the beginning of the name of any such AI model.
+
+ 3. Restrictions
+
+ You shall not use, copy, modify, merge, publish, distribute, reproduce, or create derivative works of all or part of the Software for any military or unlawful purpose.
+ You shall not use the Software to engage in any conduct that endangers national security or national unity, harms the public interest or public order and good morals, or infringes on the rights and interests of others, such as trade secrets, intellectual property, reputation, portrait rights, or property rights.
+ When using the Software, you shall comply with the applicable laws, regulations, policies, and ethical standards of the place of use.
+
+ 4. Disclaimer
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+ WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+ COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+ 5. Limitation of Liability
+
+ EXCEPT TO THE EXTENT PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER BASED IN TORT,
+ NEGLIGENCE, CONTRACT, LIABILITY, OR OTHERWISE, WILL ANY LICENSOR BE LIABLE TO YOU FOR ANY DIRECT, INDIRECT, SPECIAL,
+ INCIDENTAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES, OR ANY OTHER COMMERCIAL LOSSES, EVEN IF THE LICENSOR HAS BEEN ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGES.
+
+ 6. Dispute Resolution
+
+ This license shall be governed by and construed in accordance with the laws of the People's Republic of China. Any dispute
+ arising from or in connection with this License shall be submitted to the Haidian District People's Court in Beijing.
+
+ Note that the license may be updated to a more comprehensive version. For any questions related to the license and
+ copyright, please contact us at license@zhipuai.cn.
README.md ADDED
@@ -0,0 +1,64 @@
+ ---
+ license: other
+ license_name: glm-4
+ license_link: ./LICENSE
+
+ language:
+ - zh
+ - en
+ tags:
+ - glm
+ - chatglm
+ - thudm
+ inference: false
+ ---
+
+ # glm-4-9b
+
+ GLM-4-9B is the open-source model in the GLM-4 series, the latest generation of pre-trained models released by Zhipu AI.
+ On benchmark datasets covering semantics, mathematics, reasoning, code, and knowledge, GLM-4-9B and its human-preference-aligned version GLM-4-9B-Chat both show strong performance.
+ Beyond multi-turn dialogue, GLM-4-9B-Chat also supports advanced features such as web browsing, code execution, custom tool calling (Function Call), and long-context reasoning (up to 128K context).
+ This generation adds multilingual support for 26 languages, including Japanese, Korean, and German. We have also released a model that supports a 1M context length (about 2 million Chinese characters).
+
+ We evaluated the GLM-4-9B base model on several typical tasks; the results are as follows:
+
+ | Model | MMLU | C-Eval | GPQA | GSM8K | MATH | HumanEval |
+ |:--------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:---------:|
+ | Llama-3-8B | 66.6 | 51.2 | - | 45.8 | - | - |
+ | Llama-3-8B-Instruct | 68.4 | 51.3 | 34.2 | 79.6 | 30.0 | 62.2 |
+ | ChatGLM3-6B-Base | 61.4 | 69.0 | - | 72.3 | 25.7 | - |
+ | GLM-4-9B | **74.7** | **77.1** | **34.3** | **84.0** | **30.4** | **70.1** |
+
+
+ **This repository contains the base version of GLM-4-9B, which supports an `8K` context length.**
+
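## Quick start

The snippet below is an editor-added sketch rather than part of the original card. It assumes the standard `transformers` Auto-class workflow with `trust_remote_code=True`, which the `auto_map` entries in this repo's `config.json` route to the bundled `modeling_chatglm.py`.

```python
# Minimal, illustrative loading sketch for the 8K-context base model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "THUDM/glm-4-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # the shards are stored in bfloat16
    trust_remote_code=True,
).eval()

# This is the base (non-chat) model, so prompt it for plain continuation
# instead of applying a chat template.
inputs = tokenizer("The GLM architecture is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```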
+ ## License
+
+ The use of the GLM-4 model weights must comply with the [LICENSE](LICENSE).
+
+ ## Citation
+
+ If you find our work helpful, please consider citing the following papers.
+
+ ```
+ @article{zeng2022glm,
+ title={Glm-130b: An open bilingual pre-trained model},
+ author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
+ journal={arXiv preprint arXiv:2210.02414},
+ year={2022}
+ }
+ ```
+
+ ```
+ @inproceedings{du2022glm,
+ title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
+ author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
+ booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
+ pages={320--335},
+ year={2022}
+ }
+ ```
+
config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "_name_or_path": "THUDM/glm-4-9b",
+   "model_type": "chatglm",
+   "architectures": [
+     "ChatGLMModel"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_chatglm.ChatGLMConfig",
+     "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration",
+     "AutoModelForCausalLM": "modeling_chatglm.ChatGLMForConditionalGeneration",
+     "AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration",
+     "AutoModelForSequenceClassification": "modeling_chatglm.ChatGLMForSequenceClassification"
+   },
+   "add_bias_linear": false,
+   "add_qkv_bias": true,
+   "apply_query_key_layer_scaling": true,
+   "apply_residual_connection_post_layernorm": false,
+   "attention_dropout": 0.0,
+   "attention_softmax_in_fp32": true,
+   "bias_dropout_fusion": true,
+   "ffn_hidden_size": 13696,
+   "fp32_residual_connection": false,
+   "hidden_dropout": 0.0,
+   "hidden_size": 4096,
+   "kv_channels": 128,
+   "layernorm_epsilon": 0.00000015625,
+   "multi_query_attention": true,
+   "multi_query_group_num": 2,
+   "num_attention_heads": 32,
+   "num_layers": 40,
+   "original_rope": true,
+   "padded_vocab_size": 151552,
+   "post_layer_norm": true,
+   "rmsnorm": true,
+   "seq_length": 8192,
+   "use_cache": true,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.30.2",
+   "tie_word_embeddings": false,
+   "eos_token_id": [151329, 151336, 151338],
+   "pad_token_id": 151329
+ }
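As a hedged illustration (not part of the repository), the grouped-query attention layout implied by these values can be read off programmatically: `multi_query_attention: true` with `multi_query_group_num: 2` means the 32 query heads share 2 key/value heads, each of width `kv_channels`.

```python
# Illustrative sketch: derive attention shapes from the published config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("THUDM/glm-4-9b", trust_remote_code=True)

head_dim = config.kv_channels                     # 128
q_heads = config.num_attention_heads              # 32
kv_heads = config.multi_query_group_num           # 2
qkv_width = head_dim * (q_heads + 2 * kv_heads)   # 4608, width of the fused QKV projection

print(f"{config.num_layers} layers, hidden {config.hidden_size}, "
      f"{q_heads} Q heads / {kv_heads} KV heads, head dim {head_dim}, "
      f"fused QKV output width {qkv_width}")
```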
configuration_chatglm.py ADDED
@@ -0,0 +1,58 @@
+ from transformers import PretrainedConfig
+
+
+ class ChatGLMConfig(PretrainedConfig):
+     model_type = "chatglm"
+
+     def __init__(
+         self,
+         num_layers=28,
+         padded_vocab_size=65024,
+         hidden_size=4096,
+         ffn_hidden_size=13696,
+         kv_channels=128,
+         num_attention_heads=32,
+         seq_length=2048,
+         hidden_dropout=0.0,
+         classifier_dropout=None,
+         attention_dropout=0.0,
+         layernorm_epsilon=1e-5,
+         rmsnorm=True,
+         apply_residual_connection_post_layernorm=False,
+         post_layer_norm=True,
+         add_bias_linear=False,
+         add_qkv_bias=False,
+         bias_dropout_fusion=True,
+         multi_query_attention=False,
+         multi_query_group_num=1,
+         rope_ratio=1,
+         apply_query_key_layer_scaling=True,
+         attention_softmax_in_fp32=True,
+         fp32_residual_connection=False,
+         **kwargs
+     ):
+         self.num_layers = num_layers
+         self.vocab_size = padded_vocab_size
+         self.padded_vocab_size = padded_vocab_size
+         self.hidden_size = hidden_size
+         self.ffn_hidden_size = ffn_hidden_size
+         self.kv_channels = kv_channels
+         self.num_attention_heads = num_attention_heads
+         self.seq_length = seq_length
+         self.hidden_dropout = hidden_dropout
+         self.classifier_dropout = classifier_dropout
+         self.attention_dropout = attention_dropout
+         self.layernorm_epsilon = layernorm_epsilon
+         self.rmsnorm = rmsnorm
+         self.apply_residual_connection_post_layernorm = apply_residual_connection_post_layernorm
+         self.post_layer_norm = post_layer_norm
+         self.add_bias_linear = add_bias_linear
+         self.add_qkv_bias = add_qkv_bias
+         self.bias_dropout_fusion = bias_dropout_fusion
+         self.multi_query_attention = multi_query_attention
+         self.multi_query_group_num = multi_query_group_num
+         self.rope_ratio = rope_ratio
+         self.apply_query_key_layer_scaling = apply_query_key_layer_scaling
+         self.attention_softmax_in_fp32 = attention_softmax_in_fp32
+         self.fp32_residual_connection = fp32_residual_connection
+         super().__init__(**kwargs)
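A hedged usage sketch (editor-added, not part of the file): the defaults above still describe the earlier 28-layer ChatGLM-style setup, and the repo's `config.json` overrides them for GLM-4-9B. Building the same configuration by hand would look roughly like this, assuming the module has been downloaded and is importable from the working directory.

```python
# Hypothetical local import path; in practice the config is usually created
# via AutoConfig.from_pretrained(..., trust_remote_code=True) instead.
from configuration_chatglm import ChatGLMConfig

glm4_9b = ChatGLMConfig(
    num_layers=40,
    padded_vocab_size=151552,
    seq_length=8192,
    add_qkv_bias=True,
    multi_query_attention=True,
    multi_query_group_num=2,
    layernorm_epsilon=1.5625e-07,
)
print(glm4_9b.num_layers, glm4_9b.vocab_size)   # 40 151552
```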
model-00001-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7f546f3719e2b3db3243dc2efe2962b06415271b8e54bc1b32e6c01c70ee6005
3
+ size 1945161760
model-00002-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:70c890a1210e47e291b6ae99e76032c9848c5bcbd566c29e788e41ea89cf550a
3
+ size 1815217640
model-00003-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:24f8f9dc017b1a03a4ef6aeec68fa857065ea036ee4e534c5040d1835ac07a66
3
+ size 1968291912
model-00004-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fb538847b54eef09793b376342dafa17de437524a33ab99b8f0b9eeb5147dfdd
3
+ size 1927406992
model-00005-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b09464aa91cbb9bbf2cfd38c21538999e9c5fafc6a64a526621b619a6cf3122b
3
+ size 1815217672
model-00006-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b8fb6a9786f4a6ac557ebcd9c7244d9e5865ef796c4d158b669bc91a63c450d
3
+ size 1968291952
model-00007-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:25c66dee95a67aa4e8ae6bee424431947718920e24f1760fe364615590634634
3
+ size 1927406992
model-00008-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7e49c65fd473d8ec9a92882d182a05caaff852ddb52a81037b0d441528565ba4
3
+ size 1815217672
model-00009-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d37f19bd367ab6243c5a77c8b1944597b5d53805598e833201d34d6ba4fca558
3
+ size 1968291952
model-00010-of-00010.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:71f9de8ae7a4df6c37f5d8fa8cb9209b3e8973c1bc662898f90b0b95d2338a64
3
+ size 1649436712
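The three-line entries above are Git LFS pointer files, not the weights themselves; each records only the SHA-256 and byte size of a roughly 1.6-2.0 GB shard. As a hedged, editor-added sketch, a shard downloaded separately (e.g. via `git lfs pull` or `huggingface_hub`) can be checked against its pointer like this:

```python
# Verify a downloaded shard against the oid/size recorded in its LFS pointer.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

shard = Path("model-00010-of-00010.safetensors")
assert shard.stat().st_size == 1649436712   # `size` line of the pointer above
assert sha256_of(shard) == "71f9de8ae7a4df6c37f5d8fa8cb9209b3e8973c1bc662898f90b0b95d2338a64"
```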
model.safetensors.index.json ADDED
@@ -0,0 +1,291 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 18799902784
4
+ },
5
+ "weight_map": {
6
+ "transformer.embedding.word_embeddings.weight": "model-00001-of-00010.safetensors",
7
+ "transformer.encoder.final_layernorm.weight": "model-00010-of-00010.safetensors",
8
+ "transformer.encoder.layers.0.input_layernorm.weight": "model-00001-of-00010.safetensors",
9
+ "transformer.encoder.layers.0.mlp.dense_4h_to_h.weight": "model-00001-of-00010.safetensors",
10
+ "transformer.encoder.layers.0.mlp.dense_h_to_4h.weight": "model-00001-of-00010.safetensors",
11
+ "transformer.encoder.layers.0.post_attention_layernorm.weight": "model-00001-of-00010.safetensors",
12
+ "transformer.encoder.layers.0.self_attention.dense.weight": "model-00001-of-00010.safetensors",
13
+ "transformer.encoder.layers.0.self_attention.query_key_value.bias": "model-00001-of-00010.safetensors",
14
+ "transformer.encoder.layers.0.self_attention.query_key_value.weight": "model-00001-of-00010.safetensors",
15
+ "transformer.encoder.layers.1.input_layernorm.weight": "model-00001-of-00010.safetensors",
16
+ "transformer.encoder.layers.1.mlp.dense_4h_to_h.weight": "model-00002-of-00010.safetensors",
17
+ "transformer.encoder.layers.1.mlp.dense_h_to_4h.weight": "model-00001-of-00010.safetensors",
18
+ "transformer.encoder.layers.1.post_attention_layernorm.weight": "model-00001-of-00010.safetensors",
19
+ "transformer.encoder.layers.1.self_attention.dense.weight": "model-00001-of-00010.safetensors",
20
+ "transformer.encoder.layers.1.self_attention.query_key_value.bias": "model-00001-of-00010.safetensors",
21
+ "transformer.encoder.layers.1.self_attention.query_key_value.weight": "model-00001-of-00010.safetensors",
22
+ "transformer.encoder.layers.10.input_layernorm.weight": "model-00003-of-00010.safetensors",
23
+ "transformer.encoder.layers.10.mlp.dense_4h_to_h.weight": "model-00003-of-00010.safetensors",
24
+ "transformer.encoder.layers.10.mlp.dense_h_to_4h.weight": "model-00003-of-00010.safetensors",
25
+ "transformer.encoder.layers.10.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
26
+ "transformer.encoder.layers.10.self_attention.dense.weight": "model-00003-of-00010.safetensors",
27
+ "transformer.encoder.layers.10.self_attention.query_key_value.bias": "model-00003-of-00010.safetensors",
28
+ "transformer.encoder.layers.10.self_attention.query_key_value.weight": "model-00003-of-00010.safetensors",
29
+ "transformer.encoder.layers.11.input_layernorm.weight": "model-00003-of-00010.safetensors",
30
+ "transformer.encoder.layers.11.mlp.dense_4h_to_h.weight": "model-00004-of-00010.safetensors",
31
+ "transformer.encoder.layers.11.mlp.dense_h_to_4h.weight": "model-00004-of-00010.safetensors",
32
+ "transformer.encoder.layers.11.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
33
+ "transformer.encoder.layers.11.self_attention.dense.weight": "model-00004-of-00010.safetensors",
34
+ "transformer.encoder.layers.11.self_attention.query_key_value.bias": "model-00004-of-00010.safetensors",
35
+ "transformer.encoder.layers.11.self_attention.query_key_value.weight": "model-00004-of-00010.safetensors",
36
+ "transformer.encoder.layers.12.input_layernorm.weight": "model-00004-of-00010.safetensors",
37
+ "transformer.encoder.layers.12.mlp.dense_4h_to_h.weight": "model-00004-of-00010.safetensors",
38
+ "transformer.encoder.layers.12.mlp.dense_h_to_4h.weight": "model-00004-of-00010.safetensors",
39
+ "transformer.encoder.layers.12.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
40
+ "transformer.encoder.layers.12.self_attention.dense.weight": "model-00004-of-00010.safetensors",
41
+ "transformer.encoder.layers.12.self_attention.query_key_value.bias": "model-00004-of-00010.safetensors",
42
+ "transformer.encoder.layers.12.self_attention.query_key_value.weight": "model-00004-of-00010.safetensors",
43
+ "transformer.encoder.layers.13.input_layernorm.weight": "model-00004-of-00010.safetensors",
44
+ "transformer.encoder.layers.13.mlp.dense_4h_to_h.weight": "model-00004-of-00010.safetensors",
45
+ "transformer.encoder.layers.13.mlp.dense_h_to_4h.weight": "model-00004-of-00010.safetensors",
46
+ "transformer.encoder.layers.13.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
47
+ "transformer.encoder.layers.13.self_attention.dense.weight": "model-00004-of-00010.safetensors",
48
+ "transformer.encoder.layers.13.self_attention.query_key_value.bias": "model-00004-of-00010.safetensors",
49
+ "transformer.encoder.layers.13.self_attention.query_key_value.weight": "model-00004-of-00010.safetensors",
50
+ "transformer.encoder.layers.14.input_layernorm.weight": "model-00004-of-00010.safetensors",
51
+ "transformer.encoder.layers.14.mlp.dense_4h_to_h.weight": "model-00004-of-00010.safetensors",
52
+ "transformer.encoder.layers.14.mlp.dense_h_to_4h.weight": "model-00004-of-00010.safetensors",
53
+ "transformer.encoder.layers.14.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
54
+ "transformer.encoder.layers.14.self_attention.dense.weight": "model-00004-of-00010.safetensors",
55
+ "transformer.encoder.layers.14.self_attention.query_key_value.bias": "model-00004-of-00010.safetensors",
56
+ "transformer.encoder.layers.14.self_attention.query_key_value.weight": "model-00004-of-00010.safetensors",
57
+ "transformer.encoder.layers.15.input_layernorm.weight": "model-00004-of-00010.safetensors",
58
+ "transformer.encoder.layers.15.mlp.dense_4h_to_h.weight": "model-00005-of-00010.safetensors",
59
+ "transformer.encoder.layers.15.mlp.dense_h_to_4h.weight": "model-00004-of-00010.safetensors",
60
+ "transformer.encoder.layers.15.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
61
+ "transformer.encoder.layers.15.self_attention.dense.weight": "model-00004-of-00010.safetensors",
62
+ "transformer.encoder.layers.15.self_attention.query_key_value.bias": "model-00004-of-00010.safetensors",
63
+ "transformer.encoder.layers.15.self_attention.query_key_value.weight": "model-00004-of-00010.safetensors",
64
+ "transformer.encoder.layers.16.input_layernorm.weight": "model-00005-of-00010.safetensors",
65
+ "transformer.encoder.layers.16.mlp.dense_4h_to_h.weight": "model-00005-of-00010.safetensors",
66
+ "transformer.encoder.layers.16.mlp.dense_h_to_4h.weight": "model-00005-of-00010.safetensors",
67
+ "transformer.encoder.layers.16.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
68
+ "transformer.encoder.layers.16.self_attention.dense.weight": "model-00005-of-00010.safetensors",
69
+ "transformer.encoder.layers.16.self_attention.query_key_value.bias": "model-00005-of-00010.safetensors",
70
+ "transformer.encoder.layers.16.self_attention.query_key_value.weight": "model-00005-of-00010.safetensors",
71
+ "transformer.encoder.layers.17.input_layernorm.weight": "model-00005-of-00010.safetensors",
72
+ "transformer.encoder.layers.17.mlp.dense_4h_to_h.weight": "model-00005-of-00010.safetensors",
73
+ "transformer.encoder.layers.17.mlp.dense_h_to_4h.weight": "model-00005-of-00010.safetensors",
74
+ "transformer.encoder.layers.17.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
75
+ "transformer.encoder.layers.17.self_attention.dense.weight": "model-00005-of-00010.safetensors",
76
+ "transformer.encoder.layers.17.self_attention.query_key_value.bias": "model-00005-of-00010.safetensors",
77
+ "transformer.encoder.layers.17.self_attention.query_key_value.weight": "model-00005-of-00010.safetensors",
78
+ "transformer.encoder.layers.18.input_layernorm.weight": "model-00005-of-00010.safetensors",
79
+ "transformer.encoder.layers.18.mlp.dense_4h_to_h.weight": "model-00005-of-00010.safetensors",
80
+ "transformer.encoder.layers.18.mlp.dense_h_to_4h.weight": "model-00005-of-00010.safetensors",
81
+ "transformer.encoder.layers.18.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
82
+ "transformer.encoder.layers.18.self_attention.dense.weight": "model-00005-of-00010.safetensors",
83
+ "transformer.encoder.layers.18.self_attention.query_key_value.bias": "model-00005-of-00010.safetensors",
84
+ "transformer.encoder.layers.18.self_attention.query_key_value.weight": "model-00005-of-00010.safetensors",
85
+ "transformer.encoder.layers.19.input_layernorm.weight": "model-00005-of-00010.safetensors",
86
+ "transformer.encoder.layers.19.mlp.dense_4h_to_h.weight": "model-00005-of-00010.safetensors",
87
+ "transformer.encoder.layers.19.mlp.dense_h_to_4h.weight": "model-00005-of-00010.safetensors",
88
+ "transformer.encoder.layers.19.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
89
+ "transformer.encoder.layers.19.self_attention.dense.weight": "model-00005-of-00010.safetensors",
90
+ "transformer.encoder.layers.19.self_attention.query_key_value.bias": "model-00005-of-00010.safetensors",
91
+ "transformer.encoder.layers.19.self_attention.query_key_value.weight": "model-00005-of-00010.safetensors",
92
+ "transformer.encoder.layers.2.input_layernorm.weight": "model-00002-of-00010.safetensors",
93
+ "transformer.encoder.layers.2.mlp.dense_4h_to_h.weight": "model-00002-of-00010.safetensors",
94
+ "transformer.encoder.layers.2.mlp.dense_h_to_4h.weight": "model-00002-of-00010.safetensors",
95
+ "transformer.encoder.layers.2.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
96
+ "transformer.encoder.layers.2.self_attention.dense.weight": "model-00002-of-00010.safetensors",
97
+ "transformer.encoder.layers.2.self_attention.query_key_value.bias": "model-00002-of-00010.safetensors",
98
+ "transformer.encoder.layers.2.self_attention.query_key_value.weight": "model-00002-of-00010.safetensors",
99
+ "transformer.encoder.layers.20.input_layernorm.weight": "model-00005-of-00010.safetensors",
100
+ "transformer.encoder.layers.20.mlp.dense_4h_to_h.weight": "model-00006-of-00010.safetensors",
101
+ "transformer.encoder.layers.20.mlp.dense_h_to_4h.weight": "model-00006-of-00010.safetensors",
102
+ "transformer.encoder.layers.20.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
103
+ "transformer.encoder.layers.20.self_attention.dense.weight": "model-00005-of-00010.safetensors",
104
+ "transformer.encoder.layers.20.self_attention.query_key_value.bias": "model-00005-of-00010.safetensors",
105
+ "transformer.encoder.layers.20.self_attention.query_key_value.weight": "model-00005-of-00010.safetensors",
106
+ "transformer.encoder.layers.21.input_layernorm.weight": "model-00006-of-00010.safetensors",
107
+ "transformer.encoder.layers.21.mlp.dense_4h_to_h.weight": "model-00006-of-00010.safetensors",
108
+ "transformer.encoder.layers.21.mlp.dense_h_to_4h.weight": "model-00006-of-00010.safetensors",
109
+ "transformer.encoder.layers.21.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
110
+ "transformer.encoder.layers.21.self_attention.dense.weight": "model-00006-of-00010.safetensors",
111
+ "transformer.encoder.layers.21.self_attention.query_key_value.bias": "model-00006-of-00010.safetensors",
112
+ "transformer.encoder.layers.21.self_attention.query_key_value.weight": "model-00006-of-00010.safetensors",
113
+ "transformer.encoder.layers.22.input_layernorm.weight": "model-00006-of-00010.safetensors",
114
+ "transformer.encoder.layers.22.mlp.dense_4h_to_h.weight": "model-00006-of-00010.safetensors",
115
+ "transformer.encoder.layers.22.mlp.dense_h_to_4h.weight": "model-00006-of-00010.safetensors",
116
+ "transformer.encoder.layers.22.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
117
+ "transformer.encoder.layers.22.self_attention.dense.weight": "model-00006-of-00010.safetensors",
118
+ "transformer.encoder.layers.22.self_attention.query_key_value.bias": "model-00006-of-00010.safetensors",
119
+ "transformer.encoder.layers.22.self_attention.query_key_value.weight": "model-00006-of-00010.safetensors",
120
+ "transformer.encoder.layers.23.input_layernorm.weight": "model-00006-of-00010.safetensors",
121
+ "transformer.encoder.layers.23.mlp.dense_4h_to_h.weight": "model-00006-of-00010.safetensors",
122
+ "transformer.encoder.layers.23.mlp.dense_h_to_4h.weight": "model-00006-of-00010.safetensors",
123
+ "transformer.encoder.layers.23.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
124
+ "transformer.encoder.layers.23.self_attention.dense.weight": "model-00006-of-00010.safetensors",
125
+ "transformer.encoder.layers.23.self_attention.query_key_value.bias": "model-00006-of-00010.safetensors",
126
+ "transformer.encoder.layers.23.self_attention.query_key_value.weight": "model-00006-of-00010.safetensors",
127
+ "transformer.encoder.layers.24.input_layernorm.weight": "model-00006-of-00010.safetensors",
128
+ "transformer.encoder.layers.24.mlp.dense_4h_to_h.weight": "model-00006-of-00010.safetensors",
129
+ "transformer.encoder.layers.24.mlp.dense_h_to_4h.weight": "model-00006-of-00010.safetensors",
130
+ "transformer.encoder.layers.24.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
131
+ "transformer.encoder.layers.24.self_attention.dense.weight": "model-00006-of-00010.safetensors",
132
+ "transformer.encoder.layers.24.self_attention.query_key_value.bias": "model-00006-of-00010.safetensors",
133
+ "transformer.encoder.layers.24.self_attention.query_key_value.weight": "model-00006-of-00010.safetensors",
134
+ "transformer.encoder.layers.25.input_layernorm.weight": "model-00006-of-00010.safetensors",
135
+ "transformer.encoder.layers.25.mlp.dense_4h_to_h.weight": "model-00007-of-00010.safetensors",
136
+ "transformer.encoder.layers.25.mlp.dense_h_to_4h.weight": "model-00007-of-00010.safetensors",
137
+ "transformer.encoder.layers.25.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
138
+ "transformer.encoder.layers.25.self_attention.dense.weight": "model-00007-of-00010.safetensors",
139
+ "transformer.encoder.layers.25.self_attention.query_key_value.bias": "model-00007-of-00010.safetensors",
140
+ "transformer.encoder.layers.25.self_attention.query_key_value.weight": "model-00007-of-00010.safetensors",
141
+ "transformer.encoder.layers.26.input_layernorm.weight": "model-00007-of-00010.safetensors",
142
+ "transformer.encoder.layers.26.mlp.dense_4h_to_h.weight": "model-00007-of-00010.safetensors",
143
+ "transformer.encoder.layers.26.mlp.dense_h_to_4h.weight": "model-00007-of-00010.safetensors",
144
+ "transformer.encoder.layers.26.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
145
+ "transformer.encoder.layers.26.self_attention.dense.weight": "model-00007-of-00010.safetensors",
146
+ "transformer.encoder.layers.26.self_attention.query_key_value.bias": "model-00007-of-00010.safetensors",
147
+ "transformer.encoder.layers.26.self_attention.query_key_value.weight": "model-00007-of-00010.safetensors",
148
+ "transformer.encoder.layers.27.input_layernorm.weight": "model-00007-of-00010.safetensors",
149
+ "transformer.encoder.layers.27.mlp.dense_4h_to_h.weight": "model-00007-of-00010.safetensors",
150
+ "transformer.encoder.layers.27.mlp.dense_h_to_4h.weight": "model-00007-of-00010.safetensors",
151
+ "transformer.encoder.layers.27.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
152
+ "transformer.encoder.layers.27.self_attention.dense.weight": "model-00007-of-00010.safetensors",
153
+ "transformer.encoder.layers.27.self_attention.query_key_value.bias": "model-00007-of-00010.safetensors",
154
+ "transformer.encoder.layers.27.self_attention.query_key_value.weight": "model-00007-of-00010.safetensors",
155
+ "transformer.encoder.layers.28.input_layernorm.weight": "model-00007-of-00010.safetensors",
156
+ "transformer.encoder.layers.28.mlp.dense_4h_to_h.weight": "model-00007-of-00010.safetensors",
157
+ "transformer.encoder.layers.28.mlp.dense_h_to_4h.weight": "model-00007-of-00010.safetensors",
158
+ "transformer.encoder.layers.28.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
159
+ "transformer.encoder.layers.28.self_attention.dense.weight": "model-00007-of-00010.safetensors",
160
+ "transformer.encoder.layers.28.self_attention.query_key_value.bias": "model-00007-of-00010.safetensors",
161
+ "transformer.encoder.layers.28.self_attention.query_key_value.weight": "model-00007-of-00010.safetensors",
162
+ "transformer.encoder.layers.29.input_layernorm.weight": "model-00007-of-00010.safetensors",
163
+ "transformer.encoder.layers.29.mlp.dense_4h_to_h.weight": "model-00008-of-00010.safetensors",
164
+ "transformer.encoder.layers.29.mlp.dense_h_to_4h.weight": "model-00007-of-00010.safetensors",
165
+ "transformer.encoder.layers.29.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
166
+ "transformer.encoder.layers.29.self_attention.dense.weight": "model-00007-of-00010.safetensors",
167
+ "transformer.encoder.layers.29.self_attention.query_key_value.bias": "model-00007-of-00010.safetensors",
168
+ "transformer.encoder.layers.29.self_attention.query_key_value.weight": "model-00007-of-00010.safetensors",
169
+ "transformer.encoder.layers.3.input_layernorm.weight": "model-00002-of-00010.safetensors",
170
+ "transformer.encoder.layers.3.mlp.dense_4h_to_h.weight": "model-00002-of-00010.safetensors",
171
+ "transformer.encoder.layers.3.mlp.dense_h_to_4h.weight": "model-00002-of-00010.safetensors",
172
+ "transformer.encoder.layers.3.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
173
+ "transformer.encoder.layers.3.self_attention.dense.weight": "model-00002-of-00010.safetensors",
174
+ "transformer.encoder.layers.3.self_attention.query_key_value.bias": "model-00002-of-00010.safetensors",
175
+ "transformer.encoder.layers.3.self_attention.query_key_value.weight": "model-00002-of-00010.safetensors",
176
+ "transformer.encoder.layers.30.input_layernorm.weight": "model-00008-of-00010.safetensors",
177
+ "transformer.encoder.layers.30.mlp.dense_4h_to_h.weight": "model-00008-of-00010.safetensors",
178
+ "transformer.encoder.layers.30.mlp.dense_h_to_4h.weight": "model-00008-of-00010.safetensors",
179
+ "transformer.encoder.layers.30.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
180
+ "transformer.encoder.layers.30.self_attention.dense.weight": "model-00008-of-00010.safetensors",
181
+ "transformer.encoder.layers.30.self_attention.query_key_value.bias": "model-00008-of-00010.safetensors",
182
+ "transformer.encoder.layers.30.self_attention.query_key_value.weight": "model-00008-of-00010.safetensors",
183
+ "transformer.encoder.layers.31.input_layernorm.weight": "model-00008-of-00010.safetensors",
184
+ "transformer.encoder.layers.31.mlp.dense_4h_to_h.weight": "model-00008-of-00010.safetensors",
185
+ "transformer.encoder.layers.31.mlp.dense_h_to_4h.weight": "model-00008-of-00010.safetensors",
186
+ "transformer.encoder.layers.31.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
187
+ "transformer.encoder.layers.31.self_attention.dense.weight": "model-00008-of-00010.safetensors",
188
+ "transformer.encoder.layers.31.self_attention.query_key_value.bias": "model-00008-of-00010.safetensors",
189
+ "transformer.encoder.layers.31.self_attention.query_key_value.weight": "model-00008-of-00010.safetensors",
190
+ "transformer.encoder.layers.32.input_layernorm.weight": "model-00008-of-00010.safetensors",
191
+ "transformer.encoder.layers.32.mlp.dense_4h_to_h.weight": "model-00008-of-00010.safetensors",
192
+ "transformer.encoder.layers.32.mlp.dense_h_to_4h.weight": "model-00008-of-00010.safetensors",
193
+ "transformer.encoder.layers.32.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
194
+ "transformer.encoder.layers.32.self_attention.dense.weight": "model-00008-of-00010.safetensors",
195
+ "transformer.encoder.layers.32.self_attention.query_key_value.bias": "model-00008-of-00010.safetensors",
196
+ "transformer.encoder.layers.32.self_attention.query_key_value.weight": "model-00008-of-00010.safetensors",
197
+ "transformer.encoder.layers.33.input_layernorm.weight": "model-00008-of-00010.safetensors",
198
+ "transformer.encoder.layers.33.mlp.dense_4h_to_h.weight": "model-00008-of-00010.safetensors",
199
+ "transformer.encoder.layers.33.mlp.dense_h_to_4h.weight": "model-00008-of-00010.safetensors",
200
+ "transformer.encoder.layers.33.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
201
+ "transformer.encoder.layers.33.self_attention.dense.weight": "model-00008-of-00010.safetensors",
202
+ "transformer.encoder.layers.33.self_attention.query_key_value.bias": "model-00008-of-00010.safetensors",
203
+ "transformer.encoder.layers.33.self_attention.query_key_value.weight": "model-00008-of-00010.safetensors",
204
+ "transformer.encoder.layers.34.input_layernorm.weight": "model-00008-of-00010.safetensors",
205
+ "transformer.encoder.layers.34.mlp.dense_4h_to_h.weight": "model-00009-of-00010.safetensors",
206
+ "transformer.encoder.layers.34.mlp.dense_h_to_4h.weight": "model-00009-of-00010.safetensors",
207
+ "transformer.encoder.layers.34.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
208
+ "transformer.encoder.layers.34.self_attention.dense.weight": "model-00008-of-00010.safetensors",
209
+ "transformer.encoder.layers.34.self_attention.query_key_value.bias": "model-00008-of-00010.safetensors",
210
+ "transformer.encoder.layers.34.self_attention.query_key_value.weight": "model-00008-of-00010.safetensors",
211
+ "transformer.encoder.layers.35.input_layernorm.weight": "model-00009-of-00010.safetensors",
212
+ "transformer.encoder.layers.35.mlp.dense_4h_to_h.weight": "model-00009-of-00010.safetensors",
213
+ "transformer.encoder.layers.35.mlp.dense_h_to_4h.weight": "model-00009-of-00010.safetensors",
214
+ "transformer.encoder.layers.35.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
215
+ "transformer.encoder.layers.35.self_attention.dense.weight": "model-00009-of-00010.safetensors",
216
+ "transformer.encoder.layers.35.self_attention.query_key_value.bias": "model-00009-of-00010.safetensors",
217
+ "transformer.encoder.layers.35.self_attention.query_key_value.weight": "model-00009-of-00010.safetensors",
218
+ "transformer.encoder.layers.36.input_layernorm.weight": "model-00009-of-00010.safetensors",
219
+ "transformer.encoder.layers.36.mlp.dense_4h_to_h.weight": "model-00009-of-00010.safetensors",
220
+ "transformer.encoder.layers.36.mlp.dense_h_to_4h.weight": "model-00009-of-00010.safetensors",
221
+ "transformer.encoder.layers.36.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
222
+ "transformer.encoder.layers.36.self_attention.dense.weight": "model-00009-of-00010.safetensors",
223
+ "transformer.encoder.layers.36.self_attention.query_key_value.bias": "model-00009-of-00010.safetensors",
224
+ "transformer.encoder.layers.36.self_attention.query_key_value.weight": "model-00009-of-00010.safetensors",
225
+ "transformer.encoder.layers.37.input_layernorm.weight": "model-00009-of-00010.safetensors",
226
+ "transformer.encoder.layers.37.mlp.dense_4h_to_h.weight": "model-00009-of-00010.safetensors",
227
+ "transformer.encoder.layers.37.mlp.dense_h_to_4h.weight": "model-00009-of-00010.safetensors",
228
+ "transformer.encoder.layers.37.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
229
+ "transformer.encoder.layers.37.self_attention.dense.weight": "model-00009-of-00010.safetensors",
230
+ "transformer.encoder.layers.37.self_attention.query_key_value.bias": "model-00009-of-00010.safetensors",
231
+ "transformer.encoder.layers.37.self_attention.query_key_value.weight": "model-00009-of-00010.safetensors",
232
+ "transformer.encoder.layers.38.input_layernorm.weight": "model-00009-of-00010.safetensors",
233
+ "transformer.encoder.layers.38.mlp.dense_4h_to_h.weight": "model-00009-of-00010.safetensors",
234
+ "transformer.encoder.layers.38.mlp.dense_h_to_4h.weight": "model-00009-of-00010.safetensors",
235
+ "transformer.encoder.layers.38.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
236
+ "transformer.encoder.layers.38.self_attention.dense.weight": "model-00009-of-00010.safetensors",
237
+ "transformer.encoder.layers.38.self_attention.query_key_value.bias": "model-00009-of-00010.safetensors",
238
+ "transformer.encoder.layers.38.self_attention.query_key_value.weight": "model-00009-of-00010.safetensors",
239
+ "transformer.encoder.layers.39.input_layernorm.weight": "model-00009-of-00010.safetensors",
240
+ "transformer.encoder.layers.39.mlp.dense_4h_to_h.weight": "model-00010-of-00010.safetensors",
241
+ "transformer.encoder.layers.39.mlp.dense_h_to_4h.weight": "model-00010-of-00010.safetensors",
242
+ "transformer.encoder.layers.39.post_attention_layernorm.weight": "model-00010-of-00010.safetensors",
243
+ "transformer.encoder.layers.39.self_attention.dense.weight": "model-00010-of-00010.safetensors",
244
+ "transformer.encoder.layers.39.self_attention.query_key_value.bias": "model-00010-of-00010.safetensors",
245
+ "transformer.encoder.layers.39.self_attention.query_key_value.weight": "model-00010-of-00010.safetensors",
246
+ "transformer.encoder.layers.4.input_layernorm.weight": "model-00002-of-00010.safetensors",
247
+ "transformer.encoder.layers.4.mlp.dense_4h_to_h.weight": "model-00002-of-00010.safetensors",
248
+ "transformer.encoder.layers.4.mlp.dense_h_to_4h.weight": "model-00002-of-00010.safetensors",
249
+ "transformer.encoder.layers.4.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
250
+ "transformer.encoder.layers.4.self_attention.dense.weight": "model-00002-of-00010.safetensors",
251
+ "transformer.encoder.layers.4.self_attention.query_key_value.bias": "model-00002-of-00010.safetensors",
252
+ "transformer.encoder.layers.4.self_attention.query_key_value.weight": "model-00002-of-00010.safetensors",
253
+ "transformer.encoder.layers.5.input_layernorm.weight": "model-00002-of-00010.safetensors",
254
+ "transformer.encoder.layers.5.mlp.dense_4h_to_h.weight": "model-00002-of-00010.safetensors",
255
+ "transformer.encoder.layers.5.mlp.dense_h_to_4h.weight": "model-00002-of-00010.safetensors",
256
+ "transformer.encoder.layers.5.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
257
+ "transformer.encoder.layers.5.self_attention.dense.weight": "model-00002-of-00010.safetensors",
258
+ "transformer.encoder.layers.5.self_attention.query_key_value.bias": "model-00002-of-00010.safetensors",
259
+ "transformer.encoder.layers.5.self_attention.query_key_value.weight": "model-00002-of-00010.safetensors",
260
+ "transformer.encoder.layers.6.input_layernorm.weight": "model-00002-of-00010.safetensors",
261
+ "transformer.encoder.layers.6.mlp.dense_4h_to_h.weight": "model-00003-of-00010.safetensors",
262
+ "transformer.encoder.layers.6.mlp.dense_h_to_4h.weight": "model-00003-of-00010.safetensors",
263
+ "transformer.encoder.layers.6.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
264
+ "transformer.encoder.layers.6.self_attention.dense.weight": "model-00002-of-00010.safetensors",
265
+ "transformer.encoder.layers.6.self_attention.query_key_value.bias": "model-00002-of-00010.safetensors",
266
+ "transformer.encoder.layers.6.self_attention.query_key_value.weight": "model-00002-of-00010.safetensors",
267
+ "transformer.encoder.layers.7.input_layernorm.weight": "model-00003-of-00010.safetensors",
268
+ "transformer.encoder.layers.7.mlp.dense_4h_to_h.weight": "model-00003-of-00010.safetensors",
269
+ "transformer.encoder.layers.7.mlp.dense_h_to_4h.weight": "model-00003-of-00010.safetensors",
270
+ "transformer.encoder.layers.7.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
271
+ "transformer.encoder.layers.7.self_attention.dense.weight": "model-00003-of-00010.safetensors",
272
+ "transformer.encoder.layers.7.self_attention.query_key_value.bias": "model-00003-of-00010.safetensors",
273
+ "transformer.encoder.layers.7.self_attention.query_key_value.weight": "model-00003-of-00010.safetensors",
274
+ "transformer.encoder.layers.8.input_layernorm.weight": "model-00003-of-00010.safetensors",
275
+ "transformer.encoder.layers.8.mlp.dense_4h_to_h.weight": "model-00003-of-00010.safetensors",
276
+ "transformer.encoder.layers.8.mlp.dense_h_to_4h.weight": "model-00003-of-00010.safetensors",
277
+ "transformer.encoder.layers.8.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
278
+ "transformer.encoder.layers.8.self_attention.dense.weight": "model-00003-of-00010.safetensors",
279
+ "transformer.encoder.layers.8.self_attention.query_key_value.bias": "model-00003-of-00010.safetensors",
280
+ "transformer.encoder.layers.8.self_attention.query_key_value.weight": "model-00003-of-00010.safetensors",
281
+ "transformer.encoder.layers.9.input_layernorm.weight": "model-00003-of-00010.safetensors",
282
+ "transformer.encoder.layers.9.mlp.dense_4h_to_h.weight": "model-00003-of-00010.safetensors",
283
+ "transformer.encoder.layers.9.mlp.dense_h_to_4h.weight": "model-00003-of-00010.safetensors",
284
+ "transformer.encoder.layers.9.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
285
+ "transformer.encoder.layers.9.self_attention.dense.weight": "model-00003-of-00010.safetensors",
286
+ "transformer.encoder.layers.9.self_attention.query_key_value.bias": "model-00003-of-00010.safetensors",
287
+ "transformer.encoder.layers.9.self_attention.query_key_value.weight": "model-00003-of-00010.safetensors",
288
+ "transformer.output_layer.weight": "model-00010-of-00010.safetensors",
289
+ "transformer.rotary_pos_emb.inv_freq": "model-00001-of-00010.safetensors"
290
+ }
291
+ }
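A hedged sketch (editor-added): the `weight_map` above lets a caller locate and load a single parameter without reading all ~18.8 GB of shards, using the `safetensors` library.

```python
# Look up which shard holds a tensor, then load only that tensor.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "transformer.encoder.layers.0.self_attention.query_key_value.weight"
shard = index["weight_map"][name]          # -> "model-00001-of-00010.safetensors"

with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)

print(shard, tuple(tensor.shape))          # expected (4608, 4096) given the config above
```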
modeling_chatglm.py ADDED
@@ -0,0 +1,1207 @@
1
+ """ PyTorch ChatGLM model. """
2
+ import json
3
+ import math
4
+ import copy
5
+ import warnings
6
+ import re
7
+ import sys
8
+
9
+ import torch
10
+ import torch.utils.checkpoint
11
+ import torch.nn.functional as F
12
+ from torch import nn
13
+ from torch.nn import CrossEntropyLoss, LayerNorm, MSELoss, BCEWithLogitsLoss
14
+ from torch.nn.utils import skip_init
15
+ from typing import Optional, Tuple, Union, List, Callable, Dict, Any
16
+ from copy import deepcopy
17
+
18
+ from transformers.modeling_outputs import (
19
+ BaseModelOutputWithPast,
20
+ CausalLMOutputWithPast,
21
+ SequenceClassifierOutputWithPast,
22
+ )
23
+ from transformers.modeling_utils import PreTrainedModel
24
+ from transformers.utils import logging
25
+ from transformers.generation.logits_process import LogitsProcessor
26
+ from transformers.generation.utils import LogitsProcessorList, StoppingCriteriaList, GenerationConfig, ModelOutput
27
+
28
+ from .configuration_chatglm import ChatGLMConfig
29
+
30
+ # flags required to enable jit fusion kernels
31
+
32
+ if sys.platform != 'darwin':
33
+ torch._C._jit_set_profiling_mode(False)
34
+ torch._C._jit_set_profiling_executor(False)
35
+ torch._C._jit_override_can_fuse_on_cpu(True)
36
+ torch._C._jit_override_can_fuse_on_gpu(True)
37
+
38
+ logger = logging.get_logger(__name__)
39
+
40
+ _CHECKPOINT_FOR_DOC = "THUDM/ChatGLM"
41
+ _CONFIG_FOR_DOC = "ChatGLMConfig"
42
+
43
+ CHATGLM_6B_PRETRAINED_MODEL_ARCHIVE_LIST = [
44
+ "THUDM/chatglm3-6b",
45
+ # See all ChatGLM models at https://huggingface.co/models?filter=chatglm
46
+ ]
47
+
48
+
49
+ def default_init(cls, *args, **kwargs):
50
+ return cls(*args, **kwargs)
51
+
52
+
53
+ class InvalidScoreLogitsProcessor(LogitsProcessor):
54
+ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
55
+ if torch.isnan(scores).any() or torch.isinf(scores).any():
56
+ scores.zero_()
57
+ scores[..., 198] = 5e4
58
+ return scores
59
+
60
+
61
+ def split_tensor_along_last_dim(
62
+ tensor: torch.Tensor,
63
+ num_partitions: int,
64
+ contiguous_split_chunks: bool = False,
65
+ ) -> List[torch.Tensor]:
66
+ """Split a tensor along its last dimension.
67
+
68
+ Arguments:
69
+ tensor: input tensor.
70
+ num_partitions: number of partitions to split the tensor
71
+ contiguous_split_chunks: If True, make each chunk contiguous
72
+ in memory.
73
+
74
+ Returns:
75
+ A list of Tensors
76
+ """
77
+ # Get the size and dimension.
78
+ last_dim = tensor.dim() - 1
79
+ last_dim_size = tensor.size()[last_dim] // num_partitions
80
+ # Split.
81
+ tensor_list = torch.split(tensor, last_dim_size, dim=last_dim)
82
+ # Note: torch.split does not create contiguous tensors by default.
83
+ if contiguous_split_chunks:
84
+ return tuple(chunk.contiguous() for chunk in tensor_list)
85
+
86
+ return tensor_list
87
+
88
+
89
+ class RotaryEmbedding(nn.Module):
90
+ def __init__(self, dim, rope_ratio=1, original_impl=False, device=None, dtype=None):
91
+ super().__init__()
92
+ inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2, device=device).to(dtype=dtype) / dim))
93
+ self.register_buffer("inv_freq", inv_freq)
94
+ self.dim = dim
95
+ self.original_impl = original_impl
96
+ self.rope_ratio = rope_ratio
97
+
98
+ def forward_impl(
99
+ self, seq_len: int, n_elem: int, dtype: torch.dtype, device: torch.device, base: int = 10000
100
+ ):
101
+ """Enhanced Transformer with Rotary Position Embedding.
102
+
103
+ Derived from: https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/
104
+ transformers/rope/__init__.py. MIT License:
105
+ https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/master/license.
106
+ """
107
+ # $\Theta = {\theta_i = 10000^{\frac{2(i-1)}{d}}, i \in [1, 2, ..., \frac{d}{2}]}$
108
+ base = base * self.rope_ratio
109
+ theta = 1.0 / (base ** (torch.arange(0, n_elem, 2, dtype=torch.float, device=device) / n_elem))
110
+
111
+ # Create position indexes `[0, 1, ..., seq_len - 1]`
112
+ seq_idx = torch.arange(seq_len, dtype=torch.float, device=device)
113
+
114
+ # Calculate the product of position index and $\theta_i$
115
+ idx_theta = torch.outer(seq_idx, theta).float()
116
+
117
+ cache = torch.stack([torch.cos(idx_theta), torch.sin(idx_theta)], dim=-1)
118
+
119
+ # this is to mimic the behaviour of complex32, else we will get different results
120
+ if dtype in (torch.float16, torch.bfloat16, torch.int8):
121
+ cache = cache.bfloat16() if dtype == torch.bfloat16 else cache.half()
122
+ return cache
123
+
124
+ def forward(self, max_seq_len, offset=0):
125
+ return self.forward_impl(
126
+ max_seq_len, self.dim, dtype=self.inv_freq.dtype, device=self.inv_freq.device
127
+ )
128
+
129
+
130
+ @torch.jit.script
131
+ def apply_rotary_pos_emb(x: torch.Tensor, rope_cache: torch.Tensor) -> torch.Tensor:
132
+ # x: [b, np, sq, hn]
133
+ b, np, sq, hn = x.size(0), x.size(1), x.size(2), x.size(3)
134
+ rot_dim = rope_cache.shape[-2] * 2
135
+ x, x_pass = x[..., :rot_dim], x[..., rot_dim:]
136
+ # truncate to support variable sizes
137
+ rope_cache = rope_cache[:, :sq]
138
+ xshaped = x.reshape(b, np, sq, rot_dim // 2, 2)
139
+ rope_cache = rope_cache.view(-1, 1, sq, xshaped.size(3), 2)
140
+ x_out2 = torch.stack(
141
+ [
142
+ xshaped[..., 0] * rope_cache[..., 0] - xshaped[..., 1] * rope_cache[..., 1],
143
+ xshaped[..., 1] * rope_cache[..., 0] + xshaped[..., 0] * rope_cache[..., 1],
144
+ ],
145
+ -1,
146
+ )
147
+ x_out2 = x_out2.flatten(3)
148
+ return torch.cat((x_out2, x_pass), dim=-1)
149
+
150
+
151
+ class RMSNorm(torch.nn.Module):
152
+ def __init__(self, normalized_shape, eps=1e-5, device=None, dtype=None, **kwargs):
153
+ super().__init__()
154
+ self.weight = torch.nn.Parameter(torch.empty(normalized_shape, device=device, dtype=dtype))
155
+ self.eps = eps
156
+
157
+ def forward(self, hidden_states: torch.Tensor):
158
+ input_dtype = hidden_states.dtype
159
+ variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
160
+ hidden_states = hidden_states * torch.rsqrt(variance + self.eps)
161
+
162
+ return (self.weight * hidden_states).to(input_dtype)
163
+
164
+
165
+ class CoreAttention(torch.nn.Module):
166
+ def __init__(self, config: ChatGLMConfig, layer_number):
167
+ super(CoreAttention, self).__init__()
168
+
169
+ self.apply_query_key_layer_scaling = config.apply_query_key_layer_scaling
170
+ self.attention_softmax_in_fp32 = config.attention_softmax_in_fp32
171
+ if self.apply_query_key_layer_scaling:
172
+ self.attention_softmax_in_fp32 = True
173
+ self.layer_number = max(1, layer_number)
174
+
175
+ projection_size = config.kv_channels * config.num_attention_heads
176
+
177
+ # Per attention head and per partition values.
178
+ self.hidden_size_per_partition = projection_size
179
+ self.hidden_size_per_attention_head = projection_size // config.num_attention_heads
180
+ self.num_attention_heads_per_partition = config.num_attention_heads
181
+
182
+ coeff = None
183
+ self.norm_factor = math.sqrt(self.hidden_size_per_attention_head)
184
+ if self.apply_query_key_layer_scaling:
185
+ coeff = self.layer_number
186
+ self.norm_factor *= coeff
187
+ self.coeff = coeff
188
+
189
+ self.attention_dropout = torch.nn.Dropout(config.attention_dropout)
190
+
191
+ def forward(self, query_layer, key_layer, value_layer, attention_mask):
192
+ pytorch_major_version = int(torch.__version__.split('.')[0])
193
+ if pytorch_major_version >= 2:
194
+ if attention_mask is None and query_layer.shape[2] == key_layer.shape[2]:
195
+ context_layer = torch.nn.functional.scaled_dot_product_attention(query_layer, key_layer, value_layer,
196
+ is_causal=True)
197
+ else:
198
+ if attention_mask is not None:
199
+ attention_mask = ~attention_mask
200
+ context_layer = torch.nn.functional.scaled_dot_product_attention(query_layer, key_layer, value_layer,
201
+ attention_mask)
202
+ context_layer = context_layer.transpose(1, 2).contiguous()
203
+ new_context_layer_shape = context_layer.size()[:-2] + (self.hidden_size_per_partition,)
204
+ context_layer = context_layer.reshape(*new_context_layer_shape)
205
+ else:
206
+ # Raw attention scores
207
+
208
+ # [b, np, sq, sk]
209
+ output_size = (query_layer.size(0), query_layer.size(1), query_layer.size(2), key_layer.size(2))
210
+
211
+ # [b, np, sq, hn] -> [b * np, sq, hn]
212
+ query_layer = query_layer.view(output_size[0] * output_size[1], output_size[2], -1)
213
+ # [b, np, sk, hn] -> [b * np, sk, hn]
214
+ key_layer = key_layer.view(output_size[0] * output_size[1], output_size[3], -1)
215
+
216
+ # preallocating input tensor: [b * np, sq, sk]
217
+ matmul_input_buffer = torch.empty(
218
+ output_size[0] * output_size[1], output_size[2], output_size[3], dtype=query_layer.dtype,
219
+ device=query_layer.device
220
+ )
221
+
222
+ # Raw attention scores. [b * np, sq, sk]
223
+ matmul_result = torch.baddbmm(
224
+ matmul_input_buffer,
225
+ query_layer, # [b * np, sq, hn]
226
+ key_layer.transpose(1, 2), # [b * np, hn, sk]
227
+ beta=0.0,
228
+ alpha=(1.0 / self.norm_factor),
229
+ )
230
+
231
+ # change view to [b, np, sq, sk]
232
+ attention_scores = matmul_result.view(*output_size)
233
+
234
+ # ===========================
235
+ # Attention probs and dropout
236
+ # ===========================
237
+
238
+ # attention scores and attention mask [b, np, sq, sk]
239
+ if self.attention_softmax_in_fp32:
240
+ attention_scores = attention_scores.float()
241
+ if self.coeff is not None:
242
+ attention_scores = attention_scores * self.coeff
243
+ if attention_mask is None and attention_scores.shape[2] == attention_scores.shape[3]:
244
+ attention_mask = torch.ones(output_size[0], 1, output_size[2], output_size[3],
245
+ device=attention_scores.device, dtype=torch.bool)
246
+ attention_mask.tril_()
247
+ attention_mask = ~attention_mask
248
+ if attention_mask is not None:
249
+ attention_scores = attention_scores.masked_fill(attention_mask, float("-inf"))
250
+ attention_probs = F.softmax(attention_scores, dim=-1)
251
+ attention_probs = attention_probs.type_as(value_layer)
252
+
253
+ # This is actually dropping out entire tokens to attend to, which might
254
+ # seem a bit unusual, but is taken from the original Transformer paper.
255
+ attention_probs = self.attention_dropout(attention_probs)
256
+ # =========================
257
+ # Context layer. [sq, b, hp]
258
+ # =========================
259
+
260
+ # value_layer -> context layer.
261
+ # [sk, b, np, hn] --> [b, np, sq, hn]
262
+
263
+ # context layer shape: [b, np, sq, hn]
264
+ output_size = (value_layer.size(1), value_layer.size(2), query_layer.size(0), value_layer.size(3))
265
+ # change view [b * np, sk, hn]
266
+ value_layer = value_layer.view(output_size[0] * output_size[1], value_layer.size(2), -1)
267
+ # change view [b * np, sq, sk]
268
+ attention_probs = attention_probs.view(output_size[0] * output_size[1], output_size[2], -1)
269
+ # matmul: [b * np, sq, hn]
270
+ context_layer = torch.bmm(attention_probs, value_layer)
271
+ # change view [b, np, sq, hn]
272
+ context_layer = context_layer.view(*output_size)
273
+ # [b, np, sq, hn] --> [b, sq, np, hn]
274
+ context_layer = context_layer.transpose(1, 2).contiguous()
275
+ # [b, sq, np, hn] --> [b, sq, hp]
276
+ new_context_layer_shape = context_layer.size()[:-2] + (self.hidden_size_per_partition,)
277
+ context_layer = context_layer.reshape(*new_context_layer_shape)
278
+
279
+ return context_layer
280
+
281
+
282
+ class SelfAttention(torch.nn.Module):
283
+ """Parallel self-attention layer abstract class.
284
+
285
+ Self-attention layer takes input with size [b, s, h]
286
+ and returns output of the same size.
287
+ """
288
+
289
+ def __init__(self, config: ChatGLMConfig, layer_number, device=None):
290
+ super(SelfAttention, self).__init__()
291
+ self.layer_number = max(1, layer_number)
292
+
293
+ self.projection_size = config.kv_channels * config.num_attention_heads
294
+
295
+ # Per attention head and per partition values.
296
+ self.hidden_size_per_attention_head = self.projection_size // config.num_attention_heads
297
+ self.num_attention_heads_per_partition = config.num_attention_heads
298
+
299
+ self.multi_query_attention = config.multi_query_attention
300
+ self.qkv_hidden_size = 3 * self.projection_size
301
+ if self.multi_query_attention:
302
+ self.num_multi_query_groups_per_partition = config.multi_query_group_num
303
+ self.qkv_hidden_size = (
304
+ self.projection_size + 2 * self.hidden_size_per_attention_head * config.multi_query_group_num
305
+ )
306
+ self.query_key_value = nn.Linear(config.hidden_size, self.qkv_hidden_size,
307
+ bias=config.add_bias_linear or config.add_qkv_bias,
308
+ device=device, **_config_to_kwargs(config)
309
+ )
310
+
311
+ self.core_attention = CoreAttention(config, self.layer_number)
312
+
313
+ # Output.
314
+ self.dense = nn.Linear(self.projection_size, config.hidden_size, bias=config.add_bias_linear,
315
+ device=device, **_config_to_kwargs(config)
316
+ )
317
+
318
+ def _allocate_memory(self, inference_max_sequence_len, batch_size, device=None, dtype=None):
319
+ if self.multi_query_attention:
320
+ num_attention_heads = self.num_multi_query_groups_per_partition
321
+ else:
322
+ num_attention_heads = self.num_attention_heads_per_partition
323
+ return torch.empty(
324
+ inference_max_sequence_len,
325
+ batch_size,
326
+ num_attention_heads,
327
+ self.hidden_size_per_attention_head,
328
+ dtype=dtype,
329
+ device=device,
330
+ )
331
+
332
+ def forward(
333
+ self, hidden_states, attention_mask, rotary_pos_emb, kv_cache=None, use_cache=True
334
+ ):
335
+ # hidden_states: [b, sq, h]
336
+
337
+ # =================================================
338
+ # Pre-allocate memory for key-values for inference.
339
+ # =================================================
340
+ # =====================
341
+ # Query, Key, and Value
342
+ # =====================
343
+
344
+ # Attention heads [b, sq, h] --> [b, sq, (np * 3 * hn)]
345
+ mixed_x_layer = self.query_key_value(hidden_states)
346
+
347
+ if self.multi_query_attention:
348
+ (query_layer, key_layer, value_layer) = mixed_x_layer.split(
349
+ [
350
+ self.num_attention_heads_per_partition * self.hidden_size_per_attention_head,
351
+ self.num_multi_query_groups_per_partition * self.hidden_size_per_attention_head,
352
+ self.num_multi_query_groups_per_partition * self.hidden_size_per_attention_head,
353
+ ],
354
+ dim=-1,
355
+ )
356
+ query_layer = query_layer.view(
357
+ query_layer.size()[:-1] + (self.num_attention_heads_per_partition, self.hidden_size_per_attention_head)
358
+ )
359
+ key_layer = key_layer.view(
360
+ key_layer.size()[:-1] + (self.num_multi_query_groups_per_partition, self.hidden_size_per_attention_head)
361
+ )
362
+ value_layer = value_layer.view(
363
+ value_layer.size()[:-1]
364
+ + (self.num_multi_query_groups_per_partition, self.hidden_size_per_attention_head)
365
+ )
366
+ else:
367
+ new_tensor_shape = mixed_x_layer.size()[:-1] + \
368
+ (self.num_attention_heads_per_partition,
369
+ 3 * self.hidden_size_per_attention_head)
370
+ mixed_x_layer = mixed_x_layer.view(*new_tensor_shape)
371
+
372
+ # [b, sq, np, 3 * hn] --> 3 [b, sq, np, hn]
373
+ (query_layer, key_layer, value_layer) = split_tensor_along_last_dim(mixed_x_layer, 3)
374
+
375
+ # [b, sq, np, hn] -> [b, np, sq, hn]
376
+ query_layer, key_layer, value_layer = [k.transpose(1, 2) for k in [query_layer, key_layer, value_layer]]
377
+
378
+ # apply relative positional encoding (rotary embedding)
379
+ if rotary_pos_emb is not None:
380
+ query_layer = apply_rotary_pos_emb(query_layer, rotary_pos_emb)
381
+ key_layer = apply_rotary_pos_emb(key_layer, rotary_pos_emb)
382
+
383
+ # adjust key and value for inference
384
+ if kv_cache is not None:
385
+ cache_k, cache_v = kv_cache
386
+ key_layer = torch.cat((cache_k, key_layer), dim=2)
387
+ value_layer = torch.cat((cache_v, value_layer), dim=2)
388
+ if use_cache:
389
+ kv_cache = (key_layer, value_layer)
390
+ else:
391
+ kv_cache = None
392
+
393
+ if self.multi_query_attention:
394
+ key_layer = key_layer.unsqueeze(2)
395
+ key_layer = key_layer.expand(
396
+ -1, -1, self.num_attention_heads_per_partition // self.num_multi_query_groups_per_partition, -1, -1
397
+ )
398
+ key_layer = key_layer.contiguous().view(
399
+ key_layer.size()[:1] + (self.num_attention_heads_per_partition,) + key_layer.size()[3:]
400
+ )
401
+ value_layer = value_layer.unsqueeze(2)
402
+ value_layer = value_layer.expand(
403
+ -1, -1, self.num_attention_heads_per_partition // self.num_multi_query_groups_per_partition, -1, -1
404
+ )
405
+ value_layer = value_layer.contiguous().view(
406
+ value_layer.size()[:1] + (self.num_attention_heads_per_partition,) + value_layer.size()[3:]
407
+ )
408
+
409
+ # ==================================
410
+ # core attention computation
411
+ # ==================================
412
+
413
+ context_layer = self.core_attention(query_layer, key_layer, value_layer, attention_mask)
414
+
415
+ # =================
416
+ # Output. [b, sq, h]
417
+ # =================
418
+
419
+ output = self.dense(context_layer)
420
+
421
+ return output, kv_cache
422
+
423
+
424
+ def _config_to_kwargs(args):
425
+ common_kwargs = {
426
+ "dtype": args.torch_dtype,
427
+ }
428
+ return common_kwargs
429
+
430
+
431
+ class MLP(torch.nn.Module):
432
+ """MLP.
433
+
434
+ MLP will take the input with h hidden state, project it to 4*h
435
+ hidden dimension, perform nonlinear transformation, and project the
436
+ state back into h hidden dimension.
437
+ """
438
+
439
+ def __init__(self, config: ChatGLMConfig, device=None):
440
+ super(MLP, self).__init__()
441
+
442
+ self.add_bias = config.add_bias_linear
443
+
444
+ # Project to 4h. If using swiglu double the output width, see https://arxiv.org/pdf/2002.05202.pdf
445
+ self.dense_h_to_4h = nn.Linear(
446
+ config.hidden_size,
447
+ config.ffn_hidden_size * 2,
448
+ bias=self.add_bias,
449
+ device=device,
450
+ **_config_to_kwargs(config)
451
+ )
452
+
453
+ def swiglu(x):
454
+ x = torch.chunk(x, 2, dim=-1)
455
+ return F.silu(x[0]) * x[1]
456
+
457
+ self.activation_func = swiglu
458
+
459
+ # Project back to h.
460
+ self.dense_4h_to_h = nn.Linear(
461
+ config.ffn_hidden_size,
462
+ config.hidden_size,
463
+ bias=self.add_bias,
464
+ device=device,
465
+ **_config_to_kwargs(config)
466
+ )
467
+
468
+ def forward(self, hidden_states):
469
+ # [b, s, 4hp]
470
+ intermediate_parallel = self.dense_h_to_4h(hidden_states)
471
+ intermediate_parallel = self.activation_func(intermediate_parallel)
472
+ # [b, s, h]
473
+ output = self.dense_4h_to_h(intermediate_parallel)
474
+ return output
475
+
476
+
477
+ class GLMBlock(torch.nn.Module):
478
+ """A single transformer layer.
479
+
480
+ Transformer layer takes input with size [b, s, h] and returns an
481
+ output of the same size.
482
+ """
483
+
484
+ def __init__(self, config: ChatGLMConfig, layer_number, device=None):
485
+ super(GLMBlock, self).__init__()
486
+ self.layer_number = layer_number
487
+
488
+ self.apply_residual_connection_post_layernorm = config.apply_residual_connection_post_layernorm
489
+
490
+ self.fp32_residual_connection = config.fp32_residual_connection
491
+
492
+ LayerNormFunc = RMSNorm if config.rmsnorm else LayerNorm
493
+ # Layernorm on the input data.
494
+ self.input_layernorm = LayerNormFunc(config.hidden_size, eps=config.layernorm_epsilon, device=device,
495
+ dtype=config.torch_dtype)
496
+
497
+ # Self attention.
498
+ self.self_attention = SelfAttention(config, layer_number, device=device)
499
+ self.hidden_dropout = config.hidden_dropout
500
+
501
+ # Layernorm on the attention output
502
+ self.post_attention_layernorm = LayerNormFunc(config.hidden_size, eps=config.layernorm_epsilon, device=device,
503
+ dtype=config.torch_dtype)
504
+
505
+ # MLP
506
+ self.mlp = MLP(config, device=device)
507
+
508
+ def forward(
509
+ self, hidden_states, attention_mask, rotary_pos_emb, kv_cache=None, use_cache=True,
510
+ ):
511
+ # hidden_states: [b, s, h]
512
+
513
+ # Layer norm at the beginning of the transformer layer.
514
+ layernorm_output = self.input_layernorm(hidden_states)
515
+ # Self attention.
516
+ attention_output, kv_cache = self.self_attention(
517
+ layernorm_output,
518
+ attention_mask,
519
+ rotary_pos_emb,
520
+ kv_cache=kv_cache,
521
+ use_cache=use_cache
522
+ )
523
+
524
+ # Residual connection.
525
+ if self.apply_residual_connection_post_layernorm:
526
+ residual = layernorm_output
527
+ else:
528
+ residual = hidden_states
529
+
530
+ layernorm_input = torch.nn.functional.dropout(attention_output, p=self.hidden_dropout, training=self.training)
531
+ layernorm_input = residual + layernorm_input
532
+
533
+ # Layer norm post the self attention.
534
+ layernorm_output = self.post_attention_layernorm(layernorm_input)
535
+
536
+ # MLP.
537
+ mlp_output = self.mlp(layernorm_output)
538
+
539
+ # Second residual connection.
540
+ if self.apply_residual_connection_post_layernorm:
541
+ residual = layernorm_output
542
+ else:
543
+ residual = layernorm_input
544
+
545
+ output = torch.nn.functional.dropout(mlp_output, p=self.hidden_dropout, training=self.training)
546
+ output = residual + output
547
+
548
+ return output, kv_cache
549
+
550
+
551
+ class GLMTransformer(torch.nn.Module):
552
+ """Transformer class."""
553
+
554
+ def __init__(self, config: ChatGLMConfig, device=None):
555
+ super(GLMTransformer, self).__init__()
556
+
557
+ self.fp32_residual_connection = config.fp32_residual_connection
558
+ self.post_layer_norm = config.post_layer_norm
559
+
560
+ # Number of layers.
561
+ self.num_layers = config.num_layers
562
+
563
+ # Transformer layers.
564
+ def build_layer(layer_number):
565
+ return GLMBlock(config, layer_number, device=device)
566
+
567
+ self.layers = torch.nn.ModuleList([build_layer(i + 1) for i in range(self.num_layers)])
568
+
569
+ if self.post_layer_norm:
570
+ LayerNormFunc = RMSNorm if config.rmsnorm else LayerNorm
571
+ # Final layer norm before output.
572
+ self.final_layernorm = LayerNormFunc(config.hidden_size, eps=config.layernorm_epsilon, device=device,
573
+ dtype=config.torch_dtype)
574
+
575
+ self.gradient_checkpointing = False
576
+
577
+ def _get_layer(self, layer_number):
578
+ return self.layers[layer_number]
579
+
580
+ def forward(
581
+ self, hidden_states, attention_mask, rotary_pos_emb, kv_caches=None,
582
+ use_cache: Optional[bool] = True,
583
+ output_hidden_states: Optional[bool] = False,
584
+ ):
585
+ if not kv_caches:
586
+ kv_caches = [None for _ in range(self.num_layers)]
587
+ presents = () if use_cache else None
588
+ if self.gradient_checkpointing and self.training:
589
+ if use_cache:
590
+ logger.warning_once(
591
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
592
+ )
593
+ use_cache = False
594
+
595
+ all_self_attentions = None
596
+ all_hidden_states = () if output_hidden_states else None
597
+ for index in range(self.num_layers):
598
+ if output_hidden_states:
599
+ all_hidden_states = all_hidden_states + (hidden_states,)
600
+
601
+ layer = self._get_layer(index)
602
+ if self.gradient_checkpointing and self.training:
603
+ layer_ret = torch.utils.checkpoint.checkpoint(
604
+ layer,
605
+ hidden_states,
606
+ attention_mask,
607
+ rotary_pos_emb,
608
+ kv_caches[index],
609
+ use_cache,
610
+ use_reentrant=False
611
+ )
612
+ else:
613
+ layer_ret = layer(
614
+ hidden_states,
615
+ attention_mask,
616
+ rotary_pos_emb,
617
+ kv_cache=kv_caches[index],
618
+ use_cache=use_cache
619
+ )
620
+ hidden_states, kv_cache = layer_ret
621
+ if use_cache:
622
+ presents = presents + (kv_cache,)
623
+
624
+ if output_hidden_states:
625
+ all_hidden_states = all_hidden_states + (hidden_states,)
626
+
627
+ # Final layer norm.
628
+ if self.post_layer_norm:
629
+ hidden_states = self.final_layernorm(hidden_states)
630
+
631
+ return hidden_states, presents, all_hidden_states, all_self_attentions
632
+
633
+
634
+ class ChatGLMPreTrainedModel(PreTrainedModel):
635
+ """
636
+ An abstract class to handle weights initialization and
637
+ a simple interface for downloading and loading pretrained models.
638
+ """
639
+
640
+ is_parallelizable = False
641
+ supports_gradient_checkpointing = True
642
+ config_class = ChatGLMConfig
643
+ base_model_prefix = "transformer"
644
+ _no_split_modules = ["GLMBlock"]
645
+
646
+ def _init_weights(self, module: nn.Module):
647
+ """Initialize the weights."""
648
+ return
649
+
650
+ def get_masks(self, input_ids, past_key_values, padding_mask=None):
651
+ batch_size, seq_length = input_ids.shape
652
+ full_attention_mask = torch.ones(batch_size, seq_length, seq_length, device=input_ids.device)
653
+ full_attention_mask.tril_()
654
+ past_length = 0
655
+ if past_key_values:
656
+ past_length = past_key_values[0][0].shape[2]
657
+ if past_length:
658
+ full_attention_mask = torch.cat((torch.ones(batch_size, seq_length, past_length,
659
+ device=input_ids.device), full_attention_mask), dim=-1)
660
+ if padding_mask is not None:
661
+ full_attention_mask = full_attention_mask * padding_mask.unsqueeze(1)
662
+ if not past_length and padding_mask is not None:
663
+ full_attention_mask -= padding_mask.unsqueeze(-1) - 1
664
+ full_attention_mask = (full_attention_mask < 0.5).bool()
665
+ full_attention_mask.unsqueeze_(1)
666
+ return full_attention_mask
667
+
668
+ def get_position_ids(self, input_ids, device):
669
+ batch_size, seq_length = input_ids.shape
670
+ position_ids = torch.arange(seq_length, dtype=torch.long, device=device).unsqueeze(0).repeat(batch_size, 1)
671
+ return position_ids
672
+
673
+ def gradient_checkpointing_enable(self, gradient_checkpointing_kwargs=None):
674
+ if not self.supports_gradient_checkpointing:
675
+ raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.")
676
+
677
+
678
+ class Embedding(torch.nn.Module):
679
+ """Language model embeddings."""
680
+
681
+ def __init__(self, config: ChatGLMConfig, device=None):
682
+ super(Embedding, self).__init__()
683
+
684
+ self.hidden_size = config.hidden_size
685
+ # Word embeddings (parallel).
686
+ self.word_embeddings = nn.Embedding(
687
+ config.padded_vocab_size,
688
+ self.hidden_size,
689
+ dtype=config.torch_dtype,
690
+ device=device
691
+ )
692
+ self.fp32_residual_connection = config.fp32_residual_connection
693
+
694
+ def forward(self, input_ids):
695
+ # Embeddings.
696
+ words_embeddings = self.word_embeddings(input_ids)
697
+ embeddings = words_embeddings
698
+ # If the input flag for fp32 residual connection is set, convert for float.
699
+ if self.fp32_residual_connection:
700
+ embeddings = embeddings.float()
701
+ return embeddings
702
+
703
+
704
+ class ChatGLMModel(ChatGLMPreTrainedModel):
705
+ def __init__(self, config: ChatGLMConfig, device=None, empty_init=True):
706
+ super().__init__(config)
707
+ if empty_init:
708
+ init_method = skip_init
709
+ else:
710
+ init_method = default_init
711
+ init_kwargs = {}
712
+ if device is not None:
713
+ init_kwargs["device"] = device
714
+ self.embedding = init_method(Embedding, config, **init_kwargs)
715
+ self.num_layers = config.num_layers
716
+ self.multi_query_group_num = config.multi_query_group_num
717
+ self.kv_channels = config.kv_channels
718
+
719
+ # Rotary positional embeddings
720
+ self.seq_length = config.seq_length
721
+ rotary_dim = (
722
+ config.hidden_size // config.num_attention_heads if config.kv_channels is None else config.kv_channels
723
+ )
724
+
725
+ self.rotary_pos_emb = RotaryEmbedding(rotary_dim // 2, rope_ratio=config.rope_ratio, original_impl=config.original_rope,
726
+ device=device, dtype=config.torch_dtype)
727
+ self.encoder = init_method(GLMTransformer, config, **init_kwargs)
728
+ self.output_layer = init_method(nn.Linear, config.hidden_size, config.padded_vocab_size, bias=False,
729
+ dtype=config.torch_dtype, **init_kwargs)
730
+
731
+ def get_input_embeddings(self):
732
+ return self.embedding.word_embeddings
733
+
734
+ def set_input_embeddings(self, value):
735
+ self.embedding.word_embeddings = value
736
+
737
+ def forward(
738
+ self,
739
+ input_ids,
740
+ position_ids: Optional[torch.Tensor] = None,
741
+ attention_mask: Optional[torch.BoolTensor] = None,
742
+ full_attention_mask: Optional[torch.BoolTensor] = None,
743
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
744
+ inputs_embeds: Optional[torch.Tensor] = None,
745
+ use_cache: Optional[bool] = None,
746
+ output_hidden_states: Optional[bool] = None,
747
+ return_dict: Optional[bool] = None,
748
+ ):
749
+ output_hidden_states = (
750
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
751
+ )
752
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
753
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
754
+
755
+ batch_size, seq_length = input_ids.shape
756
+
757
+ if inputs_embeds is None:
758
+ inputs_embeds = self.embedding(input_ids)
759
+
760
+ if full_attention_mask is None:
761
+ if (attention_mask is not None and not attention_mask.all()) or (past_key_values and seq_length != 1):
762
+ full_attention_mask = self.get_masks(input_ids, past_key_values, padding_mask=attention_mask)
763
+
764
+ # Rotary positional embeddings
765
+ rotary_pos_emb = self.rotary_pos_emb(self.seq_length)
766
+ if position_ids is not None:
767
+ rotary_pos_emb = rotary_pos_emb[position_ids]
768
+ else:
769
+ rotary_pos_emb = rotary_pos_emb[None, :seq_length]
770
+
771
+ # Run encoder.
772
+ hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder(
773
+ inputs_embeds, full_attention_mask, rotary_pos_emb=rotary_pos_emb,
774
+ kv_caches=past_key_values, use_cache=use_cache, output_hidden_states=output_hidden_states
775
+ )
776
+
777
+ if not return_dict:
778
+ return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)
779
+
780
+ return BaseModelOutputWithPast(
781
+ last_hidden_state=hidden_states,
782
+ past_key_values=presents,
783
+ hidden_states=all_hidden_states,
784
+ attentions=all_self_attentions,
785
+ )
786
+
787
+
788
+ class ChatGLMForConditionalGeneration(ChatGLMPreTrainedModel):
789
+ def __init__(self, config: ChatGLMConfig, empty_init=True, device=None):
790
+ super().__init__(config)
791
+
792
+ self.max_sequence_length = config.max_length
793
+ self.transformer = ChatGLMModel(config, empty_init=empty_init, device=device)
794
+ self.config = config
795
+
796
+ def _update_model_kwargs_for_generation(
797
+ self,
798
+ outputs: ModelOutput,
799
+ model_kwargs: Dict[str, Any],
800
+ is_encoder_decoder: bool = False,
801
+ standardize_cache_format: bool = False,
802
+ ) -> Dict[str, Any]:
803
+ # update past_key_values
804
+ model_kwargs["past_key_values"] = self._extract_past_from_model_output(
805
+ outputs, standardize_cache_format=standardize_cache_format
806
+ )
807
+
808
+ # update attention mask
809
+ if "attention_mask" in model_kwargs:
810
+ attention_mask = model_kwargs["attention_mask"]
811
+ model_kwargs["attention_mask"] = torch.cat(
812
+ [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
813
+ )
814
+
815
+ # update position ids
816
+ if "position_ids" in model_kwargs:
817
+ position_ids = model_kwargs["position_ids"]
818
+ new_position_id = position_ids[..., -1:].clone()
819
+ new_position_id += 1
820
+ model_kwargs["position_ids"] = torch.cat(
821
+ [position_ids, new_position_id], dim=-1
822
+ )
823
+
824
+ model_kwargs["is_first_forward"] = False
825
+ return model_kwargs
826
+
827
+ def prepare_inputs_for_generation(
828
+ self,
829
+ input_ids: torch.LongTensor,
830
+ past_key_values: Optional[torch.Tensor] = None,
831
+ attention_mask: Optional[torch.Tensor] = None,
832
+ position_ids: Optional[torch.Tensor] = None,
833
+ use_cache: Optional[bool] = None,
834
+ is_first_forward: bool = True,
835
+ **kwargs
836
+ ) -> dict:
837
+ # only last token for input_ids if past is not None
838
+ if position_ids is None:
839
+ position_ids = self.get_position_ids(input_ids, device=input_ids.device)
840
+ if not is_first_forward:
841
+ if past_key_values is not None:
842
+ position_ids = position_ids[..., -1:]
843
+ input_ids = input_ids[:, -1:]
844
+ return {
845
+ "input_ids": input_ids,
846
+ "past_key_values": past_key_values,
847
+ "position_ids": position_ids,
848
+ "attention_mask": attention_mask,
849
+ "return_last_logit": True,
850
+ "use_cache": use_cache
851
+ }
852
+
853
+ def forward(
854
+ self,
855
+ input_ids: Optional[torch.Tensor] = None,
856
+ position_ids: Optional[torch.Tensor] = None,
857
+ attention_mask: Optional[torch.Tensor] = None,
858
+ past_key_values: Optional[Tuple[torch.FloatTensor]] = None,
859
+ inputs_embeds: Optional[torch.Tensor] = None,
860
+ labels: Optional[torch.Tensor] = None,
861
+ use_cache: Optional[bool] = None,
862
+ output_attentions: Optional[bool] = None,
863
+ output_hidden_states: Optional[bool] = None,
864
+ return_dict: Optional[bool] = None,
865
+ return_last_logit: Optional[bool] = False,
866
+ ):
867
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
868
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
869
+
870
+ transformer_outputs = self.transformer(
871
+ input_ids=input_ids,
872
+ position_ids=position_ids,
873
+ attention_mask=attention_mask,
874
+ past_key_values=past_key_values,
875
+ inputs_embeds=inputs_embeds,
876
+ use_cache=use_cache,
877
+ output_hidden_states=output_hidden_states,
878
+ return_dict=return_dict,
879
+ )
880
+
881
+ hidden_states = transformer_outputs[0]
882
+ if return_last_logit:
883
+ hidden_states = hidden_states[:, -1:]
884
+ lm_logits = self.transformer.output_layer(hidden_states)
885
+
886
+ loss = None
887
+ if labels is not None:
888
+ lm_logits = lm_logits.to(torch.float32)
889
+
890
+ # Shift so that tokens < n predict n
891
+ shift_logits = lm_logits[..., :-1, :].contiguous()
892
+ shift_labels = labels[..., 1:].contiguous()
893
+ # Flatten the tokens
894
+ loss_fct = CrossEntropyLoss(ignore_index=-100)
895
+ loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
896
+
897
+ lm_logits = lm_logits.to(hidden_states.dtype)
898
+ loss = loss.to(hidden_states.dtype)
899
+
900
+ if not return_dict:
901
+ output = (lm_logits,) + transformer_outputs[1:]
902
+ return ((loss,) + output) if loss is not None else output
903
+
904
+ return CausalLMOutputWithPast(
905
+ loss=loss,
906
+ logits=lm_logits,
907
+ past_key_values=transformer_outputs.past_key_values,
908
+ hidden_states=transformer_outputs.hidden_states,
909
+ attentions=transformer_outputs.attentions,
910
+ )
911
+
912
+ @staticmethod
913
+ def _reorder_cache(
914
+ past: Tuple[Tuple[torch.Tensor, torch.Tensor], ...], beam_idx: torch.LongTensor
915
+ ) -> Tuple[Tuple[torch.Tensor, torch.Tensor], ...]:
916
+ """
917
+ This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
918
+ [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
919
+ beam_idx at every generation step.
920
+
921
+ Output shares the same memory storage as `past`.
922
+ """
923
+ return tuple(
924
+ (
925
+ layer_past[0].index_select(0, beam_idx.to(layer_past[0].device)),
926
+ layer_past[1].index_select(0, beam_idx.to(layer_past[1].device)),
927
+ )
928
+ for layer_past in past
929
+ )
930
+
931
+ def process_response(self, output, history):
932
+ content = ""
933
+ history = deepcopy(history)
934
+ for response in output.split("<|assistant|>"):
935
+ if "\n" in response:
936
+ metadata, content = response.split("\n", maxsplit=1)
937
+ else:
938
+ metadata, content = "", response
939
+ if not metadata.strip():
940
+ content = content.strip()
941
+ history.append({"role": "assistant", "metadata": metadata, "content": content})
942
+ content = content.replace("[[训练时间]]", "2023年")
943
+ else:
944
+ history.append({"role": "assistant", "metadata": metadata, "content": content})
945
+ if history[0]["role"] == "system" and "tools" in history[0]:
946
+ parameters = json.loads(content)
947
+ content = {"name": metadata.strip(), "parameters": parameters}
948
+ else:
949
+ content = {"name": metadata.strip(), "content": content}
950
+ return content, history
951
+
952
+ @torch.inference_mode()
953
+ def chat(self, tokenizer, query: str, history: List[Dict] = None, role: str = "user",
954
+ max_length: int = 8192, num_beams=1, do_sample=True, top_p=0.8, temperature=0.8, logits_processor=None,
955
+ **kwargs):
956
+ if history is None:
957
+ history = []
958
+ if logits_processor is None:
959
+ logits_processor = LogitsProcessorList()
960
+ logits_processor.append(InvalidScoreLogitsProcessor())
961
+ gen_kwargs = {"max_length": max_length, "num_beams": num_beams, "do_sample": do_sample, "top_p": top_p,
962
+ "temperature": temperature, "logits_processor": logits_processor, **kwargs}
963
+ history.append({"role": role, "content": query})
964
+ inputs = tokenizer.apply_chat_template(history, add_generation_prompt=True, tokenize=True,
965
+ return_tensors="pt", return_dict=True)
966
+ inputs = inputs.to(self.device)
967
+ eos_token_id = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|user|>"),
968
+ tokenizer.convert_tokens_to_ids("<|observation|>")]
969
+ outputs = self.generate(**inputs, **gen_kwargs, eos_token_id=eos_token_id)
970
+ outputs = outputs.tolist()[0][len(inputs["input_ids"][0]):-1]
971
+ response = tokenizer.decode(outputs)
972
+ response, history = self.process_response(response, history)
973
+ return response, history
974
+
975
+ @torch.inference_mode()
976
+ def stream_chat(self, tokenizer, query: str, history: List[Dict] = None, role: str = "user",
977
+ past_key_values=None, max_length: int = 8192, do_sample=True, top_p=0.8, temperature=0.8,
978
+ logits_processor=None, return_past_key_values=False, **kwargs):
979
+ if history is None:
980
+ history = []
981
+ if logits_processor is None:
982
+ logits_processor = LogitsProcessorList()
983
+ logits_processor.append(InvalidScoreLogitsProcessor())
984
+ eos_token_id = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|user|>"),
985
+ tokenizer.convert_tokens_to_ids("<|observation|>")]
986
+ gen_kwargs = {"max_length": max_length, "do_sample": do_sample, "top_p": top_p,
987
+ "temperature": temperature, "logits_processor": logits_processor, **kwargs}
988
+ if past_key_values is None:
989
+ inputs = tokenizer.apply_chat_template(history + [{"role": role, "content": query}],
990
+ add_generation_prompt=True, tokenize=True, return_tensors="pt",
991
+ return_dict=True)
992
+ else:
993
+ inputs = tokenizer.apply_chat_template([{"role": role, "content": query}], add_special_tokens=False,
994
+ add_generation_prompt=True, tokenize=True, return_tensors="pt",
995
+ return_dict=True)
996
+ inputs = inputs.to(self.device)
997
+ if past_key_values is not None:
998
+ past_length = past_key_values[0][0].shape[2]
999
+ inputs.position_ids += past_length
1000
+ attention_mask = inputs.attention_mask
1001
+ attention_mask = torch.cat((attention_mask.new_ones(1, past_length), attention_mask), dim=1)
1002
+ inputs['attention_mask'] = attention_mask
1003
+ history.append({"role": role, "content": query})
1004
+ for outputs in self.stream_generate(**inputs, past_key_values=past_key_values,
1005
+ eos_token_id=eos_token_id, return_past_key_values=return_past_key_values,
1006
+ **gen_kwargs):
1007
+ if return_past_key_values:
1008
+ outputs, past_key_values = outputs
1009
+ outputs = outputs.tolist()[0][len(inputs["input_ids"][0]):-1]
1010
+ response = tokenizer.decode(outputs)
1011
+ if response and response[-1] != "�":
1012
+ response, new_history = self.process_response(response, history)
1013
+ if return_past_key_values:
1014
+ yield response, new_history, past_key_values
1015
+ else:
1016
+ yield response, new_history
1017
+
1018
+ @torch.inference_mode()
1019
+ def stream_generate(
1020
+ self,
1021
+ input_ids,
1022
+ generation_config: Optional[GenerationConfig] = None,
1023
+ logits_processor: Optional[LogitsProcessorList] = None,
1024
+ stopping_criteria: Optional[StoppingCriteriaList] = None,
1025
+ prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
1026
+ return_past_key_values=False,
1027
+ **kwargs,
1028
+ ):
1029
+ batch_size, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
1030
+
1031
+ if generation_config is None:
1032
+ generation_config = self.generation_config
1033
+ generation_config = copy.deepcopy(generation_config)
1034
+ model_kwargs = generation_config.update(**kwargs)
1035
+ model_kwargs["use_cache"] = generation_config.use_cache
1036
+ bos_token_id, eos_token_id = generation_config.bos_token_id, generation_config.eos_token_id
1037
+
1038
+ if isinstance(eos_token_id, int):
1039
+ eos_token_id = [eos_token_id]
1040
+ eos_token_id_tensor = torch.tensor(eos_token_id).to(input_ids.device) if eos_token_id is not None else None
1041
+
1042
+ has_default_max_length = kwargs.get("max_length") is None and generation_config.max_length is not None
1043
+ if has_default_max_length and generation_config.max_new_tokens is None:
1044
+ warnings.warn(
1045
+ f"Using `max_length`'s default ({generation_config.max_length}) to control the generation length. "
1046
+ "This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we"
1047
+ " recommend using `max_new_tokens` to control the maximum length of the generation.",
1048
+ UserWarning,
1049
+ )
1050
+ elif generation_config.max_new_tokens is not None:
1051
+ generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length
1052
+ if not has_default_max_length:
1053
+ warnings.warn(
1054
+ f"Both `max_new_tokens` (={generation_config.max_new_tokens}) and `max_length`(="
1055
+ f"{generation_config.max_length}) seem to have been set. `max_new_tokens` will take precedence. "
1056
+ "Please refer to the documentation for more information. "
1057
+ "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)",
1058
+ UserWarning,
1059
+ )
1060
+
1061
+ if input_ids_seq_length >= generation_config.max_length:
1062
+ input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
1063
+ logger.warning(
1064
+ f"Input length of {input_ids_string} is {input_ids_seq_length}, but `max_length` is set to"
1065
+ f" {generation_config.max_length}. This can lead to unexpected behavior. You should consider"
1066
+ " increasing `max_new_tokens`."
1067
+ )
1068
+
1069
+ # 2. Set generation parameters if not already defined
1070
+ logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
1071
+ stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
1072
+
1073
+ logits_processor = self._get_logits_processor(
1074
+ generation_config=generation_config,
1075
+ input_ids_seq_length=input_ids_seq_length,
1076
+ encoder_input_ids=input_ids,
1077
+ prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
1078
+ logits_processor=logits_processor,
1079
+ )
1080
+
1081
+ stopping_criteria = self._get_stopping_criteria(
1082
+ generation_config=generation_config, stopping_criteria=stopping_criteria
1083
+ )
1084
+ logits_warper = self._get_logits_warper(generation_config)
1085
+
1086
+ unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
1087
+ scores = None
1088
+ while True:
1089
+ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
1090
+ # forward pass to get next token
1091
+ outputs = self(
1092
+ **model_inputs,
1093
+ return_dict=True,
1094
+ output_attentions=False,
1095
+ output_hidden_states=False,
1096
+ )
1097
+
1098
+ next_token_logits = outputs.logits[:, -1, :]
1099
+
1100
+ # pre-process distribution
1101
+ next_token_scores = logits_processor(input_ids, next_token_logits)
1102
+ next_token_scores = logits_warper(input_ids, next_token_scores)
1103
+
1104
+ # sample
1105
+ probs = nn.functional.softmax(next_token_scores, dim=-1)
1106
+ if generation_config.do_sample:
1107
+ next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
1108
+ else:
1109
+ next_tokens = torch.argmax(probs, dim=-1)
1110
+ # update generated ids, model inputs, and length for next step
1111
+ input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
1112
+ model_kwargs = self._update_model_kwargs_for_generation(
1113
+ outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
1114
+ )
1115
+ unfinished_sequences = unfinished_sequences.mul(
1116
+ next_tokens.tile(eos_token_id_tensor.shape[0], 1).ne(eos_token_id_tensor.unsqueeze(1)).prod(dim=0)
1117
+ )
1118
+ if return_past_key_values:
1119
+ yield input_ids, outputs.past_key_values
1120
+ else:
1121
+ yield input_ids
1122
+ # stop when each sentence is finished, or if we exceed the maximum length
1123
+ if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
1124
+ break
1125
+
1126
+
1127
+ class ChatGLMForSequenceClassification(ChatGLMPreTrainedModel):
1128
+ def __init__(self, config: ChatGLMConfig, empty_init=True, device=None):
1129
+ super().__init__(config)
1130
+
1131
+ self.num_labels = config.num_labels
1132
+ self.transformer = ChatGLMModel(config, empty_init=empty_init, device=device)
1133
+
1134
+ self.classifier_head = nn.Linear(config.hidden_size, config.num_labels, bias=True, dtype=torch.half)
1135
+ if config.classifier_dropout is not None:
1136
+ self.dropout = nn.Dropout(config.classifier_dropout)
1137
+ else:
1138
+ self.dropout = None
1139
+ self.config = config
1140
+
1141
+ def forward(
1142
+ self,
1143
+ input_ids: Optional[torch.LongTensor] = None,
1144
+ position_ids: Optional[torch.LongTensor] = None,
1145
+ attention_mask: Optional[torch.Tensor] = None,
1146
+ full_attention_mask: Optional[torch.Tensor] = None,
1147
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
1148
+ inputs_embeds: Optional[torch.LongTensor] = None,
1149
+ labels: Optional[torch.LongTensor] = None,
1150
+ use_cache: Optional[bool] = None,
1151
+ output_hidden_states: Optional[bool] = None,
1152
+ return_dict: Optional[bool] = None,
1153
+ ) -> Union[Tuple[torch.Tensor, ...], SequenceClassifierOutputWithPast]:
1154
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1155
+
1156
+ transformer_outputs = self.transformer(
1157
+ input_ids=input_ids,
1158
+ position_ids=position_ids,
1159
+ attention_mask=attention_mask,
1160
+ full_attention_mask=full_attention_mask,
1161
+ past_key_values=past_key_values,
1162
+ inputs_embeds=inputs_embeds,
1163
+ use_cache=use_cache,
1164
+ output_hidden_states=output_hidden_states,
1165
+ return_dict=return_dict,
1166
+ )
1167
+
1168
+ hidden_states = transformer_outputs[0]
1169
+ pooled_hidden_states = hidden_states[-1]
1170
+ if self.dropout is not None:
1171
+ pooled_hidden_states = self.dropout(pooled_hidden_states)
1172
+ logits = self.classifier_head(pooled_hidden_states)
1173
+
1174
+ loss = None
1175
+ if labels is not None:
1176
+ if self.config.problem_type is None:
1177
+ if self.num_labels == 1:
1178
+ self.config.problem_type = "regression"
1179
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
1180
+ self.config.problem_type = "single_label_classification"
1181
+ else:
1182
+ self.config.problem_type = "multi_label_classification"
1183
+
1184
+ if self.config.problem_type == "regression":
1185
+ loss_fct = MSELoss()
1186
+ if self.num_labels == 1:
1187
+ loss = loss_fct(logits.squeeze().float(), labels.squeeze())
1188
+ else:
1189
+ loss = loss_fct(logits.float(), labels)
1190
+ elif self.config.problem_type == "single_label_classification":
1191
+ loss_fct = CrossEntropyLoss()
1192
+ loss = loss_fct(logits.view(-1, self.num_labels).float(), labels.view(-1))
1193
+ elif self.config.problem_type == "multi_label_classification":
1194
+ loss_fct = BCEWithLogitsLoss()
1195
+ loss = loss_fct(logits.float(), labels.view(-1, self.num_labels))
1196
+
1197
+ if not return_dict:
1198
+ output = (logits,) + transformer_outputs[1:]
1199
+ return ((loss,) + output) if loss is not None else output
1200
+
1201
+ return SequenceClassifierOutputWithPast(
1202
+ loss=loss,
1203
+ logits=logits,
1204
+ past_key_values=transformer_outputs.past_key_values,
1205
+ hidden_states=transformer_outputs.hidden_states,
1206
+ attentions=transformer_outputs.attentions,
1207
+ )
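For orientation, here is a minimal usage sketch of the generation entry points defined above (ChatGLMForConditionalGeneration.chat and stream_chat). The repository id, dtype, and prompts are illustrative assumptions, not part of this commit, and the sketch assumes the repo's config maps AutoModelForCausalLM to ChatGLMForConditionalGeneration.

# Hypothetical usage sketch; "THUDM/glm-4-9b-chat", bfloat16 and the prompts are
# assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-4-9b-chat", torch_dtype=torch.bfloat16, trust_remote_code=True
).eval()

# chat() applies the chat template, calls generate() with the role tokens
# (<|user|>, <|observation|>) as extra EOS ids, and returns (response, history).
response, history = model.chat(tokenizer, "Hello", history=[], max_length=256)
print(response)

# stream_chat() yields (partial_response, history) pairs as tokens are sampled.
for partial_response, new_history in model.stream_chat(tokenizer, "Tell me a joke", history=history):
    print(partial_response, end="\r")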
tokenization_chatglm.py ADDED
@@ -0,0 +1,323 @@
1
+ import regex as re
2
+ import base64
3
+ import os
4
+ import json
5
+ import tiktoken
6
+ from torch import TensorType
7
+ from typing import List, Optional, Union, Dict, Any
8
+ from transformers import PreTrainedTokenizer
9
+ from transformers.utils import logging, PaddingStrategy
10
+ from transformers.tokenization_utils_base import EncodedInput, BatchEncoding
11
+
12
+
13
+ class ChatGLM4Tokenizer(PreTrainedTokenizer):
14
+ vocab_files_names = {"vocab_file": "tokenizer.model"}
15
+ model_input_names = ["input_ids", "attention_mask", "position_ids"]
16
+
17
+ def __init__(
18
+ self,
19
+ vocab_file,
20
+ padding_side="left",
21
+ clean_up_tokenization_spaces=False,
22
+ encode_special_tokens=False,
23
+ **kwargs
24
+ ):
25
+ self.name = "GLM4Tokenizer"
26
+ self.vocab_file = vocab_file
27
+ pat_str = "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
28
+ self.pat_str = re.compile(pat_str)
29
+ self.encode_special_tokens = encode_special_tokens
30
+
31
+ mergeable_ranks = {}
32
+ with open(vocab_file) as f:
33
+ for line in f:
34
+ token, rank = line.strip().split()
35
+ rank = int(rank)
36
+ token = base64.b64decode(token)
37
+ mergeable_ranks[token] = rank
38
+
39
+ self.mergeable_ranks = mergeable_ranks
40
+
41
+ self.tokenizer = tiktoken.Encoding(
42
+ name="my_tokenizer",
43
+ pat_str=pat_str,
44
+ mergeable_ranks=mergeable_ranks,
45
+ special_tokens={}
46
+ )
47
+ self.decoder = {rank: token for token, rank in mergeable_ranks.items()}
48
+ self.n_words = len(self.decoder)
49
+
50
+ super().__init__(
51
+ padding_side=padding_side,
52
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
53
+ **kwargs
54
+ )
55
+
56
+ @property
57
+ def vocab_size(self):
58
+ return self.n_words
59
+
60
+ def get_vocab(self):
61
+ """ Returns vocab as a dict """
62
+ vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
63
+ vocab.update(self.added_tokens_encoder)
64
+ return vocab
65
+
66
+ def convert_tokens_to_string(self, tokens: List[Union[bytes, str]]) -> str:
67
+ """
68
+ Converts a sequence of tokens in a single string.
69
+ """
70
+ text = ""
71
+ temp = b""
72
+ for t in tokens:
73
+ if isinstance(t, str):
74
+ if temp:
75
+ text += temp.decode("utf-8", errors="replace")
76
+ temp = b""
77
+ text += t
78
+ elif isinstance(t, bytes):
79
+ temp += t
80
+ else:
81
+ raise TypeError("token should only be of type bytes or str")
82
+ if temp:
83
+ text += temp.decode("utf-8", errors="replace")
84
+ return text
85
+
86
+ def _tokenize(self, text, **kwargs):
87
+ tokens = []
88
+ ids = self.tokenizer.encode(text)
89
+ for t in ids:
90
+ tokens.append(self.decoder[t])
91
+ return tokens
92
+
93
+ def _convert_token_to_id(self, token):
94
+ """ Converts a token (str) in an id using the vocab. """
95
+ return self.mergeable_ranks[token]
96
+
97
+ def _convert_id_to_token(self, index):
98
+ """Converts an index (integer) in a token (str) using the vocab."""
99
+ return self.decoder.get(index, "")
100
+
101
+ def save_vocabulary(self, save_directory, filename_prefix=None):
102
+ """
103
+ Save the vocabulary and special tokens file to a directory.
104
+
105
+ Args:
106
+ save_directory (`str`):
107
+ The directory in which to save the vocabulary.
108
+ filename_prefix (`str`, *optional*):
109
+ An optional prefix to add to the names of the saved files.
110
+
111
+ Returns:
112
+ `Tuple(str)`: Paths to the files saved.
113
+ """
114
+ if os.path.isdir(save_directory):
115
+ vocab_file = os.path.join(
116
+ save_directory, self.vocab_files_names["vocab_file"]
117
+ )
118
+ else:
119
+ vocab_file = save_directory
120
+
121
+ with open(self.vocab_file, 'rb') as fin:
122
+ proto_str = fin.read()
123
+
124
+ with open(vocab_file, "wb") as writer:
125
+ writer.write(proto_str)
126
+
127
+ return (vocab_file,)
128
+
129
+ def get_prefix_tokens(self):
130
+ prefix_tokens = [self.convert_tokens_to_ids("[gMASK]"), self.convert_tokens_to_ids("<sop>")]
131
+ return prefix_tokens
132
+
133
+ def build_single_message(self, role, metadata, message, tokenize=True):
134
+ assert role in ["system", "user", "assistant", "observation"], role
135
+ if tokenize:
136
+ role_tokens = [self.convert_tokens_to_ids(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n",
137
+ disallowed_special=())
138
+ message_tokens = self.tokenizer.encode(message, disallowed_special=())
139
+ tokens = role_tokens + message_tokens
140
+ return tokens
141
+ else:
142
+ return str(f"<|{role}|>{metadata}\n{message}")
143
+
144
+ def apply_chat_template(
145
+ self,
146
+ conversation: Union[List[Dict[str, str]], List[List[Dict[str, str]]], "Conversation"],
147
+ add_generation_prompt: bool = False,
148
+ tokenize: bool = True,
149
+ padding: bool = False,
150
+ truncation: bool = False,
151
+ max_length: Optional[int] = None,
152
+ return_tensors: Optional[Union[str, TensorType]] = None,
153
+ return_dict: bool = False,
154
+ tokenizer_kwargs: Optional[Dict[str, Any]] = None,
155
+ add_special_tokens: bool = True,
156
+ **kwargs,
157
+ ) -> Union[str, List[int], List[str], List[List[int]], BatchEncoding]:
158
+
159
+ if return_dict and not tokenize:
160
+ raise ValueError(
161
+ "`return_dict=True` is incompatible with `tokenize=False`, because there is no dict "
162
+ "of tokenizer outputs to return."
163
+ )
164
+
165
+ def handle_single_conversation(conversation):
166
+ input_ids = self.get_prefix_tokens() if add_special_tokens else []
167
+ input_message = "[gMASK]<sop>" if add_special_tokens else ""
168
+ for item in conversation:
169
+ if item.get("tools"):
170
+ tools = item["tools"]
171
+ content = "你是一个名为 GLM-4 的人工智能助手。你是基于智谱AI训练的语言模型 GLM-4 模型开发的,你的任务是针对用户的问题和要求提供适当的答复和支持。"
172
+ for tool in tools:
173
+ if tool["type"] == "function":
174
+ function = tool["function"]
175
+ content += f"\n\n## {function['name']}\n\n{json.dumps(function, ensure_ascii=False, indent=4)}"
176
+ content += "\n在调用上述函数时,请使用 Json 格式表示调用的参数。"
177
+ elif tool["type"] == "python":
178
+ content += "\n\n## python\n\n当你向 `python` 发送包含 Python 代码的消息时,该代码将会在一个有状态的 Jupyter notebook 环境中执行。\n`python` 返回代码执行的输出,或在执行 60 秒后返回超时。\n`/mnt/data` 将会持久化存储你的文件。在此会话中,`python` 无法访问互联网。不要使用 `python` 进行任何网络请求或者在线 API 调用,这些在线内容的访问将不会成功。"
179
+ elif tool["type"] == "simple_browser":
180
+ content += "\n\n## simple_browser\n\n你可以使用 `simple_browser` 工具。该工具支持以下函数:\n`search(query: str, recency_days: int)`:使用搜索引擎进行查询并显示结果,可以使用 `recency_days` 参数控制搜索内容的时效性。\n`mclick(ids: list[int])`:获取一系列指定 id 的页面内容。每次调用时,须选择3-10个页面。选择多个角度的页面,同时尽可能选择可信任的信息来源。考虑到部分页面是无法加载的,你也可以多打开一些可能有用的页面而不用担心内容过多。\n`open_url(url: str)`:打开指定的 URL。\n\n使用 `【{引用 id}†{引用文本}】` 来引用内容。\n\n操作步骤:1. 使用 `search` 来获得信息列表; 2. 使用 `mclick` 来获取指定 ID 页面的内容; 3. 根据获得的内容进行回复。在回复中应当引用信息来源。\n 如果用户提供了 URL,也可以用 `open_url` 直接打开页面。\n如果初次搜索结果没有找到合适的信息,也可以再次使用 `search` 进行搜索。"
181
+ elif tool["type"] == "cogview":
182
+ content += "\n\n## cogview\n\n如果用户的请求中包含了对图像的描述,你可以使用 `cogview` 来生成图像并展示给用户。你需要向 `cogview` 发送图像描述,规则:\n- 发送给 `cogview` 的消息必须使用英语。用户的中文描述必须完全翻译为英语。\n- 应当尽可能详细地描述图像生成的需求,需求描述约 100 英文单词。\n- 保持用户原始描述的意图。不要虚构内容或者没见过的人物。\n- 如无特殊说明,所在地为中国,持有中国立场并遵循中国社会主义价值观。"
183
+ else:
184
+ raise NotImplementedError(f"Unknown tool type {tool['type']}")
185
+ input = self.build_single_message("system", "", content, tokenize=tokenize)
186
+ if tokenize:
187
+ input_ids.extend(input)
188
+ else:
189
+ input_message += input
190
+ if item["content"]:
191
+ input = self.build_single_message(
192
+ item["role"],
193
+ item.get("metadata", ""),
194
+ item["content"],
195
+ tokenize=tokenize
196
+ )
197
+ if tokenize:
198
+ input_ids.extend(input)
199
+ else:
200
+ input_message += input
201
+ if add_generation_prompt:
202
+ if tokenize:
203
+ input_ids.extend([self.convert_tokens_to_ids("<|assistant|>")])
204
+ else:
205
+ input_message += "<|assistant|>"
206
+
207
+ return input_ids if tokenize else input_message
208
+
209
+ # Main logic to handle different conversation formats
210
+ if isinstance(conversation, list) and all(isinstance(i, dict) for i in conversation):
211
+ result = handle_single_conversation(conversation)
212
+ elif isinstance(conversation, list) and all(isinstance(i, list) for i in conversation):
213
+ result = [handle_single_conversation(c) for c in conversation]
214
+ elif hasattr(conversation, "messages"):
215
+ result = handle_single_conversation(conversation.messages)
216
+ else:
217
+ raise ValueError("Invalid conversation format")
218
+
219
+ if tokenize:
220
+ output = self.batch_encode_plus(
221
+ [result] if isinstance(result[0], int) else result,
222
+ padding=padding,
223
+ truncation=truncation,
224
+ max_length=max_length,
225
+ return_tensors=return_tensors,
226
+ is_split_into_words=True,
227
+ add_special_tokens=False
228
+ )
229
+ if return_dict:
230
+ return output
231
+ else:
232
+ return output["input_ids"]
233
+ else:
234
+ return result
235
+
236
+
237
+ def build_inputs_with_special_tokens(
238
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
239
+ ) -> List[int]:
240
+ """
241
+ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
242
+ adding special tokens. A GLM-4 sequence has the following format:
243
+
244
+ - single sequence: `[gMASK]<sop> A`
245
+ - pair of sequences: `[gMASK]<sop> A B <eos>`
246
+
247
+ Args:
248
+ token_ids_0 (`List[int]`):
249
+ List of IDs to which the special tokens will be added.
250
+ token_ids_1 (`List[int]`, *optional*):
251
+ Optional second list of IDs for sequence pairs.
252
+
253
+ Returns:
254
+ `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
255
+ """
256
+ prefix_tokens = self.get_prefix_tokens()
257
+ token_ids_0 = prefix_tokens + token_ids_0
258
+ if token_ids_1 is not None:
259
+ token_ids_0 = token_ids_0 + token_ids_1 + [self.convert_tokens_to_ids("<eos>")]
260
+ return token_ids_0
261
+
262
+ def _pad(
263
+ self,
264
+ encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
265
+ max_length: Optional[int] = None,
266
+ padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
267
+ pad_to_multiple_of: Optional[int] = None,
268
+ return_attention_mask: Optional[bool] = None,
269
+ ) -> dict:
270
+ """
271
+ Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
272
+
273
+ Args:
274
+ encoded_inputs:
275
+ Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
276
+ max_length: maximum length of the returned list and optionally padding length (see below).
277
+ Will truncate by taking into account the special tokens.
278
+ padding_strategy: PaddingStrategy to use for padding.
279
+
280
+ - PaddingStrategy.LONGEST: Pad to the longest sequence in the batch
281
+ - PaddingStrategy.MAX_LENGTH: Pad to the max length (default)
282
+ - PaddingStrategy.DO_NOT_PAD: Do not pad
283
+ The tokenizer padding sides are defined in self.padding_side:
284
+
285
+ - 'left': pads on the left of the sequences
286
+ - 'right': pads on the right of the sequences
287
+ pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
288
+ This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
289
+ `>= 7.5` (Volta).
290
+ return_attention_mask:
291
+ (optional) Set to False to avoid returning attention mask (default: set to model specifics)
292
+ """
293
+ # Load from model defaults
294
+ assert self.padding_side == "left"
295
+
296
+ required_input = encoded_inputs[self.model_input_names[0]]
297
+ seq_length = len(required_input)
298
+
299
+ if padding_strategy == PaddingStrategy.LONGEST:
300
+ max_length = len(required_input)
301
+
302
+ if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
303
+ max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
304
+
305
+ needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length
306
+
307
+ # Initialize attention mask if not present.
308
+ if "attention_mask" not in encoded_inputs:
309
+ encoded_inputs["attention_mask"] = [1] * seq_length
310
+
311
+ if "position_ids" not in encoded_inputs:
312
+ encoded_inputs["position_ids"] = list(range(seq_length))
313
+
314
+ if needs_to_be_padded:
315
+ difference = max_length - len(required_input)
316
+
317
+ if "attention_mask" in encoded_inputs:
318
+ encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"]
319
+ if "position_ids" in encoded_inputs:
320
+ encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"]
321
+ encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input
322
+
323
+ return encoded_inputs
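A short sketch of how the chat template implemented above is typically exercised. The repository id and messages below are placeholders, not part of this commit.

# Hypothetical example of ChatGLM4Tokenizer.apply_chat_template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat", trust_remote_code=True)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# With tokenize=True the result is the [gMASK]<sop> prefix, one
# "<|role|>metadata\ncontent" segment per message, and a trailing <|assistant|>
# token because add_generation_prompt=True.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
)
print(inputs["input_ids"].shape, inputs["attention_mask"].shape)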
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5a493598071550244b2ee7f26118f3edec2150b9dfa967929a99052ac83fe716
3
+ size 2623634
tokenizer_config.json ADDED
@@ -0,0 +1,133 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoTokenizer": [
4
+ "tokenization_chatglm.ChatGLM4Tokenizer",
5
+ null
6
+ ]
7
+ },
8
+ "added_tokens_decoder": {
9
+ "151329": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false,
15
+ "special": true
16
+ },
17
+ "151330": {
18
+ "content": "[MASK]",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false,
23
+ "special": true
24
+ },
25
+ "151331": {
26
+ "content": "[gMASK]",
27
+ "lstrip": false,
28
+ "normalized": false,
29
+ "rstrip": false,
30
+ "single_word": false,
31
+ "special": true
32
+ },
33
+ "151332": {
34
+ "content": "[sMASK]",
35
+ "lstrip": false,
36
+ "normalized": false,
37
+ "rstrip": false,
38
+ "single_word": false,
39
+ "special": true
40
+ },
41
+ "151333": {
42
+ "content": "<sop>",
43
+ "lstrip": false,
44
+ "normalized": false,
45
+ "rstrip": false,
46
+ "single_word": false,
47
+ "special": true
48
+ },
49
+ "151334": {
50
+ "content": "<eop>",
51
+ "lstrip": false,
52
+ "normalized": false,
53
+ "rstrip": false,
54
+ "single_word": false,
55
+ "special": true
56
+ },
57
+ "151335": {
58
+ "content": "<|system|>",
59
+ "lstrip": false,
60
+ "normalized": false,
61
+ "rstrip": false,
62
+ "single_word": false,
63
+ "special": true
64
+ },
65
+ "151336": {
66
+ "content": "<|user|>",
67
+ "lstrip": false,
68
+ "normalized": false,
69
+ "rstrip": false,
70
+ "single_word": false,
71
+ "special": true
72
+ },
73
+ "151337": {
74
+ "content": "<|assistant|>",
75
+ "lstrip": false,
76
+ "normalized": false,
77
+ "rstrip": false,
78
+ "single_word": false,
79
+ "special": true
80
+ },
81
+ "151338": {
82
+ "content": "<|observation|>",
83
+ "lstrip": false,
84
+ "normalized": false,
85
+ "rstrip": false,
86
+ "single_word": false,
87
+ "special": true
88
+ },
89
+ "151339": {
90
+ "content": "<|begin_of_image|>",
91
+ "lstrip": false,
92
+ "normalized": false,
93
+ "rstrip": false,
94
+ "single_word": false,
95
+ "special": true
96
+ },
97
+ "151340": {
98
+ "content": "<|end_of_image|>",
99
+ "lstrip": false,
100
+ "normalized": false,
101
+ "rstrip": false,
102
+ "single_word": false,
103
+ "special": true
104
+ },
105
+ "151341": {
106
+ "content": "<|begin_of_video|>",
107
+ "lstrip": false,
108
+ "normalized": false,
109
+ "rstrip": false,
110
+ "single_word": false,
111
+ "special": true
112
+ },
113
+ "151342": {
114
+ "content": "<|end_of_video|>",
115
+ "lstrip": false,
116
+ "normalized": false,
117
+ "rstrip": false,
118
+ "single_word": false,
119
+ "special": true
120
+ }
121
+ },
122
+ "additional_special_tokens": ["<|endoftext|>", "[MASK]", "[gMASK]", "[sMASK]", "<sop>", "<eop>", "<|system|>",
123
+ "<|user|>", "<|assistant|>", "<|observation|>", "<|begin_of_image|>", "<|end_of_image|>",
124
+ "<|begin_of_video|>", "<|end_of_video|>"],
125
+ "clean_up_tokenization_spaces": false,
126
+ "do_lower_case": false,
127
+ "eos_token": "<|endoftext|>",
128
+ "pad_token": "<|endoftext|>",
129
+ "model_max_length": 1000000000000000019884624838656,
130
+ "padding_side": "left",
131
+ "remove_space": false,
132
+ "tokenizer_class": "ChatGLM4Tokenizer"
133
+ }
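The added_tokens_decoder entries above define the special token ids that the modeling code uses as stop tokens. A small sketch of recovering them (the repository id is an assumption for illustration):

# Hypothetical check of the stop tokens used by chat()/stream_chat().
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat", trust_remote_code=True)

stop_token_ids = [
    tokenizer.eos_token_id,                              # <|endoftext|>  -> 151329
    tokenizer.convert_tokens_to_ids("<|user|>"),         # 151336
    tokenizer.convert_tokens_to_ids("<|observation|>"),  # 151338
]
print(stop_token_ids)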