silver committed on
Commit
48ad398
0 Parent(s):

Duplicate from silver/chatglm-6b-int4-qe-slim


Co-authored-by: Yinhe Zheng <silver@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,36 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ pytorch_model.bin filter=lfs diff=lfs merge=lfs -text
+ ice_text.model filter=lfs diff=lfs merge=lfs -text
LICENSE.txt ADDED
@@ -0,0 +1,201 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright Zhengxiao Du
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
MODEL_LICENSE.txt ADDED
@@ -0,0 +1,33 @@
+ The GLM-130B License
+
+ 1. Definitions
+
+ “Licensor” means the GLM-130B Model Team that distributes its Software.
+
+ “Software” means the GLM-130B model parameters made available under this license.
+
+ 2. License Grant
+
+ Subject to the terms and conditions of this License, the Licensor hereby grants to you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty-free copyright license to use the Software solely for your non-commercial research purposes.
+
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ 3. Restriction
+
+ You will not use, copy, modify, merge, publish, distribute, reproduce, or create derivative works of the Software, in whole or in part, for any commercial, military, or illegal purposes.
+
+ You will not use the Software for any act that may undermine China's national security and national unity, harm the public interest of society, or infringe upon the rights and interests of human beings.
+
+ 4. Disclaimer
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+ 5. Limitation of Liability
+
+ EXCEPT TO THE EXTENT PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER BASED IN TORT, NEGLIGENCE, CONTRACT, LIABILITY, OR OTHERWISE WILL ANY LICENSOR BE LIABLE TO YOU FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES, OR ANY OTHER COMMERCIAL LOSSES, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+ 6. Dispute Resolution
+
+ This license shall be governed and construed in accordance with the laws of People’s Republic of China. Any dispute arising from or in connection with this License shall be submitted to Haidian District People's Court in Beijing.
+
+ Note that the license is subject to update to a more comprehensive version. For any questions related to the license and copyright, please contact us at glm-130b@googlegroups.com.
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ language:
+ - zh
+ - en
+ tags:
+ - glm
+ - chatglm
+ - thudm
+ duplicated_from: silver/chatglm-6b-int4-qe-slim
+ ---
+ # ChatGLM-6B-INT4-QE-Slim: A Low-VRAM Version of ChatGLM-6B-INT4-QE
+ ## Introduction
+
+ ChatGLM-6B-INT4-QE-Slim is built from [ChatGLM-6B-INT4-QE](https://huggingface.co/THUDM/chatglm-6b-int4-qe) by pruning its vocabulary. ChatGLM-6B uses icetk, whose vocabulary reserves the first 20,000 tokens for images. A text-only model never uses these image tokens, yet their embeddings still have to be loaded for inference and fine-tuning, and every decoding step computes an extra 20K logits, which wastes a noticeable amount of GPU memory. These tokens are therefore pruned away to save memory.
+
+ Apart from the vocabulary, ChatGLM-6B-INT4-QE-Slim is structurally identical to ChatGLM-6B-INT4-QE and performs exactly the same, so it can be treated as a low-VRAM, drop-in equivalent of ChatGLM-6B-INT4-QE.
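+
+ Conceptually, the slimming amounts to dropping the first 20,000 rows from the token-embedding and LM-head weights and shifting token ids down by the same offset. The snippet below is only an illustrative sketch of that idea, not the exact script used to build this checkpoint (this repo additionally quantizes the embeddings); the tensor keys and helper name are assumptions:
+
+ ```python
+ import torch
+
+ NUM_IMAGE_TOKENS = 20_000  # icetk reserves ids 0..19999 for image tokens
+
+ def slim_state_dict(state_dict):
+     """Drop the image-token rows from the embedding/LM-head weights (illustrative sketch)."""
+     slimmed = dict(state_dict)
+     for key in ("transformer.word_embeddings.weight", "lm_head.weight"):  # assumed key names
+         if key in slimmed:
+             # keep only the text-token rows; token ids must be shifted by -20000 at tokenization time
+             slimmed[key] = slimmed[key][NUM_IMAGE_TOKENS:].clone()
+     return slimmed
+
+ # usage sketch:
+ # sd = torch.load("pytorch_model.bin", map_location="cpu")
+ # torch.save(slim_state_dict(sd), "pytorch_model_slim.bin")
+ ```
+
+ This is also why `vocab_size` in this repo's `config.json` is 130528 rather than the original 150528.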
+
+ ChatGLM-6B is an open-source, bilingual (Chinese and English) conversational language model based on the [General Language Model (GLM)](https://github.com/THUDM/GLM) architecture, with 6.2 billion parameters. Combined with model quantization, it can be deployed locally on consumer-grade GPUs (as little as 6GB of VRAM at the INT4 quantization level). ChatGLM-6B uses the same techniques as [ChatGLM](https://chatglm.cn) and is optimized for Chinese Q&A and dialogue. After training on roughly 1T tokens of Chinese and English text, supplemented by supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback, the 6.2-billion-parameter ChatGLM-6B can already generate answers that align well with human preferences.
+
+ ChatGLM-6B-INT4-QE contains the quantized weights of ChatGLM-6B. Specifically, ChatGLM-6B-INT4-QE applies INT4 quantization to the 28 GLM Blocks, the Embedding, and the LM Head of ChatGLM-6B. The quantized weight file is only about 3 GB, so in theory 6 GB of VRAM (or 6 GB of RAM when running on the CPU) is enough for inference, which makes it feasible to run on embedded devices such as a Raspberry Pi.
+
+ When running on the CPU, a CPU kernel is compiled automatically for your hardware. Please make sure GCC and OpenMP are installed (usually already present on Linux; on Windows they must be installed manually) to get the best parallel performance; a CPU example is given after the usage section below.
+
+ ## Dependencies
+
+ ```shell
+ pip install protobuf==3.20.0 transformers==4.26.1 icetk cpm_kernels
+ ```
+
+ ## Usage
+
+ You can chat with the ChatGLM-6B model using the following code:
+
+ ```ipython
+ >>> from transformers import AutoTokenizer, AutoModel
+ >>> tokenizer = AutoTokenizer.from_pretrained("silver/chatglm-6b-int4-qe-slim", trust_remote_code=True)
+ >>> model = AutoModel.from_pretrained("silver/chatglm-6b-int4-qe-slim", trust_remote_code=True).half().cuda()
+ >>> response, history = model.chat(tokenizer, "你好", history=[])
+ >>> print(response)
+ 你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
+ >>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
+ >>> print(response)
+ 晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
+
+ 1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
+ 2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
+ 3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
+ 4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
+ 5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
+ 6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
+
+ 如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
+ ```
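+
+ If you do not have a GPU, the same checkpoint can also run on the CPU (this is where the automatically compiled CPU kernel and the GCC/OpenMP requirement mentioned above come in). The following is only a minimal sketch of that setup, assuming roughly 6GB of free RAM; generation on the CPU is markedly slower than on a GPU:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+
+ tokenizer = AutoTokenizer.from_pretrained("silver/chatglm-6b-int4-qe-slim", trust_remote_code=True)
+ # .float() keeps the quantized model on the CPU instead of .half().cuda()
+ model = AutoModel.from_pretrained("silver/chatglm-6b-int4-qe-slim", trust_remote_code=True).float()
+ model = model.eval()
+
+ response, history = model.chat(tokenizer, "你好", history=[])
+ print(response)
+ ```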
+
+ For more instructions, including how to run the command-line and web demos and how to use model quantization to save GPU memory, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM-6B).
+
+ ## License
+
+ The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the ChatGLM-6B model weights must follow the [Model License](MODEL_LICENSE).
+
+ ## Citation
+
+ If you find our work helpful, please consider citing the following papers:
+
+ ```
+ @inproceedings{
+ zeng2023glm-130b,
+ title={{GLM}-130B: An Open Bilingual Pre-trained Model},
+ author={Aohan Zeng and Xiao Liu and Zhengxiao Du and Zihan Wang and Hanyu Lai and Ming Ding and Zhuoyi Yang and Yifan Xu and Wendi Zheng and Xiao Xia and Weng Lam Tam and Zixuan Ma and Yufei Xue and Jidong Zhai and Wenguang Chen and Zhiyuan Liu and Peng Zhang and Yuxiao Dong and Jie Tang},
+ booktitle={The Eleventh International Conference on Learning Representations (ICLR)},
+ year={2023},
+ url={https://openreview.net/forum?id=-Aw0rrrPUF}
+ }
+ ```
+ ```
+ @inproceedings{du2022glm,
+ title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
+ author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
+ booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
+ pages={320--335},
+ year={2022}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "THUDM/chatglm-6b",
+   "architectures": [
+     "ChatGLMForConditionalGeneration"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_chatglm.ChatGLMConfig",
+     "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration",
+     "AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration"
+   },
+   "bos_token_id": 130004,
+   "eos_token_id": 130005,
+   "hidden_size": 4096,
+   "inner_hidden_size": 16384,
+   "layernorm_epsilon": 1e-05,
+   "max_sequence_length": 2048,
+   "model_type": "chatglm",
+   "num_attention_heads": 32,
+   "num_layers": 28,
+   "pad_token_id": 0,
+   "position_encoding_2d": true,
+   "quantization_bit": 4,
+   "quantization_embeddings": true,
+   "torch_dtype": "float16",
+   "transformers_version": "4.26.1",
+   "use_cache": true,
+   "vocab_size": 130528
+ }
configuration_chatglm.py ADDED
@@ -0,0 +1,96 @@
1
+ """ ChatGLM model configuration """
2
+
3
+ from transformers.configuration_utils import PretrainedConfig
4
+ from transformers.utils import logging
5
+
6
+ logger = logging.get_logger(__name__)
7
+
8
+
9
+ class ChatGLMConfig(PretrainedConfig):
10
+ r"""
11
+ This is the configuration class to store the configuration of a [`~ChatGLMModel`].
12
+ It is used to instantiate an ChatGLM model according to the specified arguments, defining the model
13
+ architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of
14
+ the ChatGLM-6B [THUDM/ChatGLM-6B](https://huggingface.co/THUDM/chatglm-6b) architecture.
15
+
16
+ Configuration objects inherit from [`PretrainedConfig`] and can be used
17
+ to control the model outputs. Read the documentation from [`PretrainedConfig`]
18
+ for more information.
19
+
20
+
21
+ Args:
22
+ vocab_size (`int`, *optional*, defaults to 150528):
23
+ Vocabulary size of the ChatGLM-6B model. Defines the number of different tokens that can be represented by the
24
+ `inputs_ids` passed when calling [`~ChatGLMModel`] or
25
+ [`~TFChatGLMModel`].
26
+ hidden_size (`int`, *optional*, defaults to 4096):
27
+ Dimension of the encoder layers and the pooler layer.
28
+ num_hidden_layers (`int`, *optional*, defaults to 28):
29
+ Number of hidden layers in the Transformer encoder.
30
+ num_attention_heads (`int`, *optional*, defaults to 32):
31
+ Number of attention heads for each attention layer in the Transformer encoder.
32
+ inner_hidden_size (`int`, *optional*, defaults to 16384):
33
+ Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
34
+ max_sequence_length (`int`, *optional*, defaults to 512):
35
+ The maximum sequence length that this model might ever be used with.
36
+ Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
37
+ layernorm_epsilon (`float`, *optional*, defaults to 1e-5):
38
+ The epsilon used by the layer normalization layers.
39
+ use_cache (`bool`, *optional*, defaults to `True`):
40
+ Whether the model should return the last key/values attentions (not used by all models).
41
+ Example:
42
+
43
+ ```python
44
+ >>> from configuration_chatglm import ChatGLMConfig
45
+ >>> from modeling_chatglm import ChatGLMModel
46
+
47
+ >>> # Initializing a ChatGLM-6B THUDM/ChatGLM-6B style configuration
48
+ >>> configuration = ChatGLMConfig()
49
+
50
+ >>> # Initializing a model from the THUDM/ChatGLM-6B style configuration
51
+ >>> model = ChatGLMModel(configuration)
52
+
53
+ >>> # Accessing the model configuration
54
+ >>> configuration = model.config
55
+ ```
56
+ """
57
+ model_type = "chatglm"
58
+
59
+ def __init__(
60
+ self,
61
+ vocab_size=130528,
62
+ hidden_size=4096,
63
+ num_layers=28,
64
+ num_attention_heads=32,
65
+ layernorm_epsilon=1e-5,
66
+ use_cache=False,
67
+ bos_token_id=130004,
68
+ eos_token_id=130005,
69
+ pad_token_id=0,
70
+ max_sequence_length=2048,
71
+ inner_hidden_size=16384,
72
+ position_encoding_2d=True,
73
+ quantization_bit=0,
74
+ quantization_embeddings=False,
75
+ **kwargs
76
+ ):
77
+ self.num_layers = num_layers
78
+ self.vocab_size = vocab_size
79
+ self.hidden_size = hidden_size
80
+ self.num_attention_heads = num_attention_heads
81
+ self.max_sequence_length = max_sequence_length
82
+ self.layernorm_epsilon = layernorm_epsilon
83
+ self.inner_hidden_size = inner_hidden_size
84
+ self.use_cache = use_cache
85
+ self.bos_token_id = bos_token_id
86
+ self.eos_token_id = eos_token_id
87
+ self.pad_token_id = pad_token_id
88
+ self.position_encoding_2d = position_encoding_2d
89
+ self.quantization_bit=quantization_bit
90
+ self.quantization_embeddings=quantization_embeddings
91
+ super().__init__(
92
+ pad_token_id=pad_token_id,
93
+ bos_token_id=bos_token_id,
94
+ eos_token_id=eos_token_id,
95
+ **kwargs
96
+ )
ice_text.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99871e0c85db81ad7af1028854fd091cd5778c8414ae9d94bbbc10d02c831c21
+ size 2699926
modeling_chatglm.py ADDED
@@ -0,0 +1,1318 @@
1
+ """ PyTorch ChatGLM model. """
2
+
3
+ import math
4
+ import copy
5
+ import os
6
+ import warnings
7
+ import re
8
+
9
+ import torch
10
+ import torch.utils.checkpoint
11
+ import torch.nn.functional as F
12
+ from torch import nn
13
+ from torch.nn import CrossEntropyLoss, LayerNorm
14
+ from torch.nn.utils import skip_init
15
+ from typing import Optional, Tuple, Union, List, Callable
16
+
17
+ from transformers.utils import (
18
+ add_code_sample_docstrings,
19
+ add_start_docstrings,
20
+ add_start_docstrings_to_model_forward,
21
+ )
22
+ from transformers.modeling_outputs import (
23
+ BaseModelOutputWithPast,
24
+ CausalLMOutputWithPast,
25
+ BaseModelOutputWithPastAndCrossAttentions,
26
+ )
27
+ from transformers.modeling_utils import PreTrainedModel
28
+ from transformers.utils import logging
29
+ from transformers.generation.logits_process import LogitsProcessor
30
+ from transformers.generation.utils import LogitsProcessorList, StoppingCriteriaList, GenerationConfig
31
+
32
+ from .configuration_chatglm import ChatGLMConfig
33
+
34
+
35
+ # flags required to enable jit fusion kernels
36
+ torch._C._jit_set_profiling_mode(False)
37
+ torch._C._jit_set_profiling_executor(False)
38
+ torch._C._jit_override_can_fuse_on_cpu(True)
39
+ torch._C._jit_override_can_fuse_on_gpu(True)
40
+
41
+ logger = logging.get_logger(__name__)
42
+
43
+ _CHECKPOINT_FOR_DOC = "THUDM/ChatGLM-6B"
44
+ _CONFIG_FOR_DOC = "ChatGLM6BConfig"
45
+
46
+ CHATGLM_6B_PRETRAINED_MODEL_ARCHIVE_LIST = [
47
+ "THUDM/chatglm-6b",
48
+ # See all ChatGLM-6B models at https://huggingface.co/models?filter=chatglm
49
+ ]
50
+
51
+
52
+ class InvalidScoreLogitsProcessor(LogitsProcessor):
53
+ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
54
+ if torch.isnan(scores).any() or torch.isinf(scores).any():
55
+ scores.zero_()
56
+ scores[..., 5] = 5e4
57
+ return scores
58
+
59
+
60
+ def load_tf_weights_in_chatglm_6b(model, config, tf_checkpoint_path):
61
+ """Load tf checkpoints in a pytorch model."""
62
+ try:
63
+ import re
64
+
65
+ import numpy as np
66
+ import tensorflow as tf
67
+ except ImportError:
68
+ logger.error(
69
+ "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see "
70
+ "https://www.tensorflow.org/install/ for installation instructions."
71
+ )
72
+ raise
73
+ tf_path = os.path.abspath(tf_checkpoint_path)
74
+ logger.info(f"Converting TensorFlow checkpoint from {tf_path}")
75
+ # Load weights from TF model
76
+ init_vars = tf.train.list_variables(tf_path)
77
+ names = []
78
+ arrays = []
79
+ for name, shape in init_vars:
80
+ logger.info(f"Loading TF weight {name} with shape {shape}")
81
+ array = tf.train.load_variable(tf_path, name)
82
+ names.append(name)
83
+ arrays.append(array)
84
+
85
+ for name, array in zip(names, arrays):
86
+ name = name.split("/")
87
+ # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v
88
+ # which are not required for using pretrained model
89
+ if any(
90
+ n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"]
91
+ for n in name
92
+ ):
93
+ logger.info(f"Skipping {'/'.join(name)}")
94
+ continue
95
+ pointer = model
96
+ for m_name in name:
97
+ if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
98
+ scope_names = re.split(r"_(\d+)", m_name)
99
+ else:
100
+ scope_names = [m_name]
101
+ if scope_names[0] == "kernel" or scope_names[0] == "gamma":
102
+ pointer = getattr(pointer, "weight")
103
+ elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
104
+ pointer = getattr(pointer, "bias")
105
+ elif scope_names[0] == "output_weights":
106
+ pointer = getattr(pointer, "weight")
107
+ elif scope_names[0] == "squad":
108
+ pointer = getattr(pointer, "classifier")
109
+ else:
110
+ try:
111
+ pointer = getattr(pointer, scope_names[0])
112
+ except AttributeError:
113
+ logger.info(f"Skipping {'/'.join(name)}")
114
+ continue
115
+ if len(scope_names) >= 2:
116
+ num = int(scope_names[1])
117
+ pointer = pointer[num]
118
+ if m_name[-11:] == "_embeddings":
119
+ pointer = getattr(pointer, "weight")
120
+ elif m_name == "kernel":
121
+ array = np.transpose(array)
122
+ try:
123
+ assert (
124
+ pointer.shape == array.shape
125
+ ), f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched"
126
+ except AssertionError as e:
127
+ e.args += (pointer.shape, array.shape)
128
+ raise
129
+ logger.info(f"Initialize PyTorch weight {name}")
130
+ pointer.data = torch.from_numpy(array)
131
+ return model
132
+
133
+
134
+ @torch.jit.script
135
+ def gelu_impl(x):
136
+ """OpenAI's gelu implementation."""
137
+ return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * x *
138
+ (1.0 + 0.044715 * x * x)))
139
+
140
+
141
+ def gelu(x):
142
+ return gelu_impl(x)
143
+
144
+
145
+ class RotaryEmbedding(torch.nn.Module):
146
+ def __init__(self, dim, base=10000, precision=torch.half, learnable=False):
147
+ super().__init__()
148
+ inv_freq = 1. / (base ** (torch.arange(0, dim, 2).float() / dim))
149
+ inv_freq = inv_freq.half()
150
+ self.learnable = learnable
151
+ if learnable:
152
+ self.inv_freq = torch.nn.Parameter(inv_freq)
153
+ self.max_seq_len_cached = None
154
+ else:
155
+ self.register_buffer('inv_freq', inv_freq)
156
+ self.max_seq_len_cached = None
157
+ self.cos_cached = None
158
+ self.sin_cached = None
159
+ self.precision = precision
160
+
161
+ def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys,
162
+ error_msgs):
163
+ pass
164
+
165
+ def forward(self, x, seq_dim=1, seq_len=None):
166
+ if seq_len is None:
167
+ seq_len = x.shape[seq_dim]
168
+ if self.max_seq_len_cached is None or (seq_len > self.max_seq_len_cached):
169
+ self.max_seq_len_cached = None if self.learnable else seq_len
170
+ t = torch.arange(seq_len, device=x.device, dtype=self.inv_freq.dtype)
171
+ freqs = torch.einsum('i,j->ij', t, self.inv_freq)
172
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
173
+ emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
174
+ if self.precision == torch.bfloat16:
175
+ emb = emb.float()
176
+
177
+ # [sx, 1 (b * np), hn]
178
+ cos_cached = emb.cos()[:, None, :]
179
+ sin_cached = emb.sin()[:, None, :]
180
+ if self.precision == torch.bfloat16:
181
+ cos_cached = cos_cached.bfloat16()
182
+ sin_cached = sin_cached.bfloat16()
183
+ if self.learnable:
184
+ return cos_cached, sin_cached
185
+ self.cos_cached, self.sin_cached = cos_cached, sin_cached
186
+ return self.cos_cached[:seq_len, ...], self.sin_cached[:seq_len, ...]
187
+
188
+ def _apply(self, fn):
189
+ if self.cos_cached is not None:
190
+ self.cos_cached = fn(self.cos_cached)
191
+ if self.sin_cached is not None:
192
+ self.sin_cached = fn(self.sin_cached)
193
+ return super()._apply(fn)
194
+
195
+ def rotate_half(x):
196
+ x1, x2 = x[..., :x.shape[-1] // 2], x[..., x.shape[-1] // 2:]
197
+ return torch.cat((-x2, x1), dim=x1.ndim - 1) # dim=-1 triggers a bug in earlier torch versions
198
+
199
+
200
+ @torch.jit.script
201
+ def apply_rotary_pos_emb_index(q, k, cos, sin, position_id):
202
+ # position_id: [sq, b], q, k: [sq, b, np, hn], cos: [sq, 1, hn] -> [sq, b, 1, hn]
203
+ cos, sin = F.embedding(position_id, cos.squeeze(1)).unsqueeze(2), \
204
+ F.embedding(position_id, sin.squeeze(1)).unsqueeze(2)
205
+ q, k = (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)
206
+ return q, k
207
+
208
+
209
+ def attention_fn(
210
+ self,
211
+ query_layer,
212
+ key_layer,
213
+ value_layer,
214
+ attention_mask,
215
+ hidden_size_per_partition,
216
+ layer_id,
217
+ layer_past=None,
218
+ scaling_attention_score=True,
219
+ use_cache=False,
220
+ ):
221
+ if layer_past is not None:
222
+ past_key, past_value = layer_past
223
+ key_layer = torch.cat((past_key, key_layer), dim=0)
224
+ value_layer = torch.cat((past_value, value_layer), dim=0)
225
+
226
+ # seqlen, batch, num_attention_heads, hidden_size_per_attention_head
227
+ seq_len, b, nh, hidden_size = key_layer.shape
228
+
229
+ if use_cache:
230
+ present = (key_layer, value_layer)
231
+ else:
232
+ present = None
233
+
234
+ query_key_layer_scaling_coeff = float(layer_id + 1)
235
+ if scaling_attention_score:
236
+ query_layer = query_layer / (math.sqrt(hidden_size) * query_key_layer_scaling_coeff)
237
+
238
+ # ===================================
239
+ # Raw attention scores. [b, np, s, s]
240
+ # ===================================
241
+
242
+ # [b, np, sq, sk]
243
+ output_size = (query_layer.size(1), query_layer.size(2), query_layer.size(0), key_layer.size(0))
244
+
245
+ # [sq, b, np, hn] -> [sq, b * np, hn]
246
+ query_layer = query_layer.view(output_size[2], output_size[0] * output_size[1], -1)
247
+ # [sk, b, np, hn] -> [sk, b * np, hn]
248
+ key_layer = key_layer.view(output_size[3], output_size[0] * output_size[1], -1)
249
+
250
+ matmul_result = torch.empty(
251
+ output_size[0] * output_size[1],
252
+ output_size[2],
253
+ output_size[3],
254
+ dtype=query_layer.dtype,
255
+ device=query_layer.device,
256
+ )
257
+
258
+ matmul_result = torch.baddbmm(
259
+ matmul_result,
260
+ query_layer.transpose(0, 1), # [b * np, sq, hn]
261
+ key_layer.transpose(0, 1).transpose(1, 2), # [b * np, hn, sk]
262
+ beta=0.0,
263
+ alpha=1.0,
264
+ )
265
+
266
+ # change view to [b, np, sq, sk]
267
+ attention_scores = matmul_result.view(*output_size)
268
+
269
+ if self.scale_mask_softmax:
270
+ self.scale_mask_softmax.scale = query_key_layer_scaling_coeff
271
+ attention_probs = self.scale_mask_softmax(attention_scores, attention_mask.contiguous())
272
+ else:
273
+ if not (attention_mask == 0).all():
274
+ # if auto-regressive, skip
275
+ attention_scores.masked_fill_(attention_mask, -10000.0)
276
+ dtype = attention_scores.type()
277
+ attention_scores = attention_scores.float()
278
+ attention_scores = attention_scores * query_key_layer_scaling_coeff
279
+
280
+ attention_probs = F.softmax(attention_scores, dim=-1)
281
+
282
+ attention_probs = attention_probs.type(dtype)
283
+
284
+ # =========================
285
+ # Context layer. [sq, b, hp]
286
+ # =========================
287
+
288
+ # value_layer -> context layer.
289
+ # [sk, b, np, hn] --> [b, np, sq, hn]
290
+
291
+ # context layer shape: [b, np, sq, hn]
292
+ output_size = (value_layer.size(1), value_layer.size(2), query_layer.size(0), value_layer.size(3))
293
+
294
+ # change view [sk, b * np, hn]
295
+ value_layer = value_layer.view(value_layer.size(0), output_size[0] * output_size[1], -1)
296
+
297
+ # change view [b * np, sq, sk]
298
+ attention_probs = attention_probs.view(output_size[0] * output_size[1], output_size[2], -1)
299
+
300
+ # matmul: [b * np, sq, hn]
301
+ context_layer = torch.bmm(attention_probs, value_layer.transpose(0, 1))
302
+
303
+ # change view [b, np, sq, hn]
304
+ context_layer = context_layer.view(*output_size)
305
+
306
+ # [b, np, sq, hn] --> [sq, b, np, hn]
307
+ context_layer = context_layer.permute(2, 0, 1, 3).contiguous()
308
+
309
+ # [sq, b, np, hn] --> [sq, b, hp]
310
+ new_context_layer_shape = context_layer.size()[:-2] + (hidden_size_per_partition,)
311
+ context_layer = context_layer.view(*new_context_layer_shape)
312
+
313
+ outputs = (context_layer, present, attention_probs)
314
+
315
+ return outputs
316
+
317
+
318
+ class SelfAttention(torch.nn.Module):
319
+ def __init__(self, hidden_size, num_attention_heads,
320
+ layer_id, hidden_size_per_attention_head=None, bias=True,
321
+ params_dtype=torch.float, position_encoding_2d=True):
322
+ super(SelfAttention, self).__init__()
323
+
324
+ self.layer_id = layer_id
325
+ self.hidden_size = hidden_size
326
+ self.hidden_size_per_partition = hidden_size
327
+ self.num_attention_heads = num_attention_heads
328
+ self.num_attention_heads_per_partition = num_attention_heads
329
+ self.position_encoding_2d = position_encoding_2d
330
+ self.rotary_emb = RotaryEmbedding(
331
+ self.hidden_size // (self.num_attention_heads * 2)
332
+ if position_encoding_2d
333
+ else self.hidden_size // self.num_attention_heads,
334
+ base=10000,
335
+ precision=torch.half,
336
+ learnable=False,
337
+ )
338
+
339
+ self.scale_mask_softmax = None
340
+
341
+ if hidden_size_per_attention_head is None:
342
+ self.hidden_size_per_attention_head = hidden_size // num_attention_heads
343
+ else:
344
+ self.hidden_size_per_attention_head = hidden_size_per_attention_head
345
+
346
+ self.inner_hidden_size = num_attention_heads * self.hidden_size_per_attention_head
347
+
348
+ # Strided linear layer.
349
+ self.query_key_value = skip_init(
350
+ torch.nn.Linear,
351
+ hidden_size,
352
+ 3 * self.inner_hidden_size,
353
+ bias=bias,
354
+ dtype=params_dtype,
355
+ )
356
+
357
+ self.dense = skip_init(
358
+ torch.nn.Linear,
359
+ self.inner_hidden_size,
360
+ hidden_size,
361
+ bias=bias,
362
+ dtype=params_dtype,
363
+ )
364
+
365
+ @staticmethod
366
+ def attention_mask_func(attention_scores, attention_mask):
367
+ attention_scores.masked_fill_(attention_mask, -10000.0)
368
+ return attention_scores
369
+
370
+ def split_tensor_along_last_dim(self, tensor, num_partitions,
371
+ contiguous_split_chunks=False):
372
+ """Split a tensor along its last dimension.
373
+ Arguments:
374
+ tensor: input tensor.
375
+ num_partitions: number of partitions to split the tensor
376
+ contiguous_split_chunks: If True, make each chunk contiguous
377
+ in memory.
378
+ """
379
+ # Get the size and dimension.
380
+ last_dim = tensor.dim() - 1
381
+ last_dim_size = tensor.size()[last_dim] // num_partitions
382
+ # Split.
383
+ tensor_list = torch.split(tensor, last_dim_size, dim=last_dim)
384
+ # Note: torch.split does not create contiguous tensors by default.
385
+ if contiguous_split_chunks:
386
+ return tuple(chunk.contiguous() for chunk in tensor_list)
387
+
388
+ return tensor_list
389
+
390
+ def forward(
391
+ self,
392
+ hidden_states: torch.Tensor,
393
+ position_ids,
394
+ attention_mask: torch.Tensor,
395
+ layer_id,
396
+ layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
397
+ use_cache: bool = False,
398
+ output_attentions: bool = False,
399
+ ):
400
+ """
401
+ hidden_states: [seq_len, batch, hidden_size]
402
+ attention_mask: [(1, 1), seq_len, seq_len]
403
+ """
404
+
405
+ # [seq_len, batch, 3 * hidden_size]
406
+ mixed_raw_layer = self.query_key_value(hidden_states)
407
+
408
+ # [seq_len, batch, 3 * hidden_size] --> [seq_len, batch, num_attention_heads, 3 * hidden_size_per_attention_head]
409
+ new_tensor_shape = mixed_raw_layer.size()[:-1] + (
410
+ self.num_attention_heads_per_partition,
411
+ 3 * self.hidden_size_per_attention_head,
412
+ )
413
+ mixed_raw_layer = mixed_raw_layer.view(*new_tensor_shape)
414
+
415
+ # [seq_len, batch, num_attention_heads, hidden_size_per_attention_head]
416
+ (query_layer, key_layer, value_layer) = self.split_tensor_along_last_dim(mixed_raw_layer, 3)
417
+
418
+ if self.position_encoding_2d:
419
+ q1, q2 = query_layer.chunk(2, dim=(query_layer.ndim - 1))
420
+ k1, k2 = key_layer.chunk(2, dim=(key_layer.ndim - 1))
421
+ cos, sin = self.rotary_emb(q1, seq_len=position_ids.max() + 1)
422
+ position_ids, block_position_ids = position_ids[:, 0, :].transpose(0, 1).contiguous(), \
423
+ position_ids[:, 1, :].transpose(0, 1).contiguous()
424
+ q1, k1 = apply_rotary_pos_emb_index(q1, k1, cos, sin, position_ids)
425
+ q2, k2 = apply_rotary_pos_emb_index(q2, k2, cos, sin, block_position_ids)
426
+ query_layer = torch.concat([q1, q2], dim=(q1.ndim - 1))
427
+ key_layer = torch.concat([k1, k2], dim=(k1.ndim - 1))
428
+ else:
429
+ position_ids = position_ids.transpose(0, 1)
430
+ cos, sin = self.rotary_emb(value_layer, seq_len=position_ids.max() + 1)
431
+ # [seq_len, batch, num_attention_heads, hidden_size_per_attention_head]
432
+ query_layer, key_layer = apply_rotary_pos_emb_index(query_layer, key_layer, cos, sin, position_ids)
433
+
434
+ # [seq_len, batch, hidden_size]
435
+ context_layer, present, attention_probs = attention_fn(
436
+ self=self,
437
+ query_layer=query_layer,
438
+ key_layer=key_layer,
439
+ value_layer=value_layer,
440
+ attention_mask=attention_mask,
441
+ hidden_size_per_partition=self.hidden_size_per_partition,
442
+ layer_id=layer_id,
443
+ layer_past=layer_past,
444
+ use_cache=use_cache
445
+ )
446
+
447
+ output = self.dense(context_layer)
448
+
449
+ outputs = (output, present)
450
+
451
+ if output_attentions:
452
+ outputs += (attention_probs,)
453
+
454
+ return outputs # output, present, attention_probs
455
+
456
+
457
+ class GEGLU(torch.nn.Module):
458
+ def __init__(self):
459
+ super().__init__()
460
+ self.activation_fn = F.gelu
461
+
462
+ def forward(self, x):
463
+ # dim=-1 breaks in jit for pt<1.10
464
+ x1, x2 = x.chunk(2, dim=(x.ndim - 1))
465
+ return x1 * self.activation_fn(x2)
466
+
467
+
468
+ class GLU(torch.nn.Module):
469
+ def __init__(self, hidden_size, inner_hidden_size=None,
470
+ layer_id=None, bias=True, activation_func=gelu, params_dtype=torch.float):
471
+ super(GLU, self).__init__()
472
+ self.layer_id = layer_id
473
+ self.activation_func = activation_func
474
+
475
+ # Project to 4h.
476
+ self.hidden_size = hidden_size
477
+ if inner_hidden_size is None:
478
+ inner_hidden_size = 4 * hidden_size
479
+ self.inner_hidden_size = inner_hidden_size
480
+ self.dense_h_to_4h = skip_init(
481
+ torch.nn.Linear,
482
+ self.hidden_size,
483
+ self.inner_hidden_size,
484
+ bias=bias,
485
+ dtype=params_dtype,
486
+ )
487
+ # Project back to h.
488
+ self.dense_4h_to_h = skip_init(
489
+ torch.nn.Linear,
490
+ self.inner_hidden_size,
491
+ self.hidden_size,
492
+ bias=bias,
493
+ dtype=params_dtype,
494
+ )
495
+
496
+ def forward(self, hidden_states):
497
+ """
498
+ hidden_states: [seq_len, batch, hidden_size]
499
+ """
500
+
501
+ # [seq_len, batch, inner_hidden_size]
502
+ intermediate_parallel = self.dense_h_to_4h(hidden_states)
503
+
504
+ intermediate_parallel = self.activation_func(intermediate_parallel)
505
+
506
+ output = self.dense_4h_to_h(intermediate_parallel)
507
+
508
+ return output
509
+
510
+
511
+ class GLMBlock(torch.nn.Module):
512
+ def __init__(
513
+ self,
514
+ hidden_size,
515
+ num_attention_heads,
516
+ layernorm_epsilon,
517
+ layer_id,
518
+ inner_hidden_size=None,
519
+ hidden_size_per_attention_head=None,
520
+ layernorm=LayerNorm,
521
+ use_bias=True,
522
+ params_dtype=torch.float,
523
+ num_layers=28,
524
+ position_encoding_2d=True
525
+ ):
526
+ super(GLMBlock, self).__init__()
527
+ # Set output layer initialization if not provided.
528
+
529
+ self.layer_id = layer_id
530
+
531
+ # Layernorm on the input data.
532
+ self.input_layernorm = layernorm(hidden_size, eps=layernorm_epsilon)
533
+
534
+ self.position_encoding_2d = position_encoding_2d
535
+
536
+ # Self attention.
537
+ self.attention = SelfAttention(
538
+ hidden_size,
539
+ num_attention_heads,
540
+ layer_id,
541
+ hidden_size_per_attention_head=hidden_size_per_attention_head,
542
+ bias=use_bias,
543
+ params_dtype=params_dtype,
544
+ position_encoding_2d=self.position_encoding_2d
545
+ )
546
+
547
+ # Layernorm on the input data.
548
+ self.post_attention_layernorm = layernorm(hidden_size, eps=layernorm_epsilon)
549
+
550
+ self.num_layers = num_layers
551
+
552
+ # GLU
553
+ self.mlp = GLU(
554
+ hidden_size,
555
+ inner_hidden_size=inner_hidden_size,
556
+ bias=use_bias,
557
+ layer_id=layer_id,
558
+ params_dtype=params_dtype,
559
+ )
560
+
561
+ def forward(
562
+ self,
563
+ hidden_states: torch.Tensor,
564
+ position_ids,
565
+ attention_mask: torch.Tensor,
566
+ layer_id,
567
+ layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
568
+ use_cache: bool = False,
569
+ output_attentions: bool = False,
570
+ ):
571
+ """
572
+ hidden_states: [seq_len, batch, hidden_size]
573
+ attention_mask: [(1, 1), seq_len, seq_len]
574
+ """
575
+
576
+ # Layer norm at the begining of the transformer layer.
577
+ # [seq_len, batch, hidden_size]
578
+ attention_input = self.input_layernorm(hidden_states)
579
+
580
+ # Self attention.
581
+ attention_outputs = self.attention(
582
+ attention_input,
583
+ position_ids,
584
+ attention_mask=attention_mask,
585
+ layer_id=layer_id,
586
+ layer_past=layer_past,
587
+ use_cache=use_cache,
588
+ output_attentions=output_attentions
589
+ )
590
+
591
+ attention_output = attention_outputs[0]
592
+
593
+ outputs = attention_outputs[1:]
594
+
595
+ # Residual connection.
596
+ alpha = (2 * self.num_layers) ** 0.5
597
+ hidden_states = attention_input * alpha + attention_output
598
+
599
+ mlp_input = self.post_attention_layernorm(hidden_states)
600
+
601
+ # MLP.
602
+ mlp_output = self.mlp(mlp_input)
603
+
604
+ # Second residual connection.
605
+ output = mlp_input * alpha + mlp_output
606
+
607
+ if use_cache:
608
+ outputs = (output,) + outputs
609
+ else:
610
+ outputs = (output,) + outputs[1:]
611
+
612
+ return outputs # hidden_states, present, attentions
613
+
614
+
615
+ class ChatGLMPreTrainedModel(PreTrainedModel):
616
+ """
617
+ An abstract class to handle weights initialization and
618
+ a simple interface for downloading and loading pretrained models.
619
+ """
620
+
621
+ is_parallelizable = False
622
+ supports_gradient_checkpointing = False
623
+ config_class = ChatGLMConfig
624
+ base_model_prefix = "transformer"
625
+ _no_split_modules = ["GLM6BBlock"]
626
+
627
+ def __init__(self, *inputs, **kwargs):
628
+ super().__init__(*inputs, **kwargs)
629
+
630
+ def _init_weights(self, module: nn.Module):
631
+ """Initialize the weights."""
632
+ return
633
+
634
+
635
+ CHATGLM_6B_START_DOCSTRING = r"""
636
+ This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class.
637
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general
638
+ usage and behavior.
639
+
640
+ Parameters:
641
+ config ([`~ChatGLM6BConfig`]): Model configuration class with all the parameters of the model.
642
+ Initializing with a config file does not load the weights associated with the model, only the configuration.
643
+ Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
644
+ """
645
+
646
+ CHATGLM_6B_INPUTS_DOCSTRING = r"""
647
+ Args:
648
+ input_ids (`torch.LongTensor` of shape `({0})`):
649
+ Indices of input sequence tokens in the vocabulary.
650
+
651
+ Indices can be obtained using [`ChatGLM6BTokenizer`].
652
+ See [`PreTrainedTokenizer.encode`] and
653
+ [`PreTrainedTokenizer.__call__`] for details.
654
+
655
+ [What are input IDs?](../glossary#input-ids)
656
+ attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*):
657
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
658
+
659
+ - 1 for tokens that are **not masked**,
660
+ - 0 for tokens that are **masked**.
661
+
662
+ [What are attention masks?](../glossary#attention-mask)
663
+ token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*):
664
+ Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
665
+
666
+ - 0 corresponds to a *sentence A* token,
667
+ - 1 corresponds to a *sentence B* token.
668
+
669
+ [What are token type IDs?](../glossary#token-type-ids)
670
+ position_ids (`torch.LongTensor` of shape `({0})`, *optional*):
671
+ Indices of positions of each input sequence tokens in the position embeddings.
672
+ Selected in the range `[0, config.max_position_embeddings - 1]`.
673
+
674
+ [What are position IDs?](../glossary#position-ids)
675
+ head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
676
+ Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
677
+
678
+ - 1 indicates the head is **not masked**,
679
+ - 0 indicates the head is **masked**.
680
+
681
+ inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*):
682
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
683
+ This is useful if you want more control over how to convert *input_ids* indices into associated vectors
684
+ than the model's internal embedding lookup matrix.
685
+ output_attentions (`bool`, *optional*):
686
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
687
+ tensors for more detail.
688
+ output_hidden_states (`bool`, *optional*):
689
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
690
+ more detail.
691
+ return_dict (`bool`, *optional*):
692
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
693
+ """
694
+
695
+
696
+ @add_start_docstrings(
697
+ "The bare ChatGLM-6B Model transformer outputting raw hidden-states without any specific head on top.",
698
+ CHATGLM_6B_START_DOCSTRING,
699
+ )
700
+ class ChatGLMModel(ChatGLMPreTrainedModel):
701
+ """
702
+
703
+ The model can behave as an encoder (with only self-attention) as well
704
+ as a decoder, in which case a layer of cross-attention is added between
705
+ the self-attention layers, following the architecture described in [Attention is
706
+ all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani,
707
+ Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
708
+
709
+ To behave as an decoder the model needs to be initialized with the
710
+ `is_decoder` argument of the configuration set to `True`.
711
+ To be used in a Seq2Seq model, the model needs to initialized with both `is_decoder`
712
+ argument and `add_cross_attention` set to `True`; an
713
+ `encoder_hidden_states` is then expected as an input to the forward pass.
714
+ """
715
+
716
+ def __init__(self, config: ChatGLMConfig):
717
+ super().__init__(config)
718
+
719
+ # recording parameters
720
+ self.max_sequence_length = config.max_sequence_length
721
+ self.hidden_size = config.hidden_size
722
+ self.params_dtype = torch.half
723
+ self.num_attention_heads = config.num_attention_heads
724
+ self.vocab_size = config.vocab_size
725
+ self.num_layers = config.num_layers
726
+ self.layernorm_epsilon = config.layernorm_epsilon
727
+ self.inner_hidden_size = config.inner_hidden_size
728
+ self.hidden_size_per_attention_head = self.hidden_size // self.num_attention_heads
729
+ self.position_encoding_2d = config.position_encoding_2d
730
+
731
+ self.word_embeddings = skip_init(
732
+ torch.nn.Embedding,
733
+ num_embeddings=self.vocab_size, embedding_dim=self.hidden_size,
734
+ dtype=self.params_dtype
735
+ )
736
+
737
+ def get_layer(layer_id):
738
+ return GLMBlock(
739
+ self.hidden_size,
740
+ self.num_attention_heads,
741
+ self.layernorm_epsilon,
742
+ layer_id,
743
+ inner_hidden_size=self.inner_hidden_size,
744
+ hidden_size_per_attention_head=self.hidden_size_per_attention_head,
745
+ layernorm=LayerNorm,
746
+ use_bias=True,
747
+ params_dtype=self.params_dtype,
748
+ position_encoding_2d=self.position_encoding_2d,
749
+ )
750
+
751
+ self.layers = torch.nn.ModuleList(
752
+ [get_layer(layer_id) for layer_id in range(self.num_layers)]
753
+ )
754
+
755
+ # Final layer norm before output.
756
+ self.final_layernorm = LayerNorm(self.hidden_size, eps=self.layernorm_epsilon)
757
+
758
+ def get_input_embeddings(self):
759
+ return self.word_embeddings
760
+
761
+ def set_input_embeddings(self, new_embeddings: torch.Tensor):
762
+ self.word_embeddings = new_embeddings
763
+
764
+ def get_masks(self, seq, device):
765
+ context_length = seq.index(self.config.bos_token_id) + 1
766
+
767
+ attention_mask = torch.ones((1, len(seq), len(seq)), device=device)
768
+ attention_mask.tril_()
769
+ attention_mask[..., :context_length - 1] = 1
770
+ attention_mask.unsqueeze_(1)
771
+ attention_mask = (attention_mask < 0.5).bool()
772
+
773
+ return attention_mask
774
+
775
+ def get_position_ids(self, seq, mask_position, device, gmask=False):
776
+ context_length = seq.index(self.config.bos_token_id) + 1
777
+ if self.position_encoding_2d:
778
+ seq_length = seq.index(self.config.bos_token_id)
779
+ position_ids = torch.arange(context_length, dtype=torch.long, device=device)
780
+ if not gmask:
781
+ position_ids[seq_length:] = mask_position
782
+ block_position_ids = torch.cat((
783
+ torch.zeros(seq_length, dtype=torch.long, device=device),
784
+ torch.arange(context_length - seq_length, dtype=torch.long, device=device) + 1
785
+ ))
786
+ position_ids = torch.stack((position_ids, block_position_ids), dim=0)
787
+ else:
788
+ position_ids = torch.arange(context_length, dtype=torch.long, device=device)
789
+ if not gmask:
790
+ position_ids[context_length - 1:] = mask_position
791
+
792
+ position_ids = position_ids.unsqueeze(0)
793
+
794
+ return position_ids
795
+
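+ # Worked example (sketch) for get_position_ids with position_encoding_2d=True:
+ # for a prompt [q1, q2, [gMASK], <bos>] (<bos> at index 3, gmask=True) the first
+ # row of positions is [0, 1, 2, 3] and the block positions are [0, 0, 0, 1],
+ # giving a (1, 2, 4) tensor after unsqueeze. With a [MASK] span (gmask=False)
+ # the entries from index 3 onwards are clamped to mask_position instead.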
796
+ @add_start_docstrings_to_model_forward(CHATGLM_6B_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
797
+ @add_code_sample_docstrings(
798
+ checkpoint=_CHECKPOINT_FOR_DOC,
799
+ output_type=BaseModelOutputWithPastAndCrossAttentions,
800
+ config_class=_CONFIG_FOR_DOC,
801
+ )
802
+ def forward(
803
+ self,
804
+ input_ids: Optional[torch.LongTensor] = None,
805
+ position_ids: Optional[torch.LongTensor] = None,
806
+ attention_mask: Optional[torch.Tensor] = None,
807
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
808
+ inputs_embeds: Optional[torch.LongTensor] = None,
809
+ use_cache: Optional[bool] = None,
810
+ output_attentions: Optional[bool] = None,
811
+ output_hidden_states: Optional[bool] = None,
812
+ return_dict: Optional[bool] = None,
813
+ ) -> Union[Tuple[torch.Tensor, ...], BaseModelOutputWithPast]:
814
+
815
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
816
+ output_hidden_states = (
817
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
818
+ )
819
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
820
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
821
+
822
+ if input_ids is not None and inputs_embeds is not None:
823
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
824
+ elif input_ids is not None:
825
+ batch_size, seq_length = input_ids.shape[:2]
826
+ elif inputs_embeds is not None:
827
+ batch_size, seq_length = inputs_embeds.shape[:2]
828
+ else:
829
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
830
+
831
+ if past_key_values is None:
832
+ past_key_values = tuple([None] * len(self.layers))
833
+ seq = input_ids[0].tolist()
834
+
835
+ if attention_mask is None:
836
+ attention_mask = self.get_masks(
837
+ seq=seq,
838
+ device=input_ids.device
839
+ )
840
+
841
+ if position_ids is None:
842
+ MASK, gMASK = 130000, 130001
843
+ mask_token = MASK if MASK in input_ids else gMASK
844
+ use_gmask = False if MASK in input_ids else True
845
+
846
+ mask_position = seq.index(mask_token)
847
+ position_ids = self.get_position_ids(
848
+ seq=seq,
849
+ mask_position=mask_position,
850
+ device=input_ids.device,
851
+ gmask=use_gmask
852
+ )
853
+
854
+ if inputs_embeds is None:
855
+ inputs_embeds = self.word_embeddings(input_ids)
856
+
857
+ # [seq_len, batch, hidden_size]
858
+ hidden_states = inputs_embeds.transpose(0, 1)
859
+
860
+ presents = () if use_cache else None
861
+ all_self_attentions = () if output_attentions else None
862
+ all_hidden_states = () if output_hidden_states else None
863
+
864
+ seq_length_with_past = seq_length
865
+ past_key_values_length = 0
866
+ if past_key_values[0] is not None:
867
+ past_key_values_length = past_key_values[0][0].shape[0]
868
+ seq_length_with_past = seq_length_with_past + past_key_values_length
869
+ if attention_mask is None:
870
+ attention_mask = torch.zeros(1, 1, device=input_ids.device).bool()
871
+
872
+ else:
873
+ attention_mask = attention_mask.to(input_ids.device)
874
+
875
+ for i, layer in enumerate(self.layers):
876
+
877
+ if output_hidden_states:
878
+ all_hidden_states = all_hidden_states + (hidden_states,)
879
+
880
+ layer_ret = layer(
881
+ hidden_states,
882
+ position_ids=position_ids,
883
+ attention_mask=attention_mask,
884
+ layer_id=torch.tensor(i),
885
+ layer_past=past_key_values[i],
886
+ use_cache=use_cache,
887
+ output_attentions=output_attentions
888
+ )
889
+
890
+ hidden_states = layer_ret[0]
891
+
892
+ if use_cache:
893
+ presents = presents + (layer_ret[1],)
894
+
895
+ if output_attentions:
896
+ all_self_attentions = all_self_attentions + (layer_ret[2 if use_cache else 1],)
897
+
898
+ # Final layer norm.
899
+ hidden_states = self.final_layernorm(hidden_states)
900
+
901
+ if output_hidden_states:
902
+ all_hidden_states = all_hidden_states + (hidden_states,)
903
+
904
+ if not return_dict:
905
+ return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)
906
+
907
+ return BaseModelOutputWithPast(
908
+ last_hidden_state=hidden_states,
909
+ past_key_values=presents,
910
+ hidden_states=all_hidden_states,
911
+ attentions=all_self_attentions,
912
+ )
913
+
914
+
915
+ class ChatGLMForConditionalGeneration(ChatGLMPreTrainedModel):
916
+ def __init__(self, config: ChatGLMConfig):
917
+ super().__init__(config)
918
+
919
+ # self.hidden_size = config.hidden_size
920
+ # self.params_dtype = torch.half
921
+ # self.vocab_size = config.vocab_size
922
+ self.max_sequence_length = config.max_sequence_length
923
+
924
+ self.position_encoding_2d = config.position_encoding_2d
925
+
926
+ self.transformer = ChatGLMModel(config)
927
+
928
+ self.lm_head = skip_init(
929
+ nn.Linear,
930
+ config.hidden_size,
931
+ config.vocab_size,
932
+ bias=False,
933
+ dtype=torch.half
934
+ )
935
+
936
+ self.config = config
937
+
938
+ self.quantized = False
939
+
940
+ if self.config.quantization_bit:
941
+ self.quantize(self.config.quantization_bit, self.config.quantization_embeddings, use_quantization_cache=True, empty_init=True)
942
+
943
+ def get_output_embeddings(self):
944
+ return self.lm_head
945
+
946
+ def set_output_embeddings(self, new_embeddings):
947
+ self.lm_head = new_embeddings
948
+
949
+ def get_masks_and_position_ids(self, seq, mask_position, context_length, device, gmask=False):
950
+ attention_mask = torch.ones((1, context_length, context_length), device=device)
951
+ attention_mask.tril_()
952
+ attention_mask[..., :context_length - 1] = 1
953
+ attention_mask.unsqueeze_(1)
954
+ attention_mask = (attention_mask < 0.5).bool()
955
+
956
+ if self.position_encoding_2d:
957
+ seq_length = seq.index(self.config.bos_token_id)
958
+ position_ids = torch.arange(context_length, dtype=torch.long, device=device)
959
+ if not gmask:
960
+ position_ids[seq_length:] = mask_position
961
+ block_position_ids = torch.cat((
962
+ torch.zeros(seq_length, dtype=torch.long, device=device),
963
+ torch.arange(context_length - seq_length, dtype=torch.long, device=device) + 1
964
+ ))
965
+ position_ids = torch.stack((position_ids, block_position_ids), dim=0)
966
+ else:
967
+ position_ids = torch.arange(context_length, dtype=torch.long, device=device)
968
+ if not gmask:
969
+ position_ids[context_length - 1:] = mask_position
970
+
971
+ position_ids = position_ids.unsqueeze(0)
972
+
973
+ return attention_mask, position_ids
974
+
975
+ def prepare_inputs_for_generation(
976
+ self,
977
+ input_ids: torch.LongTensor,
978
+ past: Optional[torch.Tensor] = None,
979
+ past_key_values: Optional[torch.Tensor] = None,
980
+ attention_mask: Optional[torch.Tensor] = None,
981
+ **kwargs
982
+ ) -> dict:
983
+
984
+ MASK, gMASK = 130000, 130001
985
+ mask_token = MASK if MASK in input_ids else gMASK
986
+ use_gmask = False if MASK in input_ids else True
987
+ seq = input_ids[0].tolist()
988
+
989
+ if mask_token not in seq:
990
+ raise ValueError("You have to add either [MASK] or [gMASK] in your input")
991
+ mask_position = seq.index(mask_token)
992
+
993
+ # only last token for input_ids if past is not None
994
+ if past is not None or past_key_values is not None:
995
+ context_length = seq.index(self.config.bos_token_id)
996
+ last_token = input_ids[:, -1].unsqueeze(-1)
997
+ if self.position_encoding_2d:
998
+ position_ids = torch.tensor([[[mask_position], [len(seq) - context_length]]], dtype=torch.long,
999
+ device=input_ids.device)
1000
+ else:
1001
+ position_ids = torch.tensor([[mask_position]], dtype=torch.long, device=input_ids.device)
1002
+
1003
+ if past is None:
1004
+ past = past_key_values
1005
+ return {
1006
+ "input_ids": last_token,
1007
+ "past_key_values": past,
1008
+ "position_ids": position_ids,
1009
+ }
1010
+ else:
1011
+ attention_mask, position_ids = self.get_masks_and_position_ids(
1012
+ seq=seq,
1013
+ mask_position=mask_position,
1014
+ context_length=len(seq),
1015
+ device=input_ids.device,
1016
+ gmask=use_gmask
1017
+ )
1018
+
1019
+ return {
1020
+ "input_ids": input_ids,
1021
+ "past_key_values": past,
1022
+ "position_ids": position_ids,
1023
+ "attention_mask": attention_mask
1024
+ }
1025
+
1026
+ def forward(
1027
+ self,
1028
+ input_ids: Optional[torch.Tensor] = None,
1029
+ position_ids: Optional[torch.Tensor] = None,
1030
+ attention_mask: Optional[torch.Tensor] = None,
1031
+ past_key_values: Optional[Tuple[torch.FloatTensor]] = None,
1032
+ inputs_embeds: Optional[torch.Tensor] = None,
1033
+ labels: Optional[torch.Tensor] = None,
1034
+ use_cache: Optional[bool] = None,
1035
+ output_attentions: Optional[bool] = None,
1036
+ output_hidden_states: Optional[bool] = None,
1037
+ return_dict: Optional[bool] = None,
1038
+ ):
1039
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1040
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1041
+
1042
+ transformer_outputs = self.transformer(
1043
+ input_ids=input_ids,
1044
+ position_ids=position_ids,
1045
+ attention_mask=attention_mask,
1046
+ past_key_values=past_key_values,
1047
+ inputs_embeds=inputs_embeds,
1048
+ use_cache=use_cache,
1049
+ output_attentions=output_attentions,
1050
+ output_hidden_states=output_hidden_states,
1051
+ return_dict=return_dict,
1052
+ )
1053
+
1054
+ hidden_states = transformer_outputs[0]
1055
+
1056
+ lm_logits = self.lm_head(hidden_states).permute(1, 0, 2).contiguous()
1057
+
1058
+ loss = None
1059
+ if labels is not None:
1060
+ lm_logits = lm_logits.to(torch.float32)
1061
+
1062
+ # Shift so that tokens < n predict n
1063
+ shift_logits = lm_logits[..., :-1, :].contiguous()
1064
+ shift_labels = labels[..., 1:].contiguous()
1065
+ # Flatten the tokens
1066
+ loss_fct = CrossEntropyLoss()
1067
+ loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
1068
+
1069
+ lm_logits = lm_logits.to(hidden_states.dtype)
1070
+ loss = loss.to(hidden_states.dtype)
1071
+
1072
+ if not return_dict:
1073
+ output = (lm_logits,) + transformer_outputs[1:]
1074
+ return ((loss,) + output) if loss is not None else output
1075
+
1076
+ return CausalLMOutputWithPast(
1077
+ loss=loss,
1078
+ logits=lm_logits,
1079
+ past_key_values=transformer_outputs.past_key_values,
1080
+ hidden_states=transformer_outputs.hidden_states,
1081
+ attentions=transformer_outputs.attentions,
1082
+ )
1083
+
1084
+ @staticmethod
1085
+ def _reorder_cache(
1086
+ past: Tuple[Tuple[torch.Tensor, torch.Tensor], ...], beam_idx: torch.LongTensor
1087
+ ) -> Tuple[Tuple[torch.Tensor, torch.Tensor], ...]:
1088
+ """
1089
+ This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
1090
+ [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
1091
+ beam_idx at every generation step.
1092
+
1093
+ Output shares the same memory storage as `past`.
1094
+ """
1095
+ return tuple(
1096
+ (
1097
+ layer_past[0].index_select(1, beam_idx.to(layer_past[0].device)),
1098
+ layer_past[1].index_select(1, beam_idx.to(layer_past[1].device)),
1099
+ )
1100
+ for layer_past in past
1101
+ )
1102
+
1103
+ def process_response(self, response):
1104
+ response = response.strip()
1105
+ response = response.replace("[[训练时间]]", "2023年")
1106
+ punkts = [
1107
+ [",", ","],
1108
+ ["!", "!"],
1109
+ [":", ":"],
1110
+ [";", ";"],
1111
+ ["\?", "?"],
1112
+ ]
1113
+ for item in punkts:
1114
+ response = re.sub(r"([\u4e00-\u9fff])%s" % item[0], r"\1%s" % item[1], response)
1115
+ response = re.sub(r"%s([\u4e00-\u9fff])" % item[0], r"%s\1" % item[1], response)
1116
+ return response
1117
+
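+ # Example (sketch) of process_response: the training-time placeholder
+ # "[[训练时间]]" ("training time") is rewritten to "2023年" ("2023"), and ASCII
+ # punctuation adjacent to CJK characters is converted to full-width, e.g.
+ # "今天天气好吗?" -> "今天天气好吗?".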
1118
+ @torch.no_grad()
1119
+ def chat(self, tokenizer, query: str, history: List[Tuple[str, str]] = None, max_length: int = 2048, num_beams=1,
1120
+ do_sample=True, top_p=0.7, temperature=0.95, logits_processor=None, **kwargs):
1121
+ if history is None:
1122
+ history = []
1123
+ if logits_processor is None:
1124
+ logits_processor = LogitsProcessorList()
1125
+ logits_processor.append(InvalidScoreLogitsProcessor())
1126
+ gen_kwargs = {"max_length": max_length, "num_beams": num_beams, "do_sample": do_sample, "top_p": top_p,
1127
+ "temperature": temperature, "logits_processor": logits_processor, **kwargs}
1128
+ if not history:
1129
+ prompt = query
1130
+ else:
1131
+ prompt = ""
1132
+ for i, (old_query, response) in enumerate(history):
1133
+ prompt += "[Round {}]\n问:{}\n答:{}\n".format(i, old_query, response)
1134
+ prompt += "[Round {}]\n问:{}\n答:".format(len(history), query)
1135
+ input_ids = tokenizer([prompt], return_tensors="pt", padding=True)
1136
+ input_ids = input_ids.to(self.device)
1137
+ outputs = self.generate(**input_ids, **gen_kwargs)
1138
+ outputs = outputs.tolist()[0][len(input_ids["input_ids"][0]):]
1139
+ response = tokenizer.decode(outputs)
1140
+ response = self.process_response(response)
1141
+ history = history + [(query, response)]
1142
+ return response, history
1143
+
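+ # Typical usage sketch for chat() (the repository id below is illustrative; any
+ # checkpoint shipping this modeling code works the same way with
+ # trust_remote_code=True):
+ #
+ #     from transformers import AutoTokenizer, AutoModel
+ #     tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
+ #     model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).half().cuda()
+ #     response, history = model.chat(tokenizer, "Hello", history=[])
+ #     response, history = model.chat(tokenizer, "Tell me more", history=history)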
1144
+ @torch.no_grad()
1145
+ def stream_chat(self, tokenizer, query: str, history: List[Tuple[str, str]] = None, max_length: int = 2048,
1146
+ do_sample=True, top_p=0.7, temperature=0.95, logits_processor=None, **kwargs):
1147
+ if history is None:
1148
+ history = []
1149
+ if logits_processor is None:
1150
+ logits_processor = LogitsProcessorList()
1151
+ logits_processor.append(InvalidScoreLogitsProcessor())
1152
+ gen_kwargs = {"max_length": max_length, "do_sample": do_sample, "top_p": top_p,
1153
+ "temperature": temperature, "logits_processor": logits_processor, **kwargs}
1154
+ if not history:
1155
+ prompt = query
1156
+ else:
1157
+ prompt = ""
1158
+ for i, (old_query, response) in enumerate(history):
1159
+ prompt += "[Round {}]\n问:{}\n答:{}\n".format(i, old_query, response)
1160
+ prompt += "[Round {}]\n问:{}\n答:".format(len(history), query)
1161
+ input_ids = tokenizer([prompt], return_tensors="pt", padding=True)
1162
+ input_ids = input_ids.to(self.device)
1163
+ for outputs in self.stream_generate(**input_ids, **gen_kwargs):
1164
+ outputs = outputs.tolist()[0][len(input_ids["input_ids"][0]):]
1165
+ response = tokenizer.decode(outputs)
1166
+ response = self.process_response(response)
1167
+ new_history = history + [(query, response)]
1168
+ yield response, new_history
1169
+
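+ # Streaming variant of the sketch above: each iteration yields the response
+ # decoded so far together with the updated history, so a UI can render tokens
+ # incrementally:
+ #
+ #     for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
+ #         print(response)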
1170
+ @torch.no_grad()
1171
+ def stream_generate(
1172
+ self,
1173
+ input_ids,
1174
+ generation_config: Optional[GenerationConfig] = None,
1175
+ logits_processor: Optional[LogitsProcessorList] = None,
1176
+ stopping_criteria: Optional[StoppingCriteriaList] = None,
1177
+ prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
1178
+ **kwargs,
1179
+ ):
1180
+ batch_size, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
1181
+
1182
+ if generation_config is None:
1183
+ generation_config = self.generation_config
1184
+ generation_config = copy.deepcopy(generation_config)
1185
+ model_kwargs = generation_config.update(**kwargs)
1186
+ bos_token_id, eos_token_id = generation_config.bos_token_id, generation_config.eos_token_id
1187
+
1188
+ if isinstance(eos_token_id, int):
1189
+ eos_token_id = [eos_token_id]
1190
+
1191
+ has_default_max_length = kwargs.get("max_length") is None and generation_config.max_length is not None
1192
+ if has_default_max_length and generation_config.max_new_tokens is None:
1193
+ warnings.warn(
1194
+ f"Using `max_length`'s default ({generation_config.max_length}) to control the generation length. "
1195
+ "This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we"
1196
+ " recommend using `max_new_tokens` to control the maximum length of the generation.",
1197
+ UserWarning,
1198
+ )
1199
+ elif generation_config.max_new_tokens is not None:
1200
+ generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length
1201
+ if not has_default_max_length:
1202
+ logger.warning(
1203
+ f"Both `max_new_tokens` (={generation_config.max_new_tokens}) and `max_length`(="
1204
+ f"{generation_config.max_length}) seem to have been set. `max_new_tokens` will take precedence. "
1205
+ "Please refer to the documentation for more information. "
1206
+ "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)"
1208
+ )
1209
+
1210
+ if input_ids_seq_length >= generation_config.max_length:
1211
+ input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
1212
+ logger.warning(
1213
+ f"Input length of {input_ids_string} is {input_ids_seq_length}, but `max_length` is set to"
1214
+ f" {generation_config.max_length}. This can lead to unexpected behavior. You should consider"
1215
+ " increasing `max_new_tokens`."
1216
+ )
1217
+
1218
+ # 2. Set generation parameters if not already defined
1219
+ logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
1220
+ stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
1221
+
1222
+ logits_processor = self._get_logits_processor(
1223
+ generation_config=generation_config,
1224
+ input_ids_seq_length=input_ids_seq_length,
1225
+ encoder_input_ids=input_ids,
1226
+ prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
1227
+ logits_processor=logits_processor,
1228
+ )
1229
+
1230
+ stopping_criteria = self._get_stopping_criteria(
1231
+ generation_config=generation_config, stopping_criteria=stopping_criteria
1232
+ )
1233
+ logits_warper = self._get_logits_warper(generation_config)
1234
+
1235
+ unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
1236
+ scores = None
1237
+ while True:
1238
+ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
1239
+ # forward pass to get next token
1240
+ outputs = self(
1241
+ **model_inputs,
1242
+ return_dict=True,
1243
+ output_attentions=False,
1244
+ output_hidden_states=False,
1245
+ )
1246
+
1247
+ next_token_logits = outputs.logits[:, -1, :]
1248
+
1249
+ # pre-process distribution
1250
+ next_token_scores = logits_processor(input_ids, next_token_logits)
1251
+ next_token_scores = logits_warper(input_ids, next_token_scores)
1252
+
1253
+ # sample
1254
+ probs = nn.functional.softmax(next_token_scores, dim=-1)
1255
+ if generation_config.do_sample:
1256
+ next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
1257
+ else:
1258
+ next_tokens = torch.argmax(probs, dim=-1)
1259
+
1260
+ # update generated ids, model inputs, and length for next step
1261
+ input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
1262
+ model_kwargs = self._update_model_kwargs_for_generation(
1263
+ outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
1264
+ )
1265
+ unfinished_sequences = unfinished_sequences.mul((sum(next_tokens != i for i in eos_token_id)).long())
1266
+
1267
+ # stop when each sentence is finished, or if we exceed the maximum length
1268
+ if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
1269
+ break
1270
+ yield input_ids
1271
+
1272
+ def quantize(self, bits: int, quantize_embeddings=False, use_quantization_cache=False, empty_init=False, **kwargs):
1273
+ if bits == 0:
1274
+ return
1275
+
1276
+ from .quantization import quantize, QuantizedEmbedding, QuantizedLinear, load_cpu_kernel
1277
+
1278
+ if self.quantized:
1279
+ if self.device == torch.device("cpu"):
1280
+ logger.info("Already quantized, reloading cpu kernel.")
1281
+ load_cpu_kernel(**kwargs)
1282
+ else:
1283
+ logger.info("Already quantized.")
1284
+ return self
1285
+
1286
+ self.quantized = True
1287
+
1288
+ self.config.quantization_bit = bits
1289
+ self.config.quantization_embeddings = quantize_embeddings
1290
+
1291
+ self.transformer = quantize(self.transformer, bits, use_quantization_cache=use_quantization_cache, empty_init=empty_init, **kwargs)
1292
+
1293
+ if quantize_embeddings:
1294
+ logger.info("Applying quantization to embeddings")
1295
+ self.transformer.word_embeddings = QuantizedEmbedding(
1296
+ weight_bit_width=bits,
1297
+ weight_tensor=self.transformer.word_embeddings.weight.to(self.device),
1298
+ num_embeddings=self.transformer.word_embeddings.num_embeddings,
1299
+ embedding_dim=self.transformer.word_embeddings.embedding_dim,
1300
+ dtype=torch.half,
1301
+ empty_init=True,
1302
+ device=self.transformer.word_embeddings.weight.device,
1303
+ )
1304
+ self.lm_head = QuantizedLinear(
1305
+ weight_bit_width=bits,
1306
+ weight_tensor=self.lm_head.weight.to(self.device),
1307
+ bias_tensor=None,
1308
+ in_features=self.lm_head.in_features,
1309
+ out_features=self.lm_head.out_features,
1310
+ bias=False,
1311
+ quantized_weight=self.transformer.word_embeddings.weight,
1312
+ quantized_weight_scale=self.transformer.word_embeddings.weight_scale,
1313
+ dtype=torch.half,
1314
+ empty_init=True,
1315
+ device=self.lm_head.weight.device,
1316
+ )
1317
+
1318
+ return self
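+ # Usage sketch for quantize(): this int4 checkpoint already quantizes itself in
+ # __init__ via config.quantization_bit, so an explicit call is normally not
+ # needed; on a full-precision checkpoint one could instead run, for example:
+ #
+ #     model = model.quantize(bits=4, quantize_embeddings=True)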
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a7236a772df5ef4c2f62ad733adb7ad2c5dd17cde8d8c7a31ba7848e17d8affd
3
+ size 3132377293
quantization.py ADDED
@@ -0,0 +1,476 @@
1
+ from torch.nn import Linear, Embedding
2
+ from torch.nn.parameter import Parameter
3
+ import torch.nn.functional as F
4
+
5
+ import os
6
+ import bz2
7
+ import torch
8
+ import base64
9
+ import ctypes
10
+
11
+ from typing import List
12
+ from functools import partial
13
+
14
+ try:
15
+ from cpm_kernels.kernels.base import LazyKernelCModule, KernelFunction, round_up
16
+
17
+ class Kernel:
18
+ def __init__(self, code: bytes, function_names: List[str]):
19
+ self.code = code
20
+ self._function_names = function_names
21
+ self._cmodule = LazyKernelCModule(self.code)
22
+
23
+ for name in self._function_names:
24
+ setattr(self, name, KernelFunction(self._cmodule, name))
25
+
26
+ quantization_code = "$QlpoOTFBWSZTWU9yuJUAQHN//////////f/n/8/n///n//bt4dTidcVx8X3V9FV/92/v4B7/AD5FBQFAAAChSgKpFCFAFVSigUAAAEKhSgUUqgFBKigqVREQAABQBQIANDTTIGI00BkZBkNGE0A0BkBkGQGRkaNAaAGQNBoGgDIAAYIGTI0DQAQAaGmmQMRpoDIyDIaMJoBoDIDIMgMjI0aA0AMgaDQNAGQAAwQMmRoGgAgA0NNMgYjTQGRkGQ0YTQDQGQGQZAZGRo0BoAZA0GgaAMgABggZMjQNABABoaaZAxGmgMjIMhowmgGgMgMgyAyMjRoDQAyBoNA0AZAADBAyZGgaAAmqU1NEgJqnptU/Sn4jRR6J6epk2pqb1Q/SgAPUGgyNNGjQ2SBpoAZAAGg0NB6mgDIAAAAA2oaApSREBNAARhGiYEaEwU8pvImlP0k2aam1GaGqbFNM1MHpTwmkepmyU9R6nqPKekHqNNPUxNGhp6n6p6QaZ6o9TG1GMqcoV9ly6nRanHlq6zPNbnGZNi6HSug+2nPiZ13XcnFYZW+45W11CumhzYhchOJ2GLLV1OBjBjGf4TptOddTSOcVxhqYZMYwZXZZY00zI1paX5X9J+b+f4e+x43RXSxXPOdquiGpduatGyXneN696M9t4HU2eR5XX/kPhP261NTx3JO1Ow7LyuDmeo9a7d351T1ZxnvnrvYnrXv/hXxPCeuYx2XsNmO003eg9J3Z6U7b23meJ4ri01OdzTk9BNO96brz+qT5nuvvH3ds/G+m/JcG/F2XYuhXlvO+jP7U3XgrzPN/lr8Sf1n6j4j7jZs+s/T0tNaNNYzTs12rxjwztHlnire3Nzc3N1wuBwOBwXBvZfoHpD7rFmR99V5vj3aXza3xdBbXMalubTg/jIv5dfAi54Pdc75j4z412n3Npj3Ld/ENm7a3b/Cod6h/ret1/5vn/C+l+gdslMvgPSLJ8d8q+U66fevYn/tW1chleEtNTGlcHCbLRlq0tHzF5tsbbZZfHjjLgZu42XCuC3NrdjTasZGNzgxPIrGqp7r3p7L2p5XjnpPSmTd5XtzqnB6U87zzg1Ol0zd0zsLszxR6lkxp35u6/teL0L0W922cR7Lu1lpL9CsHirzuM2T+BgsyViT6LHcm0/Vr6U/7LGGyJeqTEjt0PHWhF5mCT7R9mtlDwriYv0Tyr/OxYt6qp5r0mPVT0608TqnqMZaarU2nFwrTzzlrs1ed7z1ux60wyr4ydCaTi3enW8x68x0zU7tXSlcmPSW1mGpWJMg4zmPC2lK96tp0OE80y4MfEvnZj8zGluR6b22ki1Ou9V2nCd9xovcPvcYMZYy0lvN60ScZ45vN6yeCeeXFb1lVjnnCar5fwXwE2bzJ4HI1XVPXfXZMm44GUsMpYsmLB65TuVdm0cl0b+i/wGNN66XjeV7zuPpHcnK/juhhjdfId5jMdE5nN0dGmmm2zZs2cexD5n9p/dY352XsvXHaZNWWsmmS1atjR452nYudzvqv2HMRyvNNnlMcDl3R2+yx2uVrBubTW9icHDVtbNXlZm7jma1rM4VurZZd2y6nUau7ZXZ7bVU+mnoOVxZGMrVmvX60605JwmzGZhhhjTWtaaaMaaGTGmNMZasY0iX8VMUl8eepaIrzGSpemWOQyZORk2bNpjUybMmxqYmknCGCFynutfksaZpjTNMaaatM0xsxcGR0sociNqxNSmhhR1ZJPbsn8qyF0t2qH6iYBclclalbtTTcHTDsPaX6rlnElph2Jyumumtynv2Kk8GI7rsvXbIcJgHJOSaSXnnGaI3m87RtVXJOZ/YtgdTE6Wpha6ZlE8ayXkef1fh602r2WwvfMXtMdLlkfnLFdYYwYso+bWqm7yJqHXZGw2nrS5ZanSYnWlxBxMF1V940K2wdrI7R6OYf7DGGamMmTSbRhlS45xmVOumF1EyPCmHrrN8wwZOOrdNtLeMtzFzDlWnfTBxMk2NaXIZHBYxYLD4w8yju0ao65Vz1OIXoS9dLanwCe1PWrYuWMqf1if1z2k2yYfKJ741PDgno1ZQ8DRqvUny3mNoWTzGO6m1DkrJI8JiR5cSd+vZdGOO8nrMoc5+NDUFsMSXaZJeNlMmGLtJsovOsUp7I9S5VojKxF6bTVEelXqlfJobQr3LozSh2Jk7VcrVMfhXqszGWMzNqGhqZY0OadxkyyMssKugZR0KNFXBHlqwmJgTE/BNVMk6ItJXZMR0H47GpXv/DMOvNkmVuaV1PRfEdxuqc7Hcd+ZV/zTLaRxWk0nl9CdCeM6mn5rstHIBcpiuwmUZXeq81DacHI2rmrZ5SuE5mOZd6LQrZg9mx32TprA8BMo5jKN6yLTCi3WzQaZSuhzTtM1fUTGVpG8Tw+KXI0tjEpiWxtLYynOlktSbVlaI5kxP8TDH8kx50xoxi5KcA4pcja8KWLRlO/Ks6q06ergnvm1ca3Tq8Uw7LTUsmWyctXPWmpitl/uvGcWTGXGuAXDfhqazGmjkxcJW5hMMMMpYsXl2TZYtVOddG3XCarUt6Ptq9CZXSNzyuRzqRZOjsxdBbFVz6OA5HI43r1jityVlVpVkxmOsyaYWE1NTGq1sOVh36mHMcxtSvcy70edG0ZGR3I1Go1GRlV7mWWo1G0ZGRqlvH40l7o4m5xMWLLLYyNjnqc8556mdPqLJ31n/1nWOncxzG1tizrHs/Z+d2vP/B/l8wdJ6rHUn2nbbDq4p6htFtYzMMMTaZis1K5GKzGNmxhmUx2DDlZ/qNnIx41xnaMfCZWYaZWtNLTNW8ND4Fw1MyZOCdM428suKG1ehW8TesOydg7J+YYcD4cYR+8dFK6M4E3HM9ZfRNNL+Sn6rsl4DsrDl2HpPCnfxjGXtbZtYys1ttlyJ4T+BvexjGWRjMszK4Jpc77D3GyuVD7q0+G8m9G+2+rGm7cOR2y7FdtY2XUYx/oNlfRYxhMYyYZkyyg55enna9Kt/FFi6GMMwYwdwxWgxGMLKYmUyGExTKMZkMFhkymKuh0NOBNnBu+23LdwDoZYYzGGMxtORaTU1pjTGWTTGGtMrNWUsyyTTLLG1qy2ZjbK2DBllWqxMtBMaYZQmcE7zvvRcTkclUwdkxTaSdyySt/7fpL+T1v516Ji97fwr5JbLu305zMn5+GMTTZ9F+y7ExwmGVfG44yxn3dLv6l5i+Wth1jCrDq21nW9LqvvDzz3Vf3LLH/O/32TJ/erx3bXftO4eF+G956D952K/An4NfvOpjFjExjevP/UmE0fIoZXx6/w6lX/no3D0bLt+ixjieBM6ksRd0yB4Lt2SwYNE+gd1detlZWUnpiZfGfFaK+4PyCa/v18V8X75pe9fLXzp7l3VjF76vWZmHwGz1IZNWT7b8yddJ4q5kyrVdfru6atWc7bVYztL9Jf4GXvT+Y8m9/YsXP6H018a8D4XVOqvfzqeR+6yZOD8dPv0+U7/q5Pl+2dNb0MjzGVH5p6MNQ7cOWvw62U9aHE8DprDek+McLyvDz+t
e+9Zhq5+YTruufMcWMabqysTmZVWjKPfnK0wyVcrsuhjZRdLkHNvD72b9abriOSGIxiLixMOoalNPXzy+wT/tf+U6HHONfsz+xe8ufHBdQWWGWLA9if0rsnmrxK5LvRZQeWsTCsrmOYy8VteVfuRfcVTtDLItLIsMYxZLdU/DbtSemxF6Z6Zo5WBXE4tFdCyVMMXMTEMZXVlS6Xec2T4e0tHsRcEuWshcJ2YsNF5rUx1E8ifCq6Z+ZP7qdCeu/aTwFd53l16/o0NOw6O3dLavP4Hbi4RdmuDk6DoYaninC0+o4uZjbJ7Rxeu0/FbuFg+q7DVS6fQe0rZ6NDGUNNU6DEqOaLTicKnYZMnBWruljQxoaS3dZhocDge0bSTyOvdAbG5hxe2xji7E/L55xX13wWNDi6HCekcFxfCPGxY0MXC+s7afWaMdDyjyr+o8Rudm/NabOZvdl274zH4f5XK9z6On1Pe/K5TdPAslg77BjuO6Y3eO7GqvOPG/stknp1leyvLL0Z7bl9I4noMvLkzytLhWYzrOZzLXCORe028rORzOg4N/L0HlMOQ3Pgmnbb6KczlabORpu980q37TBqRu0/p3PO6234Bl03Ynuz+9W7gnsEcmvYaYY3aMYY0wx3pYd+ujsXauWdaY5Xkbtl23fPzFHiDB/QMo0yFjBllYxTQYYyxkrwn7JufwJ/PfgJ+C83X69ni6zvXcnyXabv0ncbLwsceS+RNlyN2mnneJtX0ngYO0+e+0+UnA+Wch3ji8hj5an4h+i6XBySU4n+R0roVcbw5yvHrmr4Yw8Y7x6c+9POPYHI5HI5HI5HI5HGXGww4nE4nrVyOR8XeqPEO7PLOiukYa3Novk5hV4cdtYZLI93e+uxff2jRo0aNGjRo0aNG1bVtW1dy3m83m8+tQ5ZzHw3nObwOu8La9Rc1dtkdS8A3eTk823tnktXWlxN6Oixe06zrN70Isd9jiOgZFq9yfkPqP/SLhN2Myl8jDM43bl1nbcb4cO57jlh8Jow6pzXZdL4dyODTuuhu77FyO27DdwdRxmvO+O+3N2+BdqyTwLHVczDVY4UPE4O66/ZO2cx1LFzVdSXtF7G4HMbrauOHRw6c8FdZ5m9fHZHYZXfTlZquyynSyTTKke6vcffSD9pzPA/G7n7jxPmuhc1DHMynPMrGL6AdewYmwu5ko+UUyTwrMv27rPH1v1nGqd87+p6N6LU8k3NEng53xXyHS97+44OSg/sy/hn+Se6yfYNjW0/uTgP+PvWYzLMmjhcLB/gGpri6H83/84eUXWT6T9Hsv7785z/7z4icpW+zfXypuR7rx/gMdZb1/wC678pcs8/2a3mDitGHxl9mfPlll5MafWWqxk/eYuTDgcNMzDGWLWvsuglNxs53GtN6uWpktlW1tZZYcuinMMWmnNnJydze3b2Y1McBxrBkXw799izLMZZYyy0TkbsGM4p03S2uVu5s/XXUdSdec6smVxZYYGpVmT8A+8ajuEyV5FatkvVru2x6uxGXXbH4A+jvgP4GMYy3iPLXzq/6z65+E005ey+cwMZD3fZcqc6xpjTFjQ0P3U+e++cPYmTIwj0nrK5NPTfl3WvpfLtXDcb2HQMudYOxFXQBor4L4T6vrOauFctYXJQ++NUWmJe5bmx1jDiZS1dTqWxo4GR8jm3fttpmPHppk9PEyv4/y8/sO07XacOmcqc0x2Vi9BvNJvN5oW8x4mOsydpidRxMYJPx06m1bqPzq9KtK8sxXNXFodD/+MYYaJTLwOhc9brCsV18oOR1i4tXChyTkq4lf4y1Ke+9axjDHqs1mfBbMXuP4Hzi+X7t8vzv7bHerrUPgPCxhjre4fXdfLNtNM+Jd+Zdh8xd8wP87uNPoPgv4W7/5P2BuxfsMabNnMnza+54Pdi5U671GPZY8CehX8Voeoo7FHpkeEc6715FwHZrIrUrHaviPUbPZHND+IhczrP6FcYvhOZ0Di/ETt0OI+YwNWR9r7tpf6WDeZKZDB1+z2IthOl1mPyb5FluvEx9h9d0NnM0Y1XPFkWIsk1WotJ0PBMmkvjvQTd0e71tfeV+8r8lQ/tpzpsmxJ+InrI/dj2UajUajVTUajatRqNRtGo1Go1Go4wjeMpZFMVV9CHbofPraLsJ3JpWV2XOoanCuFky4y3PPNxucK2uKC1Lbdb1eo+m5XomN6HfeZsabHLHRX/K+offtNGGmHWctcVcG44MdSqsOLY9VzX+Zxfxn2HPdWTpzWvkrtJ8M5zorrKcquRytJ5N5DZmcaW02l76nWO+BqPXm1A2Ry/0q71dH/mqrqeFjkYxjEXtsX8qubTk67rGycyqsdm4tZx5D6D5hhi0waaWmiaMP81Yjii5qxPlPuU/GfTL1Y5E6Jyfiq63qTa39A4J0sOGDgO9WF9bOXl0XfPRbsY2bPNKPy1YrFYrFYmRhhlTIyMjJWJYZHXuCXI8OoXsvfljGLFicNifpp2XunoPiG1wtx3p1Tah+/DD66OnVtVXP9rKbVxOnL0tR/rHtqB5UDErUVcl11D4qqvjpOcxX7armUNJB3LpW6bxVvD08e8h3odKKvyCFZBdSh2FVcST9xV3n3T8t1j7Kr9qgrqXg+13Pt5U7JCvFXVIV1YG5lRhkVYZJYYDDD4KOIMoHCp26WS8GB7uBh2zIdgq/PKyInjV2STShuoapUdCpX1yTwqq/z1VvET7Kh5nVPkO8YyxjLt2MaaMmWTLQvx3qnzltnXW0p2jxgbEtSny/Osv8Y9pLMXYoHVPAhkVdWVeODhR6q9/Sxe2liwwZWMVvFXfRkeIDxAePUPIrdJ4ey6yquzH+PD/bUOWAu05qVHtFd8rrKHSoeNIOUqrYr3FXyToqfYJgwmJdKpXXOwYYegNNGMzfZPp/t3t/DVs4zjNTN61rRqaWaa4NYbRjTa0tWwy2Y2tGN8ZO8ofNKq4j9SL7I+cSm4/6ovLV5HNXLI0jJidwrtk6ynCaP6Z++GjRlWS3tLeW129Mi9evxU9mtz6s5J3Z7M2ngTgnKvmpomxpaLCzPfmx0JWE+m3NLDDGOX47RctdYYNK5jakdqLkRlI39n590T5zctGSwwZZDJj6kW8XSi6ot2MmWWJ0DUT3nuvebBudScjZ79g8cWJ8av0k+/bE5WKd5MdbFpbDVMxu1DVMmtNZGJvq1mtRbn6M+g/kP0FwDwr7quZs7xosNGpbscyxhhd9TyJyFwbLcxlTasg75vW7TsV5K7ji44XPMMrdoj+Y3rT0Hie62nlYV/pwczzOmdLqLhYkzGMzCZWGMQzGMSsZYY6Di1t4nlJ+Em63mJxrVLxPbYxNEdgc1dU2iOKyoYYWjNrEeHTYybVk0atSa7ehuwsWMWTqn1TrnS6hYsi71d1+s+k+ic70e20fzE/VaTdxT9ZtU4GIXdeNx3X77guYYfpHeTQjaMX6brOu4OY4K7Y2d9mbHarI5ox3p4GpJ2Vd/Tst60f7j999pppjR+Q/Qf8J/VaORs3cji7FfFuN61+ui9s8hix1OCh5KGVV23BPXvZfz3CLyH
pix+exi8z/KnCnosY2eunor+cxyPO/xJ0vKey9OvE9VjqaYu0x3Z3jd6o2b1T12D+F8l232lwaaacD5LE8LBxu7WTlbWraWpew8Xexjel3E+wWD4APITdNqR8F3R3T0lunCQ4GaE9R37DxeCYfcHi4xci5ovKfxVs55y2hf+65E/Xdp6jR5nrebTmi5incpkyOjs50JvrZwstbbW6kfuuQw+2mykf/EXNFzxfKTrxew929TR6bWnGL//F3JFOFCQT3K4lQ"
27
+
28
+ kernels = Kernel(
29
+ bz2.decompress(base64.b64decode(quantization_code)),
30
+ [
31
+ "int4WeightCompression",
32
+ "int4WeightExtractionFloat",
33
+ "int4WeightExtractionHalf",
34
+ "int8WeightExtractionFloat",
35
+ "int8WeightExtractionHalf",
36
+ ],
37
+ )
38
+ except Exception as exception:
39
+ kernels = None
40
+ print("Failed to load cpm_kernels:", exception)
41
+
42
+
43
+ class W8A16Linear(torch.autograd.Function):
44
+ @staticmethod
45
+ def forward(ctx, inp: torch.Tensor, quant_w: torch.Tensor, scale_w: torch.Tensor, weight_bit_width):
46
+ ctx.inp_shape = inp.size()
47
+ ctx.weight_shape = quant_w.size()
48
+ ctx.weight_bit_width = weight_bit_width
49
+ out_features = quant_w.size(0)
50
+ inp = inp.contiguous().view(-1, inp.size(-1))
51
+ weight = extract_weight_to_half(quant_w, scale_w, weight_bit_width)
52
+ output = inp.mm(weight.t())
53
+ ctx.save_for_backward(inp, quant_w, scale_w)
54
+ return output.view(*(ctx.inp_shape[:-1] + (out_features,)))
55
+
56
+ @staticmethod
57
+ def backward(ctx, grad_output: torch.Tensor):
58
+ inp, quant_w, scale_w = ctx.saved_tensors
59
+ weight = extract_weight_to_half(quant_w, scale_w, ctx.weight_bit_width)
60
+ grad_output = grad_output.contiguous().view(-1, weight.size(0))
61
+ grad_input = grad_output.mm(weight)
62
+ grad_weight = grad_output.t().mm(inp)
63
+ return grad_input.view(ctx.inp_shape), grad_weight.view(ctx.weight_shape), None
64
+
65
+
66
+ class W8A16LinearCPU(torch.autograd.Function):
67
+ @staticmethod
68
+ def forward(ctx, inp: torch.Tensor, quant_w: torch.Tensor, scale_w: torch.Tensor, weight_bit_width, quantization_cache=None):
69
+ ctx.inp_shape = inp.size()
70
+ ctx.weight_shape = quant_w.size()
71
+ ctx.weight_bit_width = weight_bit_width
72
+ out_features = quant_w.size(0)
73
+ inp = inp.contiguous().view(-1, inp.size(-1))
74
+ weight = extract_weight_to_float(quant_w, scale_w, weight_bit_width, quantization_cache=quantization_cache)
75
+ output = inp.mm(weight.t())
76
+ ctx.save_for_backward(inp, quant_w, scale_w)
77
+ return output.view(*(ctx.inp_shape[:-1] + (out_features,)))
78
+
79
+ @staticmethod
80
+ def backward(ctx, grad_output: torch.Tensor):
81
+ inp, quant_w, scale_w = ctx.saved_tensors
82
+ weight = extract_weight_to_float(quant_w, scale_w, ctx.weight_bit_width)
83
+ grad_output = grad_output.contiguous().view(-1, weight.size(0))
84
+ grad_input = grad_output.mm(weight)
85
+ grad_weight = grad_output.t().mm(inp)
86
+ return grad_input.view(ctx.inp_shape), grad_weight.view(ctx.weight_shape), None
87
+
88
+
89
+ default_cpu_kernel_code_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "quantization_kernels.c")
90
+ default_cpu_kernel_code = "QlpoOTFBWSZTWXLbSoQAAgzbgERwQXxmTwAAr/ff3kABt0Q2oRVT0hpo9RtEAAAAyBEiSQ9EGjQGQAAAwANGhowjJoNGmgMEUplMTNSMJ5TQaDJpsoMyRMj8P4mZzFSVVwqSXG8GG7MlVwiToYEQwVD7noBxMhNfkeZYtYFtbgOBUSIGtIQjhNHCEnPJsadhb3yBmRIOD3TeAtNLSaU5GgvKUBWSNuuOIHmVt0YhW6rsmDMDUjeUJGJ64R1Jm5lrh0Aa0tKjhFwPdWcGogxLDSXPWQUWTM8Sd3Qz1HMYNxx3HMeiNqNo4jeRDEfZ3gUSHIcU/heomq0vEzL1Msz5KKGxH8FrNOYw3KaxdqaEmNHYMxJFgQbR0DyRknL2L4kwUSxKRdhjRpEtUqilVfggFL1klaMS3PPRDfNqbBOPWO7m4JTVGhS9QTBDDJaEbLbrUQNB+IpJSKQbG5SZZ5gkwJEhJ3aYKJipZ/i7kinChIOW2lQg"
91
+ default_cpu_parallel_kernel_code_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "quantization_kernels_parallel.c")
92
+ default_cpu_parallel_kernel_code = "QlpoOTFBWSZTWZzWK2UAALXbgERwSX1mTwAAr/ff3kACNyXSbZYwBpoaNGIyAaADQwRRFT/UKDINANqAD1NABFQlPUzaaJHppGRmoAG01ARKKaaMp4gmgaNAaDQDIKVKfZ/g6v1Kem5ZsWZmZtSXS5ZwRAzKmjr1E1lKMEoQNCPkEYPACgcR5I9w/0k6JrJYHqFuHnChcD7N+DHeOQ0ajF83Tc40jgmQbOB5wt3TEHyTObDBLoxrJGBuJmNbxYZwAoKTjbIcI7GsbuVRERAR8wqwhXQjQOxiHQlgSnHjQjddXERojNmQYJJVoM2xxawMeI9asi6E1rfd7GO8S0S5vacCNGry4F1nyZbcTvSBXEMipuPfM7i0Y8kjirpbxb05jpIQjCGE8DYBNCAZyHz9EoOpDRST/I1aFCNpcjoXgyc3NjVsUvYIaYq7xopYJqcxg2g4qXofm7AaGNTzJSNguOQw4utKcEl0F1UOgI+T1hk5LusbGZ9udC1CiBeGwwFxR/QdbZDndehRPxyGt3Me1DBW45MXIY24ZD30aFNuSEUdu5LWx1sSJWLGgsmqUIFTgWhU0gfxXpzhghr2AYpV3hE06mGk1I2JyuZiFgkiz/i7kinChITmsVso"
93
+
94
+ cpu_kernels = None
95
+
96
+
97
+ class CPUKernel:
98
+ def __init__(self, kernel_file="", source_code=default_cpu_kernel_code_path, compile_parallel_kernel=None, parallel_num=None):
99
+ self.load = False
100
+ self.int8WeightExtractionFloat = None
101
+ self.int4WeightExtractionFloat = None
102
+ self.int4WeightCompression = None
103
+ self.SetNumThreads = None
104
+
105
+ try:
106
+ if not os.path.exists(default_cpu_kernel_code_path):
107
+ with open(default_cpu_kernel_code_path, "w", encoding="utf-8") as file:
108
+ code = default_cpu_kernel_code
109
+ cpu_quantization_code = bz2.decompress(base64.b64decode(code)).decode()
110
+ file.write(cpu_quantization_code)
111
+
112
+ if not os.path.exists(default_cpu_parallel_kernel_code_path):
113
+ with open(default_cpu_parallel_kernel_code_path, "w", encoding="utf-8") as file:
114
+ code = default_cpu_parallel_kernel_code
115
+ cpu_quantization_code = bz2.decompress(base64.b64decode(code)).decode()
116
+ file.write(cpu_quantization_code)
117
+
118
+ except Exception as ex:
119
+ print("Error when generating default cpu kernel code (can be ignored when using custom kernels):", ex)
120
+
121
+ if compile_parallel_kernel is None:
122
+ compile_parallel_kernel = bool(int(os.cpu_count()) >= 4)
123
+
124
+ if compile_parallel_kernel and source_code == default_cpu_kernel_code_path:
125
+ source_code = default_cpu_parallel_kernel_code_path
126
+
127
+ if (not kernel_file) or (not os.path.exists(kernel_file)):
128
+ print("No compiled kernel found.")
129
+ try:
130
+ if os.path.exists(source_code):
131
+ print("Compiling kernels :", source_code)
132
+ kernel_file = source_code[:-2] + ".so"
133
+ if compile_parallel_kernel:
134
+ compile_command = "gcc -O3 -fPIC -pthread -fopenmp -std=c99 {} -shared -o {}".format(source_code, kernel_file)
135
+ print("Compiling", compile_command)
136
+ exit_state = os.system(compile_command)
137
+ if exit_state:
138
+ print("Compile failed, using default cpu kernel code.")
139
+ compile_parallel_kernel = False
140
+ source_code = default_cpu_kernel_code_path
141
+ kernel_file = source_code[:-2] + ".so"
142
+ compile_command = "gcc -O3 -fPIC -std=c99 {} -shared -o {}".format(source_code, kernel_file)
143
+ print("Compiling", compile_command)
144
+ else:
145
+ compile_command = "gcc -O3 -fPIC -std=c99 {} -shared -o {}".format(source_code, kernel_file)
146
+ print("Compiling", compile_command)
147
+ exit_state = os.system(compile_command)
148
+
149
+ print("Kernels compiled :", kernel_file)
150
+ else:
151
+ print("Kernel source code not found.")
152
+ return
153
+ except Exception as build_error:
154
+ print("Failed to build kernel:", build_error)
155
+ return
156
+ if kernel_file:
157
+ kernels = ctypes.cdll.LoadLibrary(kernel_file)
158
+ self.int8WeightExtractionFloat = kernels.extract_int8_weight_to_float
159
+ self.int4WeightExtractionFloat = kernels.extract_int4_weight_to_float
160
+ self.int4WeightCompression = kernels.compress_int4_weight
161
+ if compile_parallel_kernel:
162
+ try:
163
+ self.SetNumThreads = kernels.set_num_threads
164
+ except:
165
+ print("No set_num_threads() found in kernel.")
166
+ self.SetNumThreads = lambda x: x
167
+ self.load = True
168
+ print("Load kernel :", kernel_file)
169
+ else:
170
+ print("Failed to load kernel.")
171
+
172
+ if compile_parallel_kernel:
173
+ if parallel_num is None:
174
+ parallel_num = max(os.cpu_count() // 2, 1)
175
+ print("Setting CPU quantization kernel threads to", parallel_num)
176
+ if parallel_num < 4:
177
+ print("Parallel kernel is not recommended when parallel num < 4.")
178
+ self.SetNumThreads(parallel_num)
179
+
180
+ self.parallel_num = parallel_num
181
+
182
+
183
+ def compress_int4_weight(weight: torch.Tensor): # (n, m)
184
+ """compress weight on cpu or cuda to int4"""
185
+ if weight.device == torch.device("cpu"):
186
+ assert isinstance(cpu_kernels, CPUKernel)
187
+ n, m = weight.size(0), weight.size(1)
188
+ assert m % 2 == 0
189
+ m = m // 2
190
+ out = torch.empty(n, m, dtype=torch.int8, device="cpu")
191
+ cpu_kernels.int4WeightCompression(
192
+ ctypes.c_void_p(weight.data_ptr()),
193
+ ctypes.c_void_p(out.data_ptr()),
194
+ ctypes.c_int32(n),
195
+ ctypes.c_int32(m)
196
+ )
197
+ return out
198
+ else:
199
+ with torch.cuda.device(weight.device):
200
+ n, m = weight.size(0), weight.size(1)
201
+ assert m % 2 == 0
202
+ m = m // 2
203
+ out = torch.empty(n, m, dtype=torch.int8, device="cuda")
204
+ stream = torch.cuda.current_stream()
205
+
206
+ gridDim = (n, 1, 1)
207
+ blockDim = (min(round_up(m, 32), 1024), 1, 1)
208
+
209
+ kernels.int4WeightCompression(
210
+ gridDim,
211
+ blockDim,
212
+ 0,
213
+ stream,
214
+ [ctypes.c_void_p(weight.data_ptr()), ctypes.c_void_p(out.data_ptr()), ctypes.c_int32(n), ctypes.c_int32(m)],
215
+ )
216
+ return out
217
+
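+ # Packing sketch for compress_int4_weight: each output int8 element stores two
+ # int4 values, so an (n, m) int8 weight becomes (n, m // 2). For example the
+ # pair (3, -2) is packed into one byte as (3 << 4) | (-2 & 15) == 0x3E.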
218
+
219
+ def extract_weight_to_half(weight: torch.Tensor, scale_list: torch.Tensor, source_bit_width: int):
220
+ if source_bit_width == 8:
221
+ func = kernels.int8WeightExtractionHalf
222
+ elif source_bit_width == 4:
223
+ func = kernels.int4WeightExtractionHalf
224
+ else:
225
+ assert False, "Unsupported bit-width"
226
+
227
+ with torch.cuda.device(weight.device):
228
+ n, m = weight.size(0), weight.size(1)
229
+ out = torch.empty(n, m * (8 // source_bit_width), dtype=torch.half, device="cuda")
230
+ stream = torch.cuda.current_stream()
231
+
232
+ gridDim = (n, 1, 1)
233
+ blockDim = (min(round_up(m, 32), 1024), 1, 1)
234
+
235
+ func(
236
+ gridDim,
237
+ blockDim,
238
+ 0,
239
+ stream,
240
+ [
241
+ ctypes.c_void_p(weight.data_ptr()),
242
+ ctypes.c_void_p(scale_list.data_ptr()),
243
+ ctypes.c_void_p(out.data_ptr()),
244
+ ctypes.c_int32(n),
245
+ ctypes.c_int32(m),
246
+ ],
247
+ )
248
+ return out
249
+
250
+
251
+ def extract_weight_to_float(weight: torch.Tensor, scale_list: torch.Tensor, source_bit_width: int, quantization_cache=None):
252
+ """extract weight on cpu to float32"""
253
+ if source_bit_width == 8:
254
+ func = cpu_kernels.int8WeightExtractionFloat
255
+ elif source_bit_width == 4:
256
+ func = cpu_kernels.int4WeightExtractionFloat
257
+ else:
258
+ assert False, "Unsupported bit-width"
259
+
260
+ n, m = weight.size(0), weight.size(1)
261
+
262
+ if quantization_cache is not None:
263
+ out = quantization_cache
264
+ func(
265
+ ctypes.c_void_p(weight.data_ptr()),
266
+ ctypes.c_void_p(scale_list.data_ptr()),
267
+ ctypes.c_void_p(out.data_ptr()),
268
+ ctypes.c_int32(n),
269
+ ctypes.c_int32(m)
270
+ )
271
+ return out.tensor
272
+ else:
273
+ out = torch.empty(n, m * (8 // source_bit_width), dtype=torch.float, device="cpu")
274
+ func(
275
+ ctypes.c_void_p(weight.data_ptr()),
276
+ ctypes.c_void_p(scale_list.data_ptr()),
277
+ ctypes.c_void_p(out.data_ptr()),
278
+ ctypes.c_int32(n),
279
+ ctypes.c_int32(m)
280
+ )
281
+ return out
282
+
283
+
284
+ class CacheTensor():
285
+ def __init__(self, *args, **kwargs):
286
+ self.tensor = torch.empty(*args, **kwargs)
287
+
288
+ def to(self, *args, **kwargs):
289
+ self.tensor = self.tensor.to(*args, **kwargs)
290
+
291
+ def data_ptr(self):
292
+ return self.tensor.data_ptr()
293
+
294
+
295
+ class QuantizedLinear(Linear):
296
+ def __init__(self, weight_bit_width: int, weight_tensor=None, bias_tensor=None, quantized_weight=None, quantized_weight_scale=None, quantization_cache=None, empty_init=False, *args, **kwargs):
297
+ super(QuantizedLinear, self).__init__(*args, **kwargs)
298
+ self.weight_bit_width = weight_bit_width
299
+ self.quantization_cache = quantization_cache
300
+
301
+ if (quantized_weight is not None) and (quantized_weight_scale is not None):
302
+ del self.weight
303
+ self.weight = Parameter(quantized_weight.to(kwargs["device"]), requires_grad=False)
304
+ self.weight_scale = Parameter(quantized_weight_scale.to(kwargs["device"]), requires_grad=False)
305
+ else:
306
+ shape = self.weight.shape
307
+ del self.weight
308
+
309
+ if weight_tensor is None or empty_init:
310
+ self.weight = torch.empty(
311
+ shape[0], shape[1] * weight_bit_width // 8, dtype=torch.int8, device=kwargs["device"]
312
+ )
313
+ self.weight_scale = torch.empty(shape[0], dtype=kwargs["dtype"], device=kwargs["device"])
314
+ else:
315
+ self.weight_scale = (weight_tensor.abs().max(dim=-1).values / ((2 ** (weight_bit_width - 1)) - 1)).to(kwargs["dtype"])
316
+ self.weight = torch.round(weight_tensor / self.weight_scale[:, None]).to(torch.int8)
317
+ if weight_bit_width == 4:
318
+ self.weight = compress_int4_weight(self.weight)
319
+
320
+ self.weight = Parameter(self.weight.to(kwargs["device"]), requires_grad=False)
321
+ self.weight_scale = Parameter(self.weight_scale.to(kwargs["device"]), requires_grad=False)
322
+
323
+ if bias_tensor is not None:
324
+ self.bias = Parameter(bias_tensor.to(kwargs["device"]), requires_grad=False)
325
+ else:
326
+ self.bias = None
327
+
328
+ def reset_parameters(self):
329
+ """To accelerate initialization"""
330
+ pass
331
+
332
+ def forward(self, input):
333
+ if self.weight.device == torch.device("cpu"):
334
+ output = W8A16LinearCPU.apply(input, self.weight, self.weight_scale, self.weight_bit_width, self.quantization_cache)
335
+ else:
336
+ output = W8A16Linear.apply(input, self.weight, self.weight_scale, self.weight_bit_width)
337
+ if self.bias is not None:
338
+ output = output + self.bias
339
+ return output
340
+
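+ # Quantization sketch for the weights used above: each output row gets a
+ # symmetric scale, scale = max(|w_row|) / (2 ** (bits - 1) - 1), and the row is
+ # stored as w_q = round(w / scale) in int8; the forward pass dequantizes with
+ # w ~= w_q * scale before the matmul. E.g. with bits=8 and max(|w_row|)=0.254,
+ # scale ~= 0.002 and a weight of 0.1 is stored as round(0.1 / 0.002) = 50.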
341
+ def _apply(self, fn):
342
+ self_obj = super()._apply(fn)
343
+ if self.quantization_cache is not None:
344
+ self.quantization_cache.to(self_obj.weight.device)
345
+ self.quantization_cache.to(self_obj.weight_scale.dtype)
346
+ return self_obj
347
+
348
+
349
+ class QuantizedEmbedding(Embedding): # TODO: backward, check empty_init
350
+ def __init__(self, weight_bit_width: int, weight_tensor=None, quantized_weight=None, quantized_weight_scale=None, empty_init=False, *args, **kwargs):
351
+ super(QuantizedEmbedding, self).__init__(*args, **kwargs)
352
+ self.weight_bit_width = weight_bit_width
353
+
354
+ if (quantized_weight is not None) and (quantized_weight_scale is not None):
355
+ del self.weight
356
+ self.weight = Parameter(quantized_weight.to(kwargs["device"]), requires_grad=False)
357
+ self.weight_scale = Parameter(quantized_weight_scale.to(kwargs["device"]), requires_grad=False)
358
+ else:
359
+ shape = self.weight.shape
360
+ del self.weight
361
+
362
+ if weight_tensor is None or empty_init:
363
+ self.weight = torch.empty(
364
+ shape[0], shape[1] * weight_bit_width // 8, dtype=torch.int8, device=kwargs["device"]
365
+ )
366
+ self.weight_scale = torch.empty(shape[0], dtype=kwargs["dtype"], device=kwargs["device"])
367
+ else:
368
+ self.weight_scale = (weight_tensor.abs().max(dim=-1).values / ((2 ** (weight_bit_width - 1)) - 1)).half()
369
+ self.weight = torch.round(weight_tensor / self.weight_scale[:, None]).to(torch.int8)
370
+ if weight_bit_width == 4:
371
+ self.weight = compress_int4_weight(self.weight)
372
+
373
+ self.weight = Parameter(self.weight.to(kwargs["device"]), requires_grad=False)
374
+ self.weight_scale = Parameter(self.weight_scale.to(kwargs["device"]), requires_grad=False)
375
+
376
+ def forward(self, input):
377
+ if self.weight.device == torch.device("cpu"):
378
+ original_weight = extract_weight_to_float(weight=self.weight, scale_list=self.weight_scale, source_bit_width=self.weight_bit_width)
379
+ else:
380
+ original_weight = extract_weight_to_half(weight=self.weight, scale_list=self.weight_scale, source_bit_width=self.weight_bit_width)
381
+ output = F.embedding(
382
+ input, original_weight, self.padding_idx, self.max_norm,
383
+ self.norm_type, self.scale_grad_by_freq, self.sparse
384
+ )
385
+ return output
386
+
387
+
388
+ def load_cpu_kernel(**kwargs):
389
+ global cpu_kernels
390
+ cpu_kernels = CPUKernel(**kwargs)
391
+ assert cpu_kernels.load
392
+
393
+
394
+ def quantize(model, weight_bit_width, use_quantization_cache=False, empty_init=False, **kwargs):
395
+ """Replace fp16 linear with quantized linear"""
396
+
397
+ query_key_value_quantization_cache = None
398
+ dense_quantization_cache = None
399
+ dense_h_to_4h_quantization_cache = None
400
+ dense_4h_to_h_quantization_cache = None
401
+
402
+ try:
403
+ load_cpu_kernel(**kwargs)
404
+ except Exception:
405
+ print("Cannot load cpu kernel, don't use quantized model on cpu.")
406
+ if kernels is None: # CUDA kernels failed
407
+ print("Cannot load cuda kernel, quantization failed.")
408
+ return model
409
+
410
+ current_device = model.device
411
+
412
+ if model.device == torch.device("cpu"):
413
+ dtype=torch.float32
414
+ else:
415
+ dtype = torch.half
416
+
417
+ QuantizedLinearWithPara = partial(
418
+ QuantizedLinear,
419
+ weight_bit_width=weight_bit_width,
420
+ bias=True,
421
+ dtype=dtype,
422
+ empty_init=empty_init
423
+ )
424
+
425
+ if use_quantization_cache:
426
+ print("Using quantization cache")
427
+ layer = model.layers[0]
428
+ weight = layer.attention.query_key_value.weight
429
+ n, m = weight.size(0), weight.size(1)
430
+ query_key_value_quantization_cache = CacheTensor(n, m, dtype=dtype, device=current_device, requires_grad=False)
431
+ weight = layer.attention.dense.weight
432
+ n, m = weight.size(0), weight.size(1)
433
+ dense_quantization_cache = CacheTensor(n, m, dtype=dtype, device=current_device, requires_grad=False)
434
+ weight = layer.mlp.dense_h_to_4h.weight
435
+ n, m = weight.size(0), weight.size(1)
436
+ dense_h_to_4h_quantization_cache = CacheTensor(n, m, dtype=dtype, device=current_device, requires_grad=False)
437
+ weight = layer.mlp.dense_4h_to_h.weight
438
+ n, m = weight.size(0), weight.size(1)
439
+ dense_4h_to_h_quantization_cache = CacheTensor(n, m, dtype=dtype, device=current_device, requires_grad=False)
440
+
441
+ print("Applying quantization to glm layers")
442
+
443
+ for layer in model.layers:
444
+ layer.attention.query_key_value = QuantizedLinearWithPara(
445
+ weight_tensor=layer.attention.query_key_value.weight.to(current_device),
446
+ bias_tensor=layer.attention.query_key_value.bias,
447
+ in_features=layer.attention.query_key_value.in_features,
448
+ out_features=layer.attention.query_key_value.out_features,
449
+ device=layer.attention.query_key_value.weight.device,
450
+ quantization_cache=query_key_value_quantization_cache
451
+ )
452
+ layer.attention.dense = QuantizedLinearWithPara(
453
+ weight_tensor=layer.attention.dense.weight.to(current_device),
454
+ bias_tensor=layer.attention.dense.bias,
455
+ in_features=layer.attention.dense.in_features,
456
+ out_features=layer.attention.dense.out_features,
457
+ device=layer.attention.dense.weight.device,
458
+ quantization_cache=dense_quantization_cache
459
+ )
460
+ layer.mlp.dense_h_to_4h = QuantizedLinearWithPara(
461
+ weight_tensor=layer.mlp.dense_h_to_4h.weight.to(current_device),
462
+ bias_tensor=layer.mlp.dense_h_to_4h.bias,
463
+ in_features=layer.mlp.dense_h_to_4h.in_features,
464
+ out_features=layer.mlp.dense_h_to_4h.out_features,
465
+ device=layer.mlp.dense_h_to_4h.weight.device,
466
+ quantization_cache=dense_h_to_4h_quantization_cache
467
+ )
468
+ layer.mlp.dense_4h_to_h = QuantizedLinearWithPara(
469
+ weight_tensor=layer.mlp.dense_4h_to_h.weight.to(current_device),
470
+ bias_tensor=layer.mlp.dense_4h_to_h.bias,
471
+ in_features=layer.mlp.dense_4h_to_h.in_features,
472
+ out_features=layer.mlp.dense_4h_to_h.out_features,
473
+ device=layer.mlp.dense_4h_to_h.weight.device,
474
+ quantization_cache=dense_4h_to_h_quantization_cache
475
+ )
476
+ return model
quantization_kernels.c ADDED
@@ -0,0 +1,34 @@
1
+ void compress_int4_weight(void *weight, void *out, int n, int m)
2
+ {
3
+ for(int i=0;i<n*m;i++)
4
+ {
5
+ (*(unsigned char*)(out)) = ((*(unsigned char*)(weight)) << 4);
6
+ weight += sizeof(char);
7
+ (*(unsigned char*)(out)) |= ((*(unsigned char*)(weight)) & 15);
8
+ weight += sizeof(char);
9
+ out += sizeof(char);
10
+ }
11
+ }
12
+
13
+ void extract_int8_weight_to_float(void *weight, void *scale_list, void *out, int n, int m)
14
+ {
15
+ for(int i=0;i<n;i++)
16
+ for(int j=0;j<m;j++)
17
+ (*(float*)(out + sizeof(float) * (i * m + j))) = (*(float*)(scale_list + sizeof(float) * i)) * (*(char*)(weight + sizeof(char) * (i * m + j)));
18
+ }
19
+
20
+ void extract_int4_weight_to_float(void *weight, void *scale_list, void *out, int n, int m)
21
+ {
22
+ for(int i=0;i<n;i++)
23
+ {
24
+ for(int j=0;j<m;j++)
25
+ {
26
+ (*(float*)(out)) = (*(float*)(scale_list)) * ((*(char*)(weight)) >> 4);
27
+ out += sizeof(float);
28
+ (*(float*)(out)) = (*(float*)(scale_list)) * (((char)((*(unsigned char*)(weight)) << 4))>> 4);
29
+ out += sizeof(float);
30
+ weight += sizeof(char);
31
+ }
32
+ scale_list += sizeof(float);
33
+ }
34
+ }
quantization_kernels_parallel.c ADDED
@@ -0,0 +1,47 @@
1
+ #include <omp.h>
2
+
3
+ void set_num_threads(int n_threads)
4
+ {
5
+ omp_set_num_threads(n_threads);
6
+ }
7
+
8
+ int get_num_threads()
9
+ {
10
+ return omp_get_num_threads();
11
+ }
12
+
13
+ void compress_int4_weight(void *weight, void *out, int n, int m)
14
+ {
15
+ #pragma omp parallel for
16
+ for(int i=0;i<n;i++)
17
+ {
18
+ for(int j=0;j<m;j++)
19
+ {
20
+ (*(unsigned char*)(out + sizeof(unsigned char) * (i * m + j))) |= ((*(unsigned char*)(weight + sizeof(unsigned char) * (i * (m << 1) + (j << 1)))) << 4);
21
+ (*(unsigned char*)(out + sizeof(unsigned char) * (i * m + j))) |= (((*(unsigned char*)(weight + sizeof(unsigned char) * (i * (m << 1) + ((j << 1) | 1)))) & 15));
22
+ }
23
+ }
24
+ }
25
+
26
+ void extract_int8_weight_to_float(void *weight, void *scale_list, void *out, int n, int m)
27
+ {
28
+ #pragma omp parallel for
29
+ for(int i=0;i<n;i++)
30
+ {
31
+ for(int j=0;j<m;j++)
32
+ (*(float*)(out + sizeof(float) * (i * m + j))) = (*(float*)(scale_list + sizeof(float) * i)) * (*(char*)(weight + sizeof(char) * (i * m + j)));
33
+ }
34
+ }
35
+
36
+ void extract_int4_weight_to_float(void *weight, void *scale_list, void *out, int n, int m)
37
+ {
38
+ #pragma omp parallel for
39
+ for(int i=0;i<n;i++)
40
+ {
41
+ for(int j=0;j<m;j++)
42
+ {
43
+ (*(float*)(out + sizeof(float) * (i * (m << 1) + (j << 1)))) = (*(float*)(scale_list + sizeof(float) * i)) * ((*(char*)(weight + sizeof(char) * (i * m + j))) >> 4);
44
+ (*(float*)(out + sizeof(float) * (i * (m << 1) + ((j << 1) | 1)))) = (*(float*)(scale_list + sizeof(float) * i)) * (((char)((*(unsigned char*)(weight + sizeof(char) * (i * m + j))) << 4))>> 4);
45
+ }
46
+ }
47
+ }
tokenization_chatglm.py ADDED
@@ -0,0 +1,336 @@
1
+ """Tokenization classes for ChatGLM."""
2
+ import sys
3
+ import unicodedata
4
+ from typing import List, Optional, Union
5
+ from functools import lru_cache
6
+ import os
7
+ import collections
8
+ import re
9
+
10
+ from transformers.tokenization_utils import PreTrainedTokenizer
11
+ from icetk.text_tokenizer import TextTokenizer
12
+ from icetk.utils import auto_create
13
+ import icetk.sentencepiece_model_pb2 as sp_model
14
+ from transformers.utils import logging
15
+
16
+ logger = logging.get_logger(__name__)
17
+
18
+ PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
19
+ "THUDM/chatglm-6b": 2048,
20
+ }
21
+
22
+
23
+ class SPTokenizer:
24
+ def __init__(
25
+ self,
26
+ vocab_file,
27
+ max_blank_length=80,
28
+ byte_fallback=True,
29
+ ):
30
+ assert vocab_file is not None
31
+ self.vocab_file = vocab_file
32
+ self.special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "<unused_0>", "<sop>", "<eop>", "<ENC>", "<dBLOCK>"]
33
+ self.max_blank_length = max_blank_length
34
+ self.byte_fallback = byte_fallback
35
+ self.text_tokenizer = self._build_text_tokenizer(encode_special_tokens=False)
36
+ self.special_text_tokenizer = self._build_text_tokenizer(encode_special_tokens=True)
37
+
38
+ @staticmethod
39
+ def _configure_tokenizer(
40
+ text_tokenizer: TextTokenizer,
41
+ special_tokens: List[str],
42
+ max_blank_length: int,
43
+ byte_fallback: bool,
44
+ encode_special_tokens=False,
45
+ ):
46
+ # special token
47
+ special_token_type = 4 if encode_special_tokens else 3 # 3 - CONTROL, 4 - USER_DEFINE
48
+ for token in special_tokens:
49
+ text_tokenizer.proto.pieces.append(
50
+ sp_model.ModelProto.SentencePiece(piece=token, score=0.0, type=special_token_type)
51
+ )
52
+ # whitespaces
53
+ for token in [SPTokenizer.get_tab_token()] + [
54
+ SPTokenizer.get_blank_token(i) for i in range(2, max_blank_length + 1)
55
+ ]:
56
+ text_tokenizer.proto.pieces.append(sp_model.ModelProto.SentencePiece(piece=token, score=0.0, type=4))
57
+ # byte fallback
58
+ if byte_fallback:
59
+ text_tokenizer.proto.trainer_spec.byte_fallback = True
60
+ for i in range(256):
61
+ text_tokenizer.proto.pieces.append(
62
+ sp_model.ModelProto.SentencePiece(piece="<0x{:02X}>".format(i), score=0.0, type=6)
63
+ )
64
+ text_tokenizer.refresh()
65
+
66
+ def _build_text_tokenizer(self, encode_special_tokens=False):
67
+ tokenizer = TextTokenizer(self.vocab_file)
68
+ self._configure_tokenizer(
69
+ tokenizer, self.special_tokens, self.max_blank_length, self.byte_fallback, encode_special_tokens
70
+ )
71
+ return tokenizer
72
+
73
+ def _get_text_tokenizer(self, encode_special_tokens=False):
74
+ if encode_special_tokens:
75
+ return self.special_text_tokenizer
76
+ else:
77
+ return self.text_tokenizer
78
+
79
+ @staticmethod
80
+ def get_blank_token(length: int):
81
+ assert length >= 2
82
+ return f"<|blank_{length}|>"
83
+
84
+ @staticmethod
85
+ def get_tab_token():
86
+ return f"<|tab|>"
87
+
88
+ @property
89
+ def num_text_tokens(self):
90
+ return self.text_tokenizer.num_tokens
91
+
92
+ @property
93
+ def num_tokens(self):
94
+ return self.num_text_tokens
95
+
96
+ @staticmethod
97
+ def _encode_whitespaces(text: str, max_len: int = 80):
98
+ text = text.replace("\t", SPTokenizer.get_tab_token())
99
+ for i in range(max_len, 1, -1):
100
+ text = text.replace(" " * i, SPTokenizer.get_blank_token(i))
101
+ return text
102
+
103
+ def _preprocess(self, text: str, linebreak=True, whitespaces=True):
104
+ if linebreak:
105
+ text = text.replace("\n", "<n>")
106
+ if whitespaces:
107
+ text = self._encode_whitespaces(text, max_len=self.max_blank_length)
108
+ return text
109
+
110
+ def encode(
111
+ self, text: str, linebreak=True, whitespaces=True, special_tokens=False, add_dummy_prefix=True
112
+ ) -> List[int]:
113
+ """
114
+ @param text: Text to encode.
115
+ @param linebreak: Whether to encode newline (\n) in text.
116
+ @param whitespaces: Whether to encode multiple whitespaces or tab in text, useful for source code encoding.
117
+ @param special_tokens: Whether to encode special tokens ([MASK], [gMASK], etc.) in text.
118
+ @param add_dummy_prefix: Whether to add a dummy blank space at the beginning.
119
+ """
120
+ text = self._preprocess(text, linebreak, whitespaces)
121
+ if not add_dummy_prefix:
122
+ text = "<n>" + text
123
+ tmp = self._get_text_tokenizer(encode_special_tokens=special_tokens).encode(text)
124
+ tokens = [x for x in tmp]
125
+ return tokens if add_dummy_prefix else tokens[2:]
126
+
127
+ def decode(self, text_ids: List[int], special_tokens=False) -> str:
128
+ ids = [int(_id) for _id in text_ids]
129
+ ids = [_id for _id in ids if _id >= 0]
130
+ text = self._get_text_tokenizer(encode_special_tokens=special_tokens).decode(ids)
131
+ text = text.replace("<n>", "\n")
132
+ text = text.replace(SPTokenizer.get_tab_token(), "\t")
133
+ for i in range(2, self.max_blank_length + 1):
134
+ text = text.replace(self.get_blank_token(i), " " * i)
135
+ return text
136
+
137
+ def tokenize(
138
+ self, text: str, linebreak=True, whitespaces=True, special_tokens=False, add_dummy_prefix=True
139
+ ) -> List[str]:
140
+ """
141
+ @param text: Text to encode.
142
+ @param linebreak: Whether to encode newline (\n) in text.
143
+ @param whitespaces: Whether to encode multiple whitespaces or tab in text, useful for source code encoding.
144
+ @param special_tokens: Whether to encode special tokens ([MASK], [gMASK], etc.) in text.
145
+ @param add_dummy_prefix: Whether to add a dummy blank space at the beginning.
146
+ """
147
+ text = self._preprocess(text, linebreak, whitespaces)
148
+ if not add_dummy_prefix:
149
+ text = "<n>" + text
150
+ tokens = self._get_text_tokenizer(encode_special_tokens=special_tokens).tokenize(text)
151
+ return tokens if add_dummy_prefix else tokens[2:]
152
+
153
+ def __getitem__(self, x: Union[int, str]):
154
+ if isinstance(x, int):
155
+ return self.text_tokenizer.convert_id_to_token(x)
156
+ elif isinstance(x, str):
157
+ return self.text_tokenizer.convert_token_to_id(x)
158
+ else:
159
+ raise ValueError("The key should be str or int.")
160
+
161
+
162
+ class ChatGLMTokenizer(PreTrainedTokenizer):
163
+ """
164
+ Construct a ChatGLM tokenizer. Based on the icetk SentencePiece tokenizer with byte fallback.
165
+
166
+ Args:
167
+ vocab_file (`str`):
168
+ Path to the vocabulary file.
169
+ """
170
+
171
+ vocab_files_names = {"vocab_file": "ice_text.model"}
172
+ max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
173
+ model_input_names = ["input_ids"]
174
+
175
+ def __init__(
176
+ self,
177
+ vocab_file,
178
+ do_lower_case=False,
179
+ remove_space=False,
180
+ bos_token='sop',
181
+ eos_token='eos',
182
+ eop_token='eop',
183
+ mask_token='[MASK]',
184
+ gmask_token='[gMASK]',
185
+ padding_side="left",
186
+ **kwargs
187
+ ) -> None:
188
+ super().__init__(
189
+ do_lower_case=do_lower_case,
190
+ remove_space=remove_space,
191
+ padding_side=padding_side,
192
+ **kwargs
193
+ )
194
+
195
+ self.do_lower_case = do_lower_case
196
+ self.remove_space = remove_space
197
+ self.vocab_file = vocab_file
198
+
199
+ self.bos_token = bos_token
200
+ self.eos_token = eos_token
201
+ self.eop_token = eop_token
202
+ self.mask_token = mask_token
203
+ self.gMASK_token = gmask_token
204
+
205
+ self.sp_tokenizer = SPTokenizer(vocab_file)
206
+
207
+ """ Initialisation """
208
+
209
+ @property
210
+ def eop_token_id(self) -> Optional[int]:
211
+ """
212
+ `Optional[int]`: Id of the `eop` token in the vocabulary. Returns `None` if the token has not been
213
+ set.
214
+ """
215
+ if self.eop_token is None:
216
+ return None
217
+ return self.convert_tokens_to_ids(self.eop_token)
218
+
219
+ @property
220
+ def vocab_size(self):
221
+ """ Returns vocab size """
222
+ return self.sp_tokenizer.num_tokens
223
+
224
+ def get_vocab(self):
225
+ """ Returns vocab as a dict """
226
+ vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
227
+ vocab.update(self.added_tokens_encoder)
228
+ return vocab
229
+
230
+ def preprocess_text(self, inputs):
231
+ if self.remove_space:
232
+ outputs = " ".join(inputs.strip().split())
233
+ else:
234
+ outputs = inputs
235
+
236
+ if self.do_lower_case:
237
+ outputs = outputs.lower()
238
+
239
+ return outputs
240
+
241
+ def _tokenize(self, text, **kwargs):
242
+ """ Returns a tokenized string. """
243
+ text = self.preprocess_text(text)
244
+
245
+ seq = self.sp_tokenizer.tokenize(text)
246
+
247
+ return seq
248
+
249
+ def decode(
250
+ self,
251
+ token_ids: Union[List[int], List[List[int]]],
252
+ skip_special_tokens: bool = False,
253
+ clean_up_tokenization_spaces: bool = True,
254
+ spaces_between_special_tokens: bool = True,
255
+ **kwargs
256
+ ) -> str:
257
+ if isinstance(token_ids[0], list):
258
+ tokens = []
259
+ for single_token_ids in token_ids:
260
+ if self.pad_token_id in single_token_ids: # remove pad
261
+ single_token_ids = list(filter((self.pad_token_id).__ne__, single_token_ids))
262
+ tokens.append(self.sp_tokenizer.decode(single_token_ids))
263
+ return tokens
264
+ else:
265
+ if self.pad_token_id in token_ids: # remove pad
266
+ token_ids = list(filter((self.pad_token_id).__ne__, token_ids))
267
+ return self.sp_tokenizer.decode(token_ids)
268
+
269
+ def _convert_token_to_id(self, token):
270
+ """ Converts a token (str) in an id using the vocab. """
271
+ return self.sp_tokenizer[token]
272
+
273
+ def _convert_id_to_token(self, index):
274
+ """Converts an index (integer) in a token (str) using the vocab."""
275
+ return self.sp_tokenizer[index]
276
+
277
+ def save_vocabulary(self, save_directory, filename_prefix=None):
278
+ """
279
+ Save the vocabulary and special tokens file to a directory.
280
+
281
+ Args:
282
+ save_directory (`str`):
283
+ The directory in which to save the vocabulary.
284
+ filename_prefix (`str`, *optional*):
285
+ An optional prefix to add to the names of the saved files.
286
+
287
+ Returns:
288
+ `Tuple(str)`: Paths to the files saved.
289
+ """
290
+ if os.path.isdir(save_directory):
291
+ vocab_file = os.path.join(
292
+ save_directory, self.vocab_files_names["vocab_file"]
293
+ )
294
+ else:
295
+ vocab_file = save_directory
296
+
297
+ with open(self.vocab_file, 'rb') as fin:
298
+ proto_str = fin.read()
299
+
300
+ with open(vocab_file, "wb") as writer:
301
+ writer.write(proto_str)
302
+
303
+ return (vocab_file,)
304
+
305
+ def build_inputs_with_special_tokens(
306
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
307
+ ) -> List[int]:
308
+ """
309
+ Build model inputs from a sequence or a pair of sequences by concatenating them and adding
310
+ special tokens at the end. A ChatGLM sequence has the following format:
311
+
312
+ - single sequence: `X [gMASK] <sop>`, where `[gMASK]` is only appended if `X` contains no mask token, and an `eos` token is appended instead when `X` contains a mask token but does not end with one
313
+ - pair of sequences: the two sequences are concatenated into one and the same rules are applied
314
+
315
+ Args:
316
+ token_ids_0 (`List[int]`):
317
+ List of IDs to which the special tokens will be added.
318
+ token_ids_1 (`List[int]`, *optional*):
319
+ Optional second list of IDs for sequence pairs.
320
+
321
+ Returns:
322
+ `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
323
+ """
324
+ if token_ids_1 is not None:
325
+ token_ids_0 += token_ids_1
326
+ mask_ids = self.sp_tokenizer[self.mask_token]
327
+ gmask_ids = self.sp_tokenizer[self.gMASK_token]
328
+ if mask_ids not in token_ids_0 and gmask_ids not in token_ids_0:
329
+ token_ids_0 += [gmask_ids]
330
+
331
+ if token_ids_0[-1] != mask_ids and token_ids_0[-1] != gmask_ids:
332
+ token_ids_0 += [self.sp_tokenizer[self.eos_token]]
333
+
334
+ token_ids_0 += [self.sp_tokenizer[self.bos_token]]
335
+
336
+ return token_ids_0
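
For reference (not part of the committed file): before handing text to SentencePiece, SPTokenizer._preprocess above rewrites newlines to <n>, tabs to <|tab|>, and runs of 2 to max_blank_length spaces to <|blank_N|> pieces, and SPTokenizer.decode reverses that mapping. Below is a minimal self-contained sketch of just this string rewriting, assuming the default max_blank_length of 80; it does not load ice_text.model, and the function names are illustrative only.

MAX_BLANK = 80  # mirrors SPTokenizer's max_blank_length default

def encode_whitespace(text: str) -> str:
    # newline and tab become single special pieces, as in _preprocess/_encode_whitespaces
    text = text.replace("\n", "<n>")
    text = text.replace("\t", "<|tab|>")
    # longest runs first, so four spaces become <|blank_4|> rather than two <|blank_2|> pieces
    for n in range(MAX_BLANK, 1, -1):
        text = text.replace(" " * n, f"<|blank_{n}|>")
    return text

def decode_whitespace(text: str) -> str:
    # inverse mapping, as in SPTokenizer.decode
    text = text.replace("<n>", "\n")
    text = text.replace("<|tab|>", "\t")
    for n in range(2, MAX_BLANK + 1):
        text = text.replace(f"<|blank_{n}|>", " " * n)
    return text

sample = "def f(x):\n\treturn  x  +  1"
assert decode_whitespace(encode_whitespace(sample)) == sample

This round trip is what makes the tokenizer suitable for source code, where tabs and runs of spaces are significant.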
tokenizer_config.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "name_or_path": "THUDM/chatglm-6b",
3
+ "bos_token": "<sop>",
4
+ "eop_token": "<eop>",
5
+ "eos_token": "</s>",
6
+ "gmask_token": "[gMASK]",
7
+ "mask_token": "[MASK]",
8
+ "pad_token": "<pad>",
9
+ "unk_token": "<unk>",
10
+ "remove_space": false,
11
+ "do_lower_case": false,
12
+ "tokenizer_class": "ChatGLMTokenizer",
13
+ "auto_map": {
14
+ "AutoTokenizer": [
15
+ "tokenization_chatglm.ChatGLMTokenizer",
16
+ null
17
+ ]
18
+ }
19
+ }
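
For reference (not part of the committed file): the auto_map entry is what lets this tokenizer be loaded without ChatGLMTokenizer being part of transformers itself; with trust_remote_code=True, AutoTokenizer imports the class from tokenization_chatglm.py in the repository. A minimal usage sketch, assuming the icetk dependency is installed and that the repository id below (taken from name_or_path) resolves to a repo containing these files:

from transformers import AutoTokenizer

# trust_remote_code=True makes AutoTokenizer follow the "auto_map" entry above and
# import ChatGLMTokenizer from tokenization_chatglm.py in the downloaded repo.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

ids = tokenizer.encode("Hello\tChatGLM,  how are you?")
print(ids)                    # ends with the [gMASK] and <sop> ids appended by build_inputs_with_special_tokens
print(tokenizer.decode(ids))  # tabs and runs of blanks are restored on decode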