Files changed (4)
  1. MODEL_LICENSE +10 -13
  2. README.md +15 -13
  3. modeling_chatglm.py +4 -18
  4. tokenization_chatglm.py +0 -2
MODEL_LICENSE CHANGED
@@ -9,17 +9,14 @@ The ChatGLM3-6B License
 2. 许可授予
 
 根据本许可的条款和条件,许可方特此授予您非排他性、全球性、不可转让、不可再许可、可撤销、免版税的版权许可。
-本许可允许您免费使用本仓库中的所有开源模型进行学术研究,对于希望将模型用于商业目的的用户,需在[这里](https://open.bigmodel.cn/mla/form)完成登记。
-经过登记的用户可以免费使用本模型进行商业活动,但必须遵守本许可的所有条款和条件。
+
 上述版权声明和本许可声明应包含在本软件的所有副本或重要部分中。
 
 3.限制
 
 您不得出于任何军事或非法目的使用、复制、修改、合并、发布、分发、复制或创建本软件的全部或部分衍生作品。
 
-您不得利用本软件从事任何危害国家安全和国家统一,危害社会公共利益及公序良俗,侵犯他人商业秘密、知识产权、名誉权、肖像权、财产权等权益的行为。
-
-您在使用中应遵循使用地所适用的法律法规政策、道德规范等要求。
+您不得利用本软件从事任何危害国家安全和国家统一、危害社会公共利益、侵犯人身权益的行为。
 
 4.免责声明
 
@@ -43,15 +40,15 @@ The ChatGLM3-6B License
 
 2. License Grant
 
-Under the terms and conditions of this license, the Licensor hereby grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty-free copyright license.
-This license permits you to use all open-source models in this repository for academic research free. Users who wish to use the models for commercial purposes must register [here](https://open.bigmodel.cn/mla/form).
-Registered users may use the models for commercial activities free of charge, but must comply with all terms and conditions of this license.
-The license notice shall be included in all copies or substantial portions of the Software.
+Subject to the terms and conditions of this License, the Licensor hereby grants to you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty-free copyright license to use the Software.
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+3. Restriction
+
+You will not use, copy, modify, merge, publish, distribute, reproduce, or create derivative works of the Software, in whole or in part, for any military, or illegal purposes.
 
-3. Restrictions
-You are not allowed to use, copy, modify, merge, publish, distribute, copy or create all or part of the derivative works of this software for any military or illegal purposes.
-You are not allowed to use this software to engage in any behavior that endangers national security and unity, endangers social public interests and public order, infringes on the rights and interests of others such as trade secrets, intellectual property rights, reputation rights, portrait rights, and property rights.
-You should comply with the applicable laws, regulations, policies, ethical standards, and other requirements in the place of use during use.
+You will not use the Software for any act that may undermine China's national security and national unity, harm the public interest of society, or infringe upon the rights and interests of human beings.
 
 4. Disclaimer
 
README.md CHANGED
@@ -19,11 +19,6 @@ tags:
 📍Experience the larger-scale ChatGLM model at <a href="https://www.chatglm.cn">chatglm.cn</a>
 </p>
 
-## GLM-4 开源模型
-
-我们已经发布最新的 **GLM-4** 模型,该模型在多个指标上有了新的突破,您可以在以下两个渠道体验我们的最新模型。
-+ [GLM-4 开源模型](https://huggingface.co/THUDM/glm-4-9b-chat) 我们已经开源了 GLM-4-9B 系列模型,在各项指标的测试上有明显提升,欢迎尝试。
-
 ## 介绍 (Introduction)
 ChatGLM3-6B 是 ChatGLM 系列最新一代的开源模型,在保留了前两代模型对话流畅、部署门槛低等众多优秀特性的基础上,ChatGLM3-6B 引入了如下特性:
 
@@ -86,15 +81,22 @@ The code in this repository is open-sourced under the [Apache-2.0 license](LICEN
 
 如果你觉得我们的工作有帮助的话,请考虑引用下列论文。
 
-If you find our work helpful, please consider citing the following paper.
+If you find our work helpful, please consider citing the following papers.
 
 ```
-@misc{glm2024chatglm,
-      title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools},
-      author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang},
-      year={2024},
-      eprint={2406.12793},
-      archivePrefix={arXiv},
-      primaryClass={id='cs.CL' full_name='Computation and Language' is_active=True alt_name='cmp-lg' in_archive='cs' is_general=False description='Covers natural language processing. Roughly includes material in ACM Subject Class I.2.7. Note that work on artificial languages (programming languages, logics, formal systems) that does not explicitly address natural-language issues broadly construed (natural-language processing, computational linguistics, speech, text retrieval, etc.) is not appropriate for this area.'}
+@article{zeng2022glm,
+  title={Glm-130b: An open bilingual pre-trained model},
+  author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
+  journal={arXiv preprint arXiv:2210.02414},
+  year={2022}
 }
 ```
+```
+@inproceedings{du2022glm,
+  title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
+  author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
+  booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
+  pages={320--335},
+  year={2022}
+}
+```
modeling_chatglm.py CHANGED
@@ -14,7 +14,6 @@ from torch.nn import CrossEntropyLoss, LayerNorm, MSELoss, BCEWithLogitsLoss
 from torch.nn.utils import skip_init
 from typing import Optional, Tuple, Union, List, Callable, Dict, Any
 from copy import deepcopy
-import transformers
 
 from transformers.modeling_outputs import (
     BaseModelOutputWithPast,
@@ -46,9 +45,6 @@ CHATGLM_6B_PRETRAINED_MODEL_ARCHIVE_LIST = [
 # See all ChatGLM models at https://huggingface.co/models?filter=chatglm
 ]
 
-is_transformers_4_42_or_higher = int(transformers.__version__.split(".")[1]) >= 42
-is_transformers_4_44_or_higher = int(transformers.__version__.split(".")[1]) >= 44
-
 
 def default_init(cls, *args, **kwargs):
     return cls(*args, **kwargs)
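Note that the removed flags parsed only the minor version component, so any transformers 5.x release (minor component 0) would be misclassified. This commit simply deletes the gate, but if version gating were still needed, a sketch of a sturdier check might use `packaging`, which ships as a dependency of transformers:

```
import transformers
from packaging import version

# Compare full version objects instead of int(transformers.__version__.split(".")[1]),
# which reads only the minor component and breaks on a major-version bump.
IS_TRANSFORMERS_4_42_PLUS = version.parse(transformers.__version__) >= version.parse("4.42.0")
IS_TRANSFORMERS_4_44_PLUS = version.parse(transformers.__version__) >= version.parse("4.44.0")
```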
@@ -185,7 +181,7 @@ def apply_rotary_pos_emb(x: torch.Tensor, rope_cache: torch.Tensor) -> torch.Ten
 class RMSNorm(torch.nn.Module):
     def __init__(self, normalized_shape, eps=1e-5, device=None, dtype=None, **kwargs):
         super().__init__()
-        self.weight = torch.nn.Parameter(torch.ones(normalized_shape, device=device, dtype=dtype))
+        self.weight = torch.nn.Parameter(torch.empty(normalized_shape, device=device, dtype=dtype))
         self.eps = eps
 
     def forward(self, hidden_states: torch.Tensor):
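The swap from `torch.ones` back to `torch.empty` skips filling a buffer that the checkpoint overwrites at load time anyway. For context, here is a minimal self-contained RMSNorm sketch; the `forward` body is assumed from the standard RMSNorm formulation, since the diff shows only `__init__`:

```
import torch

class RMSNorm(torch.nn.Module):
    def __init__(self, normalized_shape, eps=1e-5, device=None, dtype=None):
        super().__init__()
        # torch.empty leaves the tensor uninitialized; that is fine when the
        # real values always come from load_state_dict.
        self.weight = torch.nn.Parameter(torch.empty(normalized_shape, device=device, dtype=dtype))
        self.eps = eps

    def forward(self, hidden_states: torch.Tensor):
        input_dtype = hidden_states.dtype
        # Scale by the inverse root mean square over the last dimension,
        # computed in float32 for numerical stability.
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.eps)
        return (self.weight * hidden_states).to(input_dtype)
```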
@@ -876,19 +872,9 @@ class ChatGLMForConditionalGeneration(ChatGLMPreTrainedModel):
         standardize_cache_format: bool = False,
     ) -> Dict[str, Any]:
         # update past_key_values
-        if is_transformers_4_44_or_higher:
-            model_kwargs["past_key_values"] = self._extract_past_from_model_output(
-                outputs
-            )[1]
-        elif is_transformers_4_42_or_higher:
-            # update past_key_values
-            model_kwargs["past_key_values"] = self._extract_past_from_model_output(
-                outputs, standardize_cache_format=standardize_cache_format
-            )[1]
-        else:
-            model_kwargs["past_key_values"] = self._extract_past_from_model_output(
-                outputs, standardize_cache_format=standardize_cache_format
-            )
+        model_kwargs["past_key_values"] = self._extract_past_from_model_output(
+            outputs, standardize_cache_format=standardize_cache_format
+        )
 
         # update attention mask
         if "attention_mask" in model_kwargs:
tokenization_chatglm.py CHANGED
@@ -268,7 +268,6 @@ class ChatGLMTokenizer(PreTrainedTokenizer):
         self,
         encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
         max_length: Optional[int] = None,
-        padding_side: str = "left",
         padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
         pad_to_multiple_of: Optional[int] = None,
         return_attention_mask: Optional[bool] = None,
@@ -298,7 +297,6 @@ class ChatGLMTokenizer(PreTrainedTokenizer):
         """
         # Load from model defaults
         assert self.padding_side == "left"
-        assert padding_side is None or padding_side == "left"
 
         required_input = encoded_inputs[self.model_input_names[0]]
         seq_length = len(required_input)
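Dropping the extra `padding_side` parameter restores the `_pad` signature this revision's pinned transformers release expects, and the remaining assert still forces left padding. As a hypothetical standalone illustration of what that left padding produces (function name and token ids are invented, not from the repo):

```
from typing import List, Tuple

def left_pad(input_ids: List[int], max_length: int, pad_token_id: int) -> Tuple[List[int], List[int], List[int]]:
    # Pad on the left so the real tokens sit flush against whatever the
    # model generates next.
    pad_len = max_length - len(input_ids)
    padded = [pad_token_id] * pad_len + input_ids
    attention_mask = [0] * pad_len + [1] * len(input_ids)      # 0 over padding
    position_ids = [0] * pad_len + list(range(len(input_ids))) # restart at first real token
    return padded, attention_mask, position_ids

padded, mask, pos = left_pad([101, 102, 103], max_length=5, pad_token_id=0)
assert padded == [0, 0, 101, 102, 103]
assert mask == [0, 0, 1, 1, 1]
assert pos == [0, 0, 0, 1, 2]
```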