xiaotinghe committed on
Commit
76f1e5d
1 Parent(s): ce9713e
README.md CHANGED
---
license: apache-2.0
pipeline_tag: text-generation
---

**InternLM**

<div align="center">

<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div>&nbsp;</div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div>&nbsp;</div>
</div>

[![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/)

[💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new)

</div>

## Introduction

The Shanghai Artificial Intelligence Laboratory, in collaboration with SenseTime Technology, the Chinese University of Hong Kong, and Fudan University, has officially released the 20-billion-parameter pretrained model InternLM-20B. InternLM-20B was pre-trained on over **2.3T** tokens of high-quality English, Chinese, and code data. The Chat version has additionally undergone SFT and RLHF training, enabling it to meet users' needs better and more safely.

In terms of model structure, InternLM-20B opts for a deeper architecture, with a depth of 60 layers, surpassing the 32 or 40 layers used by conventional 7B and 13B models. When the parameter budget is limited, increasing the number of layers can enhance the model's overall capability. Furthermore, compared to InternLM-7B, the pre-training data used for InternLM-20B underwent higher-quality cleansing and was supplemented with data rich in knowledge and designed to strengthen understanding and reasoning capabilities. As a result, it exhibits significant improvements in understanding, reasoning, mathematics, and programming, all of which test the technical proficiency of a language model. Overall, InternLM-20B has the following characteristics:
- Outstanding overall performance
- Strong utility invocation capability
- Support for a 16k context length (through inference-time extrapolation)
- Better value alignment

## Performance Evaluation

On the 5 capability dimensions proposed by OpenCompass, InternLM-20B achieves excellent results (bolded scores are the best within the 13B-33B parameter range).

| Capability | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B |
|----------|-----------|------------|---------------|--------------|-----------|-----------|------------|
| Language | 42.5 | 47 | 47.5 | **55** | 44.6 | 47.1 | 51.6 |
| Knowledge | 58.2 | 58.3 | 48.9 | 60.1 | **64** | 66 | 67.7 |
| Understanding | 45.5 | 50.9 | 58.1 | **67.3** | 50.6 | 54.2 | 60.8 |
| Reasoning | 42.7 | 43.6 | 44.2 | **54.9** | 46.4 | 49.8 | 55 |
| Examination | 37.3 | 45.2 | 51.8 | **62.5** | 47.4 | 49.7 | 57.3 |
| Overall | 43.8 | 47.3 | 49.4 | **59.2** | 48.9 | 51.9 | 57.4 |

The table below compares the performance of mainstream open-source models on several influential, representative benchmark datasets.

| | Benchmarks | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B |
|------|------------------|-----------|------------|---------------|--------------|-----------|-----------|------------|
| Examination | MMLU | 47.73 | 54.99 | 59.55 | **62.05** | 58.73 | 63.71 | 69.75 |
| | C-Eval (val) | 31.83 | 41.4 | **59.01** | 58.8 | 37.47 | 40.36 | 50.13 |
| | AGI-Eval | 22.03 | 30.93 | 37.37 | **44.58** | 33.53 | 33.92 | 40.02 |
| Knowledge | BoolQ | 78.75 | 82.42 | 67 | **87.46** | 84.43 | 86.61 | 87.74 |
| | TriviaQA | 52.47 | 59.36 | 46.61 | 57.26 | **66.24** | 69.79 | 70.71 |
| | NaturalQuestions | 20.17 | 24.85 | 16.32 | 25.15 | **30.89** | 33.41 | 34.16 |
| Understanding | CMRC | 9.26 | 31.59 | 29.85 | **68.78** | 14.17 | 34.73 | 43.74 |
| | CSL | 55 | 58.75 | 63.12 | **65.62** | 57.5 | 59.38 | 60 |
| | RACE (middle) | 53.41 | 63.02 | 68.94 | **86.35** | 64.55 | 72.35 | 81.55 |
| | RACE (high) | 47.63 | 58.86 | 67.18 | **83.28** | 62.61 | 68.01 | 79.93 |
| | XSum | 20.37 | 23.37 | 25.23 | **35.54** | 20.55 | 19.91 | 25.38 |
| Reasoning | WinoGrande | 64.64 | 64.01 | 67.32 | **69.38** | 66.85 | 69.38 | 69.77 |
| | BBH | 37.93 | 45.62 | 48.98 | **52.51** | 49.98 | 58.38 | 64.91 |
| | GSM8K | 20.32 | 29.57 | **52.62** | **52.62** | 42.3 | 54.44 | 63.31 |
| | PIQA | 79.71 | 79.76 | 78.07 | 80.25 | **81.34** | 82.15 | 82.54 |
| Programming | HumanEval | 14.02 | 18.9 | 17.07 | **25.61** | 17.68 | 18.9 | 26.22 |
| | MBPP | 20.6 | 26.8 | 30.8 | **35.6** | 28.4 | 33.6 | 39.6 |

Overall, InternLM-20B comprehensively outperforms open-source models in the 13B parameter range in overall capability, and on reasoning benchmarks it approaches or even surpasses the performance of Llama-65B.

## Import from Transformers

To load the InternLM-20B model using Transformers, use the following code:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True).cuda()
>>> model = model.eval()
>>> output, history = model.chat(tokenizer, "Hello! Today is sunny, it is time to go out")
>>> print(output)
Hello! Today is sunny, and it sounds like a great day to go out and enjoy the weather. What would you like to do?
```
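
The modeling code shipped with this repository also provides a `stream_chat` generator (see `modeling_internlm.py` below) that yields the partially decoded response as tokens arrive. A minimal consumption sketch, reusing the `model` and `tokenizer` from the example above (the prompt is illustrative):

```python
# Each yielded item is (response_so_far, updated_history).
for response, history in model.stream_chat(tokenizer, "Hello! Today is sunny, it is time to go out"):
    print(response, end="\r")
print()
```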

**Limitations:** Although we have made efforts to ensure the safety of the model during training and to encourage it to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

## Open Source License

The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>.
config.json ADDED
```json
{
  "architectures": [
    "InternLMForCausalLM"
  ],
  "auto_map": {
    "AutoConfig": "configuration_internlm.InternLMConfig",
    "AutoModel": "modeling_internlm.InternLMForCausalLM",
    "AutoModelForCausalLM": "modeling_internlm.InternLMForCausalLM"
  },
  "bias": false,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 4096,
  "model_type": "internlm",
  "num_attention_heads": 40,
  "num_hidden_layers": 60,
  "num_key_value_heads": 40,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.33.1",
  "use_cache": true,
  "vocab_size": 103168,
  "quantization_config": {
    "bits": 4,
    "group_size": 32,
    "damp_percent": 0.1,
    "desc_act": true,
    "sym": true,
    "true_sequential": true,
    "model_name_or_path": null,
    "model_file_base_name": "model",
    "quant_method": "gptq"
  }
}
```
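
The `quantization_config` block marks these weights as 4-bit GPTQ quantized (group size 32, symmetric, with activation reordering). A minimal loading sketch, assuming the `optimum` and `auto-gptq` packages are installed so that `transformers` can pick up the embedded GPTQ metadata (the package requirements are an assumption, not stated by this repo):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "internlm/internlm-chat-20b"  # substitute this quantized repo's id

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
# The GPTQ settings are read from config.json automatically; no extra
# quantization arguments should be needed at load time.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",       # place the 4-bit weights on available GPUs
    trust_remote_code=True,  # required for the custom InternLM classes
    torch_dtype=torch.float16,
)
```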
configuration_internlm.py ADDED
```python
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" InternLM model configuration"""

from transformers.utils import logging
from transformers.configuration_utils import PretrainedConfig


logger = logging.get_logger(__name__)

INTERNLM_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class InternLMConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`InternLMModel`]. It is used to instantiate
    an InternLM model according to the specified arguments, defining the model architecture. Instantiating a
    configuration with the defaults will yield a similar configuration to that of InternLM-7B.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 103168):
            Vocabulary size of the InternLM model. Defines the number of different tokens that can be represented
            by the `inputs_ids` passed when calling [`InternLMModel`].
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 11008):
            Dimension of the MLP representations.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer decoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer decoder.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with. Typically set this to something
            large just in case (e.g., 512 or 1024 or 2048).
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-6):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings.
    Example:

    ```python
    >>> from transformers import InternLMModel, InternLMConfig

    >>> # Initializing an InternLM internlm-7b style configuration
    >>> configuration = InternLMConfig()

    >>> # Initializing a model from the internlm-7b style configuration
    >>> model = InternLMModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""
    model_type = "internlm"
    _auto_class = "AutoConfig"

    def __init__(
        self,
        vocab_size=103168,
        hidden_size=4096,
        intermediate_size=11008,
        num_hidden_layers=32,
        num_attention_heads=32,
        hidden_act="silu",
        max_position_embeddings=2048,
        initializer_range=0.02,
        rms_norm_eps=1e-6,
        use_cache=True,
        pad_token_id=0,
        bos_token_id=1,
        eos_token_id=2,
        tie_word_embeddings=False,
        bias=True,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.use_cache = use_cache
        self.bias = bias
        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
```
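
For reference, the values in this repository's `config.json` correspond to the following instantiation (a sketch that assumes `configuration_internlm.py` above is importable from the working directory):

```python
from configuration_internlm import InternLMConfig

# InternLM-20B dimensions taken from the config.json in this repo.
cfg = InternLMConfig(
    vocab_size=103168,
    hidden_size=5120,
    intermediate_size=13824,
    num_hidden_layers=60,
    num_attention_heads=40,
    max_position_embeddings=4096,
    bias=False,
)
print(cfg.hidden_size // cfg.num_attention_heads)  # per-head dimension: 128
```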
generation_config.json ADDED
```json
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.33.1"
}
```
model.safetensors ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:92739ebd6c09d80dd29bc110673994ef71ed11c2918782dddce9c68355e76c1b
size 13134127064
```
modeling_internlm.py ADDED
```python
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" PyTorch InternLM model."""
import math
import queue
import threading
from typing import List, Optional, Tuple, Union

import torch
import torch.utils.checkpoint
from torch import nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss

from transformers.activations import ACT2FN
from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast, SequenceClassifierOutputWithPast
from transformers.modeling_utils import PreTrainedModel
from transformers.generation.streamers import BaseStreamer
from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings
from .configuration_internlm import InternLMConfig


logger = logging.get_logger(__name__)

_CONFIG_FOR_DOC = "InternLMConfig"


# Copied from transformers.models.bart.modeling_bart._make_causal_mask
def _make_causal_mask(
    input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
):
    """
    Make causal mask used for bi-directional self-attention.
    """
    bsz, tgt_len = input_ids_shape
    mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
    mask_cond = torch.arange(mask.size(-1), device=device)
    mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
    mask = mask.to(dtype)

    if past_key_values_length > 0:
        mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
    return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
```
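
To make the mask concrete, here is a minimal sketch of what `_make_causal_mask` returns for three new tokens with two cached positions (it assumes the function above is in scope, e.g. when experimenting inside this module):

```python
import torch

# Three query tokens over two cached + three new key positions: the first two
# columns are zeros (the past is fully visible) and the trailing 3x3 block is
# lower-triangular, with the dtype's minimum value above the diagonal.
mask = _make_causal_mask(
    torch.Size((1, 3)), torch.float32, torch.device("cpu"), past_key_values_length=2
)
print(mask.shape)  # torch.Size([1, 1, 3, 5])
```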
```python
# Copied from transformers.models.bart.modeling_bart._expand_mask
def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
    """
    Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
    """
    bsz, src_len = mask.size()
    tgt_len = tgt_len if tgt_len is not None else src_len

    expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)

    inverted_mask = 1.0 - expanded_mask

    return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)


class InternLMRMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        """
        InternLMRMSNorm is equivalent to T5LayerNorm
        """
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)

        # convert into half-precision if necessary
        if self.weight.dtype in [torch.float16, torch.bfloat16]:
            hidden_states = hidden_states.to(self.weight.dtype)

        return self.weight * hidden_states
```
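
A quick numerical check of the normalization above: each vector is scaled by the reciprocal root-mean-square of its elements, then multiplied by a learned per-channel weight (initialized to ones). This sketch assumes `InternLMRMSNorm` from this file is in scope:

```python
import torch

x = torch.randn(2, 4, 16)
norm = InternLMRMSNorm(16, eps=1e-6)
manual = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + 1e-6)
print(torch.allclose(norm(x), manual))  # True, since the weight starts at ones
```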
```python
class InternLMRotaryEmbedding(torch.nn.Module):
    def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

        # Build here to make `torch.jit.trace` work.
        self.max_seq_len_cached = max_position_embeddings
        t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        # Different from paper, but it uses a different permutation in order to obtain the same calculation
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)

    def forward(self, x, seq_len=None):
        # x: [bs, num_attention_heads, seq_len, head_size]
        # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case.
        if seq_len > self.max_seq_len_cached:
            self.max_seq_len_cached = seq_len
            t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype)
            freqs = torch.einsum("i,j->ij", t, self.inv_freq)
            # Different from paper, but it uses a different permutation in order to obtain the same calculation
            emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
            self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
            self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )


def rotate_half(x):
    """Rotates half the hidden dims of the input."""
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
    # The first two dimensions of cos and sin are always 1, so we can `squeeze` them.
    cos = cos.squeeze(1).squeeze(0)  # [seq_len, dim]
    sin = sin.squeeze(1).squeeze(0)  # [seq_len, dim]
    cos = cos[position_ids].unsqueeze(1)  # [bs, 1, seq_len, dim]
    sin = sin[position_ids].unsqueeze(1)  # [bs, 1, seq_len, dim]
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed
```
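
Rotary embeddings rotate channel pairs of the query and key vectors by position-dependent angles, encoding position while leaving vector magnitudes unchanged. A small sketch, assuming the definitions above are in scope:

```python
import torch

head_dim, seq_len = 8, 5
q = torch.randn(1, 2, seq_len, head_dim)  # [bsz, num_heads, seq_len, head_dim]
k = torch.randn(1, 2, seq_len, head_dim)
rope = InternLMRotaryEmbedding(head_dim)
cos, sin = rope(q, seq_len=seq_len)
position_ids = torch.arange(seq_len).unsqueeze(0)
q_rot, k_rot = apply_rotary_pos_emb(q, k, cos, sin, position_ids)
# The rotation is norm-preserving for every position and head.
print(torch.allclose(q.norm(dim=-1), q_rot.norm(dim=-1), atol=1e-5))  # True
```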
```python
class InternLMMLP(nn.Module):
    def __init__(
        self,
        hidden_size: int,
        intermediate_size: int,
        hidden_act: str,
    ):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.act_fn = ACT2FN[hidden_act]

    def forward(self, x):
        return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
```
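
This is the gated-SiLU ("SwiGLU") feed-forward used by Llama-style models: `down(silu(gate(x)) * up(x))`. A functional equivalence check with toy sizes (for the 20B checkpoint the projection is 5120 -> 13824 -> 5120, per config.json); assumes `InternLMMLP` above is in scope:

```python
import torch
import torch.nn.functional as F

mlp = InternLMMLP(hidden_size=32, intermediate_size=64, hidden_act="silu")
x = torch.randn(2, 3, 32)
manual = mlp.down_proj(F.silu(mlp.gate_proj(x)) * mlp.up_proj(x))
print(torch.allclose(mlp(x), manual))  # True
```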
```python
class InternLMAttention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need' paper"""

    def __init__(self, config: InternLMConfig):
        super().__init__()
        self.config = config
        self.hidden_size = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.head_dim = self.hidden_size // self.num_heads
        self.max_position_embeddings = config.max_position_embeddings

        if (self.head_dim * self.num_heads) != self.hidden_size:
            raise ValueError(
                f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
                f" and `num_heads`: {self.num_heads})."
            )
        self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.bias)
        self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.bias)
        self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.bias)
        self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.bias)
        self.rotary_emb = InternLMRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings)

    def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
        return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: bool = False,
        use_cache: bool = False,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
        bsz, q_len, _ = hidden_states.size()

        query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)

        kv_seq_len = key_states.shape[-2]
        if past_key_value is not None:
            kv_seq_len += past_key_value[0].shape[-2]
        cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
        # [bsz, nh, t, hd]

        if past_key_value is not None:
            # reuse k, v, self_attention
            key_states = torch.cat([past_key_value[0], key_states], dim=2)
            value_states = torch.cat([past_key_value[1], value_states], dim=2)

        past_key_value = (key_states, value_states) if use_cache else None

        attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)

        if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
            raise ValueError(
                f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
                f" {attn_weights.size()}"
            )

        if attention_mask is not None:
            if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
                raise ValueError(
                    f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
                )
            attn_weights = attn_weights + attention_mask
            attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))

        # upcast attention to fp32
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
        attn_output = torch.matmul(attn_weights, value_states)

        if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
            raise ValueError(
                f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
                f" {attn_output.size()}"
            )

        attn_output = attn_output.transpose(1, 2)
        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)

        attn_output = self.o_proj(attn_output)

        if not output_attentions:
            attn_weights = None

        return attn_output, attn_weights, past_key_value
```
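
A shape-level sketch of the attention module on a toy configuration (the values are illustrative, not the 20B settings; assumes `InternLMAttention` and `InternLMConfig` above are in scope):

```python
import torch

cfg = InternLMConfig(hidden_size=64, num_attention_heads=4, max_position_embeddings=32, bias=False)
attn = InternLMAttention(cfg)
x = torch.randn(2, 7, 64)
position_ids = torch.arange(7).unsqueeze(0).expand(2, -1)
out, attn_weights, past = attn(x, position_ids=position_ids, use_cache=True)
print(out.shape)      # torch.Size([2, 7, 64])
print(past[0].shape)  # cached keys: torch.Size([2, 4, 7, 16])
# attn_weights is None because output_attentions defaults to False.
```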
```python
class InternLMDecoderLayer(nn.Module):
    def __init__(self, config: InternLMConfig):
        super().__init__()
        self.hidden_size = config.hidden_size
        self.self_attn = InternLMAttention(config=config)
        self.mlp = InternLMMLP(
            hidden_size=self.hidden_size,
            intermediate_size=config.intermediate_size,
            hidden_act=config.hidden_act,
        )
        self.input_layernorm = InternLMRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.post_attention_layernorm = InternLMRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: Optional[bool] = False,
        use_cache: Optional[bool] = False,
    ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
        """
        Args:
            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
            attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
                `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
            output_attentions (`bool`, *optional*):
                Whether or not to return the attentions tensors of all attention layers. See `attentions` under
                returned tensors for more detail.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up
                decoding (see `past_key_values`).
            past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
        """

        residual = hidden_states

        hidden_states = self.input_layernorm(hidden_states)

        # Self Attention
        hidden_states, self_attn_weights, present_key_value = self.self_attn(
            hidden_states=hidden_states,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_value=past_key_value,
            output_attentions=output_attentions,
            use_cache=use_cache,
        )
        hidden_states = residual + hidden_states

        # Fully Connected
        residual = hidden_states
        hidden_states = self.post_attention_layernorm(hidden_states)
        hidden_states = self.mlp(hidden_states)
        hidden_states = residual + hidden_states

        outputs = (hidden_states,)

        if output_attentions:
            outputs += (self_attn_weights,)

        if use_cache:
            outputs += (present_key_value,)

        return outputs


INTERNLM_START_DOCSTRING = r"""
    This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
    library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
    etc.)

    This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
    Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
    and behavior.

    Parameters:
        config ([`InternLMConfig`]):
            Model configuration class with all the parameters of the model. Initializing with a config file does not
            load the weights associated with the model, only the configuration. Check out the
            [`~PreTrainedModel.from_pretrained`] method to load the model weights.
"""


@add_start_docstrings(
    "The bare InternLM Model outputting raw hidden-states without any specific head on top.",
    INTERNLM_START_DOCSTRING,
)
class InternLMPreTrainedModel(PreTrainedModel):
    config_class = InternLMConfig
    base_model_prefix = "model"
    supports_gradient_checkpointing = True
    _no_split_modules = ["InternLMDecoderLayer"]
    _keys_to_ignore_on_load_unexpected = [r"decoder\.version"]

    def _init_weights(self, module):
        std = self.config.initializer_range
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()

    def _set_gradient_checkpointing(self, module, value=False):
        if isinstance(module, InternLMModel):
            module.gradient_checkpointing = value


INTERNLM_INPUTS_DOCSTRING = r"""
    Args:
        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
            it.

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.

            [What are input IDs?](../glossary#input-ids)
        attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.

            [What are attention masks?](../glossary#attention-mask)

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.

            If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
            `past_key_values`).

            If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
            and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
            information on the default strategy.

            - 1 indicates the head is **not masked**,
            - 0 indicates the head is **masked**.
        position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
            config.n_positions - 1]`.

            [What are position IDs?](../glossary#position-ids)
        past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
            `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
            `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
            blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.

            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
            don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
            `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
            is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
            model's internal embedding lookup matrix.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).
        output_attentions (`bool`, *optional*):
            Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
            tensors for more detail.
        output_hidden_states (`bool`, *optional*):
            Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
            more detail.
        return_dict (`bool`, *optional*):
            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
"""


@add_start_docstrings(
    "The bare InternLM Model outputting raw hidden-states without any specific head on top.",
    INTERNLM_START_DOCSTRING,
)
class InternLMModel(InternLMPreTrainedModel):
    """
    Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`InternLMDecoderLayer`]

    Args:
        config: InternLMConfig
    """
    _auto_class = "AutoModel"

    def __init__(self, config: InternLMConfig):
        super().__init__(config)
        self.padding_idx = config.pad_token_id
        self.vocab_size = config.vocab_size

        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
        self.layers = nn.ModuleList([InternLMDecoderLayer(config) for _ in range(config.num_hidden_layers)])
        self.norm = InternLMRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        self.gradient_checkpointing = False
        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.embed_tokens

    def set_input_embeddings(self, value):
        self.embed_tokens = value

    # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
    def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
        # create causal mask
        # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
        combined_attention_mask = None
        if input_shape[-1] > 1:
            combined_attention_mask = _make_causal_mask(
                input_shape,
                inputs_embeds.dtype,
                device=inputs_embeds.device,
                past_key_values_length=past_key_values_length,
            )

        if attention_mask is not None:
            # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
            expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
                inputs_embeds.device
            )
            combined_attention_mask = (
                expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
            )

        return combined_attention_mask

    @add_start_docstrings_to_model_forward(INTERNLM_INPUTS_DOCSTRING)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, BaseModelOutputWithPast]:
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # retrieve input_ids and inputs_embeds
        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
        elif input_ids is not None:
            batch_size, seq_length = input_ids.shape
        elif inputs_embeds is not None:
            batch_size, seq_length, _ = inputs_embeds.shape
        else:
            raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")

        seq_length_with_past = seq_length
        past_key_values_length = 0

        if past_key_values is not None:
            past_key_values_length = past_key_values[0][0].shape[2]
            seq_length_with_past = seq_length_with_past + past_key_values_length

        if position_ids is None:
            device = input_ids.device if input_ids is not None else inputs_embeds.device
            position_ids = torch.arange(
                past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
            )
            position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
        else:
            position_ids = position_ids.view(-1, seq_length).long()

        if inputs_embeds is None:
            inputs_embeds = self.embed_tokens(input_ids)
        # embed positions
        if attention_mask is None:
            attention_mask = torch.ones(
                (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
            )
        attention_mask = self._prepare_decoder_attention_mask(
            attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
        )

        hidden_states = inputs_embeds

        if self.gradient_checkpointing and self.training:
            if use_cache:
                logger.warning_once(
                    "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
                )
                use_cache = False

        # decoder layers
        all_hidden_states = () if output_hidden_states else None
        all_self_attns = () if output_attentions else None
        next_decoder_cache = () if use_cache else None

        for idx, decoder_layer in enumerate(self.layers):
            if output_hidden_states:
                all_hidden_states += (hidden_states,)

            past_key_value = past_key_values[idx] if past_key_values is not None else None

            if self.gradient_checkpointing and self.training:

                def create_custom_forward(module):
                    def custom_forward(*inputs):
                        # None for past_key_value
                        return module(*inputs, output_attentions, None)

                    return custom_forward

                layer_outputs = torch.utils.checkpoint.checkpoint(
                    create_custom_forward(decoder_layer),
                    hidden_states,
                    attention_mask,
                    position_ids,
                    None,
                )
            else:
                layer_outputs = decoder_layer(
                    hidden_states,
                    attention_mask=attention_mask,
                    position_ids=position_ids,
                    past_key_value=past_key_value,
                    output_attentions=output_attentions,
                    use_cache=use_cache,
                )

            hidden_states = layer_outputs[0]

            if use_cache:
                next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)

            if output_attentions:
                all_self_attns += (layer_outputs[1],)

        hidden_states = self.norm(hidden_states)

        # add hidden states from the last decoder layer
        if output_hidden_states:
            all_hidden_states += (hidden_states,)

        next_cache = next_decoder_cache if use_cache else None
        if not return_dict:
            return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
        return BaseModelOutputWithPast(
            last_hidden_state=hidden_states,
            past_key_values=next_cache,
            hidden_states=all_hidden_states,
            attentions=all_self_attns,
        )


class InternLMForCausalLM(InternLMPreTrainedModel):
    _auto_class = "AutoModelForCausalLM"

    def __init__(self, config):
        super().__init__(config)
        self.model = InternLMModel(config)

        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.embed_tokens

    def set_input_embeddings(self, value):
        self.model.embed_tokens = value

    def get_output_embeddings(self):
        return self.lm_head

    def set_output_embeddings(self, new_embeddings):
        self.lm_head = new_embeddings

    def set_decoder(self, decoder):
        self.model = decoder

    def get_decoder(self):
        return self.model

    @add_start_docstrings_to_model_forward(INTERNLM_INPUTS_DOCSTRING)
    @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, CausalLMOutputWithPast]:
        r"""
        Args:
            labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
                config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
                (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

        Returns:

        Example:

        ```python
        >>> from transformers import AutoTokenizer, InternLMForCausalLM

        >>> model = InternLMForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
        >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)

        >>> prompt = "Hey, are you conscious? Can you talk to me?"
        >>> inputs = tokenizer(prompt, return_tensors="pt")

        >>> # Generate
        >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
        >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
        "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
        ```"""

        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        hidden_states = outputs[0]
        logits = self.lm_head(hidden_states)

        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss()
            shift_logits = shift_logits.view(-1, self.config.vocab_size)
            shift_labels = shift_labels.view(-1)
            # Enable model parallelism
            shift_labels = shift_labels.to(shift_logits.device)
            loss = loss_fct(shift_logits, shift_labels)

        if not return_dict:
            output = (logits,) + outputs[1:]
            return (loss,) + output if loss is not None else output

        return CausalLMOutputWithPast(
            loss=loss,
            logits=logits,
            past_key_values=outputs.past_key_values,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

    def prepare_inputs_for_generation(
        self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
    ):
        if past_key_values:
            input_ids = input_ids[:, -1:]

        position_ids = kwargs.get("position_ids", None)
        if attention_mask is not None and position_ids is None:
            # create position_ids on the fly for batch generation
            position_ids = attention_mask.long().cumsum(-1) - 1
            position_ids.masked_fill_(attention_mask == 0, 1)
            if past_key_values:
                position_ids = position_ids[:, -1].unsqueeze(-1)

        # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
        if inputs_embeds is not None and past_key_values is None:
            model_inputs = {"inputs_embeds": inputs_embeds}
        else:
            model_inputs = {"input_ids": input_ids}

        model_inputs.update(
            {
                "position_ids": position_ids,
                "past_key_values": past_key_values,
                "use_cache": kwargs.get("use_cache"),
                "attention_mask": attention_mask,
            }
        )
        return model_inputs

    @staticmethod
    def _reorder_cache(past_key_values, beam_idx):
        reordered_past = ()
        for layer_past in past_key_values:
            reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),)
        return reordered_past

    def build_inputs(self, tokenizer, query: str, history: List[Tuple[str, str]] = []):
        prompt = ""
        for record in history:
            prompt += f"""<s><|User|>:{record[0]}<eoh>\n<|Bot|>:{record[1]}<eoa>\n"""
        if len(prompt) == 0:
            prompt += "<s>"
        prompt += f"""<|User|>:{query}<eoh>\n<|Bot|>:"""
        return tokenizer([prompt], return_tensors="pt", add_special_tokens=False)
```
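
The prompt template above wraps each turn in `<s><|User|>:...<eoh>` / `<|Bot|>:...<eoa>` markers. A standalone re-rendering of the same template, to show the exact string handed to the tokenizer (the history and query strings are illustrative):

```python
history = [("Hi", "Hello!")]
query = "How are you?"

prompt = ""
for user, bot in history:
    prompt += f"<s><|User|>:{user}<eoh>\n<|Bot|>:{bot}<eoa>\n"
if len(prompt) == 0:
    prompt += "<s>"
prompt += f"<|User|>:{query}<eoh>\n<|Bot|>:"
print(prompt)
```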
```python
    @torch.no_grad()
    def chat(
        self,
        tokenizer,
        query: str,
        history: List[Tuple[str, str]] = [],
        streamer: Optional[BaseStreamer] = None,
        max_new_tokens: int = 1024,
        do_sample: bool = True,
        temperature: float = 0.8,
        top_p: float = 0.8,
        **kwargs,
    ):
        inputs = self.build_inputs(tokenizer, query, history)
        inputs = {k: v.to(self.device) for k, v in inputs.items() if torch.is_tensor(v)}
        outputs = self.generate(
            **inputs,
            streamer=streamer,
            max_new_tokens=max_new_tokens,
            do_sample=do_sample,
            temperature=temperature,
            top_p=top_p,
            **kwargs,
        )
        outputs = outputs[0].cpu().tolist()[len(inputs["input_ids"][0]):]
        response = tokenizer.decode(outputs, skip_special_tokens=True)
        response = response.split("<eoa>")[0]
        history = history + [(query, response)]
        return response, history

    @torch.no_grad()
    def stream_chat(
        self,
        tokenizer,
        query: str,
        history: List[Tuple[str, str]] = [],
        max_new_tokens: int = 1024,
        do_sample: bool = True,
        temperature: float = 0.8,
        top_p: float = 0.8,
        **kwargs,
    ):
        """
        Return a generator in format: (response, history)
        e.g.
        ('你好,有什么可以帮助您的吗', [('你好', '你好,有什么可以帮助您的吗')])
        ('你好,有什么可以帮助您的吗?', [('你好', '你好,有什么可以帮助您的吗?')])
        """

        response_queue = queue.Queue(maxsize=20)

        class ChatStreamer(BaseStreamer):
            def __init__(self, tokenizer) -> None:
                super().__init__()
                self.tokenizer = tokenizer
                self.queue = response_queue
                self.query = query
                self.history = history
                self.response = ""
                self.received_inputs = False
                self.queue.put((self.response, history + [(self.query, self.response)]))

            def put(self, value):
                if len(value.shape) > 1 and value.shape[0] > 1:
                    raise ValueError("ChatStreamer only supports batch size 1")
                elif len(value.shape) > 1:
                    value = value[0]

                if not self.received_inputs:
                    # The first received value is input_ids, ignore here
                    self.received_inputs = True
                    return

                token = self.tokenizer.decode([value[-1]], skip_special_tokens=True)
                if token.strip() != "<eoa>":
                    self.response = self.response + token
                    history = self.history + [(self.query, self.response)]
                    self.queue.put((self.response, history))

            def end(self):
                self.queue.put(None)

        def stream_producer():
            return self.chat(
                tokenizer=tokenizer,
                query=query,
                streamer=ChatStreamer(tokenizer=tokenizer),
                history=history,
                max_new_tokens=max_new_tokens,
                do_sample=do_sample,
                temperature=temperature,
                top_p=top_p,
                **kwargs,
            )

        def consumer():
            producer = threading.Thread(target=stream_producer)
            producer.start()
            while True:
                res = response_queue.get()
                if res is None:
                    return
                yield res

        return consumer()
```
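
`stream_chat` runs generation on a background thread that feeds a bounded queue (`maxsize=20`), so the generator naturally back-pressures generation if the consumer lags. A sketch of incremental display that prints only the newly decoded suffix each iteration (assumes `model` and `tokenizer` are loaded as in the README):

```python
shown = 0
for response, history in model.stream_chat(tokenizer, "Hello!"):
    print(response[shown:], end="", flush=True)  # print only the new suffix
    shown = len(response)
print()
```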
877
+
878
+
879
+@add_start_docstrings(
+    """
+    The InternLM Model transformer with a sequence classification head on top (linear layer).
+
+    [`InternLMForSequenceClassification`] uses the last token to do the classification, as other causal models
+    (e.g. GPT-2) do.
+
+    Since it does classification on the last token, it needs to know the position of the last token. If a
+    `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
+    no `pad_token_id` is defined, it simply takes the last value in each row of the batch. The same applies when
+    `inputs_embeds` are passed instead of `input_ids`, since the model cannot guess the padding tokens in that case.
+    """,
+    INTERNLM_START_DOCSTRING,
+)
+class InternLMForSequenceClassification(InternLMPreTrainedModel):
+    _keys_to_ignore_on_load_missing = [r"lm_head.weight"]
+
+    def __init__(self, config):
+        super().__init__(config)
+        self.num_labels = config.num_labels
+        self.model = InternLMModel(config)
+        self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
+
+        # Initialize weights and apply final processing
+        self.post_init()
+
+    def get_input_embeddings(self):
+        return self.model.embed_tokens
+
+    def set_input_embeddings(self, value):
+        self.model.embed_tokens = value
+
+    @add_start_docstrings_to_model_forward(INTERNLM_INPUTS_DOCSTRING)
+    def forward(
+        self,
+        input_ids: torch.LongTensor = None,
+        attention_mask: Optional[torch.Tensor] = None,
+        position_ids: Optional[torch.LongTensor] = None,
+        past_key_values: Optional[List[torch.FloatTensor]] = None,
+        inputs_embeds: Optional[torch.FloatTensor] = None,
+        labels: Optional[torch.LongTensor] = None,
+        use_cache: Optional[bool] = None,
+        output_attentions: Optional[bool] = None,
+        output_hidden_states: Optional[bool] = None,
+        return_dict: Optional[bool] = None,
+    ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
+        r"""
+        labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
+            config.num_labels - 1]`. If `config.num_labels == 1`, a regression loss is computed (mean-square loss); if
+            `config.num_labels > 1`, a classification loss is computed (cross-entropy).
+        """
+        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+        transformer_outputs = self.model(
+            input_ids,
+            attention_mask=attention_mask,
+            position_ids=position_ids,
+            past_key_values=past_key_values,
+            inputs_embeds=inputs_embeds,
+            use_cache=use_cache,
+            output_attentions=output_attentions,
+            output_hidden_states=output_hidden_states,
+            return_dict=return_dict,
+        )
+        hidden_states = transformer_outputs[0]
+        logits = self.score(hidden_states)
+
+        if input_ids is not None:
+            batch_size = input_ids.shape[0]
+        else:
+            batch_size = inputs_embeds.shape[0]
+
+        if self.config.pad_token_id is None and batch_size != 1:
+            raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
+        if self.config.pad_token_id is None:
+            sequence_lengths = -1
+        else:
+            if input_ids is not None:
+                sequence_lengths = (torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1).to(logits.device)
+            else:
+                sequence_lengths = -1
+
+        pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
+
+        loss = None
+        if labels is not None:
+            labels = labels.to(logits.device)
+            if self.config.problem_type is None:
+                if self.num_labels == 1:
+                    self.config.problem_type = "regression"
+                elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
+                    self.config.problem_type = "single_label_classification"
+                else:
+                    self.config.problem_type = "multi_label_classification"
+
+            if self.config.problem_type == "regression":
+                loss_fct = MSELoss()
+                if self.num_labels == 1:
+                    loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
+                else:
+                    loss = loss_fct(pooled_logits, labels)
+            elif self.config.problem_type == "single_label_classification":
+                loss_fct = CrossEntropyLoss()
+                loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
+            elif self.config.problem_type == "multi_label_classification":
+                loss_fct = BCEWithLogitsLoss()
+                loss = loss_fct(pooled_logits, labels)
+        if not return_dict:
+            output = (pooled_logits,) + transformer_outputs[1:]
+            return ((loss,) + output) if loss is not None else output
+
+        return SequenceClassifierOutputWithPast(
+            loss=loss,
+            logits=pooled_logits,
+            past_key_values=transformer_outputs.past_key_values,
+            hidden_states=transformer_outputs.hidden_states,
+            attentions=transformer_outputs.attentions,
+        )
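
For reference, here is a minimal usage sketch for the `chat()` and `stream_chat()` helpers defined above. The repo id, device placement, and prompts are illustrative assumptions, not part of this commit:

```python
# Hedged sketch: calling the chat helpers defined in the modeling code above.
# "internlm/internlm-chat-20b" and device_map="auto" are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "internlm/internlm-chat-20b"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True, device_map="auto").eval()

# Blocking call: returns the complete response and the updated history.
response, history = model.chat(tokenizer, "Hello", history=[])
print(response)

# Streaming call: yields (partial_response, history) tuples as tokens arrive.
for partial_response, _ in model.stream_chat(tokenizer, "Tell me about InternLM", history=history):
    print(partial_response, end="\r")
print()
```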
quantize_config.json ADDED
@@ -0,0 +1,11 @@
+{
+  "bits": 4,
+  "group_size": 32,
+  "damp_percent": 0.1,
+  "desc_act": true,
+  "static_groups": false,
+  "sym": true,
+  "true_sequential": true,
+  "model_name_or_path": null,
+  "model_file_base_name": "model"
+}
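
These are standard GPTQ settings: 4-bit weights, a quantization group size of 32, activation-order quantization (`desc_act`), and symmetric, layer-by-layer (`true_sequential`) quantization. As a hedged sketch, a checkpoint carrying this file can typically be loaded with AutoGPTQ along these lines; the path and keyword values are illustrative assumptions:

```python
# Hedged sketch: loading a checkpoint quantized with the settings above.
# Assumes the auto-gptq package; the path below stands in for this repo.
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "path/to/this/repo",       # local clone or hub id (assumption)
    device="cuda:0",
    trust_remote_code=True,    # required for the custom InternLM modeling code
)
```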
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+{
+  "bos_token": "<s>",
+  "eos_token": "</s>",
+  "pad_token": "</s>",
+  "unk_token": "<unk>"
+}
tokenization_internlm.py ADDED
@@ -0,0 +1,242 @@
+# coding=utf-8
+# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
+#
+# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+# and OPT implementations in this library. It has been modified from its
+# original forms to accommodate minor architectural differences compared
+# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+ """Tokenization classes for IntermLM."""
22
+import os
+from shutil import copyfile
+from typing import Any, Dict, List, Optional, Tuple
+
+import sentencepiece as spm
+
+from transformers.tokenization_utils import PreTrainedTokenizer
+from transformers.utils import logging
+
+
+logger = logging.get_logger(__name__)
+
+VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}
+
+PRETRAINED_VOCAB_FILES_MAP = {}
+
+
+class InternLMTokenizer(PreTrainedTokenizer):
+    """
+    Construct an InternLM tokenizer, backed by a SentencePiece model.
+
+    Args:
+        vocab_file (`str`):
+            Path to the vocabulary file.
+    """
+
+    vocab_files_names = VOCAB_FILES_NAMES
+    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
+    model_input_names = ["input_ids", "attention_mask"]
+    _auto_class = "AutoTokenizer"
+
+    def __init__(
+        self,
+        vocab_file,
+        unk_token="<unk>",
+        bos_token="<s>",
+        eos_token="</s>",
+        pad_token="</s>",
+        sp_model_kwargs: Optional[Dict[str, Any]] = None,
+        add_bos_token=True,
+        add_eos_token=False,
+        decode_with_prefix_space=False,
+        clean_up_tokenization_spaces=False,
+        **kwargs,
+    ):
+        self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
+        super().__init__(
+            bos_token=bos_token,
+            eos_token=eos_token,
+            unk_token=unk_token,
+            pad_token=pad_token,
+            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+            **kwargs,
+        )
+        self.vocab_file = vocab_file
+        self.add_bos_token = add_bos_token
+        self.add_eos_token = add_eos_token
+        self.decode_with_prefix_space = decode_with_prefix_space
+        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+        self.sp_model.Load(vocab_file)
+        self._no_prefix_space_tokens = None
+
+    @property
+    def no_prefix_space_tokens(self):
+        if self._no_prefix_space_tokens is None:
+            vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
+            self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
+        return self._no_prefix_space_tokens
+
+    @property
+    def vocab_size(self):
+        """Returns vocab size"""
+        return self.sp_model.get_piece_size()
+
+    @property
+    def bos_token_id(self) -> Optional[int]:
+        return self.sp_model.bos_id()
+
+    @property
+    def eos_token_id(self) -> Optional[int]:
+        return self.sp_model.eos_id()
+
+    def get_vocab(self):
+        """Returns vocab as a dict"""
+        vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
+        vocab.update(self.added_tokens_encoder)
+        return vocab
+
+    def _tokenize(self, text):
+        """Returns a tokenized string."""
+        return self.sp_model.encode(text, out_type=str)
+
+    def _convert_token_to_id(self, token):
+        """Converts a token (str) to an id using the vocab."""
+        return self.sp_model.piece_to_id(token)
+
+    def _convert_id_to_token(self, index):
+        """Converts an index (integer) to a token (str) using the vocab."""
+        token = self.sp_model.IdToPiece(index)
+        return token
+
+    def _maybe_add_prefix_space(self, tokens, decoded):
+        if tokens and tokens[0] not in self.no_prefix_space_tokens:
+            return " " + decoded
+        else:
+            return decoded
+
+    def convert_tokens_to_string(self, tokens):
+        """Converts a sequence of tokens (strings) into a single string."""
+        current_sub_tokens = []
+        out_string = ""
+        prev_is_special = False
+        for token in tokens:
+            # make sure that special tokens are not decoded using sentencepiece model
+            if token in self.all_special_tokens:
+                if not prev_is_special:
+                    out_string += " "
+                out_string += self.sp_model.decode(current_sub_tokens) + token
+                prev_is_special = True
+                current_sub_tokens = []
+            else:
+                current_sub_tokens.append(token)
+                prev_is_special = False
+        out_string += self.sp_model.decode(current_sub_tokens)
+        out_string = self.clean_up_tokenization(out_string)
+        out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
+        return out_string[1:]
+
+    def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
+        """
+        Save the vocabulary and special tokens file to a directory.
+
+        Args:
+            save_directory (`str`):
+                The directory in which to save the vocabulary.
+
+        Returns:
+            `Tuple[str]`: Paths to the files saved.
+        """
+        if not os.path.isdir(save_directory):
+            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
+            return
+        out_vocab_file = os.path.join(
+            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
+        )
+
+        if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
+            copyfile(self.vocab_file, out_vocab_file)
+        elif not os.path.isfile(self.vocab_file):
+            with open(out_vocab_file, "wb") as fi:
+                content_spiece_model = self.sp_model.serialized_model_proto()
+                fi.write(content_spiece_model)
+
+        return (out_vocab_file,)
+
+    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
+        if self.add_bos_token:
+            bos_token_ids = [self.bos_token_id]
+        else:
+            bos_token_ids = []
+
+        output = bos_token_ids + token_ids_0
+
+        if token_ids_1 is not None:
+            output = output + token_ids_1
+
+        if self.add_eos_token:
+            output = output + [self.eos_token_id]
+
+        return output
+
+    def get_special_tokens_mask(
+        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
+    ) -> List[int]:
+        """
+        Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
+        special tokens using the tokenizer `prepare_for_model` method.
+
+        Args:
+            token_ids_0 (`List[int]`):
+                List of IDs.
+            token_ids_1 (`List[int]`, *optional*):
+                Optional second list of IDs for sequence pairs.
+            already_has_special_tokens (`bool`, *optional*, defaults to `False`):
+                Whether or not the token list is already formatted with special tokens for the model.
+
+        Returns:
+            `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
+        """
+        if already_has_special_tokens:
+            return super().get_special_tokens_mask(
+                token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
+            )
+
+        if token_ids_1 is None:
+            return [1] + ([0] * len(token_ids_0)) + [1]
+        return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
+
+    def create_token_type_ids_from_sequences(
+        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+    ) -> List[int]:
+        """
+        Create a mask from the two sequences passed to be used in a sequence-pair classification task. InternLM does
+        not make use of token type ids, therefore a list of zeros is returned.
+
+        Args:
+            token_ids_0 (`List[int]`):
+                List of IDs.
+            token_ids_1 (`List[int]`, *optional*):
+                Optional second list of IDs for sequence pairs.
+
+        Returns:
+            `List[int]`: List of zeros.
+        """
+        eos = [self.eos_token_id]
+
+        if token_ids_1 is None:
+            return len(token_ids_0 + eos) * [0]
+        return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
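
A small sketch of using `InternLMTokenizer` directly. The local paths are assumptions, and this relies on a `transformers` version contemporary with this commit (the class sets `sp_model` after calling `super().__init__`, which later releases handle differently):

```python
# Hedged sketch: exercising InternLMTokenizer directly.
# Assumes tokenization_internlm.py and tokenizer.model are in the working directory.
from tokenization_internlm import InternLMTokenizer

tok = InternLMTokenizer(vocab_file="./tokenizer.model")

enc = tok("Hello, world!")              # __call__ applies build_inputs_with_special_tokens
ids = enc["input_ids"]
print(ids[0] == tok.bos_token_id)       # True: add_bos_token=True prepends <s>
print(tok.decode(ids, skip_special_tokens=True))  # should round-trip the input text
```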
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aab622d98c98677a1a51f969e25765154487bf3e85c7819db105db2fcacba83f
+size 1658691
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+{
+  "auto_map": {
+    "AutoTokenizer": [
+      "tokenization_internlm.InternLMTokenizer",
+      null
+    ]
+  },
+  "bos_token": "<s>",
+  "clean_up_tokenization_spaces": false,
+  "eos_token": "</s>",
+  "model_max_length": 1000000000000000019884624838656,
+  "pad_token": "</s>",
+  "tokenizer_class": "InternLMTokenizer",
+  "unk_token": "<unk>"
+}
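
The `auto_map` entry is what lets `AutoTokenizer` resolve the custom class in `tokenization_internlm.py` when `trust_remote_code=True` is passed, and the enormous `model_max_length` is the usual "effectively unbounded" sentinel. A hedged loading sketch, with an illustrative path:

```python
# Hedged sketch: AutoTokenizer routes to InternLMTokenizer via auto_map.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this/repo", trust_remote_code=True)
print(type(tok).__name__)                           # InternLMTokenizer
print(tok.bos_token, tok.eos_token, tok.pad_token)  # <s> </s> </s>
```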