sharpenb committed on
Commit
a8c1dec
1 Parent(s): c9bc586

Upload folder using huggingface_hub (#1)


- 5e80312f4ae6251c43fcec800acb8031864ff89420b38d5aa332fde1ffac933d (ce82d5c2f6decb1bc70b3558c542386a94be235e)
- 3c6765f5cf05ac30ebdc3f66cb11a65588f3cb606a270b87001091ced300efa4 (e29e31286e91291bbb0161040e8ff184bb671d1f)

README.md ADDED
@@ -0,0 +1,85 @@
+ ---
+ thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
+ base_model: llmware/bling-phi-3
+ metrics:
+ - memory_disk
+ - memory_inference
+ - inference_latency
+ - inference_throughput
+ - inference_CO2_emissions
+ - inference_energy_consumption
+ tags:
+ - pruna-ai
+ ---
+ <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
+ <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </a>
+ </div>
+ <!-- header end -->
+
+ [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
+ [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
+ [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
+ [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)
+
+ # Simply make AI models cheaper, smaller, faster, and greener!
+
+ - Give a thumbs up if you like this model!
+ - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
+ - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
+ - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
+ - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
+
+ ## Results
+
+ ![image info](./plots.png)
+
+ **Frequently Asked Questions**
+ - ***How does the compression work?*** The model is compressed with bitsandbytes 4-bit quantization (see the `quantization_config` in `config.json`).
+ - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
+ - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to find out whether the smashed model benefits you.
+ - ***What is the model format?*** We use safetensors.
+ - ***What calibration data has been used?*** When the compression method requires calibration data, we used WikiText.
+ - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model's measured inference speed, inference memory, or inference energy consumption is less than 90% of the original base model's.
+ - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
+ - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
+ - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both since either can be the relevant one depending on the use-case; we recommend testing the efficiency gains directly in your use-case (see the sketch below).
+
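+ The "Sync"/"Async" distinction can be approximated with a small timing sketch (illustrative only; the actual benchmark settings live in `model/smash_config.json`, and `measure_latency` is a hypothetical helper name, not a Pruna API):
+
+ ```python
+ import time
+ import torch
+
+ @torch.no_grad()
+ def measure_latency(model, input_ids, sync=True):
+     model(input_ids)  # hardware warmup: the "first" run pays one-time CUDA costs
+     if sync:
+         torch.cuda.synchronize()  # drain pending GPU work before starting the clock
+     start = time.perf_counter()
+     output = model(input_ids)
+     if sync:
+         torch.cuda.synchronize()  # "Sync": wait until every queued GPU kernel has finished
+     else:
+         output.logits[0, -1].argmax().item()  # "Async": stop once the CPU can use the output
+     return time.perf_counter() - start
+ ```
+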
+ ## Setup
+
+ You can run the smashed model with these steps:
+
+ 0. Check that the requirements of the original repo llmware/bling-phi-3 are installed. In particular, check the python, cuda, and transformers versions.
+ 1. Make sure that you have installed the quantization-related packages.
+ ```bash
+ pip install transformers accelerate "bitsandbytes>0.37.0"
+ ```
+ 2. Load & run the model.
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained("PrunaAI/llmware-bling-phi-3-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
+ tokenizer = AutoTokenizer.from_pretrained("llmware/bling-phi-3")
+
+ input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
+
+ outputs = model.generate(input_ids, max_new_tokens=216)
+ tokenizer.decode(outputs[0])
+ ```
+
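+ Optionally, you can sanity-check how much memory the quantized weights occupy (`get_memory_footprint` is a standard method on `transformers` models; the exact number depends on your setup):
+
+ ```python
+ # Rough size of the loaded 4-bit model in GB, buffers included.
+ print(f"{model.get_memory_footprint() / 1e9:.2f} GB")
+ ```
+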
+ ## Configurations
+
+ The configuration info is in `smash_config.json`.
+
+ ## Credits & License
+
+ The license of the smashed model follows the license of the original model. Please check the license of the original model llmware/bling-phi-3, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
+
+ ## Want to compress other models?
+
+ - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
+ - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
config.json ADDED
@@ -0,0 +1,52 @@
+ {
+   "_name_or_path": "/ceph/hdd/staff/charpent/.cache/models_gnkj410qkpu9zao",
+   "architectures": [
+     "Phi3ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoConfig": "configuration_phi3.Phi3Config",
+     "AutoModelForCausalLM": "modeling_phi3.Phi3ForCausalLM"
+   },
+   "bos_token_id": 1,
+   "embd_pdrop": 0.0,
+   "eos_token_id": 32000,
+   "hidden_act": "silu",
+   "hidden_size": 3072,
+   "initializer_range": 0.02,
+   "intermediate_size": 8192,
+   "max_position_embeddings": 4096,
+   "model_type": "phi3",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 32,
+   "original_max_position_embeddings": 4096,
+   "pad_token_id": 32000,
+   "quantization_config": {
+     "_load_in_4bit": true,
+     "_load_in_8bit": false,
+     "bnb_4bit_compute_dtype": "bfloat16",
+     "bnb_4bit_quant_storage": "uint8",
+     "bnb_4bit_quant_type": "fp4",
+     "bnb_4bit_use_double_quant": false,
+     "llm_int8_enable_fp32_cpu_offload": false,
+     "llm_int8_has_fp16_weight": false,
+     "llm_int8_skip_modules": [
+       "lm_head"
+     ],
+     "llm_int8_threshold": 6.0,
+     "load_in_4bit": true,
+     "load_in_8bit": false,
+     "quant_method": "bitsandbytes"
+   },
+   "resid_pdrop": 0.0,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 10000.0,
+   "sliding_window": 2048,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.40.0",
+   "use_cache": true,
+   "vocab_size": 32064
+ }
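For reference, the `quantization_config` embedded above corresponds roughly to the following `transformers` setup. This is a sketch of how one could quantize the base model with bitsandbytes oneself, not the exact procedure Pruna used:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the embedded config: 4-bit fp4 weights, bfloat16 compute,
# no double quantization, and lm_head kept unquantized.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    llm_int8_skip_modules=["lm_head"],
)
model = AutoModelForCausalLM.from_pretrained(
    "llmware/bling-phi-3", quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
```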
configuration_phi3.py ADDED
@@ -0,0 +1,213 @@
+ # coding=utf-8
+ # Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Phi-3 model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ PHI3_PRETRAINED_CONFIG_ARCHIVE_MAP = {
+     "microsoft/Phi-3-mini-4k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/config.json",
+     "microsoft/Phi-3-mini-128k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/config.json",
+ }
+
+
+ class Phi3Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`Phi3Model`]. It is used to instantiate a Phi-3
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with
+     the defaults will yield a similar configuration to that of the
+     [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32064):
+             Vocabulary size of the Phi-3 model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`Phi3Model`].
+         hidden_size (`int`, *optional*, defaults to 3072):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 8192):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be
+             constructed by meanpooling all the original heads within that group. For more details, check out [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
+             `num_attention_heads`.
+         resid_pdrop (`float`, *optional*, defaults to 0.0):
+             Dropout probability for MLP outputs.
+         embd_pdrop (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the embeddings.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio after computing the attention scores.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model might ever be used with.
+         original_max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model was trained with. This is used to determine the size of the
+             original RoPE embeddings when using long scaling.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-05):
+             The epsilon value used for the RMSNorm.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`dict`, *optional*):
+             The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must
+             contain the following keys: `type`, `short_factor` and `long_factor`. The `type` must be either `su` or
+             `yarn`, and `short_factor` and `long_factor` must be lists of numbers with the same length as the hidden
+             size divided by the number of attention heads divided by 2.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             The id of the "beginning-of-sequence" token.
+         eos_token_id (`int`, *optional*, defaults to 32000):
+             The id of the "end-of-sequence" token.
+         pad_token_id (`int`, *optional*, defaults to 32000):
+             The id of the padding token.
+         sliding_window (`int`, *optional*):
+             Sliding window attention window size. If `None`, no sliding window is applied.
+
+     Example:
+
+     ```python
+     >>> from transformers import Phi3Model, Phi3Config
+
+     >>> # Initializing a Phi-3 style configuration
+     >>> configuration = Phi3Config.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
+
+     >>> # Initializing a model from the configuration
+     >>> model = Phi3Model(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "phi3"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=32064,
+         hidden_size=3072,
+         intermediate_size=8192,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=None,
+         resid_pdrop=0.0,
+         embd_pdrop=0.0,
+         attention_dropout=0.0,
+         hidden_act="silu",
+         max_position_embeddings=4096,
+         original_max_position_embeddings=4096,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         bos_token_id=1,
+         eos_token_id=32000,
+         pad_token_id=32000,
+         sliding_window=None,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.resid_pdrop = resid_pdrop
+         self.embd_pdrop = embd_pdrop
+         self.attention_dropout = attention_dropout
+         self.hidden_act = hidden_act
+         self.max_position_embeddings = max_position_embeddings
+         self.original_max_position_embeddings = original_max_position_embeddings
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self._rope_scaling_validation()
+         self.sliding_window = sliding_window
+
+         super().__init__(
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             pad_token_id=pad_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     def _rope_scaling_validation(self):
+         """
+         Validate the `rope_scaling` configuration.
+         """
+         if self.rope_scaling is None:
+             return
+
+         if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 3:
+             raise ValueError(
+                 "`rope_scaling` must be a dictionary with three fields, `type`, `short_factor` and `long_factor`, "
+                 f"got {self.rope_scaling}"
+             )
+         rope_scaling_type = self.rope_scaling.get("type", None)
+         rope_scaling_short_factor = self.rope_scaling.get("short_factor", None)
+         rope_scaling_long_factor = self.rope_scaling.get("long_factor", None)
+         if rope_scaling_type is None or rope_scaling_type not in ["su", "yarn"]:
+             raise ValueError(f"`rope_scaling`'s type field must be one of ['su', 'yarn'], got {rope_scaling_type}")
+         if not (
+             isinstance(rope_scaling_short_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_short_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must be a list of numbers, got {rope_scaling_short_factor}"
+             )
+         if not len(rope_scaling_short_factor) == self.hidden_size // self.num_attention_heads // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_short_factor)}"
+             )
+         if not (
+             isinstance(rope_scaling_long_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_long_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must be a list of numbers, got {rope_scaling_long_factor}"
+             )
+         if not len(rope_scaling_long_factor) == self.hidden_size // self.num_attention_heads // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_long_factor)}"
+             )
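As the docstring and `_rope_scaling_validation` above spell out, a valid `rope_scaling` dict needs a `type` of `"su"` or `"yarn"` plus factor lists of length `hidden_size // num_attention_heads // 2`, i.e. 48 with the defaults. A minimal sketch, assuming the file is importable from the working directory:

```python
from configuration_phi3 import Phi3Config

# 3072 // 32 // 2 == 48: one scaling factor per rotary dimension pair.
factors = [1.0] * 48

cfg = Phi3Config(
    max_position_embeddings=16384,  # illustrative extended context
    rope_scaling={"type": "yarn", "short_factor": factors, "long_factor": factors},
)
```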
generation_config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": [
+     32000,
+     32007
+   ],
+   "pad_token_id": 32000,
+   "transformers_version": "4.40.0"
+ }
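The generation config declares two end-of-sequence ids, so decoding halts on whichever token appears first. `generate()` picks these up from `generation_config.json` automatically; passing them explicitly, continuing the README's Setup snippet, is equivalent:

```python
outputs = model.generate(input_ids, max_new_tokens=216, eos_token_id=[32000, 32007])
```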
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13469ffff265a02cda55bc12f695985255739fa305f8ea29360adc6bdd1f351c
+ size 2432922120
modeling_phi3.py ADDED
@@ -0,0 +1,1645 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ """ PyTorch Phi-3 model."""
17
+
18
+ import inspect
19
+ import math
20
+ import warnings
21
+ from typing import List, Optional, Tuple, Union
22
+
23
+ import torch
24
+ import torch.nn.functional as F
25
+ import torch.utils.checkpoint
26
+ from torch import nn
27
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
28
+
29
+ from transformers.activations import ACT2FN
30
+ from transformers.cache_utils import Cache, DynamicCache
31
+ from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask
32
+ from transformers.modeling_outputs import (
33
+ BaseModelOutputWithPast,
34
+ CausalLMOutputWithPast,
35
+ SequenceClassifierOutputWithPast,
36
+ TokenClassifierOutput,
37
+ )
38
+ from transformers.modeling_utils import PreTrainedModel
39
+ from transformers.utils import (
40
+ add_code_sample_docstrings,
41
+ add_start_docstrings,
42
+ add_start_docstrings_to_model_forward,
43
+ is_flash_attn_2_available,
44
+ is_flash_attn_greater_or_equal_2_10,
45
+ logging,
46
+ replace_return_docstrings,
47
+ )
48
+ from .configuration_phi3 import Phi3Config
49
+
50
+
51
+ logger = logging.get_logger(__name__)
52
+
53
+ # Transformers scans dependencies in the modeling file, causing issues on conditional loading. The regex only ignores try/catch blocks, but not if statements
54
+ # if is_flash_attn_2_available():
55
+ _flash_supports_window_size = False
56
+ try:
57
+ from flash_attn import flash_attn_func, flash_attn_varlen_func
58
+ from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa
59
+
60
+ _flash_supports_window_size = "window_size" in list(inspect.signature(flash_attn_func).parameters)
61
+ except ImportError as error:
62
+ logger.warning(
63
+ f"`flash-attention` package not found, consider installing for better performance: {error}."
64
+ )
65
+ if not _flash_supports_window_size:
66
+ logger.warning(
67
+ "Current `flash-attenton` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`."
68
+ )
69
+
70
+ _CHECKPOINT_FOR_DOC = "microsoft/Phi-3-mini-4k-instruct"
71
+ _CONFIG_FOR_DOC = "Phi3Config"
72
+
73
+ PHI3_PRETRAINED_MODEL_ARCHIVE_LIST = [
74
+ "microsoft/Phi-3-mini-4k-instruct",
75
+ "microsoft/Phi-3-mini-128k-instruct",
76
+ # See all Phi-3 models at https://huggingface.co/models?filter=Phi-3
77
+ ]
78
+
79
+
80
+ # Copied from transformers.models.llama.modeling_llama.LlamaRMSNorm with Llama->Phi3
81
+ class Phi3RMSNorm(nn.Module):
82
+ def __init__(self, hidden_size, eps=1e-6):
83
+ """
84
+ Phi3RMSNorm is equivalent to T5LayerNorm
85
+ """
86
+ super().__init__()
87
+ self.weight = nn.Parameter(torch.ones(hidden_size))
88
+ self.variance_epsilon = eps
89
+
90
+ def forward(self, hidden_states):
91
+ input_dtype = hidden_states.dtype
92
+ hidden_states = hidden_states.to(torch.float32)
93
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
94
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
95
+ return self.weight * hidden_states.to(input_dtype)
96
+
97
+
98
+ # Copied from transformers.models.llama.modeling_llama._get_unpad_data
99
+ def _get_unpad_data(attention_mask):
100
+ seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
101
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
102
+ max_seqlen_in_batch = seqlens_in_batch.max().item()
103
+ cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))
104
+ return (
105
+ indices,
106
+ cu_seqlens,
107
+ max_seqlen_in_batch,
108
+ )
109
+
110
+
111
+ # Copied from transformers.models.gemma.modeling_gemma.GemmaRotaryEmbedding with gemma->phi3, Gemma->Phi3
112
+ class Phi3RotaryEmbedding(nn.Module):
113
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
114
+ super().__init__()
115
+
116
+ self.dim = dim
117
+ self.max_position_embeddings = max_position_embeddings
118
+ self.base = base
119
+ self.register_buffer("inv_freq", None, persistent=False)
120
+
121
+ @torch.no_grad()
122
+ def forward(self, x, position_ids, seq_len=None):
123
+ # x: [bs, num_attention_heads, seq_len, head_size]
124
+ if self.inv_freq is None:
125
+ self.inv_freq = 1.0 / (
126
+ self.base ** (torch.arange(0, self.dim, 2, dtype=torch.int64, device=x.device).float() / self.dim)
127
+ )
128
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
129
+ position_ids_expanded = position_ids[:, None, :].float()
130
+ # Force float32 since bfloat16 loses precision on long contexts
131
+ # See https://github.com/huggingface/transformers/pull/29285
132
+ device_type = x.device.type
133
+ device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
134
+ with torch.autocast(device_type=device_type, enabled=False):
135
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
136
+ emb = torch.cat((freqs, freqs), dim=-1)
137
+ cos = emb.cos()
138
+ sin = emb.sin()
139
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
140
+
141
+
142
+ class Phi3SuScaledRotaryEmbedding(Phi3RotaryEmbedding):
143
+ def __init__(
144
+ self,
145
+ dim,
146
+ short_factor,
147
+ long_factor,
148
+ original_max_position_embeddings=2048,
149
+ max_position_embeddings=2048,
150
+ base=10000,
151
+ device=None,
152
+ ):
153
+ super().__init__(dim, max_position_embeddings, base, device)
154
+
155
+ self.short_factor = short_factor
156
+ self.long_factor = long_factor
157
+ self.original_max_position_embeddings = original_max_position_embeddings
158
+
159
+ def _calc_scaling_factor(self, scale):
160
+ if scale <= 1.0:
161
+ return 1.0
162
+ return math.sqrt(1 + math.log(scale) / math.log(self.original_max_position_embeddings))
163
+
164
+ @torch.no_grad()
165
+ def forward(self, x, position_ids, seq_len=None):
166
+ seq_len = torch.max(position_ids) + 1
167
+ if seq_len > self.original_max_position_embeddings:
168
+ ext_factors = torch.tensor(self.long_factor, dtype=torch.float32, device=x.device)
169
+ else:
170
+ ext_factors = torch.tensor(self.short_factor, dtype=torch.float32, device=x.device)
171
+
172
+ self.inv_freq = 1.0 / (
173
+ ext_factors
174
+ * self.base ** (torch.arange(0, self.dim, 2, dtype=torch.int64, device=x.device).float() / self.dim)
175
+ )
176
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
177
+ position_ids_expanded = position_ids[:, None, :].float()
178
+
179
+ # Force float32 since bfloat16 loses precision on long contexts
180
+ # See https://github.com/huggingface/transformers/pull/29285
181
+ device_type = x.device.type
182
+ device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
183
+ with torch.autocast(device_type=device_type, enabled=False):
184
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
185
+ scaling_factor = self._calc_scaling_factor(
186
+ self.max_position_embeddings / self.original_max_position_embeddings
187
+ )
188
+ emb = torch.cat((freqs, freqs), dim=-1)
189
+ cos = emb.cos() * scaling_factor
190
+ sin = emb.sin() * scaling_factor
191
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
192
+
193
+
194
+ class Phi3YarnScaledRotaryEmbedding(Phi3RotaryEmbedding):
195
+ def __init__(
196
+ self,
197
+ dim,
198
+ short_factor,
199
+ long_factor,
200
+ original_max_position_embeddings=2048,
201
+ max_position_embeddings=2048,
202
+ base=10000,
203
+ device=None,
204
+ ):
205
+ super().__init__(dim, max_position_embeddings, base, device)
206
+
207
+ self.short_factor = short_factor
208
+ self.long_factor = long_factor
209
+ self.original_max_position_embeddings = original_max_position_embeddings
210
+
211
+ def _calc_scaling_factor(self, scale):
212
+ if scale <= 1.0:
213
+ return 1.0
214
+ return 0.1 * math.log(scale) + 1.0
215
+
216
+ @torch.no_grad()
217
+ def forward(self, x, position_ids, seq_len=None):
218
+ seq_len = torch.max(position_ids) + 1
219
+ if seq_len > self.original_max_position_embeddings:
220
+ ext_factors = torch.tensor(self.long_factor, dtype=torch.float32, device=x.device)
221
+ else:
222
+ ext_factors = torch.tensor(self.short_factor, dtype=torch.float32, device=x.device)
223
+
224
+ self.inv_freq = 1.0 / (
225
+ ext_factors
226
+ * self.base ** (torch.arange(0, self.dim, 2, dtype=torch.int64, device=x.device).float() / self.dim)
227
+ )
228
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
229
+ position_ids_expanded = position_ids[:, None, :].float()
230
+
231
+ # Force float32 since bfloat16 loses precision on long contexts
232
+ # See https://github.com/huggingface/transformers/pull/29285
233
+ device_type = x.device.type
234
+ device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
235
+ with torch.autocast(device_type=device_type, enabled=False):
236
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
237
+ scaling_factor = self._calc_scaling_factor(
238
+ self.max_position_embeddings / self.original_max_position_embeddings
239
+ )
240
+ emb = torch.cat((freqs, freqs), dim=-1)
241
+ cos = emb.cos() * scaling_factor
242
+ sin = emb.sin() * scaling_factor
243
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
244
+
245
+
246
+ # Copied from transformers.models.llama.modeling_llama.rotate_half
247
+ def rotate_half(x):
248
+ """Rotates half the hidden dims of the input."""
249
+ x1 = x[..., : x.shape[-1] // 2]
250
+ x2 = x[..., x.shape[-1] // 2 :]
251
+ return torch.cat((-x2, x1), dim=-1)
252
+
253
+
254
+ # Copied from transformers.models.llama.modeling_llama.apply_rotary_pos_emb
255
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
256
+ """Applies Rotary Position Embedding to the query and key tensors.
257
+
258
+ Args:
259
+ q (`torch.Tensor`): The query tensor.
260
+ k (`torch.Tensor`): The key tensor.
261
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
262
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
263
+ position_ids (`torch.Tensor`, *optional*):
264
+ Deprecated and unused.
265
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
266
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
267
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
268
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
269
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
270
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
271
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
272
+ Returns:
273
+ `tuple(torch.Tensor)` comprising the query and key tensors rotated using the Rotary Position Embedding.
274
+ """
275
+ cos = cos.unsqueeze(unsqueeze_dim)
276
+ sin = sin.unsqueeze(unsqueeze_dim)
277
+ q_embed = (q * cos) + (rotate_half(q) * sin)
278
+ k_embed = (k * cos) + (rotate_half(k) * sin)
279
+ return q_embed, k_embed
280
+
281
+
282
+ class Phi3MLP(nn.Module):
283
+ def __init__(self, config):
284
+ super().__init__()
285
+
286
+ self.config = config
287
+ self.gate_up_proj = nn.Linear(config.hidden_size, 2 * config.intermediate_size, bias=False)
288
+ self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)
289
+
290
+ self.activation_fn = ACT2FN[config.hidden_act]
291
+
292
+ def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
293
+ up_states = self.gate_up_proj(hidden_states)
294
+
295
+ gate, up_states = up_states.chunk(2, dim=-1)
296
+ up_states = up_states * self.activation_fn(gate)
297
+
298
+ return self.down_proj(up_states)
299
+
300
+
301
+ # Copied from transformers.models.llama.modeling_llama.repeat_kv with llama->phi
302
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
303
+ """
304
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
305
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
306
+ """
307
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
308
+ if n_rep == 1:
309
+ return hidden_states
310
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
311
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
312
+
313
+
314
+ class Phi3Attention(nn.Module):
315
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
316
+
317
+ def __init__(self, config: Phi3Config, layer_idx: Optional[int] = None):
318
+ super().__init__()
319
+ self.config = config
320
+ self.layer_idx = layer_idx
321
+ if layer_idx is None:
322
+ logger.warning_once(
323
+ f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
324
+ "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
325
+ "when creating this class."
326
+ )
327
+
328
+ self.attention_dropout = config.attention_dropout
329
+ self.hidden_size = config.hidden_size
330
+ self.num_heads = config.num_attention_heads
331
+ self.head_dim = self.hidden_size // self.num_heads
332
+ self.num_key_value_heads = config.num_key_value_heads
333
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
334
+ self.max_position_embeddings = config.max_position_embeddings
335
+ self.original_max_position_embeddings = config.original_max_position_embeddings
336
+ self.rope_theta = config.rope_theta
337
+ self.rope_scaling = config.rope_scaling
338
+ self.is_causal = True
339
+
340
+ if (self.head_dim * self.num_heads) != self.hidden_size:
341
+ raise ValueError(
342
+ f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
343
+ f" and `num_heads`: {self.num_heads})."
344
+ )
345
+
346
+ op_size = self.num_heads * self.head_dim + 2 * (self.num_key_value_heads * self.head_dim)
347
+ self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
348
+ self.qkv_proj = nn.Linear(self.hidden_size, op_size, bias=False)
349
+ self._init_rope()
350
+
351
+ def _init_rope(self):
352
+ if self.rope_scaling is None:
353
+ self.rotary_emb = Phi3RotaryEmbedding(
354
+ self.head_dim,
355
+ max_position_embeddings=self.max_position_embeddings,
356
+ base=self.rope_theta,
357
+ )
358
+ else:
359
+ scaling_type = self.config.rope_scaling["type"]
360
+ short_factor = self.config.rope_scaling["short_factor"]
361
+ long_factor = self.config.rope_scaling["long_factor"]
362
+
363
+ if scaling_type == "su":
364
+ self.rotary_emb = Phi3SuScaledRotaryEmbedding(
365
+ self.head_dim,
366
+ short_factor,
367
+ long_factor,
368
+ max_position_embeddings=self.max_position_embeddings,
369
+ original_max_position_embeddings=self.original_max_position_embeddings,
370
+ base=self.rope_theta,
371
+ )
372
+ elif scaling_type == "yarn":
373
+ self.rotary_emb = Phi3YarnScaledRotaryEmbedding(
374
+ self.head_dim,
375
+ short_factor,
376
+ long_factor,
377
+ max_position_embeddings=self.max_position_embeddings,
378
+ original_max_position_embeddings=self.original_max_position_embeddings,
379
+ base=self.rope_theta,
380
+ )
381
+ else:
382
+ raise ValueError(f"Unknown RoPE scaling type {scaling_type}")
383
+
384
+ def forward(
385
+ self,
386
+ hidden_states: torch.Tensor,
387
+ attention_mask: Optional[torch.Tensor] = None,
388
+ position_ids: Optional[torch.LongTensor] = None,
389
+ past_key_value: Optional[Cache] = None,
390
+ output_attentions: bool = False,
391
+ use_cache: bool = False,
392
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
393
+ logger.warning_once("You are not running the flash-attention implementation, expect numerical differences.")
394
+
395
+ bsz, q_len, _ = hidden_states.size()
396
+
397
+ qkv = self.qkv_proj(hidden_states)
398
+ query_pos = self.num_heads * self.head_dim
399
+ query_states = qkv[..., :query_pos]
400
+ key_states = qkv[..., query_pos : query_pos + self.num_key_value_heads * self.head_dim]
401
+ value_states = qkv[..., query_pos + self.num_key_value_heads * self.head_dim :]
402
+
403
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
404
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
405
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
406
+
407
+ kv_seq_len = key_states.shape[-2]
408
+ if past_key_value is not None:
409
+ if self.layer_idx is None:
410
+ raise ValueError(
411
+ f"The cache structure has changed since version v4.36. If you are using {self.__class__.__name__} "
412
+ "for auto-regressive decoding with k/v caching, please make sure to initialize the attention class "
413
+ "with a layer index."
414
+ )
415
+ kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
416
+ cos, sin = self.rotary_emb(value_states, position_ids, seq_len=kv_seq_len)
417
+
418
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
419
+
420
+ if past_key_value is not None:
421
+ cache_kwargs = {"sin": sin, "cos": cos} # Specific to RoPE models
422
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
423
+
424
+ # repeat k/v heads if n_kv_heads < n_heads
425
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
426
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
427
+
428
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
429
+
430
+ if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
431
+ raise ValueError(
432
+ f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
433
+ f" {attn_weights.size()}"
434
+ )
435
+
436
+ if attention_mask is not None:
437
+ if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
438
+ raise ValueError(
439
+ f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
440
+ )
441
+ attn_weights = attn_weights + attention_mask
442
+
443
+ # upcast attention to fp32
444
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(value_states.dtype)
445
+ attn_weights = nn.functional.dropout(attn_weights, p=self.attention_dropout, training=self.training)
446
+
447
+ attn_output = torch.matmul(attn_weights, value_states)
448
+
449
+ if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
450
+ raise ValueError(
451
+ f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
452
+ f" {attn_output.size()}"
453
+ )
454
+
455
+ attn_output = attn_output.transpose(1, 2).contiguous()
456
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
457
+
458
+ attn_output = self.o_proj(attn_output)
459
+
460
+ if not output_attentions:
461
+ attn_weights = None
462
+
463
+ return attn_output, attn_weights, past_key_value
464
+
465
+
466
+ class Phi3FlashAttention2(Phi3Attention):
467
+ """
468
+ Phi-3 flash attention module. This module inherits from `Phi3Attention` as the weights of the module stay
469
+ untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
470
+ flash attention and deal with padding tokens in case the input contains any of them.
471
+ """
472
+
473
+ # Copied from transformers.models.llama.modeling_llama.LlamaFlashAttention2.__init__
474
+ def __init__(self, *args, **kwargs):
475
+ super().__init__(*args, **kwargs)
476
+
477
+ # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
478
+ # flash_attn<2.1 generates top-left aligned causal mask, while what is needed here is bottom-right alignement, that was made default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
479
+ # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
480
+ self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
481
+
482
+ def forward(
483
+ self,
484
+ hidden_states: torch.Tensor,
485
+ attention_mask: Optional[torch.LongTensor] = None,
486
+ position_ids: Optional[torch.LongTensor] = None,
487
+ past_key_value: Optional[Cache] = None,
488
+ output_attentions: bool = False,
489
+ use_cache: bool = False,
490
+ **kwargs,
491
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
492
+ # Phi3FlashAttention2 attention does not support output_attentions
493
+
494
+ if not _flash_supports_window_size:
495
+ logger.warning_once(
496
+ "The current flash attention version does not support sliding window attention. Please use `attn_implementation='eager'` or upgrade flash-attn library."
497
+ )
498
+ raise ValueError("The current flash attention version does not support sliding window attention.")
499
+
500
+ output_attentions = False
501
+
502
+ if "padding_mask" in kwargs:
503
+ warnings.warn(
504
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure use `attention_mask` instead.`"
505
+ )
506
+
507
+ # overwrite attention_mask with padding_mask
508
+ attention_mask = kwargs.pop("padding_mask")
509
+
510
+ bsz, q_len, _ = hidden_states.size()
511
+
512
+ qkv = self.qkv_proj(hidden_states)
513
+ query_pos = self.num_heads * self.head_dim
514
+ query_states = qkv[..., :query_pos]
515
+ key_states = qkv[..., query_pos : query_pos + self.num_key_value_heads * self.head_dim]
516
+ value_states = qkv[..., query_pos + self.num_key_value_heads * self.head_dim :]
517
+
518
+ # Flash attention requires the input to have the shape
519
+ # batch_size x seq_length x num_heads x head_dim
520
+ # therefore we just need to keep the original shape
521
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
522
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
523
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
524
+
525
+ kv_seq_len = key_states.shape[-2]
526
+ if past_key_value is not None:
527
+ if self.layer_idx is None:
528
+ raise ValueError(
529
+ f"The cache structure has changed since version v4.36. If you are using {self.__class__.__name__} "
530
+ "for auto-regressive decoding with k/v caching, please make sure to initialize the attention class "
531
+ "with a layer index."
532
+ )
533
+ kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
534
+
535
+ # Because the input can be padded, the absolute sequence length depends on the max position id.
536
+ rotary_seq_len = max(kv_seq_len, position_ids[:, -1].max().item()) + 1
537
+ cos, sin = self.rotary_emb(value_states, position_ids, seq_len=rotary_seq_len)
538
+
539
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
540
+
541
+ use_sliding_windows = (
542
+ _flash_supports_window_size
543
+ and getattr(self.config, "sliding_window", None) is not None
544
+ and kv_seq_len > self.config.sliding_window
545
+ )
546
+
547
+ if past_key_value is not None:
548
+ # Activate cache slicing only if the config has a `sliding_window` attribute
549
+ cache_has_contents = past_key_value.get_seq_length(self.layer_idx) > 0
550
+ if (
551
+ getattr(self.config, "sliding_window", None) is not None
552
+ and kv_seq_len > self.config.sliding_window
553
+ and cache_has_contents
554
+ ):
555
+ slicing_tokens = 1 - self.config.sliding_window
556
+
557
+ past_key = past_key_value[self.layer_idx][0]
558
+ past_value = past_key_value[self.layer_idx][1]
559
+
560
+ past_key = past_key[:, :, slicing_tokens:, :].contiguous()
561
+ past_value = past_value[:, :, slicing_tokens:, :].contiguous()
562
+
563
+ if past_key.shape[-2] != self.config.sliding_window - 1:
564
+ raise ValueError(
565
+ f"past key must have a shape of (`batch_size, num_heads, self.config.sliding_window-1, head_dim`), got"
566
+ f" {past_key.shape}"
567
+ )
568
+
569
+ if attention_mask is not None:
570
+ attention_mask = attention_mask[:, slicing_tokens:]
571
+ attention_mask = torch.cat([attention_mask, torch.ones_like(attention_mask[:, -1:])], dim=-1)
572
+
573
+ cache_kwargs = {"sin": sin, "cos": cos} # Specific to RoPE models
574
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
575
+
576
+ # repeat k/v heads if n_kv_heads < n_heads
577
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
578
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
579
+
580
+ attn_dropout = self.attention_dropout if self.training else 0.0
581
+
582
+ # In PEFT, usually we cast the layer norms in float32 for training stability reasons
583
+ # therefore the input hidden states get silently cast to float32. Hence, we need to
584
+ # cast them back to the correct dtype just to be sure everything works as expected.
585
+ # This might slow down training & inference so it is recommended not to cast the LayerNorms
586
+ # to fp32.
587
+
588
+ if query_states.dtype == torch.float32:
589
+ if torch.is_autocast_enabled():
590
+ target_dtype = torch.get_autocast_gpu_dtype()
591
+ # Handle the case where the model is quantized
592
+ elif hasattr(self.config, "_pre_quantization_dtype"):
593
+ target_dtype = self.config._pre_quantization_dtype
594
+ else:
595
+ target_dtype = self.qkv_proj.weight.dtype
596
+
597
+ logger.warning_once(
598
+ f"The input hidden states seems to be silently casted in float32, this might be related to"
599
+ f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
600
+ f" {target_dtype}."
601
+ )
602
+
603
+ query_states = query_states.to(target_dtype)
604
+ key_states = key_states.to(target_dtype)
605
+ value_states = value_states.to(target_dtype)
606
+
607
+ # Reshape to the expected shape for Flash Attention
608
+ query_states = query_states.transpose(1, 2)
609
+ key_states = key_states.transpose(1, 2)
610
+ value_states = value_states.transpose(1, 2)
611
+
612
+ attn_output = self._flash_attention_forward(
613
+ query_states,
614
+ key_states,
615
+ value_states,
616
+ attention_mask,
617
+ q_len,
618
+ dropout=attn_dropout,
619
+ use_sliding_windows=use_sliding_windows,
620
+ )
621
+
622
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
623
+ attn_output = self.o_proj(attn_output)
624
+
625
+ if not output_attentions:
626
+ attn_weights = None
627
+
628
+ return attn_output, attn_weights, past_key_value
629
+
630
+ # Copied from transformers.models.mistral.modeling_mistral.MistralFlashAttention2._flash_attention_forward
631
+ def _flash_attention_forward(
632
+ self,
633
+ query_states,
634
+ key_states,
635
+ value_states,
636
+ attention_mask,
637
+ query_length,
638
+ dropout=0.0,
639
+ softmax_scale=None,
640
+ use_sliding_windows=False,
641
+ ):
642
+ """
643
+ Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token,
644
+ first unpad the input, then compute the attention scores and pad the final attention scores.
645
+
646
+ Args:
647
+ query_states (`torch.Tensor`):
648
+ Input query states to be passed to Flash Attention API
649
+ key_states (`torch.Tensor`):
650
+ Input key states to be passed to Flash Attention API
651
+ value_states (`torch.Tensor`):
652
+ Input value states to be passed to Flash Attention API
653
+ attention_mask (`torch.Tensor`):
654
+ The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
655
+ position of padding tokens and 1 for the position of non-padding tokens.
656
+ dropout (`float`):
657
+ Attention dropout
658
+ softmax_scale (`float`, *optional*):
659
+ The scaling of QK^T before applying softmax. Default to 1 / sqrt(head_dim)
660
+ use_sliding_windows (`bool`, *optional*):
661
+ Whether to activate sliding window attention.
662
+ """
663
+ if not self._flash_attn_uses_top_left_mask:
664
+ causal = self.is_causal
665
+ else:
666
+ # TODO: Remove the `query_length != 1` check once Flash Attention for RoCm is bumped to 2.1. For details, please see the comment in LlamaFlashAttention2 __init__.
667
+ causal = self.is_causal and query_length != 1
668
+
669
+ # Contains at least one padding token in the sequence
670
+ if attention_mask is not None:
671
+ batch_size = query_states.shape[0]
672
+ query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(
673
+ query_states, key_states, value_states, attention_mask, query_length
674
+ )
675
+
676
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
677
+ max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
678
+
679
+ if not use_sliding_windows:
680
+ attn_output_unpad = flash_attn_varlen_func(
681
+ query_states,
682
+ key_states,
683
+ value_states,
684
+ cu_seqlens_q=cu_seqlens_q,
685
+ cu_seqlens_k=cu_seqlens_k,
686
+ max_seqlen_q=max_seqlen_in_batch_q,
687
+ max_seqlen_k=max_seqlen_in_batch_k,
688
+ dropout_p=dropout,
689
+ softmax_scale=softmax_scale,
690
+ causal=causal,
691
+ )
692
+ else:
693
+ attn_output_unpad = flash_attn_varlen_func(
694
+ query_states,
695
+ key_states,
696
+ value_states,
697
+ cu_seqlens_q=cu_seqlens_q,
698
+ cu_seqlens_k=cu_seqlens_k,
699
+ max_seqlen_q=max_seqlen_in_batch_q,
700
+ max_seqlen_k=max_seqlen_in_batch_k,
701
+ dropout_p=dropout,
702
+ softmax_scale=softmax_scale,
703
+ causal=causal,
704
+ window_size=(self.config.sliding_window, self.config.sliding_window),
705
+ )
706
+
707
+ attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)
708
+ else:
709
+ if not use_sliding_windows:
710
+ attn_output = flash_attn_func(
711
+ query_states,
712
+ key_states,
713
+ value_states,
714
+ dropout,
715
+ softmax_scale=softmax_scale,
716
+ causal=causal,
717
+ )
718
+ else:
719
+ attn_output = flash_attn_func(
720
+ query_states,
721
+ key_states,
722
+ value_states,
723
+ dropout,
724
+ softmax_scale=softmax_scale,
725
+ causal=causal,
726
+ window_size=(self.config.sliding_window, self.config.sliding_window),
727
+ )
728
+
729
+ return attn_output
730
+
731
+ # Copied from transformers.models.mistral.modeling_mistral.MistralFlashAttention2._upad_input
732
+ def _upad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
733
+ batch_size, kv_seq_len, num_heads, head_dim = key_layer.shape
734
+
735
+ # On the first iteration we need to properly re-create the padding mask
736
+ # by slicing it in the proper place
737
+ if kv_seq_len != attention_mask.shape[-1]:
738
+ attention_mask_num_tokens = attention_mask.shape[-1]
739
+ attention_mask = attention_mask[:, attention_mask_num_tokens - kv_seq_len :]
740
+
741
+ indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
742
+
743
+ key_layer = index_first_axis(key_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k)
744
+ value_layer = index_first_axis(value_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k)
745
+
746
+ if query_length == kv_seq_len:
747
+ query_layer = index_first_axis(
748
+ query_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k
749
+ )
750
+ cu_seqlens_q = cu_seqlens_k
751
+ max_seqlen_in_batch_q = max_seqlen_in_batch_k
752
+ indices_q = indices_k
753
+ elif query_length == 1:
754
+ max_seqlen_in_batch_q = 1
755
+ cu_seqlens_q = torch.arange(
756
+ batch_size + 1, dtype=torch.int32, device=query_layer.device
757
+ ) # There is a memcpy here, that is very bad.
758
+ indices_q = cu_seqlens_q[:-1]
759
+ query_layer = query_layer.squeeze(1)
760
+ else:
761
+ # The -q_len: slice assumes left padding.
762
+ attention_mask = attention_mask[:, -query_length:]
763
+ query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, attention_mask)
764
+
765
+ return (
766
+ query_layer,
767
+ key_layer,
768
+ value_layer,
769
+ indices_q,
770
+ (cu_seqlens_q, cu_seqlens_k),
771
+ (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
772
+ )
773
+
774
+
775
+ # copied from transformers.models.llama.modeling_llama.LlamaSdpaAttention with Llama->Phi3
776
+ # TODO @Arthur no longer copied from LLama after static cache
777
+ class Phi3SdpaAttention(Phi3Attention):
778
+ """
779
+ Phi3 attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
780
+ `Phi3Attention` as the weights of the module stay untouched. The only changes are on the forward pass to adapt to
781
+ SDPA API.
782
+ """
783
+
784
+ # Adapted from Phi3Attention.forward
785
+ def forward(
786
+ self,
787
+ hidden_states: torch.Tensor,
788
+ attention_mask: Optional[torch.Tensor] = None,
789
+ position_ids: Optional[torch.LongTensor] = None,
790
+ past_key_value: Optional[Cache] = None,
791
+ output_attentions: bool = False,
792
+ use_cache: bool = False,
793
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
794
+ if output_attentions:
795
+ # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
796
+ logger.warning_once(
797
+ "Phi3Model is using Phi3SdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
798
+ 'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
799
+ )
800
+ return super().forward(
801
+ hidden_states=hidden_states,
802
+ attention_mask=attention_mask,
803
+ position_ids=position_ids,
804
+ past_key_value=past_key_value,
805
+ output_attentions=output_attentions,
806
+ use_cache=use_cache,
807
+ )
808
+
809
+ bsz, q_len, _ = hidden_states.size()
810
+
811
+ qkv = self.qkv_proj(hidden_states)
812
+ query_pos = self.num_heads * self.head_dim
813
+ query_states = qkv[..., :query_pos]
814
+ key_states = qkv[..., query_pos : query_pos + self.num_key_value_heads * self.head_dim]
815
+ value_states = qkv[..., query_pos + self.num_key_value_heads * self.head_dim :]
816
+
817
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
818
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
819
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
820
+
821
+ kv_seq_len = key_states.shape[-2]
822
+ if past_key_value is not None:
823
+ kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
824
+ cos, sin = self.rotary_emb(value_states, position_ids, seq_len=kv_seq_len)
825
+
826
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
827
+
828
+ if past_key_value is not None:
829
+ cache_kwargs = {"sin": sin, "cos": cos} # Specific to RoPE models
830
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
831
+
832
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
833
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
834
+
835
+ if attention_mask is not None:
836
+ if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
837
+ raise ValueError(
838
+ f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
839
+ )
840
+
841
+ # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
842
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
843
+ if query_states.device.type == "cuda" and attention_mask is not None:
844
+ query_states = query_states.contiguous()
845
+ key_states = key_states.contiguous()
846
+ value_states = value_states.contiguous()
847
+
848
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
849
+ query_states,
850
+ key_states,
851
+ value_states,
852
+ attn_mask=attention_mask,
853
+ dropout_p=self.attention_dropout if self.training else 0.0,
854
+ # The q_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create a causal mask in case q_len == 1.
855
+ is_causal=self.is_causal and attention_mask is None and q_len > 1,
856
+ )
857
+
858
+ attn_output = attn_output.transpose(1, 2).contiguous()
859
+ attn_output = attn_output.view(bsz, q_len, self.hidden_size)
860
+
861
+ attn_output = self.o_proj(attn_output)
862
+
863
+ return attn_output, None, past_key_value
864
+
865
+
866
+ PHI3_ATTENTION_CLASSES = {
867
+ "eager": Phi3Attention,
868
+ "flash_attention_2": Phi3FlashAttention2,
869
+ "sdpa": Phi3SdpaAttention,
870
+ }
871
+
872
+
873
+ class Phi3DecoderLayer(nn.Module):
874
+ def __init__(self, config: Phi3Config, layer_idx: int):
875
+ super().__init__()
876
+
877
+ self.config = config
878
+ self.self_attn = PHI3_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx=layer_idx)
879
+
880
+ self.mlp = Phi3MLP(config)
881
+ self.input_layernorm = Phi3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
882
+
883
+ self.resid_attn_dropout = nn.Dropout(config.resid_pdrop)
884
+ self.resid_mlp_dropout = nn.Dropout(config.resid_pdrop)
885
+ self.post_attention_layernorm = Phi3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
886
+
887
+ def forward(
888
+ self,
889
+ hidden_states: torch.Tensor,
890
+ attention_mask: Optional[torch.Tensor] = None,
891
+ position_ids: Optional[torch.LongTensor] = None,
892
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
893
+ output_attentions: Optional[bool] = False,
894
+ use_cache: Optional[bool] = False,
895
+ **kwargs,
896
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
897
+ if "padding_mask" in kwargs:
898
+ warnings.warn(
899
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure use `attention_mask` instead.`"
900
+ )
901
+ """
902
+ Args:
903
+ hidden_states (`torch.FloatTensor`):
904
+ input to the layer of shape `(batch, seq_len, embed_dim)`
905
+ attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
906
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
907
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
908
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
909
+ `[0, config.n_positions - 1]`. [What are position IDs?](../glossary#position-ids)
910
+ output_attentions (`bool`, *optional*):
911
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
912
+ returned tensors for more detail.
913
+ use_cache (`bool`, *optional*):
914
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
915
+ (see `past_key_values`).
916
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
917
+ """
918
+
919
+ residual = hidden_states
920
+
921
+ hidden_states = self.input_layernorm(hidden_states)
922
+
923
+ # Self Attention
924
+ attn_outputs, self_attn_weights, present_key_value = self.self_attn(
925
+ hidden_states=hidden_states,
926
+ attention_mask=attention_mask,
927
+ position_ids=position_ids,
928
+ past_key_value=past_key_value,
929
+ output_attentions=output_attentions,
930
+ use_cache=use_cache,
931
+ )
932
+
933
+ hidden_states = residual + self.resid_attn_dropout(attn_outputs)
934
+
935
+ residual = hidden_states
936
+ hidden_states = self.post_attention_layernorm(hidden_states)
937
+ hidden_states = self.mlp(hidden_states)
938
+ hidden_states = residual + self.resid_mlp_dropout(hidden_states)
939
+
940
+ outputs = (hidden_states,)
941
+
942
+ if output_attentions:
943
+ outputs += (self_attn_weights,)
944
+
945
+ if use_cache:
946
+ outputs += (present_key_value,)
947
+
948
+ return outputs
949
+
950
+
951
+ PHI3_START_DOCSTRING = r"""
952
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
953
+ library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
954
+ etc.)
955
+
956
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
957
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
958
+ and behavior.
959
+
960
+ Parameters:
961
+ config ([`Phi3Config`]):
962
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
963
+ load the weights associated with the model, only the configuration. Check out the
964
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
965
+ """
966
+
967
+
968
+ @add_start_docstrings(
969
+ "The bare Phi-3 model outputting raw hidden-states without any specific head on top.",
970
+ PHI3_START_DOCSTRING,
971
+ )
972
+ class Phi3PreTrainedModel(PreTrainedModel):
973
+ config_class = Phi3Config
974
+ base_model_prefix = "model"
975
+ supports_gradient_checkpointing = True
976
+ _no_split_modules = ["Phi3DecoderLayer"]
977
+ _skip_keys_device_placement = "past_key_values"
978
+ _supports_flash_attn_2 = True
979
+ _supports_sdpa = False
980
+ _supports_cache_class = True
981
+
982
+ _version = "0.0.5"
983
+
984
+ def _init_weights(self, module):
985
+ std = self.config.initializer_range
986
+ if isinstance(module, nn.Linear):
987
+ module.weight.data.normal_(mean=0.0, std=std)
988
+ if module.bias is not None:
989
+ module.bias.data.zero_()
990
+ elif isinstance(module, nn.Embedding):
991
+ module.weight.data.normal_(mean=0.0, std=std)
992
+ if module.padding_idx is not None:
993
+ module.weight.data[module.padding_idx].zero_()
994
+
995
+
996
+ PHI3_INPUTS_DOCSTRING = r"""
997
+ Args:
998
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
999
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
1000
+ it.
1001
+
1002
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
1003
+ [`PreTrainedTokenizer.__call__`] for details.
1004
+
1005
+ [What are input IDs?](../glossary#input-ids)
1006
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
1007
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
1008
+
1009
+ - 1 for tokens that are **not masked**,
1010
+ - 0 for tokens that are **masked**.
1011
+
1012
+ [What are attention masks?](../glossary#attention-mask)
1013
+
1014
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
1015
+ [`PreTrainedTokenizer.__call__`] for details.
1016
+
1017
+ If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
1018
+ `past_key_values`).
1019
+
1020
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
1021
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
1022
+ information on the default strategy.
1023
+
1024
+ - 1 indicates the head is **not masked**,
1025
+ - 0 indicates the head is **masked**.
1026
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1027
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
1028
+ config.n_positions - 1]`.
1029
+
1030
+ [What are position IDs?](../glossary#position-ids)
1031
+ past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*):
1032
+ Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
1033
+ blocks) that can be used to speed up sequential decoding. This typically consists of the `past_key_values`
1034
+ returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.
1035
+
1036
+ Two formats are allowed:
1037
+ - a [`~cache_utils.Cache`] instance;
1038
+ - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
1039
+ shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`). This is also known as the legacy
1040
+ cache format.
1041
+
1042
+ The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the
1043
+ legacy cache format will be returned.
1044
+
1045
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
1046
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
1047
+ of shape `(batch_size, sequence_length)`.
1048
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
1049
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
1050
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
1051
+ model's internal embedding lookup matrix.
1052
+ use_cache (`bool`, *optional*):
1053
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
1054
+ `past_key_values`).
1055
+ output_attentions (`bool`, *optional*):
1056
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
1057
+ tensors for more detail.
1058
+ output_hidden_states (`bool`, *optional*):
1059
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
1060
+ more detail.
1061
+ return_dict (`bool`, *optional*):
1062
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
1063
+ """
1064
+
1065
+
1066
+ @add_start_docstrings(
1067
+ "The bare Phi-3 model outputting raw hidden-states without any specific head on top.",
1068
+ PHI3_START_DOCSTRING,
1069
+ )
1070
+ class Phi3Model(Phi3PreTrainedModel):
1071
+ """
1072
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Phi3DecoderLayer`]
1073
+
1074
+ Args:
1075
+ config: Phi3Config
1076
+ """
1077
+
1078
+ def __init__(self, config: Phi3Config):
1079
+ super().__init__(config)
1080
+ self.padding_idx = config.pad_token_id
1081
+ self.vocab_size = config.vocab_size
1082
+
1083
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
1084
+ self.embed_dropout = nn.Dropout(config.embd_pdrop)
1085
+ self.layers = nn.ModuleList(
1086
+ [Phi3DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
1087
+ )
1088
+ self._attn_implementation = config._attn_implementation
1089
+ self.norm = Phi3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
1090
+
1091
+ self.gradient_checkpointing = False
1092
+ # Initialize weights and apply final processing
1093
+ self.post_init()
1094
+
1095
+ def get_input_embeddings(self):
1096
+ return self.embed_tokens
1097
+
1098
+ def set_input_embeddings(self, value):
1099
+ self.embed_tokens = value
1100
+
1101
+ @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
1102
+ def forward(
1103
+ self,
1104
+ input_ids: torch.LongTensor = None,
1105
+ attention_mask: Optional[torch.Tensor] = None,
1106
+ position_ids: Optional[torch.LongTensor] = None,
1107
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1108
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1109
+ use_cache: Optional[bool] = None,
1110
+ output_attentions: Optional[bool] = None,
1111
+ output_hidden_states: Optional[bool] = None,
1112
+ return_dict: Optional[bool] = None,
1113
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
1114
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1115
+ output_hidden_states = (
1116
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1117
+ )
1118
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1119
+
1120
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1121
+
1122
+ # retrieve input_ids and inputs_embeds
1123
+ if input_ids is not None and inputs_embeds is not None:
1124
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
1125
+ elif input_ids is not None:
1126
+ batch_size, seq_length = input_ids.shape[:2]
1127
+ elif inputs_embeds is not None:
1128
+ batch_size, seq_length = inputs_embeds.shape[:2]
1129
+ else:
1130
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
1131
+
1132
+ past_key_values_length = 0
1133
+
1134
+ if self.gradient_checkpointing and self.training:
1135
+ if use_cache:
1136
+ logger.warning_once(
1137
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
1138
+ )
1139
+ use_cache = False
1140
+
1141
+ if use_cache:
1142
+ use_legacy_cache = not isinstance(past_key_values, Cache)
1143
+ if use_legacy_cache:
1144
+ past_key_values = DynamicCache.from_legacy_cache(past_key_values)
1145
+ past_key_values_length = past_key_values.get_usable_length(seq_length)
1146
+
1147
+ if position_ids is None:
1148
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
1149
+ position_ids = torch.arange(
1150
+ past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
1151
+ )
1152
+ position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
1153
+ else:
1154
+ position_ids = position_ids.view(-1, seq_length).long()
1155
+
1156
+ if inputs_embeds is None:
1157
+ inputs_embeds = self.embed_tokens(input_ids)
1158
+
1159
+ if attention_mask is not None and self._attn_implementation == "flash_attention_2" and use_cache:
1160
+ is_padding_right = attention_mask[:, -1].sum().item() != batch_size
1161
+ if is_padding_right:
1162
+ raise ValueError(
1163
+ "You are attempting to perform batched generation with padding_side='right'"
1164
+ " this may lead to unexpected behaviour for Flash Attention version of Phi3. Make sure to "
1165
+ " call `tokenizer.padding_side = 'left'` before tokenizing the input. "
1166
+ )
1167
+
1168
+ if self._attn_implementation == "flash_attention_2":
1169
+ # 2d mask is passed through the layers
1170
+ attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
1171
+ else:
1172
+ # 4d mask is passed through the layers
1173
+ attention_mask = _prepare_4d_causal_attention_mask(
1174
+ attention_mask,
1175
+ (batch_size, seq_length),
1176
+ inputs_embeds,
1177
+ past_key_values_length,
1178
+ sliding_window=self.config.sliding_window,
1179
+ )
1180
+
1181
+ hidden_states = inputs_embeds
1182
+
1183
+ # decoder layers
1184
+ all_hidden_states = () if output_hidden_states else None
1185
+ all_self_attns = () if output_attentions else None
1186
+ next_decoder_cache = None
1187
+
1188
+ for decoder_layer in self.layers:
1189
+ if output_hidden_states:
1190
+ all_hidden_states += (hidden_states,)
1191
+
1192
+ if self.gradient_checkpointing and self.training:
1193
+ layer_outputs = self._gradient_checkpointing_func(
1194
+ decoder_layer.__call__,
1195
+ hidden_states,
1196
+ attention_mask,
1197
+ position_ids,
1198
+ past_key_values,
1199
+ output_attentions,
1200
+ use_cache,
1201
+ )
1202
+ else:
1203
+ layer_outputs = decoder_layer(
1204
+ hidden_states,
1205
+ attention_mask=attention_mask,
1206
+ position_ids=position_ids,
1207
+ past_key_value=past_key_values,
1208
+ output_attentions=output_attentions,
1209
+ use_cache=use_cache,
1210
+ )
1211
+
1212
+ hidden_states = layer_outputs[0]
1213
+
1214
+ if use_cache:
1215
+ next_decoder_cache = layer_outputs[2 if output_attentions else 1]
1216
+
1217
+ if output_attentions:
1218
+ all_self_attns += (layer_outputs[1],)
1219
+
1220
+ hidden_states = self.norm(hidden_states)
1221
+
1222
+ # add hidden states from the last decoder layer
1223
+ if output_hidden_states:
1224
+ all_hidden_states += (hidden_states,)
1225
+
1226
+ next_cache = None
1227
+ if use_cache:
1228
+ next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
1229
+ if not return_dict:
1230
+ return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
1231
+ return BaseModelOutputWithPast(
1232
+ last_hidden_state=hidden_states,
1233
+ past_key_values=next_cache,
1234
+ hidden_states=all_hidden_states,
1235
+ attentions=all_self_attns,
1236
+ )
1237
+
1238
+
1239
+ class Phi3ForCausalLM(Phi3PreTrainedModel):
1240
+ _tied_weights_keys = ["lm_head.weight"]
1241
+
1242
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.__init__ with Llama->Phi3
1243
+ def __init__(self, config):
1244
+ super().__init__(config)
1245
+ self.model = Phi3Model(config)
1246
+ self.vocab_size = config.vocab_size
1247
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1248
+
1249
+ # Initialize weights and apply final processing
1250
+ self.post_init()
1251
+
1252
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.get_input_embeddings
1253
+ def get_input_embeddings(self):
1254
+ return self.model.embed_tokens
1255
+
1256
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.set_input_embeddings
1257
+ def set_input_embeddings(self, value):
1258
+ self.model.embed_tokens = value
1259
+
1260
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.get_output_embeddings
1261
+ def get_output_embeddings(self):
1262
+ return self.lm_head
1263
+
1264
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.set_output_embeddings
1265
+ def set_output_embeddings(self, new_embeddings):
1266
+ self.lm_head = new_embeddings
1267
+
1268
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.set_decoder
1269
+ def set_decoder(self, decoder):
1270
+ self.model = decoder
1271
+
1272
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.get_decoder
1273
+ def get_decoder(self):
1274
+ return self.model
1275
+
1276
+ # Ignore copy
1277
+ @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
1278
+ @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
1279
+ def forward(
1280
+ self,
1281
+ input_ids: torch.LongTensor = None,
1282
+ attention_mask: Optional[torch.Tensor] = None,
1283
+ position_ids: Optional[torch.LongTensor] = None,
1284
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1285
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1286
+ labels: Optional[torch.LongTensor] = None,
1287
+ use_cache: Optional[bool] = None,
1288
+ output_attentions: Optional[bool] = None,
1289
+ output_hidden_states: Optional[bool] = None,
1290
+ return_dict: Optional[bool] = None,
1291
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
1292
+ r"""
1293
+ Args:
1294
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1295
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
1296
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
1297
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
1298
+
1299
+ Returns:
1300
+
1301
+ Example:
1302
+
1303
+ ```python
1304
+ >>> from transformers import AutoTokenizer, Phi3ForCausalLM
1305
+
1306
+ >>> model = Phi3ForCausalLM.from_pretrained("microsoft/phi-3-mini-4k-instruct")
1307
+ >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-3-mini-4k-instruct")
1308
+
1309
+ >>> prompt = "This is an example script ."
1310
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
1311
+
1312
+ >>> # Generate
1313
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
1314
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
1315
+ 'This is an example script .\n Certainly! Below is a sample script that demonstrates a simple task, such as calculating the sum'
1316
+ ```"""
1317
+
1318
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1319
+ output_hidden_states = (
1320
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1321
+ )
1322
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1323
+
1324
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
1325
+ outputs = self.model(
1326
+ input_ids=input_ids,
1327
+ attention_mask=attention_mask,
1328
+ position_ids=position_ids,
1329
+ past_key_values=past_key_values,
1330
+ inputs_embeds=inputs_embeds,
1331
+ use_cache=use_cache,
1332
+ output_attentions=output_attentions,
1333
+ output_hidden_states=output_hidden_states,
1334
+ return_dict=return_dict,
1335
+ )
1336
+
1337
+ hidden_states = outputs[0]
1338
+ logits = self.lm_head(hidden_states)
1339
+ logits = logits.float()
1340
+
1341
+ loss = None
1342
+ if labels is not None:
1343
+ # Shift so that tokens < n predict n
1344
+ shift_logits = logits[..., :-1, :].contiguous()
1345
+ shift_labels = labels[..., 1:].contiguous()
1346
+ # Flatten the tokens
1347
+ loss_fct = CrossEntropyLoss()
1348
+ shift_logits = shift_logits.view(-1, self.config.vocab_size)
1349
+ shift_labels = shift_labels.view(-1)
1350
+ # Enable model parallelism
1351
+ shift_labels = shift_labels.to(shift_logits.device)
1352
+ loss = loss_fct(shift_logits, shift_labels)
1353
+
1354
+ if not return_dict:
1355
+ output = (logits,) + outputs[1:]
1356
+ return (loss,) + output if loss is not None else output
1357
+
1358
+ return CausalLMOutputWithPast(
1359
+ loss=loss,
1360
+ logits=logits,
1361
+ past_key_values=outputs.past_key_values,
1362
+ hidden_states=outputs.hidden_states,
1363
+ attentions=outputs.attentions,
1364
+ )
1365
+
1366
+ # Copied from transformers.models.persimmon.modeling_persimmon.PersimmonForCausalLM.prepare_inputs_for_generation
1367
+ def prepare_inputs_for_generation(
1368
+ self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
1369
+ ):
1370
+ if past_key_values is not None:
1371
+ if isinstance(past_key_values, Cache):
1372
+ cache_length = past_key_values.get_seq_length()
1373
+ past_length = past_key_values.seen_tokens
1374
+ max_cache_length = past_key_values.get_max_length()
1375
+ else:
1376
+ cache_length = past_length = past_key_values[0][0].shape[2]
1377
+ max_cache_length = None
1378
+
1379
+ # Keep only the unprocessed tokens:
1380
+ # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
1381
+ # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
1382
+ # input)
1383
+ if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
1384
+ input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
1385
+ # 2 - If the past_length is smaller than the length of input_ids, then input_ids holds all input tokens. We can discard
1386
+ # input_ids based on the past_length.
1387
+ elif past_length < input_ids.shape[1]:
1388
+ input_ids = input_ids[:, past_length:]
1389
+ # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
1390
+
1391
+ # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
1392
+ if (
1393
+ max_cache_length is not None
1394
+ and attention_mask is not None
1395
+ and cache_length + input_ids.shape[1] > max_cache_length
1396
+ ):
1397
+ attention_mask = attention_mask[:, -max_cache_length:]
1398
+
1399
+ position_ids = kwargs.get("position_ids", None)
1400
+ if attention_mask is not None and position_ids is None:
1401
+ # create position_ids on the fly for batch generation
1402
+ position_ids = attention_mask.long().cumsum(-1) - 1
1403
+ position_ids.masked_fill_(attention_mask == 0, 1)
1404
+ if past_key_values:
1405
+ position_ids = position_ids[:, -input_ids.shape[1] :]
1406
+
1407
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
1408
+ if inputs_embeds is not None and past_key_values is None:
1409
+ model_inputs = {"inputs_embeds": inputs_embeds}
1410
+ else:
1411
+ model_inputs = {"input_ids": input_ids}
1412
+
1413
+ model_inputs.update(
1414
+ {
1415
+ "position_ids": position_ids,
1416
+ "past_key_values": past_key_values,
1417
+ "use_cache": kwargs.get("use_cache"),
1418
+ "attention_mask": attention_mask,
1419
+ }
1420
+ )
1421
+ return model_inputs
1422
+
1423
+ @staticmethod
1424
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM._reorder_cache
1425
+ def _reorder_cache(past_key_values, beam_idx):
1426
+ reordered_past = ()
1427
+ for layer_past in past_key_values:
1428
+ reordered_past += (
1429
+ tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
1430
+ )
1431
+ return reordered_past
1432
+
1433
+
1434
+ @add_start_docstrings(
1435
+ """
1436
+ The [`Phi3Model`] with a sequence classification head on top (linear layer).
1437
+
1438
+ [`Phi3ForSequenceClassification`] uses the last token in order to do the classification, as other causal models
1439
+ (e.g. GPT-2) do.
1440
+
1441
+ Since it does classification on the last token, it requires to know the position of the last token. If a
1442
+ `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
1443
+ no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
1444
+ padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
1445
+ each row of the batch).
1446
+ """,
1447
+ PHI3_START_DOCSTRING,
1448
+ )
1449
+ # Copied from transformers.models.llama.modeling_llama.LlamaForSequenceClassification with Llama->Phi3, LLAMA->PHI3, self.transformer->self.model, transformer_outputs->model_outputs
1450
+ class Phi3ForSequenceClassification(Phi3PreTrainedModel):
1451
+ def __init__(self, config):
1452
+ super().__init__(config)
1453
+ self.num_labels = config.num_labels
1454
+ self.model = Phi3Model(config)
1455
+ self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
1456
+
1457
+ # Initialize weights and apply final processing
1458
+ self.post_init()
1459
+
1460
+ def get_input_embeddings(self):
1461
+ return self.model.embed_tokens
1462
+
1463
+ def set_input_embeddings(self, value):
1464
+ self.model.embed_tokens = value
1465
+
1466
+ @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
1467
+ def forward(
1468
+ self,
1469
+ input_ids: torch.LongTensor = None,
1470
+ attention_mask: Optional[torch.Tensor] = None,
1471
+ position_ids: Optional[torch.LongTensor] = None,
1472
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1473
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1474
+ labels: Optional[torch.LongTensor] = None,
1475
+ use_cache: Optional[bool] = None,
1476
+ output_attentions: Optional[bool] = None,
1477
+ output_hidden_states: Optional[bool] = None,
1478
+ return_dict: Optional[bool] = None,
1479
+ ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
1480
+ r"""
1481
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1482
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
1483
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
1484
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
1485
+ """
1486
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1487
+
1488
+ model_outputs = self.model(
1489
+ input_ids,
1490
+ attention_mask=attention_mask,
1491
+ position_ids=position_ids,
1492
+ past_key_values=past_key_values,
1493
+ inputs_embeds=inputs_embeds,
1494
+ use_cache=use_cache,
1495
+ output_attentions=output_attentions,
1496
+ output_hidden_states=output_hidden_states,
1497
+ return_dict=return_dict,
1498
+ )
1499
+ hidden_states = model_outputs[0]
1500
+ logits = self.score(hidden_states)
1501
+
1502
+ if input_ids is not None:
1503
+ batch_size = input_ids.shape[0]
1504
+ else:
1505
+ batch_size = inputs_embeds.shape[0]
1506
+
1507
+ if self.config.pad_token_id is None and batch_size != 1:
1508
+ raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
1509
+ if self.config.pad_token_id is None:
1510
+ sequence_lengths = -1
1511
+ else:
1512
+ if input_ids is not None:
1513
+ # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
1514
+ sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
1515
+ sequence_lengths = sequence_lengths % input_ids.shape[-1]
1516
+ sequence_lengths = sequence_lengths.to(logits.device)
1517
+ else:
1518
+ sequence_lengths = -1
1519
+
1520
+ pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
1521
+
1522
+ loss = None
1523
+ if labels is not None:
1524
+ labels = labels.to(logits.device)
1525
+ if self.config.problem_type is None:
1526
+ if self.num_labels == 1:
1527
+ self.config.problem_type = "regression"
1528
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
1529
+ self.config.problem_type = "single_label_classification"
1530
+ else:
1531
+ self.config.problem_type = "multi_label_classification"
1532
+
1533
+ if self.config.problem_type == "regression":
1534
+ loss_fct = MSELoss()
1535
+ if self.num_labels == 1:
1536
+ loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
1537
+ else:
1538
+ loss = loss_fct(pooled_logits, labels)
1539
+ elif self.config.problem_type == "single_label_classification":
1540
+ loss_fct = CrossEntropyLoss()
1541
+ loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
1542
+ elif self.config.problem_type == "multi_label_classification":
1543
+ loss_fct = BCEWithLogitsLoss()
1544
+ loss = loss_fct(pooled_logits, labels)
1545
+ if not return_dict:
1546
+ output = (pooled_logits,) + model_outputs[1:]
1547
+ return ((loss,) + output) if loss is not None else output
1548
+
1549
+ return SequenceClassifierOutputWithPast(
1550
+ loss=loss,
1551
+ logits=pooled_logits,
1552
+ past_key_values=model_outputs.past_key_values,
1553
+ hidden_states=model_outputs.hidden_states,
1554
+ attentions=model_outputs.attentions,
1555
+ )
1556
+
1557
+
1558
+ @add_start_docstrings(
1559
+ """
1560
+ [`Phi3Model`] with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
1561
+ Named-Entity-Recognition (NER) tasks.
1562
+ """,
1563
+ PHI3_START_DOCSTRING,
1564
+ )
1565
+ # Copied from transformers.models.mpt.modeling_mpt.MptForTokenClassification with Mpt->Phi3,MPT->PHI3,self.transformer->self.model,transformer_outputs->model_outputs
1566
+ class Phi3ForTokenClassification(Phi3PreTrainedModel):
1567
+ def __init__(self, config: Phi3Config):
1568
+ super().__init__(config)
1569
+ self.num_labels = config.num_labels
1570
+
1571
+ self.model = Phi3Model(config)
1572
+ if hasattr(config, "classifier_dropout") and config.classifier_dropout is not None:
1573
+ classifier_dropout = config.classifier_dropout
1574
+ elif hasattr(config, "hidden_dropout") and config.hidden_dropout is not None:
1575
+ classifier_dropout = config.hidden_dropout
1576
+ else:
1577
+ classifier_dropout = 0.1
1578
+ self.dropout = nn.Dropout(classifier_dropout)
1579
+ self.classifier = nn.Linear(config.hidden_size, config.num_labels)
1580
+
1581
+ # Initialize weights and apply final processing
1582
+ self.post_init()
1583
+
1584
+ @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
1585
+ @add_code_sample_docstrings(
1586
+ checkpoint=_CHECKPOINT_FOR_DOC,
1587
+ output_type=TokenClassifierOutput,
1588
+ config_class=_CONFIG_FOR_DOC,
1589
+ )
1590
+ def forward(
1591
+ self,
1592
+ input_ids: Optional[torch.LongTensor] = None,
1593
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
1594
+ attention_mask: Optional[torch.Tensor] = None,
1595
+ inputs_embeds: Optional[torch.Tensor] = None,
1596
+ labels: Optional[torch.Tensor] = None,
1597
+ use_cache: Optional[bool] = None,
1598
+ output_attentions: Optional[bool] = None,
1599
+ output_hidden_states: Optional[bool] = None,
1600
+ return_dict: Optional[bool] = None,
1601
+ **deprecated_arguments,
1602
+ ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]:
1603
+ r"""
1604
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1605
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
1606
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
1607
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
1608
+ """
1609
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1610
+
1611
+ model_outputs = self.model(
1612
+ input_ids,
1613
+ past_key_values=past_key_values,
1614
+ attention_mask=attention_mask,
1615
+ inputs_embeds=inputs_embeds,
1616
+ use_cache=use_cache,
1617
+ output_attentions=output_attentions,
1618
+ output_hidden_states=output_hidden_states,
1619
+ return_dict=return_dict,
1620
+ )
1621
+
1622
+ hidden_states = model_outputs[0]
1623
+ hidden_states = self.dropout(hidden_states)
1624
+ logits = self.classifier(hidden_states)
1625
+
1626
+ loss = None
1627
+ if labels is not None:
1628
+ # move labels to correct device to enable model parallelism
1629
+ labels = labels.to(logits.device)
1630
+ batch_size, seq_length = labels.shape
1631
+ loss_fct = CrossEntropyLoss()
1632
+ loss = loss_fct(
1633
+ logits.view(batch_size * seq_length, self.num_labels), labels.view(batch_size * seq_length)
1634
+ )
1635
+
1636
+ if not return_dict:
1637
+ output = (logits,) + model_outputs[2:]
1638
+ return ((loss,) + output) if loss is not None else output
1639
+
1640
+ return TokenClassifierOutput(
1641
+ loss=loss,
1642
+ logits=logits,
1643
+ hidden_states=model_outputs.hidden_states,
1644
+ attentions=model_outputs.attentions,
1645
+ )
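
The `PHI3_ATTENTION_CLASSES` mapping above is what `Phi3DecoderLayer` uses to pick an attention backend from `config._attn_implementation`. A minimal sketch of steering that choice at load time (the repo id is the base model named in this card; note `_supports_sdpa = False` above, so only `"eager"` and `"flash_attention_2"` are expected to be accepted here):

```python
# Sketch: choose the attention backend that gets resolved into config._attn_implementation.
# "flash_attention_2" additionally requires the flash-attn package and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "llmware/bling-phi-3",          # base model of this card
    trust_remote_code=True,         # the repo ships its own modeling code
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",    # -> Phi3Attention; "flash_attention_2" -> Phi3FlashAttention2
)
```
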
smash_config.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "api_key": null,
3
+ "verify_url": "http://johnrachwan.pythonanywhere.com",
4
+ "smash_config": {
5
+ "pruners": "None",
6
+ "pruning_ratio": 0.0,
7
+ "factorizers": "None",
8
+ "quantizers": "['llm-int8']",
9
+ "weight_quantization_bits": 4,
10
+ "output_deviation": 0.005,
11
+ "compilers": "None",
12
+ "static_batch": true,
13
+ "static_shape": true,
14
+ "controlnet": "None",
15
+ "unet_dim": 4,
16
+ "device": "cuda",
17
+ "cache_dir": "/ceph/hdd/staff/charpent/.cache/models_gnkj410",
18
+ "batch_size": 1,
19
+ "model_name": "llmware/bling-phi-3",
20
+ "task": "text_text_generation",
21
+ "max_batch_size": 1,
22
+ "qtype_weight": "torch.qint8",
23
+ "qtype_activation": "torch.quint8",
24
+ "qobserver": "<class 'torch.ao.quantization.observer.MinMaxObserver'>",
25
+ "qscheme": "torch.per_tensor_symmetric",
26
+ "qconfig": "x86",
27
+ "group_size": 128,
28
+ "damp_percent": 0.1,
29
+ "save_load_fn": "bitsandbytes"
30
+ }
31
+ }
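
For reference, a minimal sketch of loading a model smashed with this configuration; the repo id below is a placeholder for this repository, and `"save_load_fn": "bitsandbytes"` above suggests the weights load through the standard `transformers` + `bitsandbytes` path:

```python
# Sketch (placeholder repo id): load the smashed checkpoint saved via bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "PrunaAI/REPO_ID",       # placeholder: the id of this smashed model repository
    trust_remote_code=True,
    device_map="cuda",       # matches "device": "cuda" in smash_config.json
)
tokenizer = AutoTokenizer.from_pretrained("llmware/bling-phi-3")  # base tokenizer per "model_name"
```
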
special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|/inst|>"
4
+ ],
5
+ "bos_token": {
6
+ "content": "<s>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false
11
+ },
12
+ "eos_token": {
13
+ "content": "<|endoftext|>",
14
+ "lstrip": false,
15
+ "normalized": false,
16
+ "rstrip": false,
17
+ "single_word": false
18
+ },
19
+ "pad_token": {
20
+ "content": "<|endoftext|>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false
25
+ },
26
+ "unk_token": {
27
+ "content": "<unk>",
28
+ "lstrip": false,
29
+ "normalized": false,
30
+ "rstrip": false,
31
+ "single_word": false
32
+ }
33
+ }
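
Note that `pad_token` is set to the same `<|endoftext|>` string as `eos_token`. A quick sanity check after loading (a sketch, assuming the tokenizer of the base model named in this card):

```python
# Sketch: verify the special-token mapping defined above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("llmware/bling-phi-3")
assert tok.bos_token == "<s>" and tok.unk_token == "<unk>"
assert tok.pad_token == tok.eos_token == "<|endoftext|>"
```
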
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,349 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "added_tokens_decoder": {
5
+ "0": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "1": {
14
+ "content": "<s>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "2": {
22
+ "content": "</s>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": true,
26
+ "single_word": false,
27
+ "special": false
28
+ },
29
+ "32000": {
30
+ "content": "<|endoftext|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "32001": {
38
+ "content": "<|assistant|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": true,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "32002": {
46
+ "content": "<|step|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": true,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "32003": {
54
+ "content": "<|function_output|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": true,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "32004": {
62
+ "content": "<|tag|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": true,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "32005": {
70
+ "content": "<|function_call|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": true,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "32006": {
78
+ "content": "<|system|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": true,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "32007": {
86
+ "content": "<|end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": true,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "32008": {
94
+ "content": "<|raw|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": true,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "32009": {
102
+ "content": "<|continue|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": true,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "32010": {
110
+ "content": "<|user|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": true,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "32011": {
118
+ "content": "<|function_list|>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": true,
122
+ "single_word": false,
123
+ "special": true
124
+ },
125
+ "32012": {
126
+ "content": "<|calc|>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": true,
130
+ "single_word": false,
131
+ "special": true
132
+ },
133
+ "32013": {
134
+ "content": "<|code|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": true,
138
+ "single_word": false,
139
+ "special": true
140
+ },
141
+ "32014": {
142
+ "content": "<|/code|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": true,
146
+ "single_word": false,
147
+ "special": true
148
+ },
149
+ "32015": {
150
+ "content": "<|summary|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": true,
154
+ "single_word": false,
155
+ "special": true
156
+ },
157
+ "32016": {
158
+ "content": "<|resource|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": true,
162
+ "single_word": false,
163
+ "special": true
164
+ },
165
+ "32017": {
166
+ "content": "<|assistant_mask|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": true,
170
+ "single_word": false,
171
+ "special": true
172
+ },
173
+ "32018": {
174
+ "content": "<|start|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": true,
178
+ "single_word": false,
179
+ "special": true
180
+ },
181
+ "32019": {
182
+ "content": "<|message|>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": true,
186
+ "single_word": false,
187
+ "special": true
188
+ },
189
+ "32020": {
190
+ "content": "<|fim_prefix|>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": true,
194
+ "single_word": false,
195
+ "special": true
196
+ },
197
+ "32021": {
198
+ "content": "<|fim_middle|>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": true,
202
+ "single_word": false,
203
+ "special": true
204
+ },
205
+ "32022": {
206
+ "content": "<|fim_suffix|>",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": true,
210
+ "single_word": false,
211
+ "special": true
212
+ },
213
+ "32023": {
214
+ "content": "<|meta_start|>",
215
+ "lstrip": false,
216
+ "normalized": false,
217
+ "rstrip": true,
218
+ "single_word": false,
219
+ "special": true
220
+ },
221
+ "32024": {
222
+ "content": "<|ipynb_marker|>",
223
+ "lstrip": false,
224
+ "normalized": false,
225
+ "rstrip": true,
226
+ "single_word": false,
227
+ "special": true
228
+ },
229
+ "32025": {
230
+ "content": "<|diff_marker|>",
231
+ "lstrip": false,
232
+ "normalized": false,
233
+ "rstrip": true,
234
+ "single_word": false,
235
+ "special": true
236
+ },
237
+ "32026": {
238
+ "content": "<|ghissue|>",
239
+ "lstrip": false,
240
+ "normalized": false,
241
+ "rstrip": true,
242
+ "single_word": false,
243
+ "special": true
244
+ },
245
+ "32027": {
246
+ "content": "<|ghreview|>",
247
+ "lstrip": false,
248
+ "normalized": false,
249
+ "rstrip": true,
250
+ "single_word": false,
251
+ "special": true
252
+ },
253
+ "32028": {
254
+ "content": "<|disc_start|>",
255
+ "lstrip": false,
256
+ "normalized": false,
257
+ "rstrip": true,
258
+ "single_word": false,
259
+ "special": true
260
+ },
261
+ "32029": {
262
+ "content": "<|disc_sep|>",
263
+ "lstrip": false,
264
+ "normalized": false,
265
+ "rstrip": true,
266
+ "single_word": false,
267
+ "special": true
268
+ },
269
+ "32030": {
270
+ "content": "<|disc_thread|><|query|>",
271
+ "lstrip": false,
272
+ "normalized": false,
273
+ "rstrip": true,
274
+ "single_word": false,
275
+ "special": true
276
+ },
277
+ "32031": {
278
+ "content": "<|/query|>",
279
+ "lstrip": false,
280
+ "normalized": false,
281
+ "rstrip": true,
282
+ "single_word": false,
283
+ "special": true
284
+ },
285
+ "32032": {
286
+ "content": "<|data|>",
287
+ "lstrip": false,
288
+ "normalized": false,
289
+ "rstrip": true,
290
+ "single_word": false,
291
+ "special": true
292
+ },
293
+ "32033": {
294
+ "content": "<|/data|>",
295
+ "lstrip": false,
296
+ "normalized": false,
297
+ "rstrip": true,
298
+ "single_word": false,
299
+ "special": true
300
+ },
301
+ "32034": {
302
+ "content": "<|sys|>",
303
+ "lstrip": false,
304
+ "normalized": false,
305
+ "rstrip": true,
306
+ "single_word": false,
307
+ "special": true
308
+ },
309
+ "32035": {
310
+ "content": "<|/sys|>",
311
+ "lstrip": false,
312
+ "normalized": false,
313
+ "rstrip": true,
314
+ "single_word": false,
315
+ "special": true
316
+ },
317
+ "32036": {
318
+ "content": "<|inst|>",
319
+ "lstrip": false,
320
+ "normalized": false,
321
+ "rstrip": true,
322
+ "single_word": false,
323
+ "special": true
324
+ },
325
+ "32037": {
326
+ "content": "<|/inst|>",
327
+ "lstrip": false,
328
+ "normalized": false,
329
+ "rstrip": true,
330
+ "single_word": false,
331
+ "special": true
332
+ }
333
+ },
334
+ "additional_special_tokens": [
335
+ "<|/inst|>"
336
+ ],
337
+ "bos_token": "<s>",
338
+ "chat_template": "{{ bos_token }}{% for message in messages %}{{'<|' + message['role'] + '|>' + '\n' + message['content'] + '<|end|>\n' }}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% else %}{{ eos_token }}{% endif %}",
339
+ "clean_up_tokenization_spaces": false,
340
+ "eos_token": "<|endoftext|>",
341
+ "legacy": false,
342
+ "model_max_length": 4096,
343
+ "pad_token": "<|endoftext|>",
344
+ "padding_side": "left",
345
+ "sp_model_kwargs": {},
346
+ "tokenizer_class": "LlamaTokenizer",
347
+ "unk_token": "<unk>",
348
+ "use_default_system_prompt": false
349
+ }
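
The `chat_template` above wraps each turn in `<|role|> ... <|end|>` markers, and `"padding_side": "left"` matches the left-padding requirement enforced by the flash-attention path in the modeling code. A minimal sketch of applying it (tokenizer loaded from the base model named in this card):

```python
# Sketch: render a prompt with the chat template defined above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-phi-3")
messages = [{"role": "user", "content": "Summarize llm-int8 in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# '<s><|user|>\nSummarize llm-int8 in one sentence.<|end|>\n<|assistant|>\n'
```
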