boboliu committed
Commit f6d7b71
1 Parent(s): e0c5b99

Upload CostWiseGemmaForCausalLM

README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
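+ The snippet below is a minimal sketch: the repository id is a placeholder, and `trust_remote_code=True` is required because the `CostWiseGemmaForCausalLM` architecture is defined in `gemma_model.py` in this repository.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # placeholder id; replace with this repository's actual Hub id
+ repo_id = "<namespace>/bge-reranker-v2.5-gemma2-lightweight"
+
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ # the custom architecture lives in gemma_model.py, so remote code must be trusted
+ model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
+ model.eval()
+ ```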
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,66 @@
+ {
+   "_name_or_path": "./bge-reranker-v2.5-gemma2-lightweight",
+   "architectures": [
+     "CostWiseGemmaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "attn_logit_softcapping": 50.0,
+   "auto_map": {
+     "AutoConfig": "gemma_config.CostWiseGemmaConfig",
+     "AutoModel": "gemma_model.CostWiseGemmaModel",
+     "AutoModelForCausalLM": "gemma_model.CostWiseGemmaForCausalLM"
+   },
+   "bos_token_id": 2,
+   "cache_implementation": "hybrid",
+   "eos_token_id": 1,
+   "final_logit_softcapping": 30.0,
+   "head_dim": 256,
+   "hidden_act": "gelu_pytorch_tanh",
+   "hidden_activation": "gelu_pytorch_tanh",
+   "hidden_size": 3584,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "layer_sep": 1,
+   "layer_wise": true,
+   "max_position_embeddings": 8192,
+   "model_type": "cost_wise_gemma",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 42,
+   "num_key_value_heads": 8,
+   "pad_token_id": 0,
+   "quantization_config": {
+     "batch_size": 1,
+     "bits": 4,
+     "block_name_to_quantize": null,
+     "cache_block_outputs": true,
+     "damp_percent": 0.1,
+     "dataset": "c4",
+     "desc_act": false,
+     "exllama_config": {
+       "version": 1
+     },
+     "group_size": 128,
+     "max_input_length": null,
+     "model_seqlen": null,
+     "module_name_preceding_first_block": null,
+     "modules_in_block_to_quantize": null,
+     "pad_token_id": null,
+     "quant_method": "gptq",
+     "sym": true,
+     "tokenizer": null,
+     "true_sequential": true,
+     "use_cuda_fp16": false,
+     "use_exllama": true
+   },
+   "query_pre_attn_scalar": 256,
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 10000.0,
+   "sliding_window": 4096,
+   "sliding_window_size": 4096,
+   "start_layer": 8,
+   "torch_dtype": "float16",
+   "transformers_version": "4.44.2",
+   "use_cache": true,
+   "vocab_size": 256000
+ }
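The non-standard fields above (`start_layer`, `layer_sep`, `layer_wise`) drive the layer-wise scoring heads defined in `gemma_model.py`. A small sketch of how to inspect them, assuming the files from this commit are available locally and remote code is trusted:

```python
from transformers import AutoConfig

# auto_map routes AutoConfig to gemma_config.CostWiseGemmaConfig
config = AutoConfig.from_pretrained(".", trust_remote_code=True)

# mirrors the lm_head ModuleList in gemma_model.py: one scoring head per layer in
# range(start_layer, num_hidden_layers + 1, layer_sep), i.e. layers 8..42 here
score_layers = list(range(config.start_layer, config.num_hidden_layers + 1, config.layer_sep))
print(len(score_layers), score_layers[:5])  # 35 [8, 9, 10, 11, 12]
```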
gemma_config.py ADDED
@@ -0,0 +1,67 @@
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # This file was automatically generated from <path_to_diff_file.py>.
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
+ # the file from the diff. If any change should be done, please apply the change to the
+ # diff.py file directly.
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # coding=utf-8
+ # Copyright 2024 Google Inc. HuggingFace Inc. team. All rights reserved.
+ #
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+
+ from transformers.models.gemma2.configuration_gemma2 import Gemma2Config
+
+ class CostWiseGemmaConfig(Gemma2Config):
+     r"""
+     This is the configuration class to store the configuration of a [`CostWiseGemmaModel`]. It is used to instantiate a
+     Gemma model according to the specified arguments, defining the model architecture. Instantiating a configuration
+     with the defaults will yield a similar configuration to that of the Gemma-7B.
+     e.g. [google/gemma-7b](https://huggingface.co/google/gemma-7b)
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+     Args:
+         start_layer (`int`, *optional*, defaults to 28):
+             The first layer that outputs a score.
+         layer_sep (`int`, *optional*, defaults to 28):
+             The separation (stride) between score-output layers, counted from `start_layer`.
+         layer_wise (`bool`, *optional*, defaults to `False`):
+             Whether or not the model should output scores layer-wise.
+     ```python
+     >>> from transformers import Gemma2Model, Gemma2Config
+     >>> # Initializing a Gemma2 gemma2-9b style configuration
+     >>> configuration = Gemma2Config()
+     >>> # Initializing a model from the gemma2-9b style configuration
+     >>> model = Gemma2Model(configuration)
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "cost_wise_gemma"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         start_layer: int = 28,
+         layer_sep: int = 28,
+         layer_wise: bool = False,
+         **kwargs,
+     ):
+         self.start_layer = start_layer
+         self.layer_sep = layer_sep
+         self.layer_wise = layer_wise
+
+         super().__init__(
+             **kwargs,
+         )
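As a quick illustration (a sketch, not a file in this repository), the config can be constructed directly with the values this checkpoint uses; any other keyword arguments pass through to `Gemma2Config`:

```python
from gemma_config import CostWiseGemmaConfig  # local module from this repo

# values taken from config.json above; all other Gemma2 fields keep their defaults
config = CostWiseGemmaConfig(start_layer=8, layer_sep=1, layer_wise=True)
assert config.model_type == "cost_wise_gemma"
```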
gemma_model.py ADDED
@@ -0,0 +1,751 @@
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # This file was automatically generated from <path_to_diff_file.py>.
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
+ # the file from the diff. If any change should be done, please apply the change to the
+ # diff.py file directly.
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # coding=utf-8
+ # Copyright 2024 Google Inc. HuggingFace Inc. team. All rights reserved.
+ #
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ from dataclasses import dataclass
+
+ import math
+ from typing import List, Optional, Tuple, Union
+
+ import inspect
+ import torch
+ import torch.nn.functional as F
+ import torch.utils.checkpoint
+ from torch import nn
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
+
+ from transformers.activations import ACT2FN
+ from transformers.cache_utils import Cache, DynamicCache, StaticCache
+ from transformers.modeling_attn_mask_utils import AttentionMaskConverter
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPast,
+     CausalLMOutputWithPast,
+     SequenceClassifierOutputWithPast,
+     TokenClassifierOutput,
+ )
+ from transformers.modeling_utils import PreTrainedModel
+ from transformers.pytorch_utils import ALL_LAYERNORM_LAYERS
+ from transformers.utils import (
+     add_start_docstrings,
+     add_start_docstrings_to_model_forward,
+     is_flash_attn_2_available,
+     is_flash_attn_greater_or_equal_2_10,
+     logging,
+     replace_return_docstrings,
+     ModelOutput,
+ )
+ from .gemma_config import CostWiseGemmaConfig
+ from transformers.models.gemma2.modeling_gemma2 import Gemma2RMSNorm, Gemma2RotaryEmbedding, rotate_half, apply_rotary_pos_emb
+ from transformers.models.gemma2.modeling_gemma2 import Gemma2MLP, repeat_kv, Gemma2Attention, Gemma2FlashAttention2, Gemma2SdpaAttention, GEMMA2_ATTENTION_CLASSES, Gemma2DecoderLayer, GEMMA2_START_DOCSTRING
+ from transformers.models.gemma2.modeling_gemma2 import GEMMA2_INPUTS_DOCSTRING
+
+ if is_flash_attn_2_available():
+     from flash_attn import flash_attn_func, flash_attn_varlen_func
+     from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input  # noqa
+
+     _flash_supports_window_size = "window_size" in list(inspect.signature(flash_attn_func).parameters)
+
+
+ logger = logging.get_logger(__name__)
+
+
+ def _get_unpad_data(attention_mask):
+     seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
+     indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
+     max_seqlen_in_batch = seqlens_in_batch.max().item()
+     cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))
+     return (
+         indices,
+         cu_seqlens,
+         max_seqlen_in_batch,
+     )
+
+ @add_start_docstrings(
+     "The bare Gemma2 Model outputting raw hidden-states without any specific head on top.",
+     GEMMA2_START_DOCSTRING,
+ )
+ class CostWiseGemma2PreTrainedModel(PreTrainedModel):
+     config_class = CostWiseGemmaConfig
+     base_model_prefix = "model"
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["Gemma2DecoderLayer"]
+     _skip_keys_device_placement = ["past_key_values"]
+     _supports_flash_attn_2 = True
+     _supports_sdpa = True
+     _supports_cache_class = False
+     _supports_quantized_cache = False
+     _supports_static_cache = True
+     _is_stateful = True
+
+     def _init_weights(self, module):
+         std = self.config.initializer_range
+         if isinstance(module, nn.Linear):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.Embedding):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.padding_idx is not None:
+                 module.weight.data[module.padding_idx].zero_()
+
+ GEMMA2_ATTENTION_CLASSES = {
+     "eager": Gemma2Attention,
+     "flash_attention_2": Gemma2FlashAttention2,
+     "sdpa": Gemma2SdpaAttention,
+ }
+
+
+ _CONFIG_FOR_DOC = "CostWiseGemmaConfig"
+
+ @dataclass
+ class CostWiseModelOutputWithPast(ModelOutput):
+     last_hidden_state: torch.FloatTensor = None
+     past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
+     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+     attentions: Optional[Tuple[torch.FloatTensor]] = None
+     attention_masks: Optional[Tuple[torch.FloatTensor]] = None
+
+ @dataclass
+ class CostWiseCausalLMOutputWithPast(ModelOutput):
+     loss: Optional[torch.FloatTensor] = None
+     logits: torch.FloatTensor = None
+     past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
+     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+     attentions: Optional[Tuple[torch.FloatTensor]] = None
+     attention_masks: Optional[Tuple[torch.FloatTensor]] = None
+
+ def token_compress(compress_ratio,
+                    hidden_states,
+                    attention_mask,
+                    query_lengths,
+                    prompt_lengths):
+     """
+     compress_ratio: int
+     hidden_states: (b, s, h)
+     attention_mask: (b, s)
+     query_lengths: (b)
+     prompt_lengths: (b)
+     """
+     # get some specific parameters
+     passage_lengths = torch.sum(attention_mask, dim=1, dtype=torch.int) - query_lengths - prompt_lengths  # the raw passage lengths (b)
+     retain_passage_lengths = (passage_lengths + compress_ratio - 1) // compress_ratio  # the passage lengths to retain after compression (b)
+     final_useful_lengths = query_lengths + prompt_lengths + retain_passage_lengths  # the final useful lengths after compression (b)
+     max_passage_length = torch.max(passage_lengths)  # the max passage length (1)
+     max_final_lengths = torch.max(final_useful_lengths)  # the max useful length after compression (1)
+     # make new hidden states and new attention masks
+     new_hidden_states = torch.zeros((hidden_states.shape[0], max_final_lengths,
+                                      hidden_states.shape[-1]), dtype=hidden_states.dtype).to(hidden_states.device)  # (b, s', h)
+     new_attention_mask = torch.ones((hidden_states.shape[0], max_final_lengths), dtype=attention_mask.dtype).to(attention_mask.device)  # (b, s')
+     # get new attention mask
+     mask_attention_index = torch.arange(max_final_lengths, device=hidden_states.device).unsqueeze(0) >= final_useful_lengths[:, None]
+     new_attention_mask[mask_attention_index] = 0
+     # get new hidden states
+     # add query into new hidden states
+     query_index = torch.arange(max_final_lengths, device=hidden_states.device).unsqueeze(0)
+     mask_query_index = query_index < query_lengths[:, None]
+     new_hidden_states[mask_query_index] = hidden_states[:, : max_final_lengths, :][mask_query_index]
+     # add prompt into new hidden states
+     # get the index of the prompt in new hidden states
+     new_prompt_start_length = query_lengths + retain_passage_lengths
+     new_prompt_end_length = new_prompt_start_length + prompt_lengths
+     new_prompt_index = torch.arange(max_final_lengths, device=hidden_states.device).unsqueeze(0)
+     new_mask_prompt_index_start = new_prompt_index >= new_prompt_start_length[:, None]
+     new_mask_prompt_index_end = new_prompt_index < new_prompt_end_length[:, None]
+     new_mask_prompt_index = new_mask_prompt_index_start & new_mask_prompt_index_end
+     # get the index of the prompt in hidden states
+     raw_prompt_start_length = query_lengths + passage_lengths
+     raw_prompt_end_length = raw_prompt_start_length + prompt_lengths
+     raw_prompt_index = torch.arange(hidden_states.shape[1], device=hidden_states.device).unsqueeze(0)
+     raw_mask_prompt_index_start = raw_prompt_index >= raw_prompt_start_length[:, None]
+     raw_mask_prompt_index_end = raw_prompt_index < raw_prompt_end_length[:, None]
+     raw_mask_prompt_index = raw_mask_prompt_index_start & raw_mask_prompt_index_end
+     # replace the prompt hidden states
+     new_hidden_states[new_mask_prompt_index] = hidden_states[raw_mask_prompt_index]
+     # query and prompt tokens are now in place; next, compress the passage tokens
+
+     # get the index of the passage in new hidden states
+     new_passage_start_length = query_lengths
+     new_passage_end_length = new_passage_start_length + retain_passage_lengths
+     new_passage_index = torch.arange(max_final_lengths, device=hidden_states.device).unsqueeze(0)
+     new_mask_passage_index_start = new_passage_index >= new_passage_start_length[:, None]
+     new_mask_passage_index_end = new_passage_index < new_passage_end_length[:, None]
+     new_mask_passage_index = new_mask_passage_index_start & new_mask_passage_index_end
+     # add passage into new hidden states
+     # get mask hidden states
+     psg_start_length = query_lengths
+     psg_end_length = query_lengths + passage_lengths
+     psg_index = torch.arange(hidden_states.shape[1], device=hidden_states.device).unsqueeze(0)
+     mask_psg_index_start = psg_index >= psg_start_length[:, None]
+     mask_psg_index_end = psg_index < psg_end_length[:, None]
+     mask_psg_index = mask_psg_index_start & mask_psg_index_end
+
+     hidden_states = hidden_states * mask_psg_index.unsqueeze(-1)
+     passage_hidden_states = torch.zeros((hidden_states.shape[0],
+                                          (max_passage_length + compress_ratio - 1) // compress_ratio * compress_ratio,
+                                          hidden_states.shape[-1]), dtype=hidden_states.dtype).to(hidden_states.device)
+     passage_end_length = passage_lengths
+     passage_index = torch.arange(passage_hidden_states.shape[1], device=hidden_states.device).unsqueeze(0)  # may exceed the max passage length
+     mask_passage_index = passage_index < passage_end_length[:, None]
+
+     raw_passage_end_length = query_lengths + passage_lengths
+     raw_passage_start_length = query_lengths
+     raw_passage_index = torch.arange(hidden_states.shape[1], device=hidden_states.device).unsqueeze(0)
+     raw_mask_passage_index_start = raw_passage_index >= raw_passage_start_length[:, None]
+     raw_mask_passage_index_end = raw_passage_index < raw_passage_end_length[:, None]
+     raw_mask_passage_index = raw_mask_passage_index_start & raw_mask_passage_index_end
+     passage_hidden_states[mask_passage_index] = hidden_states[raw_mask_passage_index]
+
+     passage_weights = torch.zeros((hidden_states.shape[0],
+                                    (max_passage_length + compress_ratio - 1) // compress_ratio * compress_ratio),
+                                   dtype=hidden_states.dtype).to(hidden_states.device)
+     passage_weights[mask_passage_index] = 1
+     passage_weights = passage_weights.view(passage_weights.shape[0], -1, compress_ratio)
+     passage_weights = passage_weights / torch.sum(passage_weights, dim=-1
+                                                   ).view(passage_weights.shape[0], -1, 1)
+     passage_weights = passage_weights.view(passage_weights.shape[0], -1)
+     passage_hidden_states = passage_hidden_states * passage_weights.unsqueeze(-1)
+     passage_hidden_states = passage_hidden_states.view(passage_hidden_states.shape[0], -1, compress_ratio,
+                                                        passage_hidden_states.shape[-1])
+     passage_hidden_states = torch.sum(passage_hidden_states, dim=2)  # weighted mean over each group of compress_ratio tokens
+     passage_end_length = retain_passage_lengths
+     passage_index = torch.arange(passage_hidden_states.shape[1], device=hidden_states.device).unsqueeze(0)
+     mask_passage_index = passage_index < passage_end_length[:, None]
+     new_hidden_states[new_mask_passage_index] = passage_hidden_states[mask_passage_index]
+
+     return new_hidden_states, new_attention_mask
+
+ @add_start_docstrings(
+     "The bare Gemma2 Model outputting raw hidden-states without any specific head on top.",
+     GEMMA2_START_DOCSTRING,
+ )
+ class CostWiseGemmaModel(CostWiseGemma2PreTrainedModel):
+     """
+     Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Gemma2DecoderLayer`]
+
+     Args:
+         config: CostWiseGemmaConfig
+     """
+
+     def __init__(self, config: CostWiseGemmaConfig):
+         super().__init__(config)
+         self.padding_idx = config.pad_token_id
+         self.vocab_size = config.vocab_size
+
+         self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+         self.layers = nn.ModuleList(
+             [Gemma2DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+         )
+         self.norm = Gemma2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.gradient_checkpointing = False
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.embed_tokens = value
+
+     @add_start_docstrings_to_model_forward(GEMMA2_INPUTS_DOCSTRING)
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         compress_layer: Optional[List[int]] = None,
+         compress_ratio: Optional[int] = None,
+         cutoff_layers: Optional[List[int]] = None,
+         query_lengths: Optional[List[int]] = None,
+         prompt_lengths: Optional[List[int]] = None,
+     ) -> Union[Tuple, CostWiseModelOutputWithPast]:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+
+         compress_ratio = None if compress_ratio == 1 else compress_ratio
+
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         if self.config.layer_wise:
+             output_hidden_states = True
+
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if (input_ids is None) ^ (inputs_embeds is not None):
+             raise ValueError(
+                 "You cannot specify both input_ids and inputs_embeds at the same time, and must specify either one"
+             )
+
+         if self.gradient_checkpointing and self.training and use_cache:
+             logger.warning_once(
+                 "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
+             )
+             use_cache = False
+
+         if compress_layer is not None and compress_ratio is not None:
+             logger.warning_once(
+                 "`use_cache=True` is incompatible with token compression. Setting `use_cache=False`."
+             )
+             use_cache = False
+
+         if inputs_embeds is None:
+             inputs_embeds = self.embed_tokens(input_ids)
+
+         if cache_position is None:
+             cache_position = torch.arange(0, inputs_embeds.shape[1], device=inputs_embeds.device)
+
+         if position_ids is None:
+             position_ids = cache_position.unsqueeze(0)
+
+         causal_mask = self._update_causal_mask(
+             attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
+         )
+
+         # embed positions
+         hidden_states = inputs_embeds
+
+         # normalized
+         # Gemma downcasts the below to float16, causing sqrt(3072)=55.4256 to become 55.5
+         # See https://github.com/huggingface/transformers/pull/29402
+         normalizer = torch.tensor(self.config.hidden_size**0.5, dtype=hidden_states.dtype)
+         hidden_states = hidden_states * normalizer
+
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_attention_masks = ()
+         all_self_attns = () if output_attentions else None
+         next_decoder_cache = None
+
+         is_padding_left = (attention_mask[:, -1].sum() == attention_mask.shape[0]) and (
+             torch.sum(attention_mask) != attention_mask.shape[0] * attention_mask.shape[1])
+         query_lengths = [0] * hidden_states.shape[0] if query_lengths is None else query_lengths
+         prompt_lengths = [0] * hidden_states.shape[0] if prompt_lengths is None else prompt_lengths
+         if not isinstance(query_lengths, torch.Tensor):
+             query_lengths = torch.tensor(query_lengths, device=hidden_states.device)
+         if not isinstance(prompt_lengths, torch.Tensor):
+             prompt_lengths = torch.tensor(prompt_lengths, device=hidden_states.device)
+
+         if cutoff_layers is None:
+             max_layer = self.config.num_hidden_layers
+             cutoff_layers = [max_layer]
+         if isinstance(cutoff_layers, int):
+             max_layer = cutoff_layers
+             cutoff_layers = [cutoff_layers]
+         else:
+             max_layer = max(cutoff_layers)
+
+         for idx, decoder_layer in enumerate(self.layers):
+             if self.config.layer_wise:
+                 if idx in cutoff_layers and output_hidden_states:
+                     all_hidden_states += (self.norm(hidden_states),)
+                     all_attention_masks += (attention_mask,)
+                 if idx == max_layer:
+                     break
+             elif output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+
+             if compress_layer is not None and compress_ratio is not None and idx in compress_layer and idx != 0:
+                 if is_padding_left:
+                     raise ValueError('You must use right padding...')
+                 hidden_states, attention_mask = token_compress(compress_ratio, hidden_states, attention_mask,
+                                                                query_lengths, prompt_lengths)
+                 seq_length = hidden_states.shape[1]
+                 cache_position = torch.arange(0, seq_length, device=hidden_states.device)
+                 position_ids = cache_position.unsqueeze(0)
+                 causal_mask = self._update_causal_mask(
+                     attention_mask, hidden_states, cache_position, past_key_values, output_attentions
+                 )
+
+             if self.gradient_checkpointing and self.training:
+                 layer_outputs = self._gradient_checkpointing_func(
+                     decoder_layer.__call__,
+                     hidden_states,
+                     causal_mask,
+                     position_ids,
+                     past_key_values,
+                     output_attentions,
+                     use_cache,
+                     cache_position,
+                 )
+             else:
+                 layer_outputs = decoder_layer(
+                     hidden_states,
+                     attention_mask=causal_mask,
+                     position_ids=position_ids,
+                     past_key_value=past_key_values,
+                     output_attentions=output_attentions,
+                     use_cache=use_cache,
+                     cache_position=cache_position,
+                 )
+
+             hidden_states = layer_outputs[0]
+
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+
+         hidden_states = self.norm(hidden_states)
+
+         # add hidden states from the last decoder layer
+         if not self.config.layer_wise:
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+                 all_attention_masks += (attention_mask,)
+         else:
+             if output_hidden_states and self.config.num_hidden_layers == max_layer:
+                 all_hidden_states += (hidden_states,)
+                 all_attention_masks += (attention_mask,)
+
+         next_cache = next_decoder_cache if use_cache else None
+
+         if not return_dict:
+             return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
+         return CostWiseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=next_cache,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+             attention_masks=all_attention_masks
+         )
+
+     def _update_causal_mask(
+         self,
+         attention_mask: torch.Tensor,
+         input_tensor: torch.Tensor,
+         cache_position: torch.Tensor,
+         past_key_values: Cache,
+         output_attentions: bool,
+     ):
+         if self.config._attn_implementation == "flash_attention_2":
+             if attention_mask is not None and 0.0 in attention_mask:
+                 return attention_mask
+             return None
+
+         dtype, device = input_tensor.dtype, input_tensor.device
+         min_dtype = torch.finfo(dtype).min
+         sequence_length = input_tensor.shape[1]
+         if past_key_values is not None:
+             target_length = past_key_values.get_max_length()
+         else:
+             target_length = attention_mask.shape[-1] if attention_mask is not None else input_tensor.shape[1]
+
+         if attention_mask is not None and attention_mask.dim() == 4:
+             # in this case we assume that the mask comes already in inverted form and requires no inversion or slicing
+             if attention_mask.max() != 0:
+                 raise ValueError("Custom 4D attention mask should be passed in inverted form with max==0")
+             causal_mask = attention_mask
+         else:
+             causal_mask = torch.full(
+                 (sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device
+             )
+             if sequence_length != 1:
+                 causal_mask = torch.triu(causal_mask, diagonal=1)
+             causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
+             causal_mask = causal_mask[None, None, :, :].expand(input_tensor.shape[0], 1, -1, -1)
+             if attention_mask is not None:
+                 causal_mask = causal_mask.clone()  # copy to contiguous memory for in-place edit
+                 mask_length = attention_mask.shape[-1]
+                 padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
+                 padding_mask = padding_mask == 0
+                 causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
+                     padding_mask, min_dtype
+                 )
+         return causal_mask
+
+
+ class CostWiseHead(nn.Module):
+     """Head for sentence-level classification tasks."""
+
+     def __init__(self, input_size, output_size):
+         super().__init__()
+         self.linear_head = nn.Linear(input_size, output_size, bias=False)
+
+     def forward(self, hidden_states):
+         return self.linear_head(hidden_states)
+
+
+ class CostWiseGemmaForCausalLM(CostWiseGemma2PreTrainedModel):
+     _tied_weights_keys = ["lm_head.weight"]
+
+     def __init__(self, config: CostWiseGemmaConfig):
+         super().__init__(config)
+         self.model = CostWiseGemmaModel(config)
+         self.vocab_size = config.vocab_size
+
+         if not config.layer_wise:
+             self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+         else:
+             self.lm_head = nn.ModuleList(
+                 [CostWiseHead(config.hidden_size, 1) for _ in range(
+                     config.start_layer, config.num_hidden_layers + 1, config.layer_sep
+                 )]
+             )
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+
+     def get_output_embeddings(self):
+         return self.lm_head
+
+     def set_output_embeddings(self, new_embeddings):
+         self.lm_head = new_embeddings
+
+     def set_decoder(self, decoder):
+         self.model = decoder
+
+     def get_decoder(self):
+         return self.model
+
+     @add_start_docstrings_to_model_forward(GEMMA2_INPUTS_DOCSTRING)
+     @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         compress_layer: Optional[List[int]] = None,
+         compress_ratio: Optional[int] = None,
+         cutoff_layers: Optional[List[int]] = None,
+         query_lengths: Optional[List[int]] = None,
+         prompt_lengths: Optional[List[int]] = None,
+     ) -> Union[Tuple, CostWiseCausalLMOutputWithPast]:
+         r"""
+         Args:
+             labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+                 Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
+                 config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
+                 (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
+
+         Returns:
+
+         Example:
+
+         ```python
+         >>> from transformers import AutoTokenizer, Gemma2ForCausalLM
+
+         >>> model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-9b")
+         >>> tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
+
+         >>> prompt = "What is your favorite condiment?"
+         >>> inputs = tokenizer(prompt, return_tensors="pt")
+
+         >>> # Generate
+         >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
+         >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
+         "What is your favorite condiment?"
+         ```"""
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if compress_ratio is not None and compress_ratio == 1:
+             compress_ratio = None
+
+         if self.config.layer_wise:
+             if cutoff_layers is None:
+                 cutoff_layers = [self.config.num_hidden_layers]
+             elif isinstance(cutoff_layers, int):
+                 cutoff_layers = [cutoff_layers]
+             can_use_layers = list(range(self.config.start_layer, self.config.num_hidden_layers + 1, self.config.layer_sep))
+             remove_layers = [i for i in cutoff_layers if i not in can_use_layers]
+             if len(remove_layers) > 0:
+                 logger.warning_once(
+                     f"layers {remove_layers} are incompatible with the setting. They will be removed..."
+                 )
+             cutoff_layers = [i for i in cutoff_layers if i not in remove_layers]
+             if len(cutoff_layers) == 0:
+                 raise ValueError(f"Your cutoff layers must be in [{self.config.start_layer}, {self.config.num_hidden_layers}]")
+
+         # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.model(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+             cache_position=cache_position,
+             compress_layer=compress_layer,
+             compress_ratio=compress_ratio,
+             query_lengths=query_lengths,
+             prompt_lengths=prompt_lengths,
+             cutoff_layers=cutoff_layers,
+         )
+
+         if not self.config.layer_wise:
+             hidden_states = outputs[0]
+             logits = self.lm_head(hidden_states)
+             if self.config.final_logit_softcapping is not None:
+                 logits = logits / self.config.final_logit_softcapping
+                 logits = torch.tanh(logits)
+                 logits = logits * self.config.final_logit_softcapping
+             logits = logits.float()
+             loss = None
+             if labels is not None:
+                 # Shift so that tokens < n predict n
+                 shift_logits = logits[..., :-1, :].contiguous()
+                 shift_labels = labels[..., 1:].contiguous()
+                 # Flatten the tokens
+                 loss_fct = CrossEntropyLoss()
+                 shift_logits = shift_logits.view(-1, self.config.vocab_size)
+                 shift_labels = shift_labels.view(-1)
+                 # Enable model parallelism
+                 shift_labels = shift_labels.to(shift_logits.device)
+                 loss = loss_fct(shift_logits, shift_labels)
+         else:
+             hidden_states = outputs.hidden_states
+             logits = ()
+             for i in range(len(hidden_states)):
+                 tmp_logits = self.lm_head[i].linear_head(hidden_states[i])
+                 if self.config.final_logit_softcapping is not None:
+                     tmp_logits = tmp_logits / self.config.final_logit_softcapping
+                     tmp_logits = torch.tanh(tmp_logits)
+                     tmp_logits = tmp_logits * self.config.final_logit_softcapping
+                 tmp_logits = tmp_logits.float()
+                 tmp_logits = tmp_logits.reshape(hidden_states[i].shape[0], -1)
+                 logits = logits + (tmp_logits,)
+             loss = None
+
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return (loss,) + output if loss is not None else output
+
+         return CostWiseCausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+             attention_masks=outputs[-1] if self.model.config.layer_wise else outputs[-1][-1]
+         )
+
+     def prepare_inputs_for_generation(
+         self,
+         input_ids,
+         past_key_values=None,
+         attention_mask=None,
+         inputs_embeds=None,
+         cache_position=None,
+         use_cache=True,
+         **kwargs,
+     ):
+         past_length = 0
+         if past_key_values is not None:
+             # Past key values are always initialized with a `Cache` object -> no need for if-else anymore
+             past_length = cache_position[0] if cache_position is not None else torch.tensor(0, device=input_ids.device)
+             max_cache_length = (
+                 torch.tensor(past_key_values.get_max_length(), device=input_ids.device)
+                 if past_key_values.get_max_length() is not None
+                 else None
+             )
+             cache_length = past_length if max_cache_length is None else torch.min(max_cache_length, past_length)
+
+             # Keep only the unprocessed tokens:
+             # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
+             # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as input)
+             if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
+                 input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
+             # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
+             # input_ids based on the past_length.
+             elif past_length < input_ids.shape[1]:
+                 input_ids = input_ids[:, past_length:]
+             # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
+
+             # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
+             if (
+                 max_cache_length is not None
+                 and attention_mask is not None
+                 and cache_length + input_ids.shape[1] > max_cache_length
+             ):
+                 attention_mask = attention_mask[:, -max_cache_length:]
+
+         position_ids = kwargs.get("position_ids", None)
+         if attention_mask is not None and position_ids is None:
+             # create position_ids on the fly for batch generation
+             position_ids = attention_mask.long().cumsum(-1) - 1
+             position_ids.masked_fill_(attention_mask == 0, 1)
+             if past_key_values:
+                 position_ids = position_ids[:, -input_ids.shape[1] :]
+
+         # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+         if inputs_embeds is not None and past_length == 0:
+             model_inputs = {"inputs_embeds": inputs_embeds}
+         else:
+             # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+             # recompiles graphs as the stride of the inputs is a guard. Ref: https://github.com/huggingface/transformers/pull/29114
+             # TODO: use `next_tokens` directly instead.
+             model_inputs = {"input_ids": input_ids.contiguous()}
+
+         input_length = position_ids.shape[-1] if position_ids is not None else input_ids.shape[-1]
+         if cache_position is None:
+             cache_position = torch.arange(past_length, past_length + input_length, device=input_ids.device)
+         elif use_cache:
+             cache_position = cache_position[-input_length:]
+
+         model_inputs.update(
+             {
+                 "position_ids": position_ids,
+                 "cache_position": cache_position,
+                 "past_key_values": past_key_values,
+                 "use_cache": use_cache,
+                 "attention_mask": attention_mask,
+             }
+         )
+         return model_inputs
+
+     @staticmethod
+     def _reorder_cache(past_key_values, beam_idx):
+         reordered_past = ()
+         for layer_past in past_key_values:
+             reordered_past += (
+                 tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
+             )
+         return reordered_past
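When `layer_wise` is enabled, `forward` returns a tuple of score tensors, one per requested cutoff layer, instead of vocabulary logits. The sketch below follows the calling convention implied by the code above; the input format and the use of the final position as a relevance score are assumptions, not a documented API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_path = "."  # this repository, checked out locally
tokenizer = AutoTokenizer.from_pretrained(repo_path)
model = AutoModelForCausalLM.from_pretrained(repo_path, trust_remote_code=True)
model.eval()

text = "A: what is Gemma? B: Gemma is a family of open models from Google."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # request scores from layers 28 and 42; both lie in
    # range(start_layer=8, num_hidden_layers + 1 = 43, layer_sep=1)
    outputs = model(**inputs, cutoff_layers=[28, 42])

# outputs.logits is a tuple with one (batch, seq_len) tensor per cutoff layer;
# the value at the last non-padded position can serve as the relevance score
for layer, scores in zip([28, 42], outputs.logits):
    print(layer, scores[:, -1].item())
```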
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 2,
+   "cache_implementation": "hybrid",
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.44.2"
+ }
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7feda9da3fa32eef1377ddc4dca9e0e1f568dce9d5c989fcff242e5559d1b6d
+ size 4978866392
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dda668980f729a65b2f4d7ffbcfdd359286749360211a97ebb6f6516ee91c4a3
+ size 1188234928
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff