berendg committed on
Commit
0a1ba96
1 Parent(s): 8d4f36a

Upload DebertaForMaskedLM

Files changed (5)
  1. README.md +199 -0
  2. config.json +36 -0
  3. configuration_deberta.py +191 -0
  4. model.safetensors +3 -0
  5. modeling_deberta.py +1500 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
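The model card's "How to Get Started with the Model" section is still a placeholder. A minimal sketch of loading this checkpoint for masked-LM inference could look like the following; the repository id is a hypothetical placeholder, the presence of an uploaded tokenizer is an assumption, and `trust_remote_code=True` is needed because `config.json` maps `AutoModelForMaskedLM` to the custom `modeling_deberta.DebertaForMaskedLM` class:

```python
# Sketch only: the repo id below is a placeholder and a tokenizer in the repo is assumed.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

repo_id = "berendg/<this-repo>"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# trust_remote_code=True loads the custom DebertaForMaskedLM from modeling_deberta.py
model = AutoModelForMaskedLM.from_pretrained(repo_id, trust_remote_code=True)

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```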
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "_name_or_path": "babylm24_small_mlsm0.15_babylm_l12_1500_mlm0_not-reinit_10M_final/10000/",
+   "architectures": [
+     "DebertaForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "auto_map": {
+     "AutoModelForMaskedLM": "modeling_deberta.DebertaForMaskedLM"
+   },
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-07,
+   "max_position_embeddings": 512,
+   "max_relative_positions": -1,
+   "model_type": "deberta",
+   "num_attention_heads": 12,
+   "num_concepts": 1500,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_dropout": 0,
+   "pooler_hidden_act": "gelu",
+   "pooler_hidden_size": 768,
+   "pos_att_type": [
+     "c2p",
+     "p2c"
+   ],
+   "position_biased_input": false,
+   "relative_attention": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.2",
+   "type_vocab_size": 0,
+   "vocab_size": 26500
+ }
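Most of these fields are standard DeBERTa hyperparameters; `num_concepts: 1500` is not, and it is read by the custom modeling code shipped in this commit (`DebertaForMaskedLM` below stores it as `self.num_concepts`). A hedged sketch of building the same configuration programmatically with the custom `DebertaConfig` added later in this commit:

```python
# Sketch only: values copied from config.json above; configuration_deberta.py is the
# file added below. Unknown kwargs such as num_concepts are kept on the config object
# by PretrainedConfig, so the custom modeling code can read config.num_concepts.
from configuration_deberta import DebertaConfig

config = DebertaConfig(
    vocab_size=26500,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    max_position_embeddings=512,
    type_vocab_size=0,
    layer_norm_eps=1e-7,
    relative_attention=True,        # disentangled relative attention is enabled
    max_relative_positions=-1,      # falls back to max_position_embeddings in the model
    position_biased_input=False,    # no absolute position embeddings added to the input
    pos_att_type=["c2p", "p2c"],
    num_concepts=1500,              # custom field consumed by this repo's modeling code
)
print(config.num_concepts)  # 1500
```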
configuration_deberta.py ADDED
@@ -0,0 +1,191 @@
+ # coding=utf-8
+ # Copyright 2020, Microsoft and the HuggingFace Inc. team.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """DeBERTa model configuration"""
+
+ from collections import OrderedDict
+ from typing import TYPE_CHECKING, Any, Mapping, Optional, Union
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.onnx import OnnxConfig
+ from transformers.utils import logging
+
+
+ if TYPE_CHECKING:
+     from transformers import FeatureExtractionMixin, PreTrainedTokenizerBase, TensorType
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class DebertaConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`DebertaModel`] or a [`TFDebertaModel`]. It is
+     used to instantiate a DeBERTa model according to the specified arguments, defining the model architecture.
+     Instantiating a configuration with the defaults will yield a similar configuration to that of the DeBERTa
+     [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Arguments:
+         vocab_size (`int`, *optional*, defaults to 50265):
+             Vocabulary size of the DeBERTa model. Defines the number of different tokens that can be represented
+             by the `inputs_ids` passed when calling [`DebertaModel`] or [`TFDebertaModel`].
+         hidden_size (`int`, *optional*, defaults to 768):
+             Dimensionality of the encoder layers and the pooler layer.
+         num_hidden_layers (`int`, *optional*, defaults to 12):
+             Number of hidden layers in the Transformer encoder.
+         num_attention_heads (`int`, *optional*, defaults to 12):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         intermediate_size (`int`, *optional*, defaults to 3072):
+             Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
+         hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
+             The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
+             `"relu"`, `"silu"`, `"tanh"`, `"gelu_fast"`, `"mish"`, `"linear"`, `"sigmoid"` and `"gelu_new"`
+             are supported.
+         hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
+             The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
+         attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
+             The dropout ratio for the attention probabilities.
+         max_position_embeddings (`int`, *optional*, defaults to 512):
+             The maximum sequence length that this model might ever be used with. Typically set this to something
+             large just in case (e.g., 512 or 1024 or 2048).
+         type_vocab_size (`int`, *optional*, defaults to 0):
+             The vocabulary size of the `token_type_ids` passed when calling [`DebertaModel`] or [`TFDebertaModel`].
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         layer_norm_eps (`float`, *optional*, defaults to 1e-7):
+             The epsilon used by the layer normalization layers.
+         relative_attention (`bool`, *optional*, defaults to `False`):
+             Whether to use relative position encoding.
+         max_relative_positions (`int`, *optional*, defaults to -1):
+             The range of relative positions `[-max_position_embeddings, max_position_embeddings]`. Use the same
+             value as `max_position_embeddings`.
+         pad_token_id (`int`, *optional*, defaults to 0):
+             The value used to pad `input_ids`.
+         position_biased_input (`bool`, *optional*, defaults to `True`):
+             Whether to add absolute position embeddings to the content embeddings.
+         pos_att_type (`List[str]`, *optional*):
+             The type of relative position attention, it can be a combination of `["p2c", "c2p"]`, e.g. `["p2c"]`,
+             `["p2c", "c2p"]`.
+
+     Example:
+
+     ```python
+     >>> from transformers import DebertaConfig, DebertaModel
+
+     >>> # Initializing a DeBERTa microsoft/deberta-base style configuration
+     >>> configuration = DebertaConfig()
+
+     >>> # Initializing a model (with random weights) from the microsoft/deberta-base style configuration
+     >>> model = DebertaModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "deberta"
+
+     def __init__(
+         self,
+         vocab_size=50265,
+         hidden_size=768,
+         num_hidden_layers=12,
+         num_attention_heads=12,
+         intermediate_size=3072,
+         hidden_act="gelu",
+         hidden_dropout_prob=0.1,
+         attention_probs_dropout_prob=0.1,
+         max_position_embeddings=512,
+         type_vocab_size=0,
+         initializer_range=0.02,
+         layer_norm_eps=1e-7,
+         relative_attention=False,
+         max_relative_positions=-1,
+         pad_token_id=0,
+         position_biased_input=True,
+         pos_att_type=None,
+         pooler_dropout=0,
+         pooler_hidden_act="gelu",
+         **kwargs,
+     ):
+         super().__init__(**kwargs)
+
+         self.hidden_size = hidden_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.intermediate_size = intermediate_size
+         self.hidden_act = hidden_act
+         self.hidden_dropout_prob = hidden_dropout_prob
+         self.attention_probs_dropout_prob = attention_probs_dropout_prob
+         self.max_position_embeddings = max_position_embeddings
+         self.type_vocab_size = type_vocab_size
+         self.initializer_range = initializer_range
+         self.relative_attention = relative_attention
+         self.max_relative_positions = max_relative_positions
+         self.pad_token_id = pad_token_id
+         self.position_biased_input = position_biased_input
+
+         # Backwards compatibility: accept a pipe-separated string such as "c2p|p2c"
+         if isinstance(pos_att_type, str):
+             pos_att_type = [x.strip() for x in pos_att_type.lower().split("|")]
+
+         self.pos_att_type = pos_att_type
+         self.vocab_size = vocab_size
+         self.layer_norm_eps = layer_norm_eps
+
+         self.pooler_hidden_size = kwargs.get("pooler_hidden_size", hidden_size)
+         self.pooler_dropout = pooler_dropout
+         self.pooler_hidden_act = pooler_hidden_act
+
+
+ # Copied from transformers.models.deberta_v2.configuration_deberta_v2.DebertaV2OnnxConfig
+ class DebertaOnnxConfig(OnnxConfig):
+     @property
+     def inputs(self) -> Mapping[str, Mapping[int, str]]:
+         if self.task == "multiple-choice":
+             dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"}
+         else:
+             dynamic_axis = {0: "batch", 1: "sequence"}
+         if self._config.type_vocab_size > 0:
+             return OrderedDict(
+                 [("input_ids", dynamic_axis), ("attention_mask", dynamic_axis), ("token_type_ids", dynamic_axis)]
+             )
+         else:
+             return OrderedDict([("input_ids", dynamic_axis), ("attention_mask", dynamic_axis)])
+
+     @property
+     def default_onnx_opset(self) -> int:
+         return 12
+
+     def generate_dummy_inputs(
+         self,
+         preprocessor: Union["PreTrainedTokenizerBase", "FeatureExtractionMixin"],
+         batch_size: int = -1,
+         seq_length: int = -1,
+         num_choices: int = -1,
+         is_pair: bool = False,
+         framework: Optional["TensorType"] = None,
+         num_channels: int = 3,
+         image_width: int = 40,
+         image_height: int = 40,
+         tokenizer: "PreTrainedTokenizerBase" = None,
+     ) -> Mapping[str, Any]:
+         dummy_inputs = super().generate_dummy_inputs(preprocessor=preprocessor, framework=framework)
+         if self._config.type_vocab_size == 0 and "token_type_ids" in dummy_inputs:
+             del dummy_inputs["token_type_ids"]
+         return dummy_inputs
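One detail worth calling out: the backwards-compatibility branch in `__init__` accepts `pos_att_type` as a pipe-separated string and normalizes it to a lowercase list, so the two spellings below are equivalent (illustration only):

```python
# Illustration only: both forms produce pos_att_type == ["c2p", "p2c"].
from configuration_deberta import DebertaConfig

legacy = DebertaConfig(pos_att_type="C2P|P2C")        # old string form
modern = DebertaConfig(pos_att_type=["c2p", "p2c"])   # list form
assert legacy.pos_att_type == modern.pos_att_type == ["c2p", "p2c"]
```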
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8a1b6e450889d22ab47122c4a7998498be289b152a98ea42b91733d4ac51019
+ size 633999824
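The entry above is a Git LFS pointer, not the weights themselves; the roughly 634 MB `model.safetensors` blob is stored via LFS. After fetching the real file (for example with `git lfs pull` or `huggingface_hub`), its SHA-256 and size should match the oid and size recorded in the pointer. A small verification sketch:

```python
# Sketch only: verify a downloaded model.safetensors against the LFS pointer above.
import hashlib
import os

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

path = "model.safetensors"
assert os.path.getsize(path) == 633999824
assert sha256_of(path) == "c8a1b6e450889d22ab47122c4a7998498be289b152a98ea42b91733d4ac51019"
```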
modeling_deberta.py ADDED
@@ -0,0 +1,1500 @@
1
+ # coding=utf-8
2
+ # Copyright 2020 Microsoft and the Hugging Face Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """PyTorch DeBERTa model."""
16
+
17
+ from collections.abc import Sequence
18
+ from typing import Optional, Tuple, Union
19
+
20
+ import os, pickle
21
+ import transformers
22
+ import torch
23
+ import torch.utils.checkpoint
24
+ from torch import nn
25
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
26
+
27
+ from transformers.activations import ACT2FN
28
+ from transformers.modeling_outputs import (
29
+ BaseModelOutput,
30
+ MaskedLMOutput,
31
+ QuestionAnsweringModelOutput,
32
+ SequenceClassifierOutput,
33
+ TokenClassifierOutput,
34
+ )
35
+ from transformers.modeling_utils import PreTrainedModel
36
+ from transformers.pytorch_utils import softmax_backward_data
37
+ from transformers.utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, logging
38
+ from .configuration_deberta import DebertaConfig
39
+
40
+
41
+ logger = logging.get_logger(__name__)
42
+ _CONFIG_FOR_DOC = "DebertaConfig"
43
+ _CHECKPOINT_FOR_DOC = "microsoft/deberta-base"
44
+
45
+ # Masked LM docstring
46
+ _CHECKPOINT_FOR_MASKED_LM = "lsanochkin/deberta-large-feedback"
47
+ _MASKED_LM_EXPECTED_OUTPUT = "' Paris'"
48
+ _MASKED_LM_EXPECTED_LOSS = "0.54"
49
+
50
+ # QuestionAnswering docstring
51
+ _CHECKPOINT_FOR_QA = "Palak/microsoft_deberta-large_squad"
52
+ _QA_EXPECTED_OUTPUT = "' a nice puppet'"
53
+ _QA_EXPECTED_LOSS = 0.14
54
+ _QA_TARGET_START_INDEX = 12
55
+ _QA_TARGET_END_INDEX = 14
56
+
57
+
58
+ class ContextPooler(nn.Module):
59
+ def __init__(self, config):
60
+ super().__init__()
61
+ self.dense = nn.Linear(config.pooler_hidden_size, config.pooler_hidden_size)
62
+ self.dropout = StableDropout(config.pooler_dropout)
63
+ self.config = config
64
+
65
+ def forward(self, hidden_states):
66
+ # We "pool" the model by simply taking the hidden state corresponding
67
+ # to the first token.
68
+
69
+ context_token = hidden_states[:, 0]
70
+ context_token = self.dropout(context_token)
71
+ pooled_output = self.dense(context_token)
72
+ pooled_output = ACT2FN[self.config.pooler_hidden_act](pooled_output)
73
+ return pooled_output
74
+
75
+ @property
76
+ def output_dim(self):
77
+ return self.config.hidden_size
78
+
79
+
80
+ class XSoftmax(torch.autograd.Function):
81
+ """
82
+ Masked Softmax which is optimized for saving memory
83
+
84
+ Args:
85
+ input (`torch.tensor`): The input tensor that will apply softmax.
86
+ mask (`torch.IntTensor`):
87
+ The mask matrix where 0 indicate that element will be ignored in the softmax calculation.
88
+ dim (int): The dimension that will apply softmax
89
+
90
+ Example:
91
+
92
+ ```python
93
+ >>> import torch
94
+ >>> from transformers.models.deberta.modeling_deberta import XSoftmax
95
+
96
+ >>> # Make a tensor
97
+ >>> x = torch.randn([4, 20, 100])
98
+
99
+ >>> # Create a mask
100
+ >>> mask = (x > 0).int()
101
+
102
+ >>> # Specify the dimension to apply softmax
103
+ >>> dim = -1
104
+
105
+ >>> y = XSoftmax.apply(x, mask, dim)
106
+ ```"""
107
+
108
+ @staticmethod
109
+ def forward(ctx, input, mask, dim):
110
+ ctx.dim = dim
111
+ rmask = ~(mask.to(torch.bool))
112
+
113
+ output = input.masked_fill(rmask, torch.tensor(torch.finfo(input.dtype).min))
114
+ output = torch.softmax(output, ctx.dim)
115
+ output.masked_fill_(rmask, 0)
116
+ ctx.save_for_backward(output)
117
+ return output
118
+
119
+ @staticmethod
120
+ def backward(ctx, grad_output):
121
+ (output,) = ctx.saved_tensors
122
+ inputGrad = softmax_backward_data(ctx, grad_output, output, ctx.dim, output)
123
+ return inputGrad, None, None
124
+
125
+ @staticmethod
126
+ def symbolic(g, self, mask, dim):
127
+ import torch.onnx.symbolic_helper as sym_help
128
+ from torch.onnx.symbolic_opset9 import masked_fill, softmax
129
+
130
+ mask_cast_value = g.op("Cast", mask, to_i=sym_help.cast_pytorch_to_onnx["Long"])
131
+ r_mask = g.op(
132
+ "Cast",
133
+ g.op("Sub", g.op("Constant", value_t=torch.tensor(1, dtype=torch.int64)), mask_cast_value),
134
+ to_i=sym_help.cast_pytorch_to_onnx["Bool"],
135
+ )
136
+ output = masked_fill(
137
+ g, self, r_mask, g.op("Constant", value_t=torch.tensor(torch.finfo(self.type().dtype()).min))
138
+ )
139
+ output = softmax(g, output, dim)
140
+ return masked_fill(g, output, r_mask, g.op("Constant", value_t=torch.tensor(0, dtype=torch.bool)))
141
+
142
+
143
+ class DropoutContext:
144
+ def __init__(self):
145
+ self.dropout = 0
146
+ self.mask = None
147
+ self.scale = 1
148
+ self.reuse_mask = True
149
+
150
+
151
+ def get_mask(input, local_context):
152
+ if not isinstance(local_context, DropoutContext):
153
+ dropout = local_context
154
+ mask = None
155
+ else:
156
+ dropout = local_context.dropout
157
+ dropout *= local_context.scale
158
+ mask = local_context.mask if local_context.reuse_mask else None
159
+
160
+ if dropout > 0 and mask is None:
161
+ mask = (1 - torch.empty_like(input).bernoulli_(1 - dropout)).to(torch.bool)
162
+
163
+ if isinstance(local_context, DropoutContext):
164
+ if local_context.mask is None:
165
+ local_context.mask = mask
166
+
167
+ return mask, dropout
168
+
169
+
170
+ class XDropout(torch.autograd.Function):
171
+ """Optimized dropout function to save computation and memory by using mask operation instead of multiplication."""
172
+
173
+ @staticmethod
174
+ def forward(ctx, input, local_ctx):
175
+ mask, dropout = get_mask(input, local_ctx)
176
+ ctx.scale = 1.0 / (1 - dropout)
177
+ if dropout > 0:
178
+ ctx.save_for_backward(mask)
179
+ return input.masked_fill(mask, 0) * ctx.scale
180
+ else:
181
+ return input
182
+
183
+ @staticmethod
184
+ def backward(ctx, grad_output):
185
+ if ctx.scale > 1:
186
+ (mask,) = ctx.saved_tensors
187
+ return grad_output.masked_fill(mask, 0) * ctx.scale, None
188
+ else:
189
+ return grad_output, None
190
+
191
+ @staticmethod
192
+ def symbolic(g: torch._C.Graph, input: torch._C.Value, local_ctx: Union[float, DropoutContext]) -> torch._C.Value:
193
+ from torch.onnx import symbolic_opset12
194
+
195
+ dropout_p = local_ctx
196
+ if isinstance(local_ctx, DropoutContext):
197
+ dropout_p = local_ctx.dropout
198
+ # StableDropout only calls this function when training.
199
+ train = True
200
+ # TODO: We should check if the opset_version being used to export
201
+ # is > 12 here, but there's no good way to do that. As-is, if the
202
+ # opset_version < 12, export will fail with a CheckerError.
203
+ # Once https://github.com/pytorch/pytorch/issues/78391 is fixed, do something like:
204
+ # if opset_version < 12:
205
+ # return torch.onnx.symbolic_opset9.dropout(g, input, dropout_p, train)
206
+ return symbolic_opset12.dropout(g, input, dropout_p, train)
207
+
208
+
209
+ class StableDropout(nn.Module):
210
+ """
211
+ Optimized dropout module for stabilizing the training
212
+
213
+ Args:
214
+ drop_prob (float): the dropout probabilities
215
+ """
216
+
217
+ def __init__(self, drop_prob):
218
+ super().__init__()
219
+ self.drop_prob = drop_prob
220
+ self.count = 0
221
+ self.context_stack = None
222
+
223
+ def forward(self, x):
224
+ """
225
+ Call the module
226
+
227
+ Args:
228
+ x (`torch.tensor`): The input tensor to apply dropout
229
+ """
230
+ if self.training and self.drop_prob > 0:
231
+ return XDropout.apply(x, self.get_context())
232
+ return x
233
+
234
+ def clear_context(self):
235
+ self.count = 0
236
+ self.context_stack = None
237
+
238
+ def init_context(self, reuse_mask=True, scale=1):
239
+ if self.context_stack is None:
240
+ self.context_stack = []
241
+ self.count = 0
242
+ for c in self.context_stack:
243
+ c.reuse_mask = reuse_mask
244
+ c.scale = scale
245
+
246
+ def get_context(self):
247
+ if self.context_stack is not None:
248
+ if self.count >= len(self.context_stack):
249
+ self.context_stack.append(DropoutContext())
250
+ ctx = self.context_stack[self.count]
251
+ ctx.dropout = self.drop_prob
252
+ self.count += 1
253
+ return ctx
254
+ else:
255
+ return self.drop_prob
256
+
257
+
258
+ class DebertaLayerNorm(nn.Module):
259
+ """LayerNorm module in the TF style (epsilon inside the square root)."""
260
+
261
+ def __init__(self, size, eps=1e-12):
262
+ super().__init__()
263
+ self.weight = nn.Parameter(torch.ones(size))
264
+ self.bias = nn.Parameter(torch.zeros(size))
265
+ self.variance_epsilon = eps
266
+
267
+ def forward(self, hidden_states):
268
+ input_type = hidden_states.dtype
269
+ hidden_states = hidden_states.float()
270
+ mean = hidden_states.mean(-1, keepdim=True)
271
+ variance = (hidden_states - mean).pow(2).mean(-1, keepdim=True)
272
+ hidden_states = (hidden_states - mean) / torch.sqrt(variance + self.variance_epsilon)
273
+ hidden_states = hidden_states.to(input_type)
274
+ y = self.weight * hidden_states + self.bias
275
+ return y
276
+
277
+
278
+ class DebertaSelfOutput(nn.Module):
279
+ def __init__(self, config):
280
+ super().__init__()
281
+ self.dense = nn.Linear(config.hidden_size, config.hidden_size)
282
+ self.LayerNorm = DebertaLayerNorm(config.hidden_size, config.layer_norm_eps)
283
+ self.dropout = StableDropout(config.hidden_dropout_prob)
284
+
285
+ def forward(self, hidden_states, input_tensor):
286
+ hidden_states = self.dense(hidden_states)
287
+ hidden_states = self.dropout(hidden_states)
288
+ hidden_states = self.LayerNorm(hidden_states + input_tensor)
289
+ return hidden_states
290
+
291
+
292
+ class DebertaAttention(nn.Module):
293
+ def __init__(self, config):
294
+ super().__init__()
295
+ self.self = DisentangledSelfAttention(config)
296
+ self.output = DebertaSelfOutput(config)
297
+ self.config = config
298
+
299
+ def forward(
300
+ self,
301
+ hidden_states,
302
+ attention_mask,
303
+ output_attentions=False,
304
+ query_states=None,
305
+ relative_pos=None,
306
+ rel_embeddings=None,
307
+ ):
308
+ self_output = self.self(
309
+ hidden_states,
310
+ attention_mask,
311
+ output_attentions,
312
+ query_states=query_states,
313
+ relative_pos=relative_pos,
314
+ rel_embeddings=rel_embeddings,
315
+ )
316
+ if output_attentions:
317
+ self_output, att_matrix = self_output
318
+ if query_states is None:
319
+ query_states = hidden_states
320
+ attention_output = self.output(self_output, query_states)
321
+
322
+ if output_attentions:
323
+ return (attention_output, att_matrix)
324
+ else:
325
+ return attention_output
326
+
327
+
328
+ # Copied from transformers.models.bert.modeling_bert.BertIntermediate with Bert->Deberta
329
+ class DebertaIntermediate(nn.Module):
330
+ def __init__(self, config):
331
+ super().__init__()
332
+ self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
333
+ if isinstance(config.hidden_act, str):
334
+ self.intermediate_act_fn = ACT2FN[config.hidden_act]
335
+ else:
336
+ self.intermediate_act_fn = config.hidden_act
337
+
338
+ def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
339
+ hidden_states = self.dense(hidden_states)
340
+ hidden_states = self.intermediate_act_fn(hidden_states)
341
+ return hidden_states
342
+
343
+
344
+ class DebertaOutput(nn.Module):
345
+ def __init__(self, config):
346
+ super().__init__()
347
+ self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
348
+ self.LayerNorm = DebertaLayerNorm(config.hidden_size, config.layer_norm_eps)
349
+ self.dropout = StableDropout(config.hidden_dropout_prob)
350
+ self.config = config
351
+
352
+ def forward(self, hidden_states, input_tensor):
353
+ hidden_states = self.dense(hidden_states)
354
+ hidden_states = self.dropout(hidden_states)
355
+ hidden_states = self.LayerNorm(hidden_states + input_tensor)
356
+ return hidden_states
357
+
358
+
359
+ class DebertaLayer(nn.Module):
360
+ def __init__(self, config):
361
+ super().__init__()
362
+ self.attention = DebertaAttention(config)
363
+ self.intermediate = DebertaIntermediate(config)
364
+ self.output = DebertaOutput(config)
365
+
366
+ def forward(
367
+ self,
368
+ hidden_states,
369
+ attention_mask,
370
+ query_states=None,
371
+ relative_pos=None,
372
+ rel_embeddings=None,
373
+ output_attentions=False,
374
+ ):
375
+ attention_output = self.attention(
376
+ hidden_states,
377
+ attention_mask,
378
+ output_attentions=output_attentions,
379
+ query_states=query_states,
380
+ relative_pos=relative_pos,
381
+ rel_embeddings=rel_embeddings,
382
+ )
383
+ if output_attentions:
384
+ attention_output, att_matrix = attention_output
385
+ intermediate_output = self.intermediate(attention_output)
386
+ layer_output = self.output(intermediate_output, attention_output)
387
+ if output_attentions:
388
+ return (layer_output, att_matrix)
389
+ else:
390
+ return layer_output
391
+
392
+
393
+ class DebertaEncoder(nn.Module):
394
+ """Modified BertEncoder with relative position bias support"""
395
+
396
+ def __init__(self, config):
397
+ super().__init__()
398
+ self.layer = nn.ModuleList([DebertaLayer(config) for _ in range(config.num_hidden_layers)])
399
+ self.relative_attention = getattr(config, "relative_attention", False)
400
+ if self.relative_attention:
401
+ self.max_relative_positions = getattr(config, "max_relative_positions", -1)
402
+ if self.max_relative_positions < 1:
403
+ self.max_relative_positions = config.max_position_embeddings
404
+ self.rel_embeddings = nn.Embedding(self.max_relative_positions * 2, config.hidden_size)
405
+ self.gradient_checkpointing = False
406
+
407
+ def get_rel_embedding(self):
408
+ rel_embeddings = self.rel_embeddings.weight if self.relative_attention else None
409
+ return rel_embeddings
410
+
411
+ def get_attention_mask(self, attention_mask):
412
+ if attention_mask.dim() <= 2:
413
+ extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
414
+ attention_mask = extended_attention_mask * extended_attention_mask.squeeze(-2).unsqueeze(-1)
415
+ elif attention_mask.dim() == 3:
416
+ attention_mask = attention_mask.unsqueeze(1)
417
+
418
+ return attention_mask
419
+
420
+ def get_rel_pos(self, hidden_states, query_states=None, relative_pos=None):
421
+ if self.relative_attention and relative_pos is None:
422
+ q = query_states.size(-2) if query_states is not None else hidden_states.size(-2)
423
+ relative_pos = build_relative_position(q, hidden_states.size(-2), hidden_states.device)
424
+ return relative_pos
425
+
426
+ def forward(
427
+ self,
428
+ hidden_states,
429
+ attention_mask,
430
+ output_hidden_states=True,
431
+ output_attentions=False,
432
+ query_states=None,
433
+ relative_pos=None,
434
+ return_dict=True,
435
+ ):
436
+ attention_mask = self.get_attention_mask(attention_mask)
437
+ relative_pos = self.get_rel_pos(hidden_states, query_states, relative_pos)
438
+
439
+ all_hidden_states = () if output_hidden_states else None
440
+ all_attentions = () if output_attentions else None
441
+
442
+ if isinstance(hidden_states, Sequence):
443
+ next_kv = hidden_states[0]
444
+ else:
445
+ next_kv = hidden_states
446
+ rel_embeddings = self.get_rel_embedding()
447
+ for i, layer_module in enumerate(self.layer):
448
+ if output_hidden_states:
449
+ all_hidden_states = all_hidden_states + (hidden_states,)
450
+
451
+ if self.gradient_checkpointing and self.training:
452
+ hidden_states = self._gradient_checkpointing_func(
453
+ layer_module.__call__,
454
+ next_kv,
455
+ attention_mask,
456
+ query_states,
457
+ relative_pos,
458
+ rel_embeddings,
459
+ output_attentions,
460
+ )
461
+ else:
462
+ hidden_states = layer_module(
463
+ next_kv,
464
+ attention_mask,
465
+ query_states=query_states,
466
+ relative_pos=relative_pos,
467
+ rel_embeddings=rel_embeddings,
468
+ output_attentions=output_attentions,
469
+ )
470
+
471
+ if output_attentions:
472
+ hidden_states, att_m = hidden_states
473
+
474
+ if query_states is not None:
475
+ query_states = hidden_states
476
+ if isinstance(hidden_states, Sequence):
477
+ next_kv = hidden_states[i + 1] if i + 1 < len(self.layer) else None
478
+ else:
479
+ next_kv = hidden_states
480
+
481
+ if output_attentions:
482
+ all_attentions = all_attentions + (att_m,)
483
+
484
+ if output_hidden_states:
485
+ all_hidden_states = all_hidden_states + (hidden_states,)
486
+
487
+ if not return_dict:
488
+ return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None)
489
+ return BaseModelOutput(
490
+ last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions
491
+ )
492
+
493
+
494
+ def build_relative_position(query_size, key_size, device):
495
+ """
496
+ Build relative position according to the query and key
497
+
498
+ We assume the absolute position of query \\(P_q\\) is range from (0, query_size) and the absolute position of key
499
+ \\(P_k\\) is range from (0, key_size), The relative positions from query to key is \\(R_{q \\rightarrow k} = P_q -
500
+ P_k\\)
501
+
502
+ Args:
503
+ query_size (int): the length of query
504
+ key_size (int): the length of key
505
+
506
+ Return:
507
+ `torch.LongTensor`: A tensor with shape [1, query_size, key_size]
508
+
509
+ """
510
+
511
+ q_ids = torch.arange(query_size, dtype=torch.long, device=device)
512
+ k_ids = torch.arange(key_size, dtype=torch.long, device=device)
513
+ rel_pos_ids = q_ids[:, None] - k_ids.view(1, -1).repeat(query_size, 1)
514
+ rel_pos_ids = rel_pos_ids[:query_size, :]
515
+ rel_pos_ids = rel_pos_ids.unsqueeze(0)
516
+ return rel_pos_ids
517
+
518
+
519
+ @torch.jit.script
520
+ def c2p_dynamic_expand(c2p_pos, query_layer, relative_pos):
521
+ return c2p_pos.expand([query_layer.size(0), query_layer.size(1), query_layer.size(2), relative_pos.size(-1)])
522
+
523
+
524
+ @torch.jit.script
525
+ def p2c_dynamic_expand(c2p_pos, query_layer, key_layer):
526
+ return c2p_pos.expand([query_layer.size(0), query_layer.size(1), key_layer.size(-2), key_layer.size(-2)])
527
+
528
+
529
+ @torch.jit.script
530
+ def pos_dynamic_expand(pos_index, p2c_att, key_layer):
531
+ return pos_index.expand(p2c_att.size()[:2] + (pos_index.size(-2), key_layer.size(-2)))
532
+
533
+
534
+ class DisentangledSelfAttention(nn.Module):
535
+ """
536
+ Disentangled self-attention module
537
+
538
+ Parameters:
539
+ config (`str`):
540
+ A model config class instance with the configuration to build a new model. The schema is similar to
541
+ *BertConfig*, for more details, please refer [`DebertaConfig`]
542
+
543
+ """
544
+
545
+ def __init__(self, config):
546
+ super().__init__()
547
+ if config.hidden_size % config.num_attention_heads != 0:
548
+ raise ValueError(
549
+ f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention "
550
+ f"heads ({config.num_attention_heads})"
551
+ )
552
+ self.num_attention_heads = config.num_attention_heads
553
+ self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
554
+ self.all_head_size = self.num_attention_heads * self.attention_head_size
555
+ self.in_proj = nn.Linear(config.hidden_size, self.all_head_size * 3, bias=False)
556
+ self.q_bias = nn.Parameter(torch.zeros((self.all_head_size), dtype=torch.float))
557
+ self.v_bias = nn.Parameter(torch.zeros((self.all_head_size), dtype=torch.float))
558
+ self.pos_att_type = config.pos_att_type if config.pos_att_type is not None else []
559
+
560
+ self.relative_attention = getattr(config, "relative_attention", False)
561
+ self.talking_head = getattr(config, "talking_head", False)
562
+
563
+ if self.talking_head:
564
+ self.head_logits_proj = nn.Linear(config.num_attention_heads, config.num_attention_heads, bias=False)
565
+ self.head_weights_proj = nn.Linear(config.num_attention_heads, config.num_attention_heads, bias=False)
566
+
567
+ if self.relative_attention:
568
+ self.max_relative_positions = getattr(config, "max_relative_positions", -1)
569
+ if self.max_relative_positions < 1:
570
+ self.max_relative_positions = config.max_position_embeddings
571
+ self.pos_dropout = StableDropout(config.hidden_dropout_prob)
572
+
573
+ if "c2p" in self.pos_att_type:
574
+ self.pos_proj = nn.Linear(config.hidden_size, self.all_head_size, bias=False)
575
+ if "p2c" in self.pos_att_type:
576
+ self.pos_q_proj = nn.Linear(config.hidden_size, self.all_head_size)
577
+
578
+ self.dropout = StableDropout(config.attention_probs_dropout_prob)
579
+
580
+ def transpose_for_scores(self, x):
581
+ new_x_shape = x.size()[:-1] + (self.num_attention_heads, -1)
582
+ x = x.view(new_x_shape)
583
+ return x.permute(0, 2, 1, 3)
584
+
585
+ def forward(
586
+ self,
587
+ hidden_states,
588
+ attention_mask,
589
+ output_attentions=False,
590
+ query_states=None,
591
+ relative_pos=None,
592
+ rel_embeddings=None,
593
+ ):
594
+ """
595
+ Call the module
596
+
597
+ Args:
598
+ hidden_states (`torch.FloatTensor`):
599
+ Input states to the module usually the output from previous layer, it will be the Q,K and V in
600
+ *Attention(Q,K,V)*
601
+
602
+ attention_mask (`torch.BoolTensor`):
603
+ An attention mask matrix of shape [*B*, *N*, *N*] where *B* is the batch size, *N* is the maximum
604
+ sequence length in which element [i,j] = *1* means the *i* th token in the input can attend to the *j*
605
+ th token.
606
+
607
+ output_attentions (`bool`, *optional*):
608
+ Whether return the attention matrix.
609
+
610
+ query_states (`torch.FloatTensor`, *optional*):
611
+ The *Q* state in *Attention(Q,K,V)*.
612
+
613
+ relative_pos (`torch.LongTensor`):
614
+ The relative position encoding between the tokens in the sequence. It's of shape [*B*, *N*, *N*] with
615
+ values ranging in [*-max_relative_positions*, *max_relative_positions*].
616
+
617
+ rel_embeddings (`torch.FloatTensor`):
618
+ The embedding of relative distances. It's a tensor of shape [\\(2 \\times
619
+ \\text{max_relative_positions}\\), *hidden_size*].
620
+
621
+
622
+ """
623
+ if query_states is None:
624
+ qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
625
+ query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
626
+ else:
627
+
628
+ def linear(w, b, x):
629
+ if b is not None:
630
+ return torch.matmul(x, w.t()) + b.t()
631
+ else:
632
+ return torch.matmul(x, w.t()) # + b.t()
633
+
634
+ ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
635
+ qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
636
+ qkvb = [None] * 3
637
+
638
+ q = linear(qkvw[0], qkvb[0], query_states.to(dtype=qkvw[0].dtype))
639
+ k, v = [linear(qkvw[i], qkvb[i], hidden_states.to(dtype=qkvw[i].dtype)) for i in range(1, 3)]
640
+ query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
641
+
642
+ query_layer = query_layer + self.transpose_for_scores(self.q_bias[None, None, :])
643
+ value_layer = value_layer + self.transpose_for_scores(self.v_bias[None, None, :])
644
+
645
+ rel_att = None
646
+ # Take the dot product between "query" and "key" to get the raw attention scores.
647
+ scale_factor = 1 + len(self.pos_att_type)
648
+ scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float) * scale_factor)
649
+ query_layer = query_layer / scale.to(dtype=query_layer.dtype)
650
+ attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
651
+ if self.relative_attention:
652
+ rel_embeddings = self.pos_dropout(rel_embeddings)
653
+ rel_att = self.disentangled_att_bias(query_layer, key_layer, relative_pos, rel_embeddings, scale_factor)
654
+
655
+ if rel_att is not None:
656
+ attention_scores = attention_scores + rel_att
657
+
658
+ # bxhxlxd
659
+ if self.talking_head:
660
+ attention_scores = self.head_logits_proj(attention_scores.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
661
+
662
+ attention_probs = XSoftmax.apply(attention_scores, attention_mask, -1)
663
+ attention_probs = self.dropout(attention_probs)
664
+ if self.talking_head:
665
+ attention_probs = self.head_weights_proj(attention_probs.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
666
+
667
+ context_layer = torch.matmul(attention_probs, value_layer)
668
+ context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
669
+ new_context_layer_shape = context_layer.size()[:-2] + (-1,)
670
+ context_layer = context_layer.view(new_context_layer_shape)
671
+ if output_attentions:
672
+ return (context_layer, attention_probs)
673
+ else:
674
+ return context_layer
675
+
676
+ def disentangled_att_bias(self, query_layer, key_layer, relative_pos, rel_embeddings, scale_factor):
677
+ if relative_pos is None:
678
+ q = query_layer.size(-2)
679
+ relative_pos = build_relative_position(q, key_layer.size(-2), query_layer.device)
680
+ if relative_pos.dim() == 2:
681
+ relative_pos = relative_pos.unsqueeze(0).unsqueeze(0)
682
+ elif relative_pos.dim() == 3:
683
+ relative_pos = relative_pos.unsqueeze(1)
684
+ # bxhxqxk
685
+ elif relative_pos.dim() != 4:
686
+ raise ValueError(f"Relative position ids must be of dim 2 or 3 or 4. {relative_pos.dim()}")
687
+
688
+ att_span = min(max(query_layer.size(-2), key_layer.size(-2)), self.max_relative_positions)
689
+ relative_pos = relative_pos.long().to(query_layer.device)
690
+ rel_embeddings = rel_embeddings[
691
+ self.max_relative_positions - att_span : self.max_relative_positions + att_span, :
692
+ ].unsqueeze(0)
693
+
694
+ score = 0
695
+
696
+ # content->position
697
+ if "c2p" in self.pos_att_type:
698
+ pos_key_layer = self.pos_proj(rel_embeddings)
699
+ pos_key_layer = self.transpose_for_scores(pos_key_layer)
700
+ c2p_att = torch.matmul(query_layer, pos_key_layer.transpose(-1, -2))
701
+ c2p_pos = torch.clamp(relative_pos + att_span, 0, att_span * 2 - 1)
702
+ c2p_att = torch.gather(c2p_att, dim=-1, index=c2p_dynamic_expand(c2p_pos, query_layer, relative_pos))
703
+ score += c2p_att
704
+
705
+ # position->content
706
+ if "p2c" in self.pos_att_type:
707
+ pos_query_layer = self.pos_q_proj(rel_embeddings)
708
+ pos_query_layer = self.transpose_for_scores(pos_query_layer)
709
+ pos_query_layer /= torch.sqrt(torch.tensor(pos_query_layer.size(-1), dtype=torch.float) * scale_factor)
710
+ if query_layer.size(-2) != key_layer.size(-2):
711
+ r_pos = build_relative_position(key_layer.size(-2), key_layer.size(-2), query_layer.device)
712
+ else:
713
+ r_pos = relative_pos
714
+ p2c_pos = torch.clamp(-r_pos + att_span, 0, att_span * 2 - 1)
715
+ p2c_att = torch.matmul(key_layer, pos_query_layer.transpose(-1, -2).to(dtype=key_layer.dtype))
716
+ p2c_att = torch.gather(
717
+ p2c_att, dim=-1, index=p2c_dynamic_expand(p2c_pos, query_layer, key_layer)
718
+ ).transpose(-1, -2)
719
+
720
+ if query_layer.size(-2) != key_layer.size(-2):
721
+ pos_index = relative_pos[:, :, :, 0].unsqueeze(-1)
722
+ p2c_att = torch.gather(p2c_att, dim=-2, index=pos_dynamic_expand(pos_index, p2c_att, key_layer))
723
+ score += p2c_att
724
+
725
+ return score
726
+
727
+
728
+ class DebertaEmbeddings(nn.Module):
729
+ """Construct the embeddings from word, position and token_type embeddings."""
730
+
731
+ def __init__(self, config):
732
+ super().__init__()
733
+ pad_token_id = getattr(config, "pad_token_id", 0)
734
+ self.embedding_size = getattr(config, "embedding_size", config.hidden_size)
735
+ self.word_embeddings = nn.Embedding(config.vocab_size, self.embedding_size, padding_idx=pad_token_id)
736
+
737
+ self.position_biased_input = getattr(config, "position_biased_input", True)
738
+ if not self.position_biased_input:
739
+ self.position_embeddings = None
740
+ else:
741
+ self.position_embeddings = nn.Embedding(config.max_position_embeddings, self.embedding_size)
742
+
743
+ if config.type_vocab_size > 0:
744
+ self.token_type_embeddings = nn.Embedding(config.type_vocab_size, self.embedding_size)
745
+
746
+ if self.embedding_size != config.hidden_size:
747
+ self.embed_proj = nn.Linear(self.embedding_size, config.hidden_size, bias=False)
748
+ self.LayerNorm = DebertaLayerNorm(config.hidden_size, config.layer_norm_eps)
749
+ self.dropout = StableDropout(config.hidden_dropout_prob)
750
+ self.config = config
751
+
752
+ # position_ids (1, len position emb) is contiguous in memory and exported when serialized
753
+ self.register_buffer(
754
+ "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)), persistent=False
755
+ )
756
+
757
+ def forward(self, input_ids=None, token_type_ids=None, position_ids=None, mask=None, inputs_embeds=None):
758
+ if input_ids is not None:
759
+ input_shape = input_ids.size()
760
+ else:
761
+ input_shape = inputs_embeds.size()[:-1]
762
+
763
+ seq_length = input_shape[1]
764
+
765
+ if position_ids is None:
766
+ position_ids = self.position_ids[:, :seq_length]
767
+
768
+ if token_type_ids is None:
769
+ token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)
770
+
771
+ if inputs_embeds is None:
772
+ inputs_embeds = self.word_embeddings(input_ids)
773
+
774
+ if self.position_embeddings is not None:
775
+ position_embeddings = self.position_embeddings(position_ids.long())
776
+ else:
777
+ position_embeddings = torch.zeros_like(inputs_embeds)
778
+
779
+ embeddings = inputs_embeds
780
+ if self.position_biased_input:
781
+ embeddings += position_embeddings
782
+ if self.config.type_vocab_size > 0:
783
+ token_type_embeddings = self.token_type_embeddings(token_type_ids)
784
+ embeddings += token_type_embeddings
785
+
786
+ if self.embedding_size != self.config.hidden_size:
787
+ embeddings = self.embed_proj(embeddings)
788
+
789
+ embeddings = self.LayerNorm(embeddings)
790
+
791
+ if mask is not None:
792
+ if mask.dim() != embeddings.dim():
793
+ if mask.dim() == 4:
794
+ mask = mask.squeeze(1).squeeze(1)
795
+ mask = mask.unsqueeze(2)
796
+ mask = mask.to(embeddings.dtype)
797
+
798
+ embeddings = embeddings * mask
799
+
800
+ embeddings = self.dropout(embeddings)
801
+ return embeddings
802
+
803
+
804
+ class DebertaPreTrainedModel(PreTrainedModel):
805
+ """
806
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
807
+ models.
808
+ """
809
+
810
+ config_class = DebertaConfig
811
+ base_model_prefix = "deberta"
812
+ _keys_to_ignore_on_load_unexpected = ["position_embeddings"]
813
+ supports_gradient_checkpointing = True
814
+
815
+ def _init_weights(self, module):
816
+ """Initialize the weights."""
817
+ if isinstance(module, nn.Linear):
818
+ if module.weight.requires_grad==False: # a hack for skipping the nb params
819
+ return
820
+ # Slightly different from the TF version which uses truncated_normal for initialization
821
+ # cf https://github.com/pytorch/pytorch/pull/5617
822
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
823
+ if module.bias is not None:
824
+ module.bias.data.zero_()
825
+ elif isinstance(module, nn.Embedding):
826
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
827
+ if module.padding_idx is not None:
828
+ module.weight.data[module.padding_idx].zero_()
829
+
830
+
831
+ DEBERTA_START_DOCSTRING = r"""
832
+ The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled
833
+ Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's build
834
+ on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two
835
+ improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data.
836
+
837
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
838
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
839
+ and behavior.
840
+
841
+
842
+ Parameters:
843
+ config ([`DebertaConfig`]): Model configuration class with all the parameters of the model.
844
+ Initializing with a config file does not load the weights associated with the model, only the
845
+ configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
846
+ """
847
+
848
+ DEBERTA_INPUTS_DOCSTRING = r"""
849
+ Args:
850
+ input_ids (`torch.LongTensor` of shape `({0})`):
851
+ Indices of input sequence tokens in the vocabulary.
852
+
853
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
854
+ [`PreTrainedTokenizer.__call__`] for details.
855
+
856
+ [What are input IDs?](../glossary#input-ids)
857
+ attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*):
858
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
859
+
860
+ - 1 for tokens that are **not masked**,
861
+ - 0 for tokens that are **masked**.
862
+
863
+ [What are attention masks?](../glossary#attention-mask)
864
+ token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*):
865
+ Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,
866
+ 1]`:
867
+
868
+ - 0 corresponds to a *sentence A* token,
869
+ - 1 corresponds to a *sentence B* token.
870
+
871
+ [What are token type IDs?](../glossary#token-type-ids)
872
+ position_ids (`torch.LongTensor` of shape `({0})`, *optional*):
873
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
874
+ config.max_position_embeddings - 1]`.
875
+
876
+ [What are position IDs?](../glossary#position-ids)
877
+ inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*):
878
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
879
+ is useful if you want more control over how to convert *input_ids* indices into associated vectors than the
880
+ model's internal embedding lookup matrix.
881
+ output_attentions (`bool`, *optional*):
882
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
883
+ tensors for more detail.
884
+ output_hidden_states (`bool`, *optional*):
885
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
886
+ more detail.
887
+ return_dict (`bool`, *optional*):
888
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
889
+ """
890
+
891
+
892
+ @add_start_docstrings(
893
+ "The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top.",
894
+ DEBERTA_START_DOCSTRING,
895
+ )
896
+ class DebertaModel(DebertaPreTrainedModel):
897
+ def __init__(self, config):
898
+ super().__init__(config)
899
+
900
+ self.embeddings = DebertaEmbeddings(config)
901
+ self.encoder = DebertaEncoder(config)
902
+ self.z_steps = 0
903
+ self.config = config
904
+ # Initialize weights and apply final processing
905
+ self.post_init()
906
+
907
+ def get_input_embeddings(self):
908
+ return self.embeddings.word_embeddings
909
+
910
+ def set_input_embeddings(self, new_embeddings):
911
+ self.embeddings.word_embeddings = new_embeddings
912
+
913
+ def _prune_heads(self, heads_to_prune):
914
+ """
915
+ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
916
+ class PreTrainedModel
917
+ """
918
+ raise NotImplementedError("The prune function is not implemented in DeBERTa model.")
919
+
920
+ @add_start_docstrings_to_model_forward(DEBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
921
+ @add_code_sample_docstrings(
922
+ checkpoint=_CHECKPOINT_FOR_DOC,
923
+ output_type=BaseModelOutput,
924
+ config_class=_CONFIG_FOR_DOC,
925
+ )
926
+ def forward(
927
+ self,
928
+ input_ids: Optional[torch.Tensor] = None,
929
+ attention_mask: Optional[torch.Tensor] = None,
930
+ token_type_ids: Optional[torch.Tensor] = None,
931
+ position_ids: Optional[torch.Tensor] = None,
932
+ inputs_embeds: Optional[torch.Tensor] = None,
933
+ output_attentions: Optional[bool] = None,
934
+ output_hidden_states: Optional[bool] = None,
935
+ return_dict: Optional[bool] = None,
936
+ ) -> Union[Tuple, BaseModelOutput]:
937
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
938
+ output_hidden_states = (
939
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
940
+ )
941
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
942
+
943
+ if input_ids is not None and inputs_embeds is not None:
944
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
945
+ elif input_ids is not None:
946
+ self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
947
+ input_shape = input_ids.size()
948
+ elif inputs_embeds is not None:
949
+ input_shape = inputs_embeds.size()[:-1]
950
+ else:
951
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
952
+
953
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
954
+
955
+ if attention_mask is None:
956
+ attention_mask = torch.ones(input_shape, device=device)
957
+ if token_type_ids is None:
958
+ token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
959
+
960
+ embedding_output = self.embeddings(
961
+ input_ids=input_ids,
962
+ token_type_ids=token_type_ids,
963
+ position_ids=position_ids,
964
+ mask=attention_mask,
965
+ inputs_embeds=inputs_embeds,
966
+ )
967
+
968
+ encoder_outputs = self.encoder(
969
+ embedding_output,
970
+ attention_mask,
971
+ output_hidden_states=True,
972
+ output_attentions=output_attentions,
973
+ return_dict=return_dict,
974
+ )
975
+ encoded_layers = encoder_outputs[1]
976
+
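+ # `z_steps` is set to 0 in `__init__`, so the refinement loop below (re-applying the last
+ # encoder layer with updated query states) is normally skipped.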
977
+ if self.z_steps > 1:
978
+ hidden_states = encoded_layers[-2]
979
+ layers = [self.encoder.layer[-1] for _ in range(self.z_steps)]
980
+ query_states = encoded_layers[-1]
981
+ rel_embeddings = self.encoder.get_rel_embedding()
982
+ attention_mask = self.encoder.get_attention_mask(attention_mask)
983
+ rel_pos = self.encoder.get_rel_pos(embedding_output)
984
+ for layer in layers[1:]:
985
+ query_states = layer(
986
+ hidden_states,
987
+ attention_mask,
988
+ output_attentions=False,
989
+ query_states=query_states,
990
+ relative_pos=rel_pos,
991
+ rel_embeddings=rel_embeddings,
992
+ )
993
+ encoded_layers.append(query_states)
994
+
995
+ sequence_output = encoded_layers[-1]
996
+
997
+ if not return_dict:
998
+ return (sequence_output,) + encoder_outputs[(1 if output_hidden_states else 2) :]
999
+
1000
+ return BaseModelOutput(
1001
+ last_hidden_state=sequence_output,
1002
+ hidden_states=encoder_outputs.hidden_states if output_hidden_states else None,
1003
+ attentions=encoder_outputs.attentions,
1004
+ )
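+ # Editor's illustrative usage sketch for the bare model; the checkpoint path and the use of
+ # `trust_remote_code=True` are assumptions about how this custom modeling file is loaded:
+ #
+ #     import torch
+ #     from transformers import AutoTokenizer, AutoModel
+ #
+ #     tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint")
+ #     model = AutoModel.from_pretrained("path/to/checkpoint", trust_remote_code=True)
+ #     inputs = tokenizer("Hello, world!", return_tensors="pt")
+ #     with torch.no_grad():
+ #         outputs = model(**inputs)
+ #     # outputs.last_hidden_state: (batch_size, sequence_length, hidden_size)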
1005
+
1006
+
1007
+ @add_start_docstrings("""DeBERTa Model with a `language modeling` head on top.""", DEBERTA_START_DOCSTRING)
1008
+ class DebertaForMaskedLM(DebertaPreTrainedModel):
1009
+ _tied_weights_keys = ["cls.predictions.decoder.weight", "cls.predictions.decoder.bias"]
1010
+
1011
+ def __init__(self, config):
1012
+ super().__init__(config)
1013
+
1014
+ self.deberta = DebertaModel(config)
1015
+ self.cls = DebertaOnlyMLMHead(config)
1016
+
1017
+ self.post_cls = DebertaFinalMLMHead(config)
1018
+ # Initialize weights and apply final processing
1019
+ self.post_init()
1020
+ self.num_concepts = config.num_concepts
1021
+ #self.nb = DebertaNB(config)
1022
+
1023
+ def get_output_embeddings(self):
1024
+ return self.cls.predictions.decoder
1025
+
1026
+ def set_output_embeddings(self, new_embeddings):
1027
+ self.cls.predictions.decoder = new_embeddings
1028
+ self.cls.predictions.bias = new_embeddings.bias
1029
+
1030
+ @add_start_docstrings_to_model_forward(DEBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
1031
+ @add_code_sample_docstrings(
1032
+ checkpoint=_CHECKPOINT_FOR_MASKED_LM,
1033
+ output_type=MaskedLMOutput,
1034
+ config_class=_CONFIG_FOR_DOC,
1035
+ mask="[MASK]",
1036
+ expected_output=_MASKED_LM_EXPECTED_OUTPUT,
1037
+ expected_loss=_MASKED_LM_EXPECTED_LOSS,
1038
+ )
1039
+ def forward(
1040
+ self,
1041
+ input_ids: Optional[torch.Tensor] = None,
1042
+ attention_mask: Optional[torch.Tensor] = None,
1043
+ token_type_ids: Optional[torch.Tensor] = None,
1044
+ position_ids: Optional[torch.Tensor] = None,
1045
+ inputs_embeds: Optional[torch.Tensor] = None,
1046
+ labels: Optional[torch.Tensor] = None,
1047
+ output_attentions: Optional[bool] = None,
1048
+ output_hidden_states: Optional[bool] = None,
1049
+ return_dict: Optional[bool] = None,
1050
+ ) -> Union[Tuple, MaskedLMOutput]:
1051
+ r"""
1052
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1053
+ Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
1054
+ config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the
1055
+ loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
1056
+ """
1057
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1058
+
1059
+ outputs = self.deberta(
1060
+ input_ids,
1061
+ attention_mask=attention_mask,
1062
+ token_type_ids=token_type_ids,
1063
+ position_ids=position_ids,
1064
+ inputs_embeds=inputs_embeds,
1065
+ output_attentions=output_attentions,
1066
+ output_hidden_states=output_hidden_states,
1067
+ return_dict=return_dict,
1068
+ )
1069
+
1070
+
1071
+ sequence_output = outputs[0]
1072
+ prediction_scores = self.cls(sequence_output)
1073
+ #prediction_scores = self.nb(prediction_scores)
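+ # `self.cls` scores the full vocabulary; `post_cls` keeps only the last `num_concepts`
+ # scores and projects them onto the remaining `vocab_size - num_concepts` word tokens,
+ # which is why the loss below is computed over `vocab_size - num_concepts` classes.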
1074
+ prediction_scores = self.post_cls(prediction_scores)
1075
+
1076
+ masked_lm_loss = None
1077
+ if labels is not None:
1078
+ loss_fct = CrossEntropyLoss() # -100 index = padding token
1079
+ masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size - self.num_concepts), labels.view(-1))
1080
+
1081
+ if not return_dict:
1082
+ output = (prediction_scores,) + outputs[1:]
1083
+ return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
1084
+
1085
+ return MaskedLMOutput(
1086
+ loss=masked_lm_loss,
1087
+ logits=prediction_scores,
1088
+ hidden_states=outputs.hidden_states,
1089
+ attentions=outputs.attentions,
1090
+ )
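+ # Editor's illustrative masked-LM usage sketch; the checkpoint path is a placeholder. Note that,
+ # because of `post_cls`, the returned logits have size `config.vocab_size - config.num_concepts`
+ # rather than the full vocabulary size:
+ #
+ #     import torch
+ #     from transformers import AutoTokenizer, AutoModelForMaskedLM
+ #
+ #     tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint")
+ #     model = AutoModelForMaskedLM.from_pretrained("path/to/checkpoint", trust_remote_code=True)
+ #     inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
+ #     with torch.no_grad():
+ #         logits = model(**inputs).logits
+ #     mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)
+ #     predicted_ids = logits[mask_positions].argmax(-1)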
1091
+
1092
+ class DebertaFinalMLMHead(nn.Module):
1093
+
1094
+ def __init__(self, config):
1095
+ super().__init__()
1096
+ self.num_concepts = config.num_concepts
1097
+ self.head = nn.Linear(self.num_concepts, config.vocab_size - self.num_concepts)
1098
+
1099
+ def forward(self, pre_logits):
1100
+ concept_scores = pre_logits[:,:,-self.num_concepts:]
1101
+ return self.head(concept_scores)
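+ # Editor's note on shapes, derived from the code above: as called from `DebertaForMaskedLM.forward`,
+ # `pre_logits` has shape (batch_size, sequence_length, config.vocab_size); the last `num_concepts`
+ # scores are selected and projected to the remaining `vocab_size - num_concepts` word logits, e.g.
+ # with a hypothetical vocab_size of 50000 and num_concepts of 2000 the output is
+ # (batch_size, sequence_length, 48000).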
1102
+
1103
+ class DebertaNB(nn.Module):
1104
+ def __init__(self, config):
1105
+ super().__init__()
1106
+ self.top_k = config.top_k
1107
+ self.prob_threshold = config.prob_threshold
1108
+ print(self.top_k, self.prob_threshold)
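+ # Loads a pre-fitted Naive Bayes model pickled next to this file (the attribute names used
+ # below match scikit-learn's MultinomialNB): its classes are vocabulary tokens, its features are
+ # concepts, and -inf class log-priors are clipped to -1000 further down.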
1109
+ nb = pickle.load(open(f'{os.path.dirname(os.path.abspath(__file__))}/nb_final_multinomial_{self.top_k}_{self.prob_threshold}.pickle', 'rb'))
1110
+ #nb = pickle.load(open(f'nb_final_multinomial_{self.top_k}_1.0.pickle', 'rb'))
1111
+ self.effective_vocab, self.num_concepts = nb.feature_count_.shape
1112
+ #self.nb = torch.nn.Linear(self.num_concepts, self.effective_vocab)
1113
+ with torch.no_grad():
1114
+ class_log_prior = torch.from_numpy(nb.class_log_prior_).float()
1115
+ #smallest_non_inf_prior = torch.min(class_log_prior[class_log_prior!=-torch.inf])
1116
+ class_log_prior[class_log_prior==-torch.inf] = -1000 #5 * smallest_non_inf_prior
1117
+
1118
+ self.nb_features_log_prob = torch.from_numpy(nb.feature_log_prob_.T).float()
1119
+ self.nb_class_log_prior = class_log_prior
1120
+
1121
+ #self.nb.bias.copy_(class_log_prior)
1122
+ #self.nb.weight.copy_(torch.from_numpy(nb.feature_log_prob_))
1123
+ #for param in self.nb.parameters():
1124
+ # param.requires_grad = False
1125
+
1126
+ def forward(self, prediction_scores):
1127
1131
+ num_sequences, num_tokens, _ = prediction_scores.shape
1132
+ concept_scores = prediction_scores[:,:,self.effective_vocab:]
1133
+ batch_size, token_num, _ = concept_scores.shape
1134
+ concept_probs = torch.nn.functional.softmax(concept_scores, dim=-1).view(-1, self.num_concepts)
1135
+ probs, relevant_features = torch.topk(concept_probs, self.top_k, dim=-1)
1136
+
1137
+ # Reuse one threshold tensor (on the same device as `probs`) for the cumulative-probability cut-off.
+ thresholds = torch.full((batch_size * token_num, 1), self.prob_threshold, device=probs.device)
+ limits = torch.searchsorted(torch.cumsum(probs, dim=-1), thresholds)
1140
+
1141
+ filtered_relevant_features = []
1142
+ for feats, lims in zip(relevant_features, limits):
1143
+ limit = min(self.top_k, lims[0].item())
1144
+ filtered_relevant_features.append(torch.nn.functional.pad(feats[0:limit], pad=[0, self.top_k - limit], value=feats[0]))
1145
+ relevant_features = torch.vstack(filtered_relevant_features).view(batch_size, token_num, -1)
1146
+ device = concept_scores.device
1147
+
1148
+ features = torch.zeros((num_sequences, num_tokens, self.num_concepts), device=device, dtype=self.nb_features_log_prob.dtype)
1149
+ features.scatter_(dim=2, index=relevant_features, src=torch.ones_like(relevant_features, device=features.device, dtype=features.dtype))
1150
+
1151
+ #modified_prediction_scores = self.nb(features)
1152
+ modified_prediction_scores = features @ self.nb_features_log_prob.to(features.device) + self.nb_class_log_prior.to(features.device)
1153
1156
+ return modified_prediction_scores
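+ # Editor's sketch of what this (currently unused; see the commented-out `self.nb` above) module
+ # computes: a Naive Bayes re-scoring of the vocabulary from the top-k predicted concepts. With a
+ # binary feature vector f over concepts (1 for each selected concept),
+ #
+ #     log P(token | f) = class_log_prior[token] + sum_c f[c] * feature_log_prob[token, c]  (+ const)
+ #
+ # which is exactly the `features @ self.nb_features_log_prob + self.nb_class_log_prior` matmul above.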
1157
+
1158
+ class DebertaPredictionHeadTransform(nn.Module):
1159
+ def __init__(self, config):
1160
+ super().__init__()
1161
+ self.embedding_size = getattr(config, "embedding_size", config.hidden_size)
1162
+
1163
+ self.dense = nn.Linear(config.hidden_size, self.embedding_size)
1164
+ if isinstance(config.hidden_act, str):
1165
+ self.transform_act_fn = ACT2FN[config.hidden_act]
1166
+ else:
1167
+ self.transform_act_fn = config.hidden_act
1168
+ self.LayerNorm = nn.LayerNorm(self.embedding_size, eps=config.layer_norm_eps)
1169
+
1170
+ def forward(self, hidden_states):
1171
+ hidden_states = self.dense(hidden_states)
1172
+ hidden_states = self.transform_act_fn(hidden_states)
1173
+ hidden_states = self.LayerNorm(hidden_states)
1174
+ return hidden_states
1175
+
1176
+
1177
+ class DebertaLMPredictionHead(nn.Module):
1178
+ def __init__(self, config):
1179
+ super().__init__()
1180
+ self.transform = DebertaPredictionHeadTransform(config)
1181
+
1182
+ self.embedding_size = getattr(config, "embedding_size", config.hidden_size)
1183
+ # The output weights are the same as the input embeddings, but there is
1184
+ # an output-only bias for each token.
1185
+ self.decoder = nn.Linear(self.embedding_size, config.vocab_size, bias=False)
1186
+
1187
+ self.bias = nn.Parameter(torch.zeros(config.vocab_size))
1188
+
1189
+ # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
1190
+ self.decoder.bias = self.bias
1191
+
1192
+ def _tie_weights(self):
1193
+ self.decoder.bias = self.bias
1194
+
1195
+ def forward(self, hidden_states):
1196
+ hidden_states = self.transform(hidden_states)
1197
+ hidden_states = self.decoder(hidden_states)
1198
+ return hidden_states
1199
+
1200
+ # Copied from transformers.models.bert.modeling_bert.BertOnlyMLMHead with bert -> deberta
1201
+ class DebertaOnlyMLMHead(nn.Module):
1202
+ def __init__(self, config):
1203
+ super().__init__()
1204
+ self.predictions = DebertaLMPredictionHead(config)
1205
+
1206
+ def forward(self, sequence_output):
1207
+ prediction_scores = self.predictions(sequence_output)
1208
+ return prediction_scores
1209
+
1210
+
1211
+ @add_start_docstrings(
1212
+ """
1213
+ DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
1214
+ pooled output) e.g. for GLUE tasks.
1215
+ """,
1216
+ DEBERTA_START_DOCSTRING,
1217
+ )
1218
+ class DebertaForSequenceClassification(DebertaPreTrainedModel):
1219
+ def __init__(self, config):
1220
+ super().__init__(config)
1221
+
1222
+ num_labels = getattr(config, "num_labels", 2)
1223
+ self.num_labels = num_labels
1224
+
1225
+ self.deberta = DebertaModel(config)
1226
+ self.pooler = ContextPooler(config)
1227
+ output_dim = self.pooler.output_dim
1228
+
1229
+ self.classifier = nn.Linear(output_dim, num_labels)
1230
+ drop_out = getattr(config, "cls_dropout", None)
1231
+ drop_out = self.config.hidden_dropout_prob if drop_out is None else drop_out
1232
+ self.dropout = StableDropout(drop_out)
1233
+
1234
+ # Initialize weights and apply final processing
1235
+ self.post_init()
1236
+
1237
+ def get_input_embeddings(self):
1238
+ return self.deberta.get_input_embeddings()
1239
+
1240
+ def set_input_embeddings(self, new_embeddings):
1241
+ self.deberta.set_input_embeddings(new_embeddings)
1242
+
1243
+ @add_start_docstrings_to_model_forward(DEBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
1244
+ @add_code_sample_docstrings(
1245
+ checkpoint=_CHECKPOINT_FOR_DOC,
1246
+ output_type=SequenceClassifierOutput,
1247
+ config_class=_CONFIG_FOR_DOC,
1248
+ )
1249
+ def forward(
1250
+ self,
1251
+ input_ids: Optional[torch.Tensor] = None,
1252
+ attention_mask: Optional[torch.Tensor] = None,
1253
+ token_type_ids: Optional[torch.Tensor] = None,
1254
+ position_ids: Optional[torch.Tensor] = None,
1255
+ inputs_embeds: Optional[torch.Tensor] = None,
1256
+ labels: Optional[torch.Tensor] = None,
1257
+ output_attentions: Optional[bool] = None,
1258
+ output_hidden_states: Optional[bool] = None,
1259
+ return_dict: Optional[bool] = None,
1260
+ ) -> Union[Tuple, SequenceClassifierOutput]:
1261
+ r"""
1262
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1263
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
1264
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
1265
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
1266
+ """
1267
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1268
+
1269
+ outputs = self.deberta(
1270
+ input_ids,
1271
+ token_type_ids=token_type_ids,
1272
+ attention_mask=attention_mask,
1273
+ position_ids=position_ids,
1274
+ inputs_embeds=inputs_embeds,
1275
+ output_attentions=output_attentions,
1276
+ output_hidden_states=output_hidden_states,
1277
+ return_dict=return_dict,
1278
+ )
1279
+
1280
+ encoder_layer = outputs[0]
1281
+ pooled_output = self.pooler(encoder_layer)
1282
+ pooled_output = self.dropout(pooled_output)
1283
+ logits = self.classifier(pooled_output)
1284
+
1285
+ loss = None
1286
+ if labels is not None:
1287
+ if self.config.problem_type is None:
1288
+ if self.num_labels == 1:
1289
+ # regression task
1290
+ loss_fn = nn.MSELoss()
1291
+ logits = logits.view(-1).to(labels.dtype)
1292
+ loss = loss_fn(logits, labels.view(-1))
1293
+ elif labels.dim() == 1 or labels.size(-1) == 1:
1294
+ label_index = (labels >= 0).nonzero()
1295
+ labels = labels.long()
1296
+ if label_index.size(0) > 0:
1297
+ labeled_logits = torch.gather(
1298
+ logits, 0, label_index.expand(label_index.size(0), logits.size(1))
1299
+ )
1300
+ labels = torch.gather(labels, 0, label_index.view(-1))
1301
+ loss_fct = CrossEntropyLoss()
1302
+ loss = loss_fct(labeled_logits.view(-1, self.num_labels).float(), labels.view(-1))
1303
+ else:
1304
+ loss = torch.tensor(0).to(logits)
1305
+ else:
1306
+ log_softmax = nn.LogSoftmax(-1)
1307
+ loss = -((log_softmax(logits) * labels).sum(-1)).mean()
1308
+ elif self.config.problem_type == "regression":
1309
+ loss_fct = MSELoss()
1310
+ if self.num_labels == 1:
1311
+ loss = loss_fct(logits.squeeze(), labels.squeeze())
1312
+ else:
1313
+ loss = loss_fct(logits, labels)
1314
+ elif self.config.problem_type == "single_label_classification":
1315
+ loss_fct = CrossEntropyLoss()
1316
+ loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
1317
+ elif self.config.problem_type == "multi_label_classification":
1318
+ loss_fct = BCEWithLogitsLoss()
1319
+ loss = loss_fct(logits, labels)
1320
+ if not return_dict:
1321
+ output = (logits,) + outputs[1:]
1322
+ return ((loss,) + output) if loss is not None else output
1323
+
1324
+ return SequenceClassifierOutput(
1325
+ loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions
1326
+ )
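+ # Editor's illustrative sequence-classification usage sketch; the checkpoint path and label count
+ # are placeholder assumptions:
+ #
+ #     import torch
+ #     from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ #
+ #     tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint")
+ #     model = AutoModelForSequenceClassification.from_pretrained(
+ #         "path/to/checkpoint", num_labels=2, trust_remote_code=True
+ #     )
+ #     inputs = tokenizer("A thoroughly enjoyable film.", return_tensors="pt")
+ #     with torch.no_grad():
+ #         predicted_class = model(**inputs).logits.argmax(-1).item()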
1327
+
1328
+
1329
+ @add_start_docstrings(
1330
+ """
1331
+ DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
1332
+ Named-Entity-Recognition (NER) tasks.
1333
+ """,
1334
+ DEBERTA_START_DOCSTRING,
1335
+ )
1336
+ class DebertaForTokenClassification(DebertaPreTrainedModel):
1337
+ def __init__(self, config):
1338
+ super().__init__(config)
1339
+ self.num_labels = config.num_labels
1340
+
1341
+ self.deberta = DebertaModel(config)
1342
+ self.dropout = nn.Dropout(config.hidden_dropout_prob)
1343
+ self.classifier = nn.Linear(config.hidden_size, config.num_labels)
1344
+
1345
+ # Initialize weights and apply final processing
1346
+ self.post_init()
1347
+
1348
+ @add_start_docstrings_to_model_forward(DEBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
1349
+ @add_code_sample_docstrings(
1350
+ checkpoint=_CHECKPOINT_FOR_DOC,
1351
+ output_type=TokenClassifierOutput,
1352
+ config_class=_CONFIG_FOR_DOC,
1353
+ )
1354
+ def forward(
1355
+ self,
1356
+ input_ids: Optional[torch.Tensor] = None,
1357
+ attention_mask: Optional[torch.Tensor] = None,
1358
+ token_type_ids: Optional[torch.Tensor] = None,
1359
+ position_ids: Optional[torch.Tensor] = None,
1360
+ inputs_embeds: Optional[torch.Tensor] = None,
1361
+ labels: Optional[torch.Tensor] = None,
1362
+ output_attentions: Optional[bool] = None,
1363
+ output_hidden_states: Optional[bool] = None,
1364
+ return_dict: Optional[bool] = None,
1365
+ ) -> Union[Tuple, TokenClassifierOutput]:
1366
+ r"""
1367
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1368
+ Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
1369
+ """
1370
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1371
+
1372
+ outputs = self.deberta(
1373
+ input_ids,
1374
+ attention_mask=attention_mask,
1375
+ token_type_ids=token_type_ids,
1376
+ position_ids=position_ids,
1377
+ inputs_embeds=inputs_embeds,
1378
+ output_attentions=output_attentions,
1379
+ output_hidden_states=output_hidden_states,
1380
+ return_dict=return_dict,
1381
+ )
1382
+
1383
+ sequence_output = outputs[0]
1384
+
1385
+ sequence_output = self.dropout(sequence_output)
1386
+ logits = self.classifier(sequence_output)
1387
+
1388
+ loss = None
1389
+ if labels is not None:
1390
+ loss_fct = CrossEntropyLoss()
1391
+ loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
1392
+
1393
+ if not return_dict:
1394
+ output = (logits,) + outputs[1:]
1395
+ return ((loss,) + output) if loss is not None else output
1396
+
1397
+ return TokenClassifierOutput(
1398
+ loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions
1399
+ )
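+ # Editor's illustrative token-classification (e.g. NER) usage sketch; the checkpoint path is a
+ # placeholder assumption:
+ #
+ #     import torch
+ #     from transformers import AutoTokenizer, AutoModelForTokenClassification
+ #
+ #     tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint")
+ #     model = AutoModelForTokenClassification.from_pretrained("path/to/checkpoint", trust_remote_code=True)
+ #     inputs = tokenizer("Alice lives in Paris.", return_tensors="pt")
+ #     with torch.no_grad():
+ #         per_token_labels = model(**inputs).logits.argmax(-1)  # (batch_size, sequence_length)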
1400
+
1401
+
1402
+ @add_start_docstrings(
1403
+ """
1404
+ DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
1405
+ layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
1406
+ """,
1407
+ DEBERTA_START_DOCSTRING,
1408
+ )
1409
+ class DebertaForQuestionAnswering(DebertaPreTrainedModel):
1410
+ def __init__(self, config):
1411
+ super().__init__(config)
1412
+ self.num_labels = config.num_labels
1413
+
1414
+ self.deberta = DebertaModel(config)
1415
+ self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
1416
+
1417
+ # Initialize weights and apply final processing
1418
+ self.post_init()
1419
+
1420
+ @add_start_docstrings_to_model_forward(DEBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
1421
+ @add_code_sample_docstrings(
1422
+ checkpoint=_CHECKPOINT_FOR_QA,
1423
+ output_type=QuestionAnsweringModelOutput,
1424
+ config_class=_CONFIG_FOR_DOC,
1425
+ expected_output=_QA_EXPECTED_OUTPUT,
1426
+ expected_loss=_QA_EXPECTED_LOSS,
1427
+ qa_target_start_index=_QA_TARGET_START_INDEX,
1428
+ qa_target_end_index=_QA_TARGET_END_INDEX,
1429
+ )
1430
+ def forward(
1431
+ self,
1432
+ input_ids: Optional[torch.Tensor] = None,
1433
+ attention_mask: Optional[torch.Tensor] = None,
1434
+ token_type_ids: Optional[torch.Tensor] = None,
1435
+ position_ids: Optional[torch.Tensor] = None,
1436
+ inputs_embeds: Optional[torch.Tensor] = None,
1437
+ start_positions: Optional[torch.Tensor] = None,
1438
+ end_positions: Optional[torch.Tensor] = None,
1439
+ output_attentions: Optional[bool] = None,
1440
+ output_hidden_states: Optional[bool] = None,
1441
+ return_dict: Optional[bool] = None,
1442
+ ) -> Union[Tuple, QuestionAnsweringModelOutput]:
1443
+ r"""
1444
+ start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1445
+ Labels for position (index) of the start of the labelled span for computing the token classification loss.
1446
+ Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
1447
+ are not taken into account for computing the loss.
1448
+ end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1449
+ Labels for position (index) of the end of the labelled span for computing the token classification loss.
1450
+ Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
1451
+ are not taken into account for computing the loss.
1452
+ """
1453
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1454
+
1455
+ outputs = self.deberta(
1456
+ input_ids,
1457
+ attention_mask=attention_mask,
1458
+ token_type_ids=token_type_ids,
1459
+ position_ids=position_ids,
1460
+ inputs_embeds=inputs_embeds,
1461
+ output_attentions=output_attentions,
1462
+ output_hidden_states=output_hidden_states,
1463
+ return_dict=return_dict,
1464
+ )
1465
+
1466
+ sequence_output = outputs[0]
1467
+
1468
+ logits = self.qa_outputs(sequence_output)
1469
+ start_logits, end_logits = logits.split(1, dim=-1)
1470
+ start_logits = start_logits.squeeze(-1).contiguous()
1471
+ end_logits = end_logits.squeeze(-1).contiguous()
1472
+
1473
+ total_loss = None
1474
+ if start_positions is not None and end_positions is not None:
1475
+ # If we are on multi-GPU, splitting adds a dimension
1476
+ if len(start_positions.size()) > 1:
1477
+ start_positions = start_positions.squeeze(-1)
1478
+ if len(end_positions.size()) > 1:
1479
+ end_positions = end_positions.squeeze(-1)
1480
+ # Sometimes the start/end positions are outside our model inputs; we ignore these terms
1481
+ ignored_index = start_logits.size(1)
1482
+ start_positions = start_positions.clamp(0, ignored_index)
1483
+ end_positions = end_positions.clamp(0, ignored_index)
1484
+
1485
+ loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
1486
+ start_loss = loss_fct(start_logits, start_positions)
1487
+ end_loss = loss_fct(end_logits, end_positions)
1488
+ total_loss = (start_loss + end_loss) / 2
1489
+
1490
+ if not return_dict:
1491
+ output = (start_logits, end_logits) + outputs[1:]
1492
+ return ((total_loss,) + output) if total_loss is not None else output
1493
+
1494
+ return QuestionAnsweringModelOutput(
1495
+ loss=total_loss,
1496
+ start_logits=start_logits,
1497
+ end_logits=end_logits,
1498
+ hidden_states=outputs.hidden_states,
1499
+ attentions=outputs.attentions,
1500
+ )
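+ # Editor's illustrative extractive-QA usage sketch; the checkpoint path is a placeholder
+ # assumption:
+ #
+ #     import torch
+ #     from transformers import AutoTokenizer, AutoModelForQuestionAnswering
+ #
+ #     tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint")
+ #     model = AutoModelForQuestionAnswering.from_pretrained("path/to/checkpoint", trust_remote_code=True)
+ #     inputs = tokenizer("Where does Alice live?", "Alice lives in Paris.", return_tensors="pt")
+ #     with torch.no_grad():
+ #         out = model(**inputs)
+ #     start = out.start_logits.argmax(-1).item()
+ #     end = out.end_logits.argmax(-1).item()
+ #     answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])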