Lora committed on
Commit
78e27f8
1 Parent(s): 82ab6af

upload files

README.md ADDED
@@ -0,0 +1,6 @@
+ ---
+ pipeline_tag: text-generation
+ tags:
+ - text-generation-inference
+ library_name: transformers
+ ---
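
The card above tags the model for text generation with the `transformers` library. A minimal usage sketch follows; the repo id is a placeholder, and `trust_remote_code=True` is needed because the model classes live in this repository (see `auto_map` in config.json below) rather than in `transformers` itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "user/backpack-gpt2"  # placeholder: substitute the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(repo)
# trust_remote_code=True lets AutoModelForCausalLM resolve
# modeling_backpack_gpt2.BackpackGPT2LMHeadModel via auto_map.
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```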
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "architectures": [
+     "BackpackGPT2LMHeadModel"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_backpack_gpt2.BackpackGPT2Config",
+     "AutoModelForCausalLM": "modeling_backpack_gpt2.BackpackGPT2LMHeadModel"
+   },
+   "activation_function": "gelu_new",
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 512,
+   "num_senses": 16,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": true,
+   "scale_attn_weights": true,
+   "sense_intermediate_scale": 4,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "transformers_version": "4.29.0.dev0",
+   "use_cache": true,
+   "vocab_size": 50257
+ }
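
For orientation, the Backpack-specific fields combine with the GPT-2 sizes above as follows (a back-of-the-envelope sketch of the implied dimensions; the authoritative layer shapes are in modeling_backpack_gpt2.py):

```python
# Values from config.json
n_embd = 768
n_head = 12
num_senses = 16
sense_intermediate_scale = 4

head_dim = n_embd // n_head                       # 64: per-head width in attention
sense_hidden = sense_intermediate_scale * n_embd  # 3072: hidden width of the sense-vector network
print(head_dim, sense_hidden)                     # 64 3072
```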
configuration_backpack_gpt2.py ADDED
@@ -0,0 +1,41 @@
+ from transformers.models.gpt2.configuration_gpt2 import GPT2Config
+
+ class BackpackGPT2Config(GPT2Config):
+     """
+     This is the configuration class to store the configuration of a [`BackpackGPT2Model`]. It is used to
+     instantiate a Backpack GPT-2 model according to the specified arguments, defining the model architecture.
+
+     Configuration objects inherit from [`GPT2Config`] and can be used to control the model outputs. Read the
+     documentation from [`GPT2Config`] for more information.
+
+     Args:
+         num_senses (`int`, *optional*, defaults to 16):
+             The number of sense vectors to define for each word.
+         sense_intermediate_scale (`int`, *optional*, defaults to 4):
+             The hidden dimensionality of the sense vector network.
+
+     Example:
+
+     ```python
+     >>> from transformers import BackpackGPT2Config, BackpackGPT2Model
+
+     >>> # Initializing a GPT2 configuration
+     >>> configuration = BackpackGPT2Config()
+
+     >>> # Initializing a model (with random weights) from the configuration
+     >>> model = BackpackGPT2Model(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```
+     """
+
+     def __init__(self,
+                  num_senses=16,
+                  sense_intermediate_scale=4,
+                  n_positions=512,
+                  scale_attn_by_inverse_layer_idx=True,
+                  **kwargs,
+     ):
+         self.num_senses = num_senses
+         self.sense_intermediate_scale = sense_intermediate_scale
+         super().__init__(n_positions=n_positions, scale_attn_by_inverse_layer_idx=scale_attn_by_inverse_layer_idx, **kwargs)
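
Because config.json maps `AutoConfig` to this class, the configuration can be loaded through the auto classes without importing the module by hand (a sketch; the repo id is a placeholder):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("user/backpack-gpt2", trust_remote_code=True)
print(type(config).__name__)                               # BackpackGPT2Config
print(config.num_senses, config.sense_intermediate_scale)  # 16 4
```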
modeling_backpack_gpt2.py ADDED
@@ -0,0 +1,1799 @@
+ # coding=utf-8
+ # Copyright 2018 The OpenAI Team Authors and HuggingFace Inc. team.
+ # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PyTorch OpenAI GPT-2 model."""
+
+ import math
+ import os
+ import warnings
+ from dataclasses import dataclass
+ from typing import Optional, Tuple, Union
+
+ import torch
+ import torch.utils.checkpoint
+ from torch import nn
+ from torch.cuda.amp import autocast
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
+
+ from transformers.activations import ACT2FN
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPastAndCrossAttentions,
+     CausalLMOutputWithCrossAttentions,
+     SequenceClassifierOutputWithPast,
+     TokenClassifierOutput,
+ )
+ from transformers.modeling_utils import PreTrainedModel, SequenceSummary
+ from transformers.pytorch_utils import Conv1D, find_pruneable_heads_and_indices, prune_conv1d_layer
+ from transformers.utils import (
+     ModelOutput,
+     add_code_sample_docstrings,
+     add_start_docstrings,
+     add_start_docstrings_to_model_forward,
+     logging,
+     replace_return_docstrings,
+ )
+ from transformers.utils.model_parallel_utils import assert_device_map, get_device_map
+ from .configuration_backpack_gpt2 import BackpackGPT2Config, GPT2Config
+
+ from collections import namedtuple
+
+
+ logger = logging.get_logger(__name__)
+
+ _CHECKPOINT_FOR_DOC = "gpt2"
+ _CONFIG_FOR_DOC = "GPT2Config"
+
+ GPT2_PRETRAINED_MODEL_ARCHIVE_LIST = [
+     "gpt2",
+     "gpt2-medium",
+     "gpt2-large",
+     "gpt2-xl",
+     "distilgpt2",
+     # See all GPT-2 models at https://huggingface.co/models?filter=gpt2
+ ]
+
+
+ def load_tf_weights_in_gpt2(model, config, gpt2_checkpoint_path):
+     """Load tf checkpoints in a pytorch model"""
+     try:
+         import re
+
+         import tensorflow as tf
+     except ImportError:
+         logger.error(
+             "Loading a TensorFlow model in PyTorch requires TensorFlow to be installed. Please see "
+             "https://www.tensorflow.org/install/ for installation instructions."
+         )
+         raise
+     tf_path = os.path.abspath(gpt2_checkpoint_path)
+     logger.info(f"Converting TensorFlow checkpoint from {tf_path}")
+     # Load weights from TF model
+     init_vars = tf.train.list_variables(tf_path)
+     names = []
+     arrays = []
+     for name, shape in init_vars:
+         logger.info(f"Loading TF weight {name} with shape {shape}")
+         array = tf.train.load_variable(tf_path, name)
+         names.append(name)
+         arrays.append(array.squeeze())
+
+     for name, array in zip(names, arrays):
+         name = name[6:]  # skip "model/"
+         name = name.split("/")
+         pointer = model
+         for m_name in name:
+             if re.fullmatch(r"[A-Za-z]+\d+", m_name):
+                 scope_names = re.split(r"(\d+)", m_name)
+             else:
+                 scope_names = [m_name]
+             if scope_names[0] == "w" or scope_names[0] == "g":
+                 pointer = getattr(pointer, "weight")
+             elif scope_names[0] == "b":
+                 pointer = getattr(pointer, "bias")
+             elif scope_names[0] == "wpe" or scope_names[0] == "wte":
+                 pointer = getattr(pointer, scope_names[0])
+                 pointer = getattr(pointer, "weight")
+             else:
+                 pointer = getattr(pointer, scope_names[0])
+             if len(scope_names) >= 2:
+                 num = int(scope_names[1])
+                 pointer = pointer[num]
+         try:
+             assert (
+                 pointer.shape == array.shape
+             ), f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched"
+         except AssertionError as e:
+             e.args += (pointer.shape, array.shape)
+             raise
+         logger.info(f"Initialize PyTorch weight {name}")
+         pointer.data = torch.from_numpy(array)
+     return model
+
+
+ class GPT2Attention(nn.Module):
+     def __init__(self, config, is_cross_attention=False, layer_idx=None):
+         super().__init__()
+
+         max_positions = config.max_position_embeddings
+         self.register_buffer(
+             "bias",
+             torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view(
+                 1, 1, max_positions, max_positions
+             ),
+         )
+         self.register_buffer("masked_bias", torch.tensor(-1e4))
+
+         self.embed_dim = config.hidden_size
+         self.num_heads = config.num_attention_heads
+         self.head_dim = self.embed_dim // self.num_heads
+         self.split_size = self.embed_dim
+         if self.head_dim * self.num_heads != self.embed_dim:
+             raise ValueError(
+                 f"`embed_dim` must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:"
+                 f" {self.num_heads})."
+             )
+
+         self.scale_attn_weights = config.scale_attn_weights
+         self.is_cross_attention = is_cross_attention
+
+         # Layer-wise attention scaling, reordering, and upcasting
+         self.scale_attn_by_inverse_layer_idx = config.scale_attn_by_inverse_layer_idx
+         self.layer_idx = layer_idx
+         self.reorder_and_upcast_attn = config.reorder_and_upcast_attn
+
+         if self.is_cross_attention:
+             self.c_attn = Conv1D(2 * self.embed_dim, self.embed_dim)
+             self.q_attn = Conv1D(self.embed_dim, self.embed_dim)
+         else:
+             self.c_attn = Conv1D(3 * self.embed_dim, self.embed_dim)
+         self.c_proj = Conv1D(self.embed_dim, self.embed_dim)
+
+         self.attn_dropout = nn.Dropout(config.attn_pdrop)
+         self.resid_dropout = nn.Dropout(config.resid_pdrop)
+
+         self.pruned_heads = set()
+
+     def prune_heads(self, heads):
+         if len(heads) == 0:
+             return
+         heads, index = find_pruneable_heads_and_indices(heads, self.num_heads, self.head_dim, self.pruned_heads)
+         index_attn = torch.cat([index, index + self.split_size, index + (2 * self.split_size)])
+
+         # Prune conv1d layers
+         self.c_attn = prune_conv1d_layer(self.c_attn, index_attn, dim=1)
+         self.c_proj = prune_conv1d_layer(self.c_proj, index, dim=0)
+
+         # Update hyper params
+         self.split_size = (self.split_size // self.num_heads) * (self.num_heads - len(heads))
+         self.num_heads = self.num_heads - len(heads)
+         self.pruned_heads = self.pruned_heads.union(heads)
+
+     def _attn(self, query, key, value, attention_mask=None, head_mask=None):
+         attn_weights = torch.matmul(query, key.transpose(-1, -2))
+
+         if self.scale_attn_weights:
+             attn_weights = attn_weights / torch.full(
+                 [], value.size(-1) ** 0.5, dtype=attn_weights.dtype, device=attn_weights.device
+             )
+
+         # Layer-wise attention scaling
+         if self.scale_attn_by_inverse_layer_idx:
+             attn_weights = attn_weights / float(self.layer_idx + 1)
+
+         if not self.is_cross_attention:
+             # if only "normal" attention layer implements causal mask
+             query_length, key_length = query.size(-2), key.size(-2)
+             causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length]
+             mask_value = torch.finfo(attn_weights.dtype).min
+             # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`.
+             # Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device`
+             mask_value = torch.full([], mask_value, dtype=attn_weights.dtype).to(attn_weights.device)
+             attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)
+
+         if attention_mask is not None:
+             # Apply the attention mask
+             attn_weights = attn_weights + attention_mask
+
+         attn_weights = nn.functional.softmax(attn_weights, dim=-1)
+
+         # Downcast (if necessary) back to V's dtype (if in mixed-precision) -- No-Op otherwise
+         attn_weights = attn_weights.type(value.dtype)
+         attn_weights = self.attn_dropout(attn_weights)
+
+         # Mask heads if we want to
+         if head_mask is not None:
+             attn_weights = attn_weights * head_mask
+
+         attn_output = torch.matmul(attn_weights, value)
+
+         return attn_output, attn_weights
+
+     def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None, head_mask=None):
+         # Use `torch.baddbmm` (a bit more efficient w/ alpha param for scaling -- from Megatron-LM)
+         bsz, num_heads, q_seq_len, dk = query.size()
+         _, _, k_seq_len, _ = key.size()
+
+         # Preallocate attn_weights for `baddbmm`
+         attn_weights = torch.empty(bsz * num_heads, q_seq_len, k_seq_len, dtype=torch.float32, device=query.device)
+
+         # Compute Scale Factor
+         scale_factor = 1.0
+         if self.scale_attn_weights:
+             scale_factor /= float(value.size(-1)) ** 0.5
+
+         if self.scale_attn_by_inverse_layer_idx:
+             scale_factor /= float(self.layer_idx + 1)
+
+         # Upcast (turn off autocast) and reorder (Scale K by 1 / root(dk))
+         with autocast(enabled=False):
+             q, k = query.reshape(-1, q_seq_len, dk), key.transpose(-1, -2).reshape(-1, dk, k_seq_len)
+             attn_weights = torch.baddbmm(attn_weights, q.float(), k.float(), beta=0, alpha=scale_factor)
+             attn_weights = attn_weights.reshape(bsz, num_heads, q_seq_len, k_seq_len)
+
+         if not self.is_cross_attention:
+             # if only "normal" attention layer implements causal mask
+             query_length, key_length = query.size(-2), key.size(-2)
+             causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length]
+             mask_value = torch.finfo(attn_weights.dtype).min
+             # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`.
+             # Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device`
+             mask_value = torch.tensor(mask_value, dtype=attn_weights.dtype).to(attn_weights.device)
+             attn_weights = torch.where(causal_mask, attn_weights, mask_value)
+
+         if attention_mask is not None:
+             # Apply the attention mask
+             attn_weights = attn_weights + attention_mask
+
+         attn_weights = nn.functional.softmax(attn_weights, dim=-1)
+
+         # Downcast (if necessary) back to V's dtype (if in mixed-precision) -- No-Op if otherwise
+         if attn_weights.dtype != torch.float32:
+             raise RuntimeError("Error with upcasting, attn_weights does not have dtype torch.float32")
+         attn_weights = attn_weights.type(value.dtype)
+         attn_weights = self.attn_dropout(attn_weights)
+
+         # Mask heads if we want to
+         if head_mask is not None:
+             attn_weights = attn_weights * head_mask
+
+         attn_output = torch.matmul(attn_weights, value)
+
+         return attn_output, attn_weights
+
+     def _split_heads(self, tensor, num_heads, attn_head_size):
+         """
+         Splits hidden_size dim into attn_head_size and num_heads
+         """
+         new_shape = tensor.size()[:-1] + (num_heads, attn_head_size)
+         tensor = tensor.view(new_shape)
+         return tensor.permute(0, 2, 1, 3)  # (batch, head, seq_length, head_features)
+
+     def _merge_heads(self, tensor, num_heads, attn_head_size):
+         """
+         Merges attn_head_size dim and num_attn_heads dim into hidden_size
+         """
+         tensor = tensor.permute(0, 2, 1, 3).contiguous()
+         new_shape = tensor.size()[:-2] + (num_heads * attn_head_size,)
+         return tensor.view(new_shape)
+
+     def forward(
+         self,
+         hidden_states: Optional[Tuple[torch.FloatTensor]],
+         layer_past: Optional[Tuple[torch.Tensor]] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         head_mask: Optional[torch.FloatTensor] = None,
+         encoder_hidden_states: Optional[torch.Tensor] = None,
+         encoder_attention_mask: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = False,
+         output_attentions: Optional[bool] = False,
+     ) -> Tuple[Union[torch.Tensor, Tuple[torch.Tensor]], ...]:
+         if encoder_hidden_states is not None:
+             if not hasattr(self, "q_attn"):
+                 raise ValueError(
+                     "If class is used as cross attention, the weights `q_attn` have to be defined. "
+                     "Please make sure to instantiate class with `GPT2Attention(..., is_cross_attention=True)`."
+                 )
+
+             query = self.q_attn(hidden_states)
+             key, value = self.c_attn(encoder_hidden_states).split(self.split_size, dim=2)
+             attention_mask = encoder_attention_mask
+         else:
+             query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2)
+
+         query = self._split_heads(query, self.num_heads, self.head_dim)
+         key = self._split_heads(key, self.num_heads, self.head_dim)
+         value = self._split_heads(value, self.num_heads, self.head_dim)
+
+         if layer_past is not None:
+             past_key, past_value = layer_past
+             key = torch.cat((past_key, key), dim=-2)
+             value = torch.cat((past_value, value), dim=-2)
+
+         if use_cache is True:
+             present = (key, value)
+         else:
+             present = None
+
+         if self.reorder_and_upcast_attn:
+             attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask)
+         else:
+             attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
+
+         attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim)
+         attn_output = self.c_proj(attn_output)
+         attn_output = self.resid_dropout(attn_output)
+
+         outputs = (attn_output, present)
+         if output_attentions:
+             outputs += (attn_weights,)
+
+         return outputs  # a, present, (attentions)
+
+
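
As a quick sanity check on `_split_heads`/`_merge_heads` above, here is the shape round trip with the sizes from config.json (n_embd=768, n_head=12, so head_dim=64); this is illustrative only:

```python
import torch

x = torch.randn(2, 10, 768)                        # (batch, seq, hidden)
heads = x.view(2, 10, 12, 64).permute(0, 2, 1, 3)  # (batch, head, seq, head_dim)
merged = heads.permute(0, 2, 1, 3).contiguous().view(2, 10, 768)
assert torch.equal(x, merged)                      # the two ops are exact inverses
```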
+ class GPT2MLP(nn.Module):
+     def __init__(self, intermediate_size, config):
+         super().__init__()
+         embed_dim = config.hidden_size
+         self.c_fc = Conv1D(intermediate_size, embed_dim)
+         self.c_proj = Conv1D(embed_dim, intermediate_size)
+         self.act = ACT2FN[config.activation_function]
+         self.dropout = nn.Dropout(config.resid_pdrop)
+
+     def forward(self, hidden_states: Optional[Tuple[torch.FloatTensor]]) -> torch.FloatTensor:
+         hidden_states = self.c_fc(hidden_states)
+         hidden_states = self.act(hidden_states)
+         hidden_states = self.c_proj(hidden_states)
+         hidden_states = self.dropout(hidden_states)
+         return hidden_states
+
+
+ class GPT2Block(nn.Module):
+     def __init__(self, config, layer_idx=None):
+         super().__init__()
+         hidden_size = config.hidden_size
+         inner_dim = config.n_inner if config.n_inner is not None else 4 * hidden_size
+
+         self.ln_1 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)
+         self.attn = GPT2Attention(config, layer_idx=layer_idx)
+         self.ln_2 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)
+
+         if config.add_cross_attention:
+             self.crossattention = GPT2Attention(config, is_cross_attention=True, layer_idx=layer_idx)
+             self.ln_cross_attn = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)
+
+         self.mlp = GPT2MLP(inner_dim, config)
+
+     def forward(
+         self,
+         hidden_states: Optional[Tuple[torch.FloatTensor]],
+         layer_past: Optional[Tuple[torch.Tensor]] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         head_mask: Optional[torch.FloatTensor] = None,
+         encoder_hidden_states: Optional[torch.Tensor] = None,
+         encoder_attention_mask: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = False,
+         output_attentions: Optional[bool] = False,
+     ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]:
+         residual = hidden_states
+         hidden_states = self.ln_1(hidden_states)
+         attn_outputs = self.attn(
+             hidden_states,
+             layer_past=layer_past,
+             attention_mask=attention_mask,
+             head_mask=head_mask,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+         )
+         attn_output = attn_outputs[0]  # output_attn: a, present, (attentions)
+         outputs = attn_outputs[1:]
+         # residual connection
+         hidden_states = attn_output + residual
+
+         if encoder_hidden_states is not None:
+             # add one self-attention block for cross-attention
+             if not hasattr(self, "crossattention"):
+                 raise ValueError(
+                     f"If `encoder_hidden_states` are passed, {self} has to be instantiated with "
+                     "cross-attention layers by setting `config.add_cross_attention=True`"
+                 )
+             residual = hidden_states
+             hidden_states = self.ln_cross_attn(hidden_states)
+             cross_attn_outputs = self.crossattention(
+                 hidden_states,
+                 attention_mask=attention_mask,
+                 head_mask=head_mask,
+                 encoder_hidden_states=encoder_hidden_states,
+                 encoder_attention_mask=encoder_attention_mask,
+                 output_attentions=output_attentions,
+             )
+             attn_output = cross_attn_outputs[0]
+             # residual connection
+             hidden_states = residual + attn_output
+             outputs = outputs + cross_attn_outputs[2:]  # add cross attentions if we output attention weights
+
+         residual = hidden_states
+         hidden_states = self.ln_2(hidden_states)
+         feed_forward_hidden_states = self.mlp(hidden_states)
+         # residual connection
+         hidden_states = residual + feed_forward_hidden_states
+
+         if use_cache:
+             outputs = (hidden_states,) + outputs
+         else:
+             outputs = (hidden_states,) + outputs[1:]
+
+         return outputs  # hidden_states, present, (attentions, cross_attentions)
+
+
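
Schematically, `GPT2Block.forward` above is the standard pre-LayerNorm residual pattern; a sketch (not the actual class, and ignoring caching and cross-attention):

```python
def pre_ln_block(x, ln_1, attn, ln_2, mlp):
    # Pre-LN residual pattern used by GPT2Block.forward
    x = x + attn(ln_1(x))  # self-attention sublayer
    x = x + mlp(ln_2(x))   # feed-forward sublayer
    return x
```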
+ class GPT2PreTrainedModel(PreTrainedModel):
+     """
+     An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
+     models.
+     """
+
+     config_class = GPT2Config
+     load_tf_weights = load_tf_weights_in_gpt2
+     base_model_prefix = "transformer"
+     is_parallelizable = True
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["GPT2Block"]
+
+     def __init__(self, *inputs, **kwargs):
+         super().__init__(*inputs, **kwargs)
+
+     def _init_weights(self, module):
+         """Initialize the weights."""
+         if isinstance(module, (nn.Linear, Conv1D)):
+             # Slightly different from the TF version which uses truncated_normal for initialization
+             # cf https://github.com/pytorch/pytorch/pull/5617
+             module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+             if module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.Embedding):
+             module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+             if module.padding_idx is not None:
+                 module.weight.data[module.padding_idx].zero_()
+         elif isinstance(module, nn.LayerNorm):
+             module.bias.data.zero_()
+             module.weight.data.fill_(1.0)
+
+         # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+         # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+         # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+         # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+         #
+         # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+         for name, p in module.named_parameters():
+             if name == "c_proj.weight":
+                 # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+                 p.data.normal_(mean=0.0, std=(self.config.initializer_range / math.sqrt(2 * self.config.n_layer)))
+
+     def _set_gradient_checkpointing(self, module, value=False):
+         if isinstance(module, GPT2Model):
+             module.gradient_checkpointing = value
+
+
+ @dataclass
+ class GPT2DoubleHeadsModelOutput(ModelOutput):
+     """
+     Base class for outputs of models predicting if two sentences are consecutive or not.
+
+     Args:
+         loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
+             Language modeling loss.
+         mc_loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `mc_labels` is provided):
+             Multiple choice classification loss.
+         logits (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, config.vocab_size)`):
+             Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
+         mc_logits (`torch.FloatTensor` of shape `(batch_size, num_choices)`):
+             Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
+         past_key_values (`Tuple[Tuple[torch.Tensor]]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
+             Tuple of length `config.n_layers`, containing tuples of tensors of shape `(batch_size, num_heads,
+             sequence_length, embed_size_per_head)`).
+
+             Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
+             `past_key_values` input) to speed up sequential decoding.
+         hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
+             Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
+             shape `(batch_size, sequence_length, hidden_size)`.
+
+             Hidden-states of the model at the output of each layer plus the initial embedding outputs.
+         attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
+             Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
+             sequence_length)`.
+
+             GPT2Attentions weights after the attention softmax, used to compute the weighted average in the
+             self-attention heads.
+     """
+
+     loss: Optional[torch.FloatTensor] = None
+     mc_loss: Optional[torch.FloatTensor] = None
+     logits: torch.FloatTensor = None
+     mc_logits: torch.FloatTensor = None
+     past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
+     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+     attentions: Optional[Tuple[torch.FloatTensor]] = None
+
+
+ GPT2_START_DOCSTRING = r"""
+
+     This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
+     library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
+     etc.)
+
+     This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
+     Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
+     and behavior.
+
+     Parameters:
+         config ([`GPT2Config`]): Model configuration class with all the parameters of the model.
+             Initializing with a config file does not load the weights associated with the model, only the
+             configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
+ """
+
+ GPT2_INPUTS_DOCSTRING = r"""
+     Args:
+         input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`):
+             `input_ids_length` = `sequence_length` if `past_key_values` is `None` else
+             `past_key_values[0][0].shape[-2]` (`sequence_length` of input past key value states). Indices of input
+             sequence tokens in the vocabulary.
+
+             If `past_key_values` is used, only `input_ids` that do not have their past calculated should be passed as
+             `input_ids`.
+
+             Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+             [`PreTrainedTokenizer.__call__`] for details.
+
+             [What are input IDs?](../glossary#input-ids)
+         past_key_values (`Tuple[Tuple[torch.Tensor]]` of length `config.n_layers`):
+             Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
+             `past_key_values` output below). Can be used to speed up sequential decoding. The `input_ids` which have
+             their past given to this model should not be passed as `input_ids` as they have already been computed.
+         attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
+
+             - 1 for tokens that are **not masked**,
+             - 0 for tokens that are **masked**.
+
+             If `past_key_values` is used, `attention_mask` needs to contain the masking strategy that was used for
+             `past_key_values`. In other words, the `attention_mask` always has to have the length:
+             `len(past_key_values) + len(input_ids)`
+
+             [What are attention masks?](../glossary#attention-mask)
+         token_type_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*):
+             Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,
+             1]`:
+
+             - 0 corresponds to a *sentence A* token,
+             - 1 corresponds to a *sentence B* token.
+
+             [What are token type IDs?](../glossary#token-type-ids)
+         position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
+             config.max_position_embeddings - 1]`.
+
+             [What are position IDs?](../glossary#position-ids)
+         head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
+             Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
+
+             - 1 indicates the head is **not masked**,
+             - 0 indicates the head is **masked**.
+
+         inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
+             Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
+             is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
+             model's internal embedding lookup matrix.
+
+             If `past_key_values` is used, optionally only the last `inputs_embeds` have to be input (see
+             `past_key_values`).
+         use_cache (`bool`, *optional*):
+             If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
+             `past_key_values`).
+         output_attentions (`bool`, *optional*):
+             Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
+             tensors for more detail.
+         output_hidden_states (`bool`, *optional*):
+             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
+             more detail.
+         return_dict (`bool`, *optional*):
+             Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
+ """
+ PARALLELIZE_DOCSTRING = r"""
+     This is an experimental feature and is subject to change at a moment's notice.
+
+     Uses a device map to distribute attention modules of the model across several devices. If no device map is given,
+     it will evenly distribute blocks across all devices.
+
+     Args:
+         device_map (`Dict[int, list]`, optional, defaults to None):
+             A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always
+             automatically mapped to the first device (for esoteric reasons). That means that the first device should
+             have fewer attention modules mapped to it than other devices. For reference, the gpt2 models have the
+             following number of attention modules:
+
+                 - gpt2: 12
+                 - gpt2-medium: 24
+                 - gpt2-large: 36
+                 - gpt2-xl: 48
+
+     Example:
+
+     ```python
+     # Here is an example of a device map on a machine with 4 GPUs using gpt2-xl, which has a total of 48 attention modules:
+     model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
+     device_map = {
+         0: [0, 1, 2, 3, 4, 5, 6, 7, 8],
+         1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
+         2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34],
+         3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47],
+     }
+     model.parallelize(device_map)
+     ```
+ """
+ DEPARALLELIZE_DOCSTRING = r"""
+     Moves the model to cpu from a model parallel state.
+
+     Example:
+
+     ```python
+     # On a 4 GPU machine with gpt2-large:
+     model = GPT2LMHeadModel.from_pretrained("gpt2-large")
+     device_map = {
+         0: [0, 1, 2, 3, 4, 5, 6, 7],
+         1: [8, 9, 10, 11, 12, 13, 14, 15],
+         2: [16, 17, 18, 19, 20, 21, 22, 23],
+         3: [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35],
+     }
+     model.parallelize(device_map)  # Splits the model across several devices
+     model.deparallelize()  # Put the model back on cpu and cleans memory by calling torch.cuda.empty_cache()
+     ```
+ """
+
+
+ @add_start_docstrings(
+     "The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.",
+     GPT2_START_DOCSTRING,
+ )
+ class GPT2Model(GPT2PreTrainedModel):
+     _keys_to_ignore_on_load_missing = ["attn.masked_bias"]
+
+     def __init__(self, config):
+         super().__init__(config)
+
+         self.embed_dim = config.hidden_size
+
+         self.wte = nn.Embedding(config.vocab_size, self.embed_dim)
+         self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim)
+
+         self.drop = nn.Dropout(config.embd_pdrop)
+         self.h = nn.ModuleList([GPT2Block(config, layer_idx=i) for i in range(config.num_hidden_layers)])
+         self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon)
+
+         # Model parallel
+         self.model_parallel = False
+         self.device_map = None
+         self.gradient_checkpointing = False
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     @add_start_docstrings(PARALLELIZE_DOCSTRING)
+     def parallelize(self, device_map=None):
+         # Check validity of device_map
+         warnings.warn(
+             "`GPT2Model.parallelize` is deprecated and will be removed in v5 of Transformers, you should load your"
+             " model with `device_map='balanced'` in the call to `from_pretrained`. You can also provide your own"
+             " `device_map` but it needs to be a dictionary module_name to device, so for instance {'h.0': 0, 'h.1': 1,"
+             " ...}",
+             FutureWarning,
+         )
+         self.device_map = (
+             get_device_map(len(self.h), range(torch.cuda.device_count())) if device_map is None else device_map
+         )
+         assert_device_map(self.device_map, len(self.h))
+         self.model_parallel = True
+         self.first_device = "cpu" if "cpu" in self.device_map.keys() else "cuda:" + str(min(self.device_map.keys()))
+         self.last_device = "cuda:" + str(max(self.device_map.keys()))
+         self.wte = self.wte.to(self.first_device)
+         self.wpe = self.wpe.to(self.first_device)
+         # Load onto devices
+         for k, v in self.device_map.items():
+             for block in v:
+                 cuda_device = "cuda:" + str(k)
+                 self.h[block] = self.h[block].to(cuda_device)
+         # ln_f to last
+         self.ln_f = self.ln_f.to(self.last_device)
+
+     @add_start_docstrings(DEPARALLELIZE_DOCSTRING)
+     def deparallelize(self):
+         warnings.warn(
+             "Like `parallelize`, `deparallelize` is deprecated and will be removed in v5 of Transformers.",
+             FutureWarning,
+         )
+         self.model_parallel = False
+         self.device_map = None
+         self.first_device = "cpu"
+         self.last_device = "cpu"
+         self.wte = self.wte.to("cpu")
+         self.wpe = self.wpe.to("cpu")
+         for index in range(len(self.h)):
+             self.h[index] = self.h[index].to("cpu")
+         self.ln_f = self.ln_f.to("cpu")
+         torch.cuda.empty_cache()
+
+     def get_input_embeddings(self):
+         return self.wte
+
+     def set_input_embeddings(self, new_embeddings):
+         self.wte = new_embeddings
+
+     def _prune_heads(self, heads_to_prune):
+         """
+         Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer}
+         """
+         for layer, heads in heads_to_prune.items():
+             self.h[layer].attn.prune_heads(heads)
+
+     @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING)
+     @add_code_sample_docstrings(
+         checkpoint=_CHECKPOINT_FOR_DOC,
+         output_type=BaseModelOutputWithPastAndCrossAttentions,
+         config_class=_CONFIG_FOR_DOC,
+     )
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         token_type_ids: Optional[torch.LongTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         head_mask: Optional[torch.FloatTensor] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         encoder_hidden_states: Optional[torch.Tensor] = None,
+         encoder_attention_mask: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if input_ids is not None and inputs_embeds is not None:
+             raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+         elif input_ids is not None:
+             input_shape = input_ids.size()
+             input_ids = input_ids.view(-1, input_shape[-1])
+             batch_size = input_ids.shape[0]
+         elif inputs_embeds is not None:
+             input_shape = inputs_embeds.size()[:-1]
+             batch_size = inputs_embeds.shape[0]
+         else:
+             raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+         device = input_ids.device if input_ids is not None else inputs_embeds.device
+
+         if token_type_ids is not None:
+             token_type_ids = token_type_ids.view(-1, input_shape[-1])
+         if position_ids is not None:
+             position_ids = position_ids.view(-1, input_shape[-1])
+
+         if past_key_values is None:
+             past_length = 0
+             past_key_values = tuple([None] * len(self.h))
+         else:
+             past_length = past_key_values[0][0].size(-2)
+         if position_ids is None:
+             position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device)
+             position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
+
+         # GPT2Attention mask.
+         if attention_mask is not None:
+             if batch_size <= 0:
+                 raise ValueError("batch_size has to be defined and > 0")
+             attention_mask = attention_mask.view(batch_size, -1)
+             # We create a 3D attention mask from a 2D tensor mask.
+             # Sizes are [batch_size, 1, 1, to_seq_length]
+             # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length]
+             # this attention mask is more simple than the triangular masking of causal attention
+             # used in OpenAI GPT, we just need to prepare the broadcast dimension here.
+             attention_mask = attention_mask[:, None, None, :]
+
+             # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
+             # masked positions, this operation will create a tensor which is 0.0 for
+             # positions we want to attend and the dtype's smallest value for masked positions.
+             # Since we are adding it to the raw scores before the softmax, this is
+             # effectively the same as removing these entirely.
+             attention_mask = attention_mask.to(dtype=self.dtype)  # fp16 compatibility
+             attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min
+
+         # If a 2D or 3D attention mask is provided for the cross-attention
+         # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
+         if self.config.add_cross_attention and encoder_hidden_states is not None:
+             encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
+             encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
+             if encoder_attention_mask is None:
+                 encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
+             encoder_attention_mask = self.invert_attention_mask(encoder_attention_mask)
+         else:
+             encoder_attention_mask = None
+
+         # Prepare head mask if needed
+         # 1.0 in head_mask indicate we keep the head
+         # attention_probs has shape bsz x n_heads x N x N
+         # head_mask has shape n_layer x batch x n_heads x N x N
+         head_mask = self.get_head_mask(head_mask, self.config.n_layer)
+
+         if inputs_embeds is None:
+             inputs_embeds = self.wte(input_ids)
+         position_embeds = self.wpe(position_ids)
+         hidden_states = inputs_embeds + position_embeds
+
+         if token_type_ids is not None:
+             token_type_embeds = self.wte(token_type_ids)
+             hidden_states = hidden_states + token_type_embeds
+
+         hidden_states = self.drop(hidden_states)
+
+         output_shape = input_shape + (hidden_states.size(-1),)
+
+         if self.gradient_checkpointing and self.training:
+             if use_cache:
+                 logger.warning_once(
+                     "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
+                 )
+                 use_cache = False
+
+         presents = () if use_cache else None
+         all_self_attentions = () if output_attentions else None
+         all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None
+         all_hidden_states = () if output_hidden_states else None
+         for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)):
+             # Model parallel
+             if self.model_parallel:
+                 torch.cuda.set_device(hidden_states.device)
+                 # Ensure layer_past is on same device as hidden_states (might not be correct)
+                 if layer_past is not None:
+                     layer_past = tuple(past_state.to(hidden_states.device) for past_state in layer_past)
+                 # Ensure that attention_mask is always on the same device as hidden_states
+                 if attention_mask is not None:
+                     attention_mask = attention_mask.to(hidden_states.device)
+                 if isinstance(head_mask, torch.Tensor):
+                     head_mask = head_mask.to(hidden_states.device)
+             if output_hidden_states:
+                 all_hidden_states = all_hidden_states + (hidden_states,)
+
+             if self.gradient_checkpointing and self.training:
+
+                 def create_custom_forward(module):
+                     def custom_forward(*inputs):
+                         # None for past_key_value
+                         return module(*inputs, use_cache, output_attentions)
+
+                     return custom_forward
+
+                 outputs = torch.utils.checkpoint.checkpoint(
+                     create_custom_forward(block),
+                     hidden_states,
+                     None,
+                     attention_mask,
+                     head_mask[i],
+                     encoder_hidden_states,
+                     encoder_attention_mask,
+                 )
+             else:
+                 outputs = block(
+                     hidden_states,
+                     layer_past=layer_past,
+                     attention_mask=attention_mask,
+                     head_mask=head_mask[i],
+                     encoder_hidden_states=encoder_hidden_states,
+                     encoder_attention_mask=encoder_attention_mask,
+                     use_cache=use_cache,
+                     output_attentions=output_attentions,
+                 )
+
+             hidden_states = outputs[0]
+             if use_cache is True:
+                 presents = presents + (outputs[1],)
+
+             if output_attentions:
+                 all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],)
+                 if self.config.add_cross_attention:
+                     all_cross_attentions = all_cross_attentions + (outputs[3 if use_cache else 2],)
+
+             # Model Parallel: If it's the last layer for that device, put things on the next device
+             if self.model_parallel:
+                 for k, v in self.device_map.items():
+                     if i == v[-1] and "cuda:" + str(k) != self.last_device:
+                         hidden_states = hidden_states.to("cuda:" + str(k + 1))
+
+         hidden_states = self.ln_f(hidden_states)
+
+         hidden_states = hidden_states.view(output_shape)
+         # Add last hidden state
+         if output_hidden_states:
+             all_hidden_states = all_hidden_states + (hidden_states,)
+
+         if not return_dict:
+             return tuple(
+                 v
+                 for v in [hidden_states, presents, all_hidden_states, all_self_attentions, all_cross_attentions]
+                 if v is not None
+             )
+
+         return BaseModelOutputWithPastAndCrossAttentions(
+             last_hidden_state=hidden_states,
+             past_key_values=presents,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attentions,
+             cross_attentions=all_cross_attentions,
+         )
+
+
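
A worked example of the mask arithmetic in `GPT2Model.forward` above, which turns a 2D padding mask into an additive bias (a sketch in float32; the model uses `self.dtype`):

```python
import torch

attention_mask = torch.tensor([[1, 1, 0]])  # 1 = attend, 0 = padding
bias = (1.0 - attention_mask[:, None, None, :].float()) * torch.finfo(torch.float32).min
# bias is 0.0 at attended positions and ~-3.4e38 at padded ones; added to the
# attention scores before softmax, padded positions get ~zero probability.
print(bias.shape)  # torch.Size([1, 1, 1, 3])
```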
+ @add_start_docstrings(
+     """
+     The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input
+     embeddings).
+     """,
+     GPT2_START_DOCSTRING,
+ )
+ class GPT2LMHeadModel(GPT2PreTrainedModel):
+     _keys_to_ignore_on_load_missing = [r"attn.masked_bias", r"attn.bias", r"lm_head.weight"]
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.transformer = GPT2Model(config)
+         self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
+
+         # Model parallel
+         self.model_parallel = False
+         self.device_map = None
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     @add_start_docstrings(PARALLELIZE_DOCSTRING)
+     def parallelize(self, device_map=None):
+         warnings.warn(
+             "`GPT2LMHeadModel.parallelize` is deprecated and will be removed in v5 of Transformers, you should load"
+             " your model with `device_map='balanced'` in the call to `from_pretrained`. You can also provide your own"
+             " `device_map` but it needs to be a dictionary module_name to device, so for instance {'transformer.h.0':"
+             " 0, 'transformer.h.1': 1, ...}",
+             FutureWarning,
+         )
+         self.device_map = (
+             get_device_map(len(self.transformer.h), range(torch.cuda.device_count()))
+             if device_map is None
+             else device_map
+         )
+         assert_device_map(self.device_map, len(self.transformer.h))
+         self.transformer.parallelize(self.device_map)
+         self.lm_head = self.lm_head.to(self.transformer.first_device)
+         self.model_parallel = True
+
+     @add_start_docstrings(DEPARALLELIZE_DOCSTRING)
+     def deparallelize(self):
+         warnings.warn(
+             "Like `parallelize`, `deparallelize` is deprecated and will be removed in v5 of Transformers.",
+             FutureWarning,
+         )
+         self.transformer.deparallelize()
+         self.transformer = self.transformer.to("cpu")
+         self.lm_head = self.lm_head.to("cpu")
+         self.model_parallel = False
+         torch.cuda.empty_cache()
+
+     def get_output_embeddings(self):
+         return self.lm_head
+
+     def set_output_embeddings(self, new_embeddings):
+         self.lm_head = new_embeddings
+
+     def prepare_inputs_for_generation(self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs):
+         token_type_ids = kwargs.get("token_type_ids", None)
+         # only last token for inputs_ids if past is defined in kwargs
+         if past_key_values:
+             input_ids = input_ids[:, -1].unsqueeze(-1)
+             if token_type_ids is not None:
+                 token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
+
+         attention_mask = kwargs.get("attention_mask", None)
+         position_ids = kwargs.get("position_ids", None)
+
+         if attention_mask is not None and position_ids is None:
+             # create position_ids on the fly for batch generation
+             position_ids = attention_mask.long().cumsum(-1) - 1
+             position_ids.masked_fill_(attention_mask == 0, 1)
+             if past_key_values:
+                 position_ids = position_ids[:, -1].unsqueeze(-1)
+         else:
+             position_ids = None
+
+         # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+         if inputs_embeds is not None and past_key_values is None:
+             model_inputs = {"inputs_embeds": inputs_embeds}
+         else:
+             model_inputs = {"input_ids": input_ids}
+
+         model_inputs.update(
+             {
+                 "past_key_values": past_key_values,
+                 "use_cache": kwargs.get("use_cache"),
+                 "position_ids": position_ids,
+                 "attention_mask": attention_mask,
+                 "token_type_ids": token_type_ids,
+             }
+         )
+         return model_inputs
+
+     @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING)
+     @add_code_sample_docstrings(
+         checkpoint=_CHECKPOINT_FOR_DOC,
+         output_type=CausalLMOutputWithCrossAttentions,
+         config_class=_CONFIG_FOR_DOC,
+     )
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         token_type_ids: Optional[torch.LongTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         head_mask: Optional[torch.FloatTensor] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         encoder_hidden_states: Optional[torch.Tensor] = None,
+         encoder_attention_mask: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
+             `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to `-100`
+             are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size]`.
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         transformer_outputs = self.transformer(
+             input_ids,
+             past_key_values=past_key_values,
+             attention_mask=attention_mask,
+             token_type_ids=token_type_ids,
+             position_ids=position_ids,
+             head_mask=head_mask,
+             inputs_embeds=inputs_embeds,
+             encoder_hidden_states=encoder_hidden_states,
+             encoder_attention_mask=encoder_attention_mask,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+         hidden_states = transformer_outputs[0]
+
+         # Set device for model parallelism
+         if self.model_parallel:
+             torch.cuda.set_device(self.transformer.first_device)
+             hidden_states = hidden_states.to(self.lm_head.weight.device)
+
+         lm_logits = self.lm_head(hidden_states)
+
+         loss = None
+         if labels is not None:
+             # move labels to correct device to enable model parallelism
+             labels = labels.to(lm_logits.device)
+             # Shift so that tokens < n predict n
+             shift_logits = lm_logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+             # Flatten the tokens
+             loss_fct = CrossEntropyLoss()
+             loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
+
+         if not return_dict:
+             output = (lm_logits,) + transformer_outputs[1:]
+             return ((loss,) + output) if loss is not None else output
+
+         return CausalLMOutputWithCrossAttentions(
+             loss=loss,
+             logits=lm_logits,
+             past_key_values=transformer_outputs.past_key_values,
+             hidden_states=transformer_outputs.hidden_states,
+             attentions=transformer_outputs.attentions,
+             cross_attentions=transformer_outputs.cross_attentions,
+         )
+
+     @staticmethod
+     def _reorder_cache(
+         past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
+     ) -> Tuple[Tuple[torch.Tensor]]:
+         """
+         This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
+         [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
+         beam_idx at every generation step.
+         """
+         return tuple(
+             tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past)
+             for layer_past in past_key_values
+         )
+
+
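
The `position_ids` trick in `prepare_inputs_for_generation` above, worked through for a left-padded batch (an illustrative sketch):

```python
import torch

attention_mask = torch.tensor([[0, 0, 1, 1, 1]])     # two pad tokens on the left
position_ids = attention_mask.long().cumsum(-1) - 1  # tensor([[-1, -1, 0, 1, 2]])
position_ids.masked_fill_(attention_mask == 0, 1)    # tensor([[ 1,  1, 0, 1, 2]])
# Real tokens get positions 0, 1, 2; padded slots get a harmless dummy index (1),
# which is never attended to because of the attention mask.
```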
1140
+ @add_start_docstrings(
1141
+ """
1142
+ The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top e.g. for
1143
+ RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the
1144
+ input embeddings, the classification head takes as input the input of a specified classification token index in the
1145
+ input sequence).
1146
+ """,
1147
+ GPT2_START_DOCSTRING,
1148
+ )
1149
+ class GPT2DoubleHeadsModel(GPT2PreTrainedModel):
1150
+ _keys_to_ignore_on_load_missing = [r".*attn.masked_bias", r".*attn.bias", r"lm_head.weight"]
1151
+
1152
+ def __init__(self, config):
1153
+ super().__init__(config)
1154
+ config.num_labels = 1
1155
+ self.transformer = GPT2Model(config)
1156
+ self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
1157
+ self.multiple_choice_head = SequenceSummary(config)
1158
+
1159
+ # Model parallel
1160
+ self.model_parallel = False
1161
+ self.device_map = None
1162
+
1163
+ # Initialize weights and apply final processing
1164
+ self.post_init()
1165
+
1166
+ @add_start_docstrings(PARALLELIZE_DOCSTRING)
1167
+ def parallelize(self, device_map=None):
1168
+ warnings.warn(
1169
+ "`GPT2DoubleHeadsModel.parallelize` is deprecated and will be removed in v5 of Transformers, you should"
1170
+ " load your model with `device_map='balanced'` in the call to `from_pretrained`. You can also provide your"
1171
+ " own `device_map` but it needs to be a dictionary module_name to device, so for instance"
1172
+ " {'transformer.h.0': 0, 'transformer.h.1': 1, ...}",
1173
+ FutureWarning,
1174
+ )
1175
+ self.device_map = (
1176
+ get_device_map(len(self.transformer.h), range(torch.cuda.device_count()))
1177
+ if device_map is None
1178
+ else device_map
1179
+ )
1180
+ assert_device_map(self.device_map, len(self.transformer.h))
1181
+ self.transformer.parallelize(self.device_map)
1182
+ self.lm_head = self.lm_head.to(self.transformer.first_device)
1183
+ self.multiple_choice_head = self.multiple_choice_head.to(self.transformer.first_device)
1184
+ self.model_parallel = True
1185
+
1186
+ @add_start_docstrings(DEPARALLELIZE_DOCSTRING)
1187
+ def deparallelize(self):
1188
+ warnings.warn(
1189
+ "Like `parallelize`, `deparallelize` is deprecated and will be removed in v5 of Transformers.",
1190
+ FutureWarning,
1191
+ )
1192
+ self.transformer.deparallelize()
1193
+ self.transformer = self.transformer.to("cpu")
1194
+ self.lm_head = self.lm_head.to("cpu")
1195
+ self.multiple_choice_head = self.multiple_choice_head.to("cpu")
1196
+ self.model_parallel = False
1197
+ torch.cuda.empty_cache()
1198
+
1199
+ def get_output_embeddings(self):
1200
+ return self.lm_head
1201
+
1202
+ def set_output_embeddings(self, new_embeddings):
1203
+ self.lm_head = new_embeddings
1204
+
1205
+ def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
1206
+ token_type_ids = kwargs.get("token_type_ids", None)
1207
+ # only last token for inputs_ids if past is defined in kwargs
1208
+ if past_key_values:
1209
+ input_ids = input_ids[:, -1].unsqueeze(-1)
1210
+ if token_type_ids is not None:
1211
+ token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
1212
+
1213
+ attention_mask = kwargs.get("attention_mask", None)
1214
+ position_ids = kwargs.get("position_ids", None)
1215
+
1216
+ if attention_mask is not None and position_ids is None:
1217
+ # create position_ids on the fly for batch generation
1218
+ position_ids = attention_mask.long().cumsum(-1) - 1
1219
+ position_ids.masked_fill_(attention_mask == 0, 1)
1220
+ if past_key_values:
1221
+ position_ids = position_ids[:, -1].unsqueeze(-1)
1222
+ else:
1223
+ position_ids = None
1224
+
1225
+ return {
1226
+ "input_ids": input_ids,
1227
+ "past_key_values": past_key_values,
1228
+ "use_cache": kwargs.get("use_cache"),
1229
+ "position_ids": position_ids,
1230
+ "attention_mask": attention_mask,
1231
+ "token_type_ids": token_type_ids,
1232
+ }
1233
+
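The on-the-fly `position_ids` construction above is worth unpacking: a cumulative sum over the attention mask gives consecutive positions to real tokens, while left-padding positions are filled with a dummy value. A small sketch:

```python
# Sketch: how prepare_inputs_for_generation derives position_ids from a
# left-padded attention mask.
import torch

attention_mask = torch.tensor([[0, 0, 1, 1, 1],
                               [1, 1, 1, 1, 1]])
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
print(position_ids)
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
```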
1234
+ @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING)
1235
+ @replace_return_docstrings(output_type=GPT2DoubleHeadsModelOutput, config_class=_CONFIG_FOR_DOC)
1236
+ def forward(
1237
+ self,
1238
+ input_ids: Optional[torch.LongTensor] = None,
1239
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1240
+ attention_mask: Optional[torch.FloatTensor] = None,
1241
+ token_type_ids: Optional[torch.LongTensor] = None,
1242
+ position_ids: Optional[torch.LongTensor] = None,
1243
+ head_mask: Optional[torch.FloatTensor] = None,
1244
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1245
+ mc_token_ids: Optional[torch.LongTensor] = None,
1246
+ labels: Optional[torch.LongTensor] = None,
1247
+ mc_labels: Optional[torch.LongTensor] = None,
1248
+ use_cache: Optional[bool] = None,
1249
+ output_attentions: Optional[bool] = None,
1250
+ output_hidden_states: Optional[bool] = None,
1251
+ return_dict: Optional[bool] = None,
1252
+ **kwargs,
1253
+ ) -> Union[Tuple, GPT2DoubleHeadsModelOutput]:
1254
+ r"""
1255
+ mc_token_ids (`torch.LongTensor` of shape `(batch_size, num_choices)`, *optional*, defaults to index of the last token of the input):
1256
+ Index of the classification token in each input sequence. Selected in the range `[0, input_ids.size(-1) -
1257
+ 1]`.
1258
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1259
+ Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
1260
+ `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to
1261
+ `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size - 1]`
1262
+ mc_labels (`torch.LongTensor` of shape `(batch_size)`, *optional*):
1263
+ Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices - 1]`
1264
+ where *num_choices* is the size of the second dimension of the input tensors. (see *input_ids* above)
1265
+
1266
+ Return:
1267
+
1268
+ Example:
1269
+
1270
+ ```python
1271
+ >>> import torch
1272
+ >>> from transformers import AutoTokenizer, GPT2DoubleHeadsModel
1273
+
1274
+ >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
1275
+ >>> model = GPT2DoubleHeadsModel.from_pretrained("gpt2")
1276
+
1277
+ >>> # Add a [CLS] to the vocabulary (we should train it also!)
1278
+ >>> num_added_tokens = tokenizer.add_special_tokens({"cls_token": "[CLS]"})
1279
+ >>> # Update the model embeddings with the new vocabulary size
1280
+ >>> embedding_layer = model.resize_token_embeddings(len(tokenizer))
1281
+
1282
+ >>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
1283
+ >>> encoded_choices = [tokenizer.encode(s) for s in choices]
1284
+ >>> cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices]
1285
+
1286
+ >>> input_ids = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 2
1287
+ >>> mc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1
1288
+
1289
+ >>> outputs = model(input_ids, mc_token_ids=mc_token_ids)
1290
+ >>> lm_logits = outputs.logits
1291
+ >>> mc_logits = outputs.mc_logits
1292
+ ```"""
1293
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1294
+
1295
+ transformer_outputs = self.transformer(
1296
+ input_ids,
1297
+ past_key_values=past_key_values,
1298
+ attention_mask=attention_mask,
1299
+ token_type_ids=token_type_ids,
1300
+ position_ids=position_ids,
1301
+ head_mask=head_mask,
1302
+ inputs_embeds=inputs_embeds,
1303
+ use_cache=use_cache,
1304
+ output_attentions=output_attentions,
1305
+ output_hidden_states=output_hidden_states,
1306
+ return_dict=return_dict,
1307
+ )
1308
+
1309
+ hidden_states = transformer_outputs[0]
1310
+
1311
+ # Set device for model parallelism
1312
+ if self.model_parallel:
1313
+ torch.cuda.set_device(self.transformer.first_device)
1314
+ hidden_states = hidden_states.to(self.lm_head.weight.device)
1315
+
1316
+ lm_logits = self.lm_head(hidden_states)
1317
+ mc_logits = self.multiple_choice_head(hidden_states, mc_token_ids).squeeze(-1)
1318
+
1319
+ mc_loss = None
1320
+ if mc_labels is not None:
1321
+ loss_fct = CrossEntropyLoss()
1322
+ mc_loss = loss_fct(mc_logits.view(-1, mc_logits.size(-1)), mc_labels.view(-1))
1323
+ lm_loss = None
1324
+ if labels is not None:
1325
+ labels = labels.to(lm_logits.device)
1326
+ shift_logits = lm_logits[..., :-1, :].contiguous()
1327
+ shift_labels = labels[..., 1:].contiguous()
1328
+ loss_fct = CrossEntropyLoss()
1329
+ lm_loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
1330
+
1331
+ if not return_dict:
1332
+ output = (lm_logits, mc_logits) + transformer_outputs[1:]
1333
+ if mc_loss is not None:
1334
+ output = (mc_loss,) + output
1335
+ return ((lm_loss,) + output) if lm_loss is not None else output
1336
+
1337
+ return GPT2DoubleHeadsModelOutput(
1338
+ loss=lm_loss,
1339
+ mc_loss=mc_loss,
1340
+ logits=lm_logits,
1341
+ mc_logits=mc_logits,
1342
+ past_key_values=transformer_outputs.past_key_values,
1343
+ hidden_states=transformer_outputs.hidden_states,
1344
+ attentions=transformer_outputs.attentions,
1345
+ )
1346
+
1347
+ @staticmethod
1348
+ def _reorder_cache(
1349
+ past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
1350
+ ) -> Tuple[Tuple[torch.Tensor]]:
1351
+ """
1352
+ This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
1353
+ [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
1354
+ beam_idx at every generation step.
1355
+ """
1356
+ return tuple(
1357
+ tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past)
1358
+ for layer_past in past_key_values
1359
+ )
1360
+
1361
+
1362
+ @add_start_docstrings(
1363
+ """
1364
+ The GPT2 Model transformer with a sequence classification head on top (linear layer).
1365
+
1366
+ [`GPT2ForSequenceClassification`] uses the last token in order to do the classification, as other causal models
1367
+ (e.g. GPT-1) do.
1368
+
1369
+ Since it does classification on the last token, it requires to know the position of the last token. If a
1370
+ `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
1371
+ no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
1372
+ padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
1373
+ each row of the batch).
1374
+ """,
1375
+ GPT2_START_DOCSTRING,
1376
+ )
1377
+ class GPT2ForSequenceClassification(GPT2PreTrainedModel):
1378
+ _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.masked_bias", r"lm_head.weight"]
1379
+
1380
+ def __init__(self, config):
1381
+ super().__init__(config)
1382
+ self.num_labels = config.num_labels
1383
+ self.transformer = GPT2Model(config)
1384
+ self.score = nn.Linear(config.n_embd, self.num_labels, bias=False)
1385
+
1386
+ # Model parallel
1387
+ self.model_parallel = False
1388
+ self.device_map = None
1389
+
1390
+ # Initialize weights and apply final processing
1391
+ self.post_init()
1392
+
1393
+ @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING)
1394
+ @add_code_sample_docstrings(
1395
+ checkpoint="microsoft/DialogRPT-updown",
1396
+ output_type=SequenceClassifierOutputWithPast,
1397
+ config_class=_CONFIG_FOR_DOC,
1398
+ )
1399
+ def forward(
1400
+ self,
1401
+ input_ids: Optional[torch.LongTensor] = None,
1402
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1403
+ attention_mask: Optional[torch.FloatTensor] = None,
1404
+ token_type_ids: Optional[torch.LongTensor] = None,
1405
+ position_ids: Optional[torch.LongTensor] = None,
1406
+ head_mask: Optional[torch.FloatTensor] = None,
1407
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1408
+ labels: Optional[torch.LongTensor] = None,
1409
+ use_cache: Optional[bool] = None,
1410
+ output_attentions: Optional[bool] = None,
1411
+ output_hidden_states: Optional[bool] = None,
1412
+ return_dict: Optional[bool] = None,
1413
+ ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
1414
+ r"""
1415
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1416
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
1417
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss). If
1418
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
1419
+ """
1420
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1421
+
1422
+ transformer_outputs = self.transformer(
1423
+ input_ids,
1424
+ past_key_values=past_key_values,
1425
+ attention_mask=attention_mask,
1426
+ token_type_ids=token_type_ids,
1427
+ position_ids=position_ids,
1428
+ head_mask=head_mask,
1429
+ inputs_embeds=inputs_embeds,
1430
+ use_cache=use_cache,
1431
+ output_attentions=output_attentions,
1432
+ output_hidden_states=output_hidden_states,
1433
+ return_dict=return_dict,
1434
+ )
1435
+ hidden_states = transformer_outputs[0]
1436
+ logits = self.score(hidden_states)
1437
+
1438
+ if input_ids is not None:
1439
+ batch_size, sequence_length = input_ids.shape[:2]
1440
+ else:
1441
+ batch_size, sequence_length = inputs_embeds.shape[:2]
1442
+
1443
+ assert (
1444
+ self.config.pad_token_id is not None or batch_size == 1
1445
+ ), "Cannot handle batch sizes > 1 if no padding token is defined."
1446
+ if self.config.pad_token_id is None:
1447
+ sequence_lengths = -1
1448
+ else:
1449
+ if input_ids is not None:
1450
+ sequence_lengths = (torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1).to(logits.device)
1451
+ else:
1452
+ sequence_lengths = -1
1453
+ logger.warning(
1454
+ f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be "
1455
+ "unexpected if using padding tokens in conjunction with `inputs_embeds.`"
1456
+ )
1457
+
1458
+ pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
1459
+
1460
+ loss = None
1461
+ if labels is not None:
1462
+ if self.config.problem_type is None:
1463
+ if self.num_labels == 1:
1464
+ self.config.problem_type = "regression"
1465
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
1466
+ self.config.problem_type = "single_label_classification"
1467
+ else:
1468
+ self.config.problem_type = "multi_label_classification"
1469
+
1470
+ if self.config.problem_type == "regression":
1471
+ loss_fct = MSELoss()
1472
+ if self.num_labels == 1:
1473
+ loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
1474
+ else:
1475
+ loss = loss_fct(pooled_logits, labels)
1476
+ elif self.config.problem_type == "single_label_classification":
1477
+ loss_fct = CrossEntropyLoss()
1478
+ loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
1479
+ elif self.config.problem_type == "multi_label_classification":
1480
+ loss_fct = BCEWithLogitsLoss()
1481
+ loss = loss_fct(pooled_logits, labels)
1482
+ if not return_dict:
1483
+ output = (pooled_logits,) + transformer_outputs[1:]
1484
+ return ((loss,) + output) if loss is not None else output
1485
+
1486
+ return SequenceClassifierOutputWithPast(
1487
+ loss=loss,
1488
+ logits=pooled_logits,
1489
+ past_key_values=transformer_outputs.past_key_values,
1490
+ hidden_states=transformer_outputs.hidden_states,
1491
+ attentions=transformer_outputs.attentions,
1492
+ )
1493
+
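The pooling logic above (find each row's last non-padding token, then gather the logits at that position) can be sketched independently of the model; `pad_token_id = 0` is an arbitrary choice for illustration:

```python
# Sketch: pooling per-token logits at each sequence's last non-padding
# position, as GPT2ForSequenceClassification does.
import torch

pad_token_id = 0
input_ids = torch.tensor([[5, 7, 9, 0, 0],
                          [3, 4, 6, 8, 2]])
logits = torch.randn(2, 5, 3)  # (batch, seq, num_labels)

sequence_lengths = torch.ne(input_ids, pad_token_id).sum(-1) - 1  # [2, 4]
pooled_logits = logits[torch.arange(2), sequence_lengths]
assert torch.equal(pooled_logits[0], logits[0, 2])
assert torch.equal(pooled_logits[1], logits[1, 4])
```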
1494
+
1495
+ @add_start_docstrings(
1496
+ """
1497
+ GPT2 Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
1498
+ Named-Entity-Recognition (NER) tasks.
1499
+ """,
1500
+ GPT2_START_DOCSTRING,
1501
+ )
1502
+ class GPT2ForTokenClassification(GPT2PreTrainedModel):
1503
+ def __init__(self, config):
1504
+ super().__init__(config)
1505
+ self.num_labels = config.num_labels
1506
+
1507
+ self.transformer = GPT2Model(config)
1508
+ if hasattr(config, "classifier_dropout") and config.classifier_dropout is not None:
1509
+ classifier_dropout = config.classifier_dropout
1510
+ elif hasattr(config, "hidden_dropout") and config.hidden_dropout is not None:
1511
+ classifier_dropout = config.hidden_dropout
1512
+ else:
1513
+ classifier_dropout = 0.1
1514
+ self.dropout = nn.Dropout(classifier_dropout)
1515
+ self.classifier = nn.Linear(config.hidden_size, config.num_labels)
1516
+
1517
+ # Model parallel
1518
+ self.model_parallel = False
1519
+ self.device_map = None
1520
+
1521
+ # Initialize weights and apply final processing
1522
+ self.post_init()
1523
+
1524
+ @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING)
1525
+ # fmt: off
1526
+ @add_code_sample_docstrings(
1527
+ checkpoint="brad1141/gpt2-finetuned-comp2",
1528
+ output_type=TokenClassifierOutput,
1529
+ config_class=_CONFIG_FOR_DOC,
1530
+ expected_loss=0.25,
1531
+ expected_output=["Lead", "Lead", "Lead", "Position", "Lead", "Lead", "Lead", "Lead", "Lead", "Lead", "Lead", "Lead"],
1532
+ )
1533
+ # fmt: on
1534
+ def forward(
1535
+ self,
1536
+ input_ids: Optional[torch.LongTensor] = None,
1537
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1538
+ attention_mask: Optional[torch.FloatTensor] = None,
1539
+ token_type_ids: Optional[torch.LongTensor] = None,
1540
+ position_ids: Optional[torch.LongTensor] = None,
1541
+ head_mask: Optional[torch.FloatTensor] = None,
1542
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1543
+ labels: Optional[torch.LongTensor] = None,
1544
+ use_cache: Optional[bool] = None,
1545
+ output_attentions: Optional[bool] = None,
1546
+ output_hidden_states: Optional[bool] = None,
1547
+ return_dict: Optional[bool] = None,
1548
+ ) -> Union[Tuple, TokenClassifierOutput]:
1549
+ r"""
1550
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1551
+ Labels for computing the token classification loss. Indices should be in `[0, ...,
1552
+ config.num_labels - 1]`. All labels set to `-100` are ignored (masked); the loss is only
1553
+ computed for labels in `[0, ..., config.num_labels - 1]`.
1554
+ """
1555
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1556
+
1557
+ transformer_outputs = self.transformer(
1558
+ input_ids,
1559
+ past_key_values=past_key_values,
1560
+ attention_mask=attention_mask,
1561
+ token_type_ids=token_type_ids,
1562
+ position_ids=position_ids,
1563
+ head_mask=head_mask,
1564
+ inputs_embeds=inputs_embeds,
1565
+ use_cache=use_cache,
1566
+ output_attentions=output_attentions,
1567
+ output_hidden_states=output_hidden_states,
1568
+ return_dict=return_dict,
1569
+ )
1570
+
1571
+ hidden_states = transformer_outputs[0]
1572
+ hidden_states = self.dropout(hidden_states)
1573
+ logits = self.classifier(hidden_states)
1574
+
1575
+ loss = None
1576
+ if labels is not None:
1577
+ labels = labels.to(logits.device)
1578
+ loss_fct = CrossEntropyLoss()
1579
+ loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
1580
+
1581
+ if not return_dict:
1582
+ output = (logits,) + transformer_outputs[2:]
1583
+ return ((loss,) + output) if loss is not None else output
1584
+
1585
+ return TokenClassifierOutput(
1586
+ loss=loss,
1587
+ logits=logits,
1588
+ hidden_states=transformer_outputs.hidden_states,
1589
+ attentions=transformer_outputs.attentions,
1590
+ )
1591
+
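Because the head uses a plain `CrossEntropyLoss`, label positions set to `-100` (its default `ignore_index`) drop out of the token-classification loss. A quick sketch of the flattened loss computed in `forward` above:

```python
# Sketch: per-token cross-entropy with -100-masked positions, matching the
# loss in GPT2ForTokenClassification.forward.
import torch
from torch.nn import CrossEntropyLoss

num_labels = 3
logits = torch.randn(1, 4, num_labels)   # (batch, seq, num_labels)
labels = torch.tensor([[2, -100, 0, 1]]) # -100 positions are ignored

loss = CrossEntropyLoss()(logits.view(-1, num_labels), labels.view(-1))
print(loss)  # averaged over the three unmasked tokens only
```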
1592
+ ### Backpack-Specific
1593
+ class BackpackGPT2PreTrainedModel(GPT2PreTrainedModel):
1594
+ """
1595
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
1596
+ models.
1597
+ """
1598
+ _keys_to_ignore_on_load_missing = [r"attn.masked_bias", r"attn.bias"]
1599
+
1600
+ config_class = BackpackGPT2Config
1601
+ base_model_prefix = "backpack"
1602
+ is_parallelizable = True
1603
+ supports_gradient_checkpointing = False
1604
+ _no_split_modules = ["GPT2Block", "BackpackNoMixBlock"]
1605
+
1606
+ def __init__(self, *inputs, **kwargs):
1607
+ super().__init__(*inputs, **kwargs)
1608
+
1609
+ class BackpackMLP(nn.Module):
1610
+
1611
+ def __init__(self, embed_dim, intermediate_dim, out_dim, config):
1612
+ super().__init__()
1613
+ self.c_fc = Conv1D(intermediate_dim, embed_dim)
1614
+ self.c_proj = Conv1D(out_dim, intermediate_dim)
1615
+ self.act = ACT2FN[config.activation_function]
1616
+ self.dropout = nn.Dropout(config.resid_pdrop)
1617
+
1618
+ def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
1619
+ hidden_states = self.c_fc(hidden_states)
1620
+ hidden_states = self.act(hidden_states)
1621
+ hidden_states = self.c_proj(hidden_states)
1622
+ hidden_states = self.dropout(hidden_states)
1623
+ return hidden_states
1624
+
1625
+ class BackpackNoMixBlock(nn.Module):
1626
+
1627
+ def __init__(self, config):
1628
+ super().__init__()
1629
+ self.ln_1 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)
1630
+ self.ln_2 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)
1631
+ self.mlp = BackpackMLP(config.n_embd, config.n_embd*4, config.n_embd, config)
1632
+ self.resid_dropout1 = nn.Dropout(config.resid_pdrop)
1633
+ self.resid_dropout2 = nn.Dropout(config.resid_pdrop)
1634
+
1635
+ def forward(self, hidden_states, residual):
1636
+ residual = self.resid_dropout1(hidden_states) + residual
1637
+ hidden_states = self.ln_1(residual)
1638
+ mlp_out = self.mlp(hidden_states)
1639
+ residual = self.resid_dropout2(mlp_out) + residual
1640
+ hidden_states = self.ln_2(residual)
1641
+ return hidden_states
1642
+
1643
+
1644
+ class BackpackSenseNetwork(nn.Module):
1645
+ def __init__(self, config, num_senses, device=None, dtype=None):
1646
+ super().__init__()
1647
+ self.num_senses = num_senses
1648
+ #self.embeddings = embeddings
1649
+ self.n_embd = config.n_embd
1650
+
1651
+ self.dropout = nn.Dropout(config.embd_pdrop)
1652
+ self.block = BackpackNoMixBlock(config)
1653
+ self.ln = nn.LayerNorm(self.n_embd, eps=config.layer_norm_epsilon)
1654
+ self.final_mlp = BackpackMLP(
1655
+ embed_dim=config.n_embd,
1656
+ intermediate_dim=config.sense_intermediate_scale*config.n_embd,
1657
+ out_dim=config.n_embd*config.num_senses,
1658
+ config=config,
1659
+ )
1660
+
1661
+ def forward(self, input_embeds):
1662
+ residual = self.dropout(input_embeds)
1663
+ hidden_states = self.ln(residual)
1664
+ hidden_states = self.block(hidden_states, residual)
1665
+ senses = self.final_mlp(hidden_states)
1666
+ bs, s, nvd = senses.shape
1667
+ return senses.reshape(bs, s, self.num_senses, self.n_embd).transpose(1,2) # (bs, nv, s, d)
1668
+
1669
+ class BackpackWeightNetwork(nn.Module):
1670
+
1671
+ def __init__(self, num_senses, embed_dim):
1672
+ super().__init__()
1673
+ self.n_embd = embed_dim
1674
+ self.num_senses = num_senses
1675
+ self.c_attn = nn.Linear(embed_dim, 2*embed_dim)
1676
+ self.softmax_scale = None
1677
+
1678
+ def forward(self, encoded):
1679
+ b, s, d = encoded.shape
1680
+ encoded = self.c_attn(encoded) # (b, s, 2*d)
1681
+ encoded = encoded.reshape(b, s, 2, self.num_senses, d // self.num_senses) #(b, s, 2, nv, d//nv)
1682
+ batch_size, seqlen = encoded.shape[0], encoded.shape[1]
1683
+
1684
+ # compute scores & mask
1685
+ q, k = encoded.unbind(dim=2)
1686
+ softmax_scale = self.softmax_scale or 1.0 / math.sqrt(q.shape[-1])
1687
+ scores = torch.einsum('bthd,bshd->bhts', q, k * softmax_scale)
1688
+ causal_mask = torch.triu(torch.full((seqlen, seqlen), -10000.0, device=scores.device), 1)
1689
+ scores = scores + causal_mask.to(dtype=scores.dtype)
1690
+
1691
+ return torch.softmax(scores, dim=-1, dtype=q.dtype)
1692
+
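The weight network above is essentially multi-head causal attention scores without a value projection: it returns one `(seq, seq)` distribution per sense. A shape walkthrough under toy sizes (assuming the module-level imports of this file, and that `num_senses` divides `embed_dim`):

```python
# Sketch: shape walkthrough for BackpackWeightNetwork with toy sizes.
import torch

net = BackpackWeightNetwork(num_senses=16, embed_dim=768)
encoded = torch.randn(2, 10, 768)   # (batch, seq, embed_dim)
weights = net(encoded)              # (batch, num_senses, seq, seq)
assert weights.shape == (2, 16, 10, 10)
# Each row is a distribution over positions (softmax over the last dim).
assert torch.allclose(weights.sum(-1), torch.ones(2, 16, 10), atol=1e-5)
# Causality: position 0 can only attend to itself.
assert torch.allclose(weights[..., 0, 1:], torch.zeros(2, 16, 9), atol=1e-4)
```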
1693
+
1694
+ @dataclass
1695
+ class BackpackGPT2BaseModelOutput(ModelOutput):
1696
+ hidden_states: torch.FloatTensor = None
1697
+ contextualization: torch.FloatTensor = None
1698
+
1699
+ class BackpackGPT2Model(BackpackGPT2PreTrainedModel):
1700
+ _keys_to_ignore_on_load_missing = [r".*attn.masked_bias", r".*attn.bias"]
1701
+
1702
+ def __init__(self, config):
1703
+ super().__init__(config)
1704
+
1705
+ self.embed_dim = config.n_embd
1706
+
1707
+ self.num_senses = config.num_senses
1708
+ self.gpt2_model = GPT2Model(config)
1709
+ self.sense_network = BackpackSenseNetwork(config, self.num_senses)
1710
+ self.word_embeddings = self.gpt2_model.wte
1711
+ self.position_embeddings = self.gpt2_model.wpe
1712
+ self.sense_weight_net = BackpackWeightNetwork(self.num_senses, self.embed_dim)
1713
+ # Model parallel
1714
+ self.model_parallel = False
1715
+ self.device_map = None
1716
+ self.gradient_checkpointing = False
1717
+
1718
+ def get_num_senses(self):
1719
+ return self.num_senses
1720
+
1721
+ def get_word_embeddings(self):
1722
+ return self.word_embeddings
1723
+
1724
+ def get_sense_network(self):
1725
+ return self.sense_network
1726
+
1727
+ def forward(self, input_ids, position_ids):
1728
+ # Compute senses
1729
+ sense_input_embeds = self.word_embeddings(input_ids)
1730
+ senses = self.sense_network(sense_input_embeds) # (bs, nv, s, d)
1731
+
1732
+ # Compute contextualization weights
1733
+ contextl_hidden_states = self.gpt2_model(input_ids, position_ids=position_ids).last_hidden_state # (bs, s, d)
1734
+ contextualization = self.sense_weight_net(contextl_hidden_states) # (bs, nv, s, s)
1735
+
1736
+ # Compute resulting outputs
1737
+ hidden_states = torch.sum(contextualization @ senses, dim=1) # (bs, nv, s, d) -> (bs, s, d)
1738
+ return BackpackGPT2BaseModelOutput(
1739
+ hidden_states=hidden_states,
1740
+ contextualization=contextualization,
1741
+ )
1742
+
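The core Backpack operation in `forward` above is a per-sense weighted sum: each output position mixes the sense vectors of all positions according to the contextualization weights, then sums across senses. A toy tensor sketch of the contraction (shapes are illustrative):

```python
# Sketch: hidden[b, t] = sum_v sum_s contextualization[b, v, t, s] * senses[b, v, s].
import torch

bs, nv, s, d = 2, 16, 10, 768
senses = torch.randn(bs, nv, s, d)
contextualization = torch.softmax(torch.randn(bs, nv, s, s), dim=-1)

hidden = torch.sum(contextualization @ senses, dim=1)          # (bs, s, d)
manual = torch.einsum("bvts,bvsd->btd", contextualization, senses)
assert torch.allclose(hidden, manual, atol=1e-5)
```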
1743
+ def run_with_custom_contextualization(self, input_ids, contextualization):
1744
+ # Compute senses
1745
+ sense_input_embeds = self.word_embeddings(input_ids)
1746
+ senses = self.sense_network(sense_input_embeds) # (bs, nv, s, d)
1747
+
1748
+ # Compute resulting outputs
1749
+ hidden_states = torch.sum(contextualization @ senses, dim=1) # (bs, nv, s, d) -> (bs, s, d)
1750
+ return BackpackGPT2BaseModelOutput(
1751
+ hidden_states=hidden_states,
1752
+ contextualization=contextualization,
1753
+ )
1754
+
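`run_with_custom_contextualization` is the intervention hook: take the contextualization from a normal forward pass, edit it, and re-run. A hedged usage sketch with randomly initialized weights (sense index 7 is an arbitrary illustrative choice):

```python
# Sketch: ablating one sense's contextualization weights and re-running.
import torch

config = BackpackGPT2Config()
model = BackpackGPT2Model(config).eval()

input_ids = torch.randint(0, config.vocab_size, (1, 10))
position_ids = torch.arange(10).unsqueeze(0)

with torch.no_grad():
    out = model(input_ids, position_ids)
    edited = out.contextualization.clone()
    edited[:, 7] = 0.0  # zero out sense 7's contribution everywhere
    out2 = model.run_with_custom_contextualization(input_ids, edited)

print((out.hidden_states - out2.hidden_states).abs().max())
```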
1755
+ @dataclass
1756
+ class BackpackGPT2LMHeadModelOutput(ModelOutput):
1757
+ logits: torch.FloatTensor = None
1758
+ contextualization: torch.FloatTensor = None
1759
+
1760
+ class BackpackGPT2LMHeadModel(BackpackGPT2PreTrainedModel):
1761
+ _keys_to_ignore_on_load_missing = [r".*attn.masked_bias", r".*attn.bias"]
1762
+
1763
+ def __init__(self, config):
1764
+ super().__init__(config)
1765
+ self.backpack = BackpackGPT2Model(config)
1766
+ self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
1767
+
1768
+ # Model parallel
1769
+ self.model_parallel = False
1770
+ self.device_map = None
1771
+
1772
+ self.tie_weights()
1773
+
1774
+ def tie_weights(self):
1775
+ self.lm_head.weight = self.backpack.word_embeddings.weight # also tied with the underlying transformer's input embeddings
1776
+
1777
+ def get_lm_head(self):
1778
+ return self.lm_head
1779
+
1780
+ def forward(self, input_ids, position_ids=None):
1781
+ outputs = self.backpack(input_ids, position_ids=position_ids)
1782
+ hidden_states, contextualization = outputs.hidden_states, outputs.contextualization
1783
+ lm_logits = self.lm_head(hidden_states) # (bs, s, V)
1784
+ return BackpackGPT2LMHeadModelOutput(
1785
+ logits=lm_logits,
1786
+ contextualization=contextualization,
1787
+ )
1788
+
1791
+ def run_with_custom_contextualization(self, input_ids, contextualization):
1792
+ outputs = self.backpack.run_with_custom_contextualization(input_ids, contextualization)
1793
+ hidden_states, contextualization = outputs.hidden_states, outputs.contextualization
1794
+ lm_logits = self.lm_head(hidden_states)
1795
+ return BackpackGPT2LMHeadModelOutput(
1796
+ logits=lm_logits,
1797
+ contextualization=contextualization,
1798
+ )
1799
+
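Since the `auto_map` in `config.json` points `AutoModelForCausalLM` at `BackpackGPT2LMHeadModel`, the uploaded checkpoint can be loaded with `trust_remote_code`. A usage sketch, where `"<this-repo-id>"` is a placeholder for this repository's model id and the standard GPT-2 tokenizer is assumed (the config uses the 50257-token GPT-2 vocabulary):

```python
# Sketch: loading this checkpoint via the auto_map in config.json.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(
    "<this-repo-id>", trust_remote_code=True
)

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
position_ids = torch.arange(input_ids.shape[1]).unsqueeze(0)
out = model(input_ids, position_ids=position_ids)
print(out.logits.shape)             # (1, seq_len, vocab_size)
print(out.contextualization.shape)  # (1, num_senses, seq_len, seq_len)
```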
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c0db4ac7b9af81ea53a1278a708f8fedf02f98c5ef2b70f6453b2110471f27f
3
+ size 683550781