adalbertojunior committed
Commit ee3d854 (parent: ae83ad0)

Upload 10 files

config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "_name_or_path": "roberta-base",
+   "alibi_starting_size": 512,
+   "architectures": [
+     "RobertaForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "auto_map": {
+     "AutoConfig": "configuration_roberta.RobertaConfig",
+     "AutoModelForMaskedLM": "roberta_layers.RobertaForMaskedLM"
+   },
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.28.1",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50240
+ }
+
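Note: the `auto_map` block above points `AutoConfig` and `AutoModelForMaskedLM` at the custom `configuration_roberta.py` and `roberta_layers.py` modules in this commit, so the checkpoint has to be loaded with remote code enabled. A minimal loading sketch; the repo id below is a placeholder, not something stated in this commit:

from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer

repo_id = "your-namespace/your-roberta-alibi-model"  # placeholder repo id (assumption)

# trust_remote_code tells transformers to import the config/model classes from
# the repository files above instead of its built-in RoBERTa implementation.
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(repo_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # plain RobertaTokenizer, no remote code needed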
configuration_roberta.py ADDED
@@ -0,0 +1,25 @@
+ # Copyright 2022 MosaicML Examples authors
+ # SPDX-License-Identifier: Apache-2.0
+
+ from transformers import RobertaConfig as TransformersRobertaConfig
+
+
+ class RobertaConfig(TransformersRobertaConfig):
+
+     def __init__(
+         self,
+         alibi_starting_size: int = 512,
+         attention_probs_dropout_prob: float = 0.0,
+         **kwargs,
+     ):
+         """Configuration class for MosaicRoberta.
+         Args:
+             alibi_starting_size (int): Use `alibi_starting_size` to determine how large of an alibi tensor to
+                 create when initializing the model. You should be able to ignore this parameter in most cases.
+                 Defaults to 512.
+             attention_probs_dropout_prob (float): By default, turn off attention dropout in Mosaic Roberta
+                 (otherwise, Flash Attention will be off by default). Defaults to 0.0.
+         """
+         super().__init__(
+             attention_probs_dropout_prob=attention_probs_dropout_prob, **kwargs)
+         self.alibi_starting_size = alibi_starting_size
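For reference, a short sketch of constructing this config class directly; it assumes `configuration_roberta.py` is importable from the working directory and simply shows that `alibi_starting_size` ends up as an extra attribute on an otherwise standard RoBERTa config:

from configuration_roberta import RobertaConfig  # local file from this commit (assumed on sys.path)

config = RobertaConfig(
    vocab_size=50240,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    alibi_starting_size=512,           # size of the pre-built ALiBi bias tensor
    attention_probs_dropout_prob=0.0,  # keep 0.0 so the fast attention path stays available
)
print(config.alibi_starting_size)  # 512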
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2bb13c90b4ac4fea8fb5d5423b01228be0aa34d64cba7737952d069c9439de0d
+ size 498783737
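The weights file is stored via Git LFS, so the hunk above is only the pointer: the actual tensor data is fetched separately and should match the recorded SHA-256 and byte size. A small verification sketch, assuming the real `pytorch_model.bin` has been downloaded to the working directory:

import hashlib
import os

path = "pytorch_model.bin"  # assumed local path to the downloaded weights
expected_oid = "2bb13c90b4ac4fea8fb5d5423b01228be0aa34d64cba7737952d069c9439de0d"
expected_size = 498783737

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch with LFS pointer"
assert sha.hexdigest() == expected_oid, "sha256 mismatch with LFS pointer"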
roberta_layers.py ADDED
@@ -0,0 +1,1043 @@
1
+ # Copyright 2022 MosaicML Examples authors
2
+ # SPDX-License-Identifier: Apache-2.0
3
+
4
+ # Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
5
+ # Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
6
+ # Copyright (c) 2022, Tri Dao.
7
+
8
+ """Implements Mosaic BERT, with an eye towards the Hugging Face API.
9
+ Mosaic BERT improves performance over Hugging Face BERT through the following:
10
+ 1. ALiBi. This architectural change removes positional embeddings and instead encodes positional
11
+ information through attention biases based on query-key position distance. It improves the effectiveness
12
+ of training with shorter sequence lengths by enabling extrapolation to longer sequences.
13
+ 2. Gated Linear Units (GLU). This architectural change replaces the FFN component of the BERT layer
14
+ to improve overall expressiveness, providing better convergence properties.
15
+ 3. Flash Attention. The Mosaic BERT's self-attention layer makes use of Flash Attention, which dramatically
16
+ improves the speed of self-attention. Our implementation utilizes a bleeding edge implementation that
17
+ supports attention biases, which allows us to use Flash Attention with ALiBi.
18
+ 4. Unpadding. Padding is often used to simplify batching across sequences of different lengths. Standard BERT
19
+ implementations waste computation on padded tokens. Mosaic BERT internally unpads to reduce unnecessary computation
20
+ and improve speed. It does this without changing how the user interfaces with the model, thereby
21
+ preserving the simple API of standard implementations.
22
+ Currently, Mosaic BERT is available for masked language modeling :class:`RobertaForMaskedLM` and sequence
23
+ classification :class:`RobertaForSequenceClassification`. We aim to expand this catalogue in future releases.
24
+ See :file:`./mosaic_bert.py` for utilities to simplify working with Mosaic BERT in Composer, and for example usage
25
+ of the core Mosaic BERT classes.
26
+ """
27
+
28
+ import copy
29
+ import logging
30
+ import math
31
+ import warnings
32
+ from typing import List, Optional, Tuple, Union
33
+
34
+ import torch
35
+ import torch.nn as nn
36
+ from einops import rearrange
37
+ from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
38
+ from transformers.activations import ACT2FN
39
+ from transformers.modeling_outputs import (MaskedLMOutput,
40
+ SequenceClassifierOutput)
41
+ from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel
42
+
43
+ from .roberta_padding import (index_first_axis,
44
+ index_put_first_axis, pad_input,
45
+ unpad_input, unpad_input_only)
46
+
47
+ try:
48
+ import xformers
49
+ xformers_available=True
50
+ except ImportError as e:
51
+ xformers_available=False
52
+
53
+ logger = logging.getLogger(__name__)
54
+
55
+
56
+ class RobertaEmbeddings(nn.Module):
57
+ """Construct the embeddings for words, ignoring position.
58
+ There are no positional embeddings since we use ALiBi and token_type
59
+ embeddings.
60
+ This module is modeled after the Hugging Face BERT's
61
+ :class:`~transformers.models.roberta.modeling_roberta.RobertaEmbeddings`, but is
62
+ modified as part of Mosaic BERT's ALiBi implementation. The key change is
63
+ that position embeddings are removed. Position information instead comes
64
+ from attention biases that scale linearly with the position distance
65
+ between query and key tokens.
66
+ This module ignores the `position_ids` input to the `forward` method.
67
+ """
68
+
69
+ def __init__(self, config):
70
+ super().__init__()
71
+ self.word_embeddings = nn.Embedding(config.vocab_size,
72
+ config.hidden_size,
73
+ padding_idx=config.pad_token_id)
74
+ # ALiBi doesn't use position embeddings
75
+ self.token_type_embeddings = nn.Embedding(config.type_vocab_size,
76
+ config.hidden_size)
77
+
78
+ # self.LayerNorm is not snake-cased to stick with TensorFlow model
79
+ # variable name and be able to load any TensorFlow checkpoint file
80
+ self.LayerNorm = nn.LayerNorm(config.hidden_size,
81
+ eps=config.layer_norm_eps)
82
+ self.dropout = nn.Dropout(config.hidden_dropout_prob)
83
+ self.register_buffer('token_type_ids',
84
+ torch.zeros(config.max_position_embeddings,
85
+ dtype=torch.long),
86
+ persistent=False)
87
+
88
+ def forward(
89
+ self,
90
+ input_ids: Optional[torch.LongTensor] = None,
91
+ token_type_ids: Optional[torch.LongTensor] = None,
92
+ position_ids: Optional[torch.LongTensor] = None,
93
+ inputs_embeds: Optional[torch.FloatTensor] = None,
94
+ past_key_values_length: int = 0,
95
+ ) -> torch.Tensor:
96
+ if (input_ids is not None) == (inputs_embeds is not None):
97
+ raise ValueError('Must specify either input_ids or input_embeds!')
98
+ if input_ids is not None:
99
+ input_shape = input_ids.size()
100
+ else:
101
+ assert inputs_embeds is not None # just for type checking
102
+ input_shape = inputs_embeds.size()[:-1]
103
+
104
+ seq_length = input_shape[1]
105
+
106
+ if position_ids is None:
107
+ # great! ALiBi
108
+ pass
109
+
110
+ # Setting the token_type_ids to the registered buffer in constructor
111
+ # where it is all zeros, which usually occurs when it's auto-generated;
112
+ # registered buffer helps users when tracing the model without passing
113
+ # token_type_ids, solves issue #5664
114
+ if token_type_ids is None:
115
+ if hasattr(self, 'token_type_ids'):
116
+ assert isinstance(self.token_type_ids, torch.LongTensor)
117
+ buffered_token_type_ids = self.token_type_ids[:, :seq_length]
118
+ buffered_token_type_ids_expanded = buffered_token_type_ids.expand(
119
+ input_shape[0], seq_length)
120
+ token_type_ids = buffered_token_type_ids_expanded # type: ignore
121
+ else:
122
+ token_type_ids = torch.zeros(input_shape, # type: ignore
123
+ dtype=torch.long,
124
+ device=self.word_embeddings.device) # type: ignore # yapf: disable
125
+
126
+ if inputs_embeds is None:
127
+ inputs_embeds = self.word_embeddings(input_ids)
128
+ token_type_embeddings = self.token_type_embeddings(token_type_ids)
129
+
130
+ embeddings = inputs_embeds + token_type_embeddings
131
+ # no position embeddings! ALiBi
132
+ embeddings = self.LayerNorm(embeddings)
133
+ embeddings = self.dropout(embeddings)
134
+ return embeddings
135
+
136
+
137
+ class RobertaUnpadSelfAttention(nn.Module):
138
+ """Performs multi-headed self attention on a batch of unpadded sequences.
139
+ If Triton is installed, this module uses Flash Attention to greatly improve throughput.
140
+ The Flash Attention implementation used in Mosaic BERT supports arbitrary attention biases (which
141
+ we use to implement ALiBi), but does not support attention dropout. If either Triton is not installed
142
+ or `config.attention_probs_dropout_prob > 0`, the implementation will default to a
143
+ math-equivalent pytorch version, which is much slower.
144
+ See `forward` method for additional detail.
145
+ """
146
+
147
+ def __init__(self, config):
148
+ super().__init__()
149
+ if config.hidden_size % config.num_attention_heads != 0 and not hasattr(
150
+ config, 'embedding_size'):
151
+ raise ValueError(
152
+ f'The hidden size ({config.hidden_size}) is not a multiple of the number of attention '
153
+ f'heads ({config.num_attention_heads})')
154
+
155
+ self.num_attention_heads = config.num_attention_heads
156
+ self.attention_head_size = int(config.hidden_size /
157
+ config.num_attention_heads)
158
+ self.all_head_size = self.num_attention_heads * self.attention_head_size
159
+ self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
160
+ self.p_dropout = config.attention_probs_dropout_prob
161
+ self.Wqkv = nn.Linear(self.all_head_size, 3 * config.hidden_size)
162
+
163
+
164
+ def forward(self, hidden_states: torch.Tensor, cu_seqlens: torch.Tensor,
165
+ max_seqlen_in_batch: int, indices: torch.Tensor,
166
+ attn_mask: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
167
+ """Perform self-attention.
168
+ If dropout is zero, then we can use the Triton kernel, so we do that. However, if not, we send through a standard PyTorch
169
+ implementation of self-attention.
170
+ The arguments are unpadded, and our implementations of attention require padded arguments,
171
+ so we first call `pad_input`. Once we compute attention, we re-unpad our outputs for the other layers.
172
+ The pad/unpad operations add overhead, but not sending pad tokens through ffs saves compute.
173
+ It is possible to write an unpadded implementation of attention (in Triton and PyTorch), which we will eventually do.
174
+ Args:
175
+ hidden_states: (total_nnz, dim)
176
+ cu_seqlens: (batch + 1,)
177
+ max_seqlen_in_batch: int
178
+ indices: (total_nnz,)
179
+ attn_mask: (batch, max_seqlen_in_batch)
180
+ bias: (batch, heads, max_seqlen_in_batch, max_seqlen_in_batch)
181
+ Returns:
182
+ attention: (total_nnz, dim)
183
+ """
184
+ qkv = self.Wqkv(hidden_states)
185
+ qkv = pad_input(qkv, indices, cu_seqlens.shape[0] - 1,
186
+ max_seqlen_in_batch) # batch, max_seqlen_in_batch, thd
187
+ qkv = rearrange(qkv,
188
+ 'b s (t h d) -> b s t h d',
189
+ t=3,
190
+ h=self.num_attention_heads)
191
+ # if we have nonzero attention dropout (e.g. during fine-tuning) or no Triton, compute attention in PyTorch
192
+ q = qkv[:, :, 0, :, :].permute(0, 2, 1, 3) # b h s d
193
+ k = qkv[:, :, 1, :, :].permute(0, 2, 3, 1) # b h d s
194
+ v = qkv[:, :, 2, :, :].permute(0, 2, 1, 3) # b h s d
195
+
196
+ if self.p_dropout or xformers_available is False:
197
+
198
+ attention_scores = torch.matmul(q, k) / math.sqrt(
199
+ self.attention_head_size)
200
+ attention_scores = attention_scores + bias
201
+ attention_probs = nn.functional.softmax(attention_scores, dim=-1)
202
+ attention_probs = self.dropout(attention_probs)
203
+ attention = torch.matmul(attention_probs, v).permute(0, 2, 1,
204
+ 3) # b s h d
205
+ else:
206
+ # xformers implementation only supports 0 attention dropout
207
+ attention = xformers.ops.memory_efficient_attention(
208
+ q, k, v, attn_bias=None
209
+ )
210
+ attention = attention.to(q.dtype)
211
+ # convert_dtype = qkv.dtype not in [torch.float16, torch.bfloat16]
212
+ # if convert_dtype:
213
+ # # xformers implementation only supports fp16 and bf16
214
+ # orig_dtype = qkv.dtype
215
+ # qkv = qkv.to(torch.float16)
216
+ # bias_dtype = bias.dtype
217
+ # bias = bias.to(torch.float16)
218
+ # attention = flash_attn_qkvpacked_func(qkv, bias)
219
+ # attention = attention.to(orig_dtype)
220
+ # bias = bias.to(bias_dtype)
221
+ # else:
222
+ # attention = flash_attn_qkvpacked_func(qkv, bias)
223
+
224
+ # attn_mask is 1 for attend and 0 for don't
225
+ attention = unpad_input_only(attention, torch.squeeze(attn_mask) == 1)
226
+ return rearrange(attention, 'nnz h d -> nnz (h d)')
227
+
228
+
229
+ # Copy of transformer's library RobertaSelfOutput that will not be caught by surgery methods looking for HF BERT modules.
230
+ class RobertaSelfOutput(nn.Module):
231
+ """Computes the output of the attention layer.
232
+ This module is modeled after the Hugging Face BERT's
233
+ :class:`~transformers.models.roberta.modeling_roberta.RobertaSelfOutput`.
234
+ The implementation is identical. Rather than use the original module
235
+ directly, we re-implement it here so that Mosaic BERT's modules will not
236
+ be affected by any Composer surgery algorithm that modifies Hugging Face
237
+ BERT modules.
238
+ """
239
+
240
+ def __init__(self, config):
241
+ super().__init__()
242
+ self.dense = nn.Linear(config.hidden_size, config.hidden_size)
243
+ self.LayerNorm = nn.LayerNorm(config.hidden_size,
244
+ eps=config.layer_norm_eps)
245
+ self.dropout = nn.Dropout(config.hidden_dropout_prob)
246
+
247
+ def forward(self, hidden_states: torch.Tensor,
248
+ input_tensor: torch.Tensor) -> torch.Tensor:
249
+ hidden_states = self.dense(hidden_states)
250
+ hidden_states = self.dropout(hidden_states)
251
+ hidden_states = self.LayerNorm(hidden_states + input_tensor)
252
+ return hidden_states
253
+
254
+
255
+ class RobertaUnpadAttention(nn.Module):
256
+ """Chains attention, Dropout, and LayerNorm for Mosaic BERT."""
257
+
258
+ def __init__(self, config):
259
+ super().__init__()
260
+ self.self = RobertaUnpadSelfAttention(config)
261
+ self.output = RobertaSelfOutput(config)
262
+
263
+ def forward(
264
+ self,
265
+ input_tensor: torch.Tensor,
266
+ cu_seqlens: torch.Tensor,
267
+ max_s: int,
268
+ subset_idx: Optional[torch.Tensor] = None,
269
+ indices: Optional[torch.Tensor] = None,
270
+ attn_mask: Optional[torch.Tensor] = None,
271
+ bias: Optional[torch.Tensor] = None,
272
+ ) -> torch.Tensor:
273
+ """Forward pass for scaled self-attention without padding.
274
+ Arguments:
275
+ input_tensor: (total_nnz, dim)
276
+ cu_seqlens: (batch + 1,)
277
+ max_s: int
278
+ subset_idx: () set of indices whose values we care about at the end of the layer
279
+ (e.g., the masked tokens, if this is the final layer).
280
+ indices: None or (total_nnz,)
281
+ attn_mask: None or (batch, max_seqlen_in_batch)
282
+ bias: None or (batch, heads, max_seqlen_in_batch, max_seqlen_in_batch)
283
+ """
284
+ self_output = self.self(input_tensor, cu_seqlens, max_s, indices,
285
+ attn_mask, bias)
286
+ if subset_idx is not None:
287
+ return self.output(index_first_axis(self_output, subset_idx),
288
+ index_first_axis(input_tensor, subset_idx))
289
+ else:
290
+ return self.output(self_output, input_tensor)
291
+
292
+
293
+ class RobertaGatedLinearUnitMLP(nn.Module):
294
+ """Applies the FFN at the end of each Mosaic BERT layer.
295
+ Compared to the default BERT architecture, this block replaces :class:`~transformers.models.roberta.modeling_roberta.RobertaIntermediate`
296
+ and :class:`~transformers.models.roberta.modeling_roberta.RobertaOutput` with a single module that has similar functionality, but
297
+ introduces Gated Linear Units.
298
+ Note: Mosaic BERT adds parameters in order to implement Gated Linear Units. To keep parameter count consistent with that of a
299
+ standard Hugging Face BERT, scale down `config.intermediate_size` by 2/3. For example, a Mosaic BERT constructed with
300
+ `config.intermediate_size=2048` will have the same parameter footprint as its Hugging Face BERT counterpart constructed
301
+ with the `config.intermediate_size=3072`.
302
+ However, in most cases it will not be necessary to adjust `config.intermediate_size` since, despite the increased
303
+ parameter size, Mosaic BERT typically offers a net higher throughput than a Hugging Face BERT built from the same `config`.
304
+ """
305
+
306
+ def __init__(self, config):
307
+ super().__init__()
308
+ self.config = config
309
+ self.gated_layers = nn.Linear(config.hidden_size,
310
+ config.intermediate_size * 2,
311
+ bias=False)
312
+ self.act = nn.GELU(approximate='none')
313
+ self.wo = nn.Linear(config.intermediate_size, config.hidden_size)
314
+ self.dropout = nn.Dropout(config.hidden_dropout_prob)
315
+ self.layernorm = nn.LayerNorm(config.hidden_size,
316
+ eps=config.layer_norm_eps)
317
+
318
+ def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
319
+ """Compute new hidden states from current hidden states.
320
+ Args:
321
+ hidden_states (torch.Tensor): The (unpadded) hidden states from
322
+ the attention layer [nnz, dim].
323
+ """
324
+ residual_connection = hidden_states
325
+ # compute the activation
326
+ hidden_states = self.gated_layers(hidden_states)
327
+ gated = hidden_states[:, :self.config.intermediate_size]
328
+ non_gated = hidden_states[:, self.config.intermediate_size:]
329
+ hidden_states = self.act(gated) * non_gated
330
+ hidden_states = self.dropout(hidden_states)
331
+ # multiply by the second matrix
332
+ hidden_states = self.wo(hidden_states)
333
+ # add the residual connection and post-LN
334
+ hidden_states = self.layernorm(hidden_states + residual_connection)
335
+ return hidden_states
336
+
337
+
338
+ class RobertaLayer(nn.Module):
339
+ """Composes the Mosaic BERT attention and FFN blocks into a single layer."""
340
+
341
+ def __init__(self, config):
342
+ super(RobertaLayer, self).__init__()
343
+ self.attention = RobertaUnpadAttention(config)
344
+ self.mlp = RobertaGatedLinearUnitMLP(config)
345
+
346
+ def forward(
347
+ self,
348
+ hidden_states: torch.Tensor,
349
+ cu_seqlens: torch.Tensor,
350
+ seqlen: int,
351
+ subset_idx: Optional[torch.Tensor] = None,
352
+ indices: Optional[torch.Tensor] = None,
353
+ attn_mask: Optional[torch.Tensor] = None,
354
+ bias: Optional[torch.Tensor] = None,
355
+ ) -> torch.Tensor:
356
+ """Forward pass for a BERT layer, including both attention and MLP.
357
+ Args:
358
+ hidden_states: (total_nnz, dim)
359
+ cu_seqlens: (batch + 1,)
360
+ seqlen: int
361
+ subset_idx: () set of indices whose values we care about at the end of the layer
362
+ (e.g., the masked tokens, if this is the final layer).
363
+ indices: None or (total_nnz,)
364
+ attn_mask: None or (batch, max_seqlen_in_batch)
365
+ bias: None or (batch, heads, max_seqlen_in_batch, max_seqlen_in_batch)
366
+ """
367
+ attention_output = self.attention(hidden_states, cu_seqlens, seqlen,
368
+ subset_idx, indices, attn_mask, bias)
369
+ layer_output = self.mlp(attention_output)
370
+ return layer_output
371
+
372
+
373
+ class RobertaEncoder(nn.Module):
374
+ """A stack of BERT layers providing the backbone of Mosaic BERT.
375
+ This module is modeled after the Hugging Face BERT's :class:`~transformers.models.roberta.modeling_roberta.RobertaEncoder`,
376
+ but with substantial modifications to implement unpadding and ALiBi.
377
+ Compared to the analogous Hugging Face BERT module, this module handles unpadding to reduce unnecessary computation
378
+ at padded tokens, and pre-computes attention biases to implement ALiBi.
379
+ """
380
+
381
+ def __init__(self, config):
382
+ super().__init__()
383
+ layer = RobertaLayer(config)
384
+ self.layer = nn.ModuleList(
385
+ [copy.deepcopy(layer) for _ in range(config.num_hidden_layers)])
386
+
387
+ self.num_attention_heads = config.num_attention_heads
388
+
389
+ # The alibi mask will be dynamically expanded if it is too small for
390
+ # the input the model receives. But it generally helps to initialize it
391
+ # to a reasonably large size to help pre-allocate CUDA memory.
392
+ # The default `alibi_starting_size` is 512.
393
+ self._current_alibi_size = int(config.alibi_starting_size)
394
+ self.alibi = torch.zeros(
395
+ (1, self.num_attention_heads, self._current_alibi_size,
396
+ self._current_alibi_size))
397
+ self.rebuild_alibi_tensor(size=config.alibi_starting_size)
398
+
399
+ def rebuild_alibi_tensor(self,
400
+ size: int,
401
+ device: Optional[Union[torch.device, str]] = None):
402
+ # Alibi
403
+ # Following https://github.com/ofirpress/attention_with_linear_biases/issues/5 (Implementation 1)
404
+ # In the causal case, you can exploit the fact that softmax is invariant to a uniform translation
405
+ # of the logits, which makes the math work out *after* applying causal masking. If no causal masking
406
+ # will be applied, it is necessary to construct the diagonal mask.
407
+ n_heads = self.num_attention_heads
408
+
409
+ def _get_alibi_head_slopes(n_heads: int) -> List[float]:
410
+
411
+ def get_slopes_power_of_2(n_heads: int) -> List[float]:
412
+ start = (2**(-2**-(math.log2(n_heads) - 3)))
413
+ ratio = start
414
+ return [start * ratio**i for i in range(n_heads)]
415
+
416
+ # In the paper, they only train models that have 2^a heads for some a. This function
417
+ # has some good properties that only occur when the input is a power of 2. To
418
+ # maintain that even when the number of heads is not a power of 2, we use a
419
+ # workaround.
420
+ if math.log2(n_heads).is_integer():
421
+ return get_slopes_power_of_2(n_heads)
422
+
423
+ closest_power_of_2 = 2**math.floor(math.log2(n_heads))
424
+ slopes_a = get_slopes_power_of_2(closest_power_of_2)
425
+ slopes_b = _get_alibi_head_slopes(2 * closest_power_of_2)
426
+ slopes_b = slopes_b[0::2][:n_heads - closest_power_of_2]
427
+ return slopes_a + slopes_b
428
+
429
+ context_position = torch.arange(size, device=device)[:, None]
430
+ memory_position = torch.arange(size, device=device)[None, :]
431
+ relative_position = torch.abs(memory_position - context_position)
432
+ # [n_heads, max_token_length, max_token_length]
433
+ relative_position = relative_position.unsqueeze(0).expand(
434
+ n_heads, -1, -1)
435
+ slopes = torch.Tensor(_get_alibi_head_slopes(n_heads)).to(device)
436
+ alibi = slopes.unsqueeze(1).unsqueeze(1) * -relative_position
437
+ # [1, n_heads, max_token_length, max_token_length]
438
+ alibi = alibi.unsqueeze(0)
439
+ assert alibi.shape == torch.Size([1, n_heads, size, size])
440
+
441
+ self._current_alibi_size = size
442
+ self.alibi = alibi
443
+
444
+ def forward(
445
+ self,
446
+ hidden_states: torch.Tensor,
447
+ attention_mask: torch.Tensor,
448
+ output_all_encoded_layers: Optional[bool] = True,
449
+ subset_mask: Optional[torch.Tensor] = None,
450
+ ) -> List[torch.Tensor]:
451
+
452
+ extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
453
+ extended_attention_mask = extended_attention_mask.to(
454
+ dtype=next(self.parameters()).dtype) # fp16 compatibility
455
+ extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
456
+
457
+ attention_mask_bool = attention_mask.bool()
458
+ batch, seqlen = hidden_states.shape[:2]
459
+ # Unpad inputs and mask. It will remove tokens that are padded.
460
+ # Assume ntokens is total number of tokens (padded and non-padded)
461
+ # and ntokens_unpad is total number of non-padded tokens.
462
+ # Then unpadding performs the following compression of the inputs:
463
+ # hidden_states[ntokens,hidden] -> hidden_states[ntokens_unpad,hidden]
464
+ hidden_states, indices, cu_seqlens, _ = unpad_input(
465
+ hidden_states, attention_mask_bool)
466
+
467
+ # Add alibi matrix to extended_attention_mask
468
+ if self._current_alibi_size < seqlen:
469
+ # Rebuild the alibi tensor when needed
470
+ warnings.warn(
471
+ f'Increasing alibi size from {self._current_alibi_size} to {seqlen}'
472
+ )
473
+ self.rebuild_alibi_tensor(size=seqlen, device=hidden_states.device)
474
+ elif self.alibi.device != hidden_states.device:
475
+ # Device catch-up
476
+ self.alibi = self.alibi.to(hidden_states.device)
477
+ alibi_bias = self.alibi[:, :, :seqlen, :seqlen]
478
+ attn_bias = extended_attention_mask[:, :, :seqlen, :seqlen]
479
+ alibi_attn_mask = attn_bias + alibi_bias
480
+
481
+ all_encoder_layers = []
482
+ if subset_mask is None:
483
+ for layer_module in self.layer:
484
+ hidden_states = layer_module(hidden_states,
485
+ cu_seqlens,
486
+ seqlen,
487
+ None,
488
+ indices,
489
+ attn_mask=attention_mask,
490
+ bias=alibi_attn_mask)
491
+ if output_all_encoded_layers:
492
+ all_encoder_layers.append(hidden_states)
493
+ # Pad inputs and mask. It will insert back zero-padded tokens.
494
+ # Assume ntokens is total number of tokens (padded and non-padded)
495
+ # and ntokens_unpad is total number of non-padded tokens.
496
+ # Then padding performs the following de-compression:
497
+ # hidden_states[ntokens_unpad,hidden] -> hidden_states[ntokens,hidden]
498
+ hidden_states = pad_input(hidden_states, indices, batch, seqlen)
499
+ else:
500
+ for i in range(len(self.layer) - 1):
501
+ layer_module = self.layer[i]
502
+ hidden_states = layer_module(hidden_states,
503
+ cu_seqlens,
504
+ seqlen,
505
+ None,
506
+ indices,
507
+ attn_mask=attention_mask,
508
+ bias=alibi_attn_mask)
509
+ if output_all_encoded_layers:
510
+ all_encoder_layers.append(hidden_states)
511
+ subset_idx = torch.nonzero(subset_mask[attention_mask_bool],
512
+ as_tuple=False).flatten()
513
+ hidden_states = self.layer[-1](hidden_states,
514
+ cu_seqlens,
515
+ seqlen,
516
+ subset_idx=subset_idx,
517
+ indices=indices,
518
+ attn_mask=attention_mask,
519
+ bias=alibi_attn_mask)
520
+
521
+ if not output_all_encoded_layers:
522
+ all_encoder_layers.append(hidden_states)
523
+ return all_encoder_layers
524
+
525
+
526
+ class RobertaPooler(nn.Module):
527
+
528
+ def __init__(self, config):
529
+ super(RobertaPooler, self).__init__()
530
+ self.dense = nn.Linear(config.hidden_size, config.hidden_size)
531
+ self.activation = nn.Tanh()
532
+
533
+ def forward(self,
534
+ hidden_states: torch.Tensor,
535
+ pool: Optional[bool] = True) -> torch.Tensor:
536
+ # We "pool" the model by simply taking the hidden state corresponding
537
+ # to the first token.
538
+ first_token_tensor = hidden_states[:, 0] if pool else hidden_states
539
+ pooled_output = self.dense(first_token_tensor)
540
+ pooled_output = self.activation(pooled_output)
541
+ return pooled_output
542
+
543
+
544
+ class RobertaPredictionHeadTransform(nn.Module):
545
+
546
+ def __init__(self, config):
547
+ super().__init__()
548
+ self.dense = nn.Linear(config.hidden_size, config.hidden_size)
549
+ if isinstance(config.hidden_act, str):
550
+ self.transform_act_fn = ACT2FN[config.hidden_act]
551
+ else:
552
+ self.transform_act_fn = config.hidden_act
553
+ self.LayerNorm = torch.nn.LayerNorm(config.hidden_size, eps=1e-12)
554
+
555
+ def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
556
+ hidden_states = self.dense(hidden_states)
557
+ hidden_states = self.transform_act_fn(hidden_states)
558
+ hidden_states = self.LayerNorm(hidden_states)
559
+ return hidden_states
560
+
561
+
562
+ class RobertaModel(RobertaPreTrainedModel):
563
+ """Overall BERT model.
564
+ Args:
565
+ config: a RobertaConfig class instance with the configuration to build a new model
566
+ Inputs:
567
+ `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length]
568
+ with the word token indices in the vocabulary(see the tokens preprocessing logic in the scripts
569
+ `extract_features.py`, `run_classifier.py` and `run_squad.py`)
570
+ `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token
571
+ types indices selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to
572
+ a `sentence B` token (see BERT paper for more details).
573
+ `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices
574
+ selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max
575
+ input sequence length in the current batch. It's the mask that we typically use for attention when
576
+ a batch has varying length sentences.
577
+ `output_all_encoded_layers`: boolean which controls the content of the `encoded_layers` output as described below. Default: `True`.
578
+ Outputs: Tuple of (encoded_layers, pooled_output)
579
+ `encoded_layers`: controlled by `output_all_encoded_layers` argument:
580
+ - `output_all_encoded_layers=True`: outputs a list of the full sequences of encoded-hidden-states at the end
581
+ of each attention block (i.e. 12 full sequences for BERT-base, 24 for BERT-large), each
582
+ encoded-hidden-state is a torch.FloatTensor of size [batch_size, sequence_length, hidden_size],
583
+ - `output_all_encoded_layers=False`: outputs only the full sequence of hidden-states corresponding
584
+ to the last attention block of shape [batch_size, sequence_length, hidden_size],
585
+ `pooled_output`: a torch.FloatTensor of size [batch_size, hidden_size] which is the output of a
586
+ classifier pretrained on top of the hidden state associated to the first character of the
587
+ input (`CLS`) to train on the Next-Sentence task (see BERT's paper).
588
+ Example usage:
589
+ ```python
590
+ # Already been converted into WordPiece token ids
591
+ input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
592
+ input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
593
+ token_type_ids = torch.LongTensor([[0, 0, 1], [0, 1, 0]])
594
+ config = modeling.RobertaConfig(vocab_size_or_config_json_file=32000, hidden_size=768,
595
+ num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072)
596
+ model = RobertaModel(config=config)
597
+ all_encoder_layers, pooled_output = model(input_ids, token_type_ids, input_mask)
598
+ ```
599
+ """
600
+
601
+ def __init__(self, config, add_pooling_layer=True):
602
+ super(RobertaModel, self).__init__(config)
603
+ self.embeddings = RobertaEmbeddings(config)
604
+ self.encoder = RobertaEncoder(config)
605
+ self.pooler = RobertaPooler(config) if add_pooling_layer else None
606
+ self.post_init()
607
+
608
+ def get_input_embeddings(self):
609
+ return self.embeddings.word_embeddings
610
+
611
+ def set_input_embeddings(self, value):
612
+ self.embeddings.word_embeddings = value
613
+
614
+ def forward(
615
+ self,
616
+ input_ids: torch.Tensor,
617
+ token_type_ids: Optional[torch.Tensor] = None,
618
+ attention_mask: Optional[torch.Tensor] = None,
619
+ position_ids: Optional[torch.Tensor] = None,
620
+ output_all_encoded_layers: Optional[bool] = False,
621
+ masked_tokens_mask: Optional[torch.Tensor] = None,
622
+ **kwargs
623
+ ) -> Tuple[Union[List[torch.Tensor], torch.Tensor], Optional[torch.Tensor]]:
624
+ if attention_mask is None:
625
+ attention_mask = torch.ones_like(input_ids)
626
+ if token_type_ids is None:
627
+ token_type_ids = torch.zeros_like(input_ids)
628
+
629
+ embedding_output = self.embeddings(input_ids, token_type_ids,
630
+ position_ids)
631
+
632
+ subset_mask = []
633
+ first_col_mask = []
634
+
635
+ if masked_tokens_mask is None:
636
+ subset_mask = None
637
+ else:
638
+ first_col_mask = torch.zeros_like(masked_tokens_mask)
639
+ first_col_mask[:, 0] = True
640
+ subset_mask = masked_tokens_mask | first_col_mask
641
+
642
+ encoder_outputs = self.encoder(
643
+ embedding_output,
644
+ attention_mask,
645
+ output_all_encoded_layers=output_all_encoded_layers,
646
+ subset_mask=subset_mask)
647
+
648
+ if masked_tokens_mask is None:
649
+ sequence_output = encoder_outputs[-1]
650
+ pooled_output = self.pooler(
651
+ sequence_output) if self.pooler is not None else None
652
+ else:
653
+ # TD [2022-03-01]: the indexing here is very tricky.
654
+ attention_mask_bool = attention_mask.bool()
655
+ subset_idx = subset_mask[attention_mask_bool] # type: ignore
656
+ sequence_output = encoder_outputs[-1][
657
+ masked_tokens_mask[attention_mask_bool][subset_idx]]
658
+ if self.pooler is not None:
659
+ pool_input = encoder_outputs[-1][
660
+ first_col_mask[attention_mask_bool][subset_idx]]
661
+ pooled_output = self.pooler(pool_input, pool=False)
662
+ else:
663
+ pooled_output = None
664
+
665
+ if not output_all_encoded_layers:
666
+ encoder_outputs = sequence_output
667
+
668
+ if self.pooler is not None:
669
+ return encoder_outputs, pooled_output
670
+
671
+ return encoder_outputs, None
672
+
673
+
674
+ ###################
675
+ # Roberta Heads
676
+ ###################
677
+ class RobertaLMPredictionHead(nn.Module):
678
+
679
+ def __init__(self, config, bert_model_embedding_weights):
680
+ super().__init__()
681
+ self.transform = RobertaPredictionHeadTransform(config)
682
+ # The output weights are the same as the input embeddings, but there is
683
+ # an output-only bias for each token.
684
+ self.decoder = nn.Linear(bert_model_embedding_weights.size(1),
685
+ bert_model_embedding_weights.size(0))
686
+ self.decoder.weight = bert_model_embedding_weights
687
+
688
+ def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
689
+ hidden_states = self.transform(hidden_states)
690
+ hidden_states = self.decoder(hidden_states)
691
+ return hidden_states
692
+
693
+
694
+ class RobertaOnlyMLMHead(nn.Module):
695
+
696
+ def __init__(self, config, bert_model_embedding_weights):
697
+ super().__init__()
698
+ self.predictions = RobertaLMPredictionHead(config,
699
+ bert_model_embedding_weights)
700
+
701
+ def forward(self, sequence_output: torch.Tensor) -> torch.Tensor:
702
+ prediction_scores = self.predictions(sequence_output)
703
+ return prediction_scores
704
+
705
+
706
+ class RobertaOnlyNSPHead(nn.Module):
707
+
708
+ def __init__(self, config):
709
+ super().__init__()
710
+ self.seq_relationship = nn.Linear(config.hidden_size, 2)
711
+
712
+ def forward(self, pooled_output: torch.Tensor) -> torch.Tensor:
713
+ seq_relationship_score = self.seq_relationship(pooled_output)
714
+ return seq_relationship_score
715
+
716
+
717
+ #####################
718
+ # Various Roberta models
719
+ #####################
720
+
721
+
722
+ class RobertaForPreTraining(RobertaPreTrainedModel):
723
+ #TBD: Coming in Future Commit
724
+ pass
725
+
726
+
727
+ class RobertaLMHeadModel(RobertaPreTrainedModel):
728
+ #TBD: Coming in Future Commit
729
+ pass
730
+
731
+
732
+ class RobertaForMaskedLM(RobertaPreTrainedModel):
733
+
734
+ def __init__(self, config):
735
+ super().__init__(config)
736
+
737
+ if config.is_decoder:
738
+ warnings.warn(
739
+ 'If you want to use `RobertaForMaskedLM` make sure `config.is_decoder=False` for '
740
+ 'bi-directional self-attention.')
741
+
742
+ self.bert = RobertaModel(config, add_pooling_layer=False)
743
+ self.cls = RobertaOnlyMLMHead(config,
744
+ self.bert.embeddings.word_embeddings.weight)
745
+
746
+ # Initialize weights and apply final processing
747
+ self.post_init()
748
+
749
+ @classmethod
750
+ def from_composer(cls,
751
+ pretrained_checkpoint,
752
+ state_dict=None,
753
+ cache_dir=None,
754
+ from_tf=False,
755
+ config=None,
756
+ *inputs,
757
+ **kwargs):
758
+ """Load from pre-trained."""
759
+ model = cls(config, *inputs, **kwargs)
760
+ if from_tf:
761
+ raise ValueError(
762
+ 'Mosaic BERT does not support loading TensorFlow weights.')
763
+
764
+ state_dict = torch.load(pretrained_checkpoint)
765
+ # If the state_dict was saved after wrapping with `composer.HuggingFaceModel`, it takes on the `model` prefix
766
+ consume_prefix_in_state_dict_if_present(state_dict, prefix='model.')
767
+ missing_keys, unexpected_keys = model.load_state_dict(state_dict,
768
+ strict=False)
769
+
770
+ if len(missing_keys) > 0:
771
+ logger.warning(
772
+ f"Found these missing keys in the checkpoint: {', '.join(missing_keys)}"
773
+ )
774
+ if len(unexpected_keys) > 0:
775
+ logger.warning(
776
+ f"Found these unexpected keys in the checkpoint: {', '.join(unexpected_keys)}"
777
+ )
778
+
779
+ return model
780
+
781
+ def get_output_embeddings(self):
782
+ return self.cls.predictions.decoder
783
+
784
+ def set_output_embeddings(self, new_embeddings):
785
+ self.cls.predictions.decoder = new_embeddings
786
+
787
+ def forward(
788
+ self,
789
+ input_ids: Optional[torch.Tensor] = None,
790
+ attention_mask: Optional[torch.Tensor] = None,
791
+ token_type_ids: Optional[torch.Tensor] = None,
792
+ position_ids: Optional[torch.Tensor] = None,
793
+ head_mask: Optional[torch.Tensor] = None,
794
+ inputs_embeds: Optional[torch.Tensor] = None,
795
+ encoder_hidden_states: Optional[torch.Tensor] = None,
796
+ encoder_attention_mask: Optional[torch.Tensor] = None,
797
+ labels: Optional[torch.Tensor] = None,
798
+ output_attentions: Optional[bool] = None,
799
+ output_hidden_states: Optional[bool] = None,
800
+ return_dict: Optional[bool] = None,
801
+ ) -> Union[Tuple[torch.Tensor], MaskedLMOutput]:
802
+ # labels should be a `torch.LongTensor` of shape
803
+ # `(batch_size, sequence_length)`. These are used for computing the
804
+ # masked language modeling loss.
805
+ #
806
+ # Indices should be in `[-100, 0, ..., config.vocab_size]` (see
807
+ # `input_ids` docstring) Tokens with indices set to `-100` are ignored
808
+ # (masked), the loss is only computed for the tokens with labels in `[0,
809
+ # ..., config.vocab_size]`
810
+ #
811
+ # Prediction scores are only computed for masked tokens and the (bs,
812
+ # seqlen) dimensions are flattened
813
+ if (input_ids is not None) == (inputs_embeds is not None):
814
+ raise ValueError('Must specify either input_ids or input_embeds!')
815
+
816
+ if labels is None:
817
+ masked_tokens_mask = None
818
+ else:
819
+ masked_tokens_mask = labels > 0
820
+
821
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
822
+
823
+ outputs = self.bert(
824
+ input_ids,
825
+ attention_mask=attention_mask,
826
+ token_type_ids=token_type_ids,
827
+ position_ids=position_ids,
828
+ head_mask=head_mask,
829
+ inputs_embeds=inputs_embeds,
830
+ encoder_hidden_states=encoder_hidden_states,
831
+ encoder_attention_mask=encoder_attention_mask,
832
+ output_attentions=output_attentions,
833
+ output_hidden_states=output_hidden_states,
834
+ return_dict=return_dict,
835
+ masked_tokens_mask=masked_tokens_mask,
836
+ )
837
+
838
+ sequence_output = outputs[0]
839
+ prediction_scores = self.cls(sequence_output)
840
+
841
+ loss = None
842
+ if labels is not None:
843
+ # Compute loss
844
+ loss_fct = nn.CrossEntropyLoss()
845
+ masked_token_idx = torch.nonzero(labels.flatten() > 0,
846
+ as_tuple=False).flatten()
847
+ loss = loss_fct(prediction_scores,
848
+ labels.flatten()[masked_token_idx])
849
+
850
+ assert input_ids is not None, 'Coding error; please open an issue'
851
+ batch, seqlen = input_ids.shape[:2]
852
+ prediction_scores = rearrange(index_put_first_axis(
853
+ prediction_scores, masked_token_idx, batch * seqlen),
854
+ '(b s) d -> b s d',
855
+ b=batch)
856
+
857
+ if not return_dict:
858
+ output = (prediction_scores,) + outputs[2:]
859
+ return ((loss,) + output) if loss is not None else output
860
+
861
+ return MaskedLMOutput(
862
+ loss=loss,
863
+ logits=prediction_scores,
864
+ hidden_states=None,
865
+ attentions=None,
866
+ )
867
+
868
+ def prepare_inputs_for_generation(self, input_ids: torch.Tensor,
869
+ attention_mask: torch.Tensor,
870
+ **model_kwargs):
871
+ input_shape = input_ids.shape
872
+ effective_batch_size = input_shape[0]
873
+
874
+ # add a dummy token
875
+ if self.config.pad_token_id is None:
876
+ raise ValueError('The PAD token should be defined for generation')
877
+
878
+ attention_mask = torch.cat([
879
+ attention_mask,
880
+ attention_mask.new_zeros((attention_mask.shape[0], 1))
881
+ ],
882
+ dim=-1)
883
+ dummy_token = torch.full((effective_batch_size, 1),
884
+ self.config.pad_token_id,
885
+ dtype=torch.long,
886
+ device=input_ids.device)
887
+ input_ids = torch.cat([input_ids, dummy_token], dim=1)
888
+
889
+ return {'input_ids': input_ids, 'attention_mask': attention_mask}
890
+
891
+
892
+ class RobertaForNextSentencePrediction(RobertaPreTrainedModel):
893
+ #TBD: Push in future commit
894
+ pass
895
+
896
+
897
+ class RobertaForSequenceClassification(RobertaPreTrainedModel):
898
+ """Roberta Model transformer with a sequence classification/regression head.
899
+ This head is just a linear layer on top of the pooled output. Used for,
900
+ e.g., GLUE tasks.
901
+ """
902
+
903
+ def __init__(self, config):
904
+ super().__init__(config)
905
+ self.num_labels = config.num_labels
906
+ self.config = config
907
+
908
+ self.bert = RobertaModel(config)
909
+ classifier_dropout = (config.classifier_dropout
910
+ if config.classifier_dropout is not None else
911
+ config.hidden_dropout_prob)
912
+ self.dropout = nn.Dropout(classifier_dropout)
913
+ self.classifier = nn.Linear(config.hidden_size, config.num_labels)
914
+
915
+ # Initialize weights and apply final processing
916
+ self.post_init()
917
+
918
+ @classmethod
919
+ def from_composer(cls,
920
+ pretrained_checkpoint,
921
+ state_dict=None,
922
+ cache_dir=None,
923
+ from_tf=False,
924
+ config=None,
925
+ *inputs,
926
+ **kwargs):
927
+ """Load from pre-trained."""
928
+ model = cls(config, *inputs, **kwargs)
929
+ if from_tf:
930
+ raise ValueError(
931
+ 'Mosaic BERT does not support loading TensorFlow weights.')
932
+
933
+ state_dict = torch.load(pretrained_checkpoint)
934
+ # If the state_dict was saved after wrapping with `composer.HuggingFaceModel`, it takes on the `model` prefix
935
+ consume_prefix_in_state_dict_if_present(state_dict, prefix='model.')
936
+ missing_keys, unexpected_keys = model.load_state_dict(state_dict,
937
+ strict=False)
938
+
939
+ if len(missing_keys) > 0:
940
+ logger.warning(
941
+ f"Found these missing keys in the checkpoint: {', '.join(missing_keys)}"
942
+ )
943
+ if len(unexpected_keys) > 0:
944
+ logger.warning(
945
+ f"Found these unexpected keys in the checkpoint: {', '.join(unexpected_keys)}"
946
+ )
947
+
948
+ return model
949
+
950
+ def forward(
951
+ self,
952
+ input_ids: Optional[torch.Tensor] = None,
953
+ attention_mask: Optional[torch.Tensor] = None,
954
+ token_type_ids: Optional[torch.Tensor] = None,
955
+ position_ids: Optional[torch.Tensor] = None,
956
+ head_mask: Optional[torch.Tensor] = None,
957
+ inputs_embeds: Optional[torch.Tensor] = None,
958
+ labels: Optional[torch.Tensor] = None,
959
+ output_attentions: Optional[bool] = None,
960
+ output_hidden_states: Optional[bool] = None,
961
+ return_dict: Optional[bool] = None,
962
+ ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]:
963
+ # labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
964
+ # Labels for computing the sequence classification/regression loss.
965
+ # Indices should be in `[0, ..., config.num_labels - 1]`.
966
+ # If `config.num_labels == 1` a regression loss is computed
967
+ # (mean-square loss). If `config.num_labels > 1` a classification loss
968
+ # is computed (cross-entropy).
969
+
970
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
971
+
972
+ outputs = self.bert(
973
+ input_ids,
974
+ attention_mask=attention_mask,
975
+ token_type_ids=token_type_ids,
976
+ position_ids=position_ids,
977
+ head_mask=head_mask,
978
+ inputs_embeds=inputs_embeds,
979
+ output_attentions=output_attentions,
980
+ output_hidden_states=output_hidden_states,
981
+ return_dict=return_dict,
982
+ )
983
+
984
+ pooled_output = outputs[1]
985
+
986
+ pooled_output = self.dropout(pooled_output)
987
+ logits = self.classifier(pooled_output)
988
+
989
+ loss = None
990
+ if labels is not None:
991
+ # Compute loss
992
+ if self.config.problem_type is None:
993
+ if self.num_labels == 1:
994
+ self.config.problem_type = 'regression'
995
+ elif self.num_labels > 1 and (labels.dtype == torch.long or
996
+ labels.dtype == torch.int):
997
+ self.config.problem_type = 'single_label_classification'
998
+ else:
999
+ self.config.problem_type = 'multi_label_classification'
1000
+
1001
+ if self.config.problem_type == 'regression':
1002
+ loss_fct = nn.MSELoss()
1003
+ if self.num_labels == 1:
1004
+ loss = loss_fct(logits.squeeze(), labels.squeeze())
1005
+ else:
1006
+ loss = loss_fct(logits, labels)
1007
+ elif self.config.problem_type == 'single_label_classification':
1008
+ loss_fct = nn.CrossEntropyLoss()
1009
+ loss = loss_fct(logits.view(-1, self.num_labels),
1010
+ labels.view(-1))
1011
+ elif self.config.problem_type == 'multi_label_classification':
1012
+ loss_fct = nn.BCEWithLogitsLoss()
1013
+ loss = loss_fct(logits, labels)
1014
+
1015
+ if not return_dict:
1016
+ output = (logits,) + outputs[2:]
1017
+ return ((loss,) + output) if loss is not None else output
1018
+
1019
+ return SequenceClassifierOutput(
1020
+ loss=loss,
1021
+ logits=logits,
1022
+ hidden_states=None,
1023
+ attentions=None,
1024
+ )
1025
+
1026
+
1027
+ class RobertaForMultipleChoice(RobertaPreTrainedModel):
1028
+ #TBD: Push in future commit
1029
+ pass
1030
+
1031
+
1032
+ class RobertaForTokenClassification(RobertaPreTrainedModel):
1033
+ #TBD: Push in future commit
1034
+ pass
1035
+
1036
+
1037
+ class RobertaForQuestionAnswering(RobertaPreTrainedModel):
1038
+ """Bert Model with a span classification head.
1039
+ This is used for extractive question-answering tasks like SQuAD (a linear
1040
+ layers on top of the hidden states' output to compute `span start logits`
1041
+ and `span end logits`).
1042
+ """
1043
+ #TBD: Push in future commit
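The position handling in `roberta_layers.py` lives in `RobertaEncoder.rebuild_alibi_tensor`: every attention head gets a slope, and the bias added to the attention logits is that slope times the negative query-key distance. A self-contained sketch of the same construction, restricted to a power-of-two head count (the model above also handles other head counts via an interleaving workaround):

import math
import torch

def alibi_bias(n_heads: int, size: int) -> torch.Tensor:
    # Geometric sequence of per-head slopes, as in the ALiBi paper for 2^a heads.
    start = 2 ** (-2 ** -(math.log2(n_heads) - 3))
    slopes = torch.tensor([start ** (i + 1) for i in range(n_heads)])
    # |i - j| distance between every query position i and key position j.
    pos = torch.arange(size)
    distance = (pos[None, :] - pos[:, None]).abs()           # (size, size)
    # Bias is -slope * distance, one (size, size) plane per head.
    return (-distance)[None, :, :] * slopes[:, None, None]   # (n_heads, size, size)

bias = alibi_bias(n_heads=8, size=8)
print(bias.shape)  # torch.Size([8, 8, 8]); the encoder stores it with an extra leading batch dim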
roberta_padding.py ADDED
@@ -0,0 +1,150 @@
1
+ # Copyright 2022 MosaicML Examples authors
2
+ # SPDX-License-Identifier: Apache-2.0
3
+
4
+ # Adapted from https://github.com/HazyResearch/flash-attention/blob/main/flash_attn/bert_padding.py
5
+ # Which was adapted from https://github.com/mlcommons/training_results_v1.1/blob/main/NVIDIA/benchmarks/bert/implementations/pytorch/padding.py
6
+
7
+ """Helper functions for padding and unpadding batches.
8
+ These functions are used extensively throughout the Mosaic BERT implementation
9
+ in `roberta_layers.py`.
10
+ """
11
+
12
+ from typing import Tuple, cast
13
+
14
+ import torch
15
+ import torch.nn.functional as F
16
+ from einops import rearrange, repeat
17
+
18
+
19
+ class IndexFirstAxis(torch.autograd.Function):
20
+
21
+ @staticmethod
22
+ def forward(ctx, input: torch.Tensor,
23
+ indices: torch.Tensor) -> torch.Tensor:
24
+ """Get just the values of `input` which are at `indices`.
25
+ Arguments:
26
+ ctx: the autograd context object
27
+ input: (b, ...) 2+ dimensional tensor
28
+ indices: (num_idx) 1D tensor
29
+ """
30
+ ctx.save_for_backward(indices)
31
+ assert input.ndim >= 2
32
+ ctx.first_axis_dim, other_shape = input.shape[0], input.shape[
33
+ 1:] # type: ignore
34
+ second_dim = other_shape.numel(
35
+ ) # product of sizes of all but first dimension
36
+ # TD [2022-03-04] For some reason torch.gather is a bit faster than indexing.
37
+ return torch.gather(
38
+ rearrange(input, 'b ... -> b (...)'), # (b, ...) -> (b, second_dim)
39
+ 0,
40
+ repeat(indices, 'z -> z d',
41
+ d=second_dim) # (indices,) -> (indices, second_dim)
42
+ ).reshape(-1, *other_shape) # (num_idx, ...)
43
+
44
+ @staticmethod
45
+ def backward(ctx, grad_output: torch.Tensor) -> Tuple[torch.Tensor, None]:
46
+ indices, = ctx.saved_tensors
47
+ assert grad_output.ndim >= 2
48
+ other_shape = grad_output.shape[1:]
49
+ grad_output = rearrange(grad_output, 'b ... -> b (...)')
50
+ grad_input = torch.zeros([ctx.first_axis_dim, grad_output.shape[1]],
51
+ device=grad_output.device,
52
+ dtype=grad_output.dtype)
53
+ # TD [2022-03-04] For some reason torch.scatter is a bit faster than indexing.
54
+ # grad_input[indices] = grad_output
55
+ grad_input.scatter_(0,
56
+ repeat(indices, 'z -> z d', d=grad_output.shape[1]),
57
+ grad_output)
58
+ return grad_input.reshape(ctx.first_axis_dim, *other_shape), None
59
+
60
+
61
+ index_first_axis = IndexFirstAxis.apply
62
+
63
+
64
+ class IndexPutFirstAxis(torch.autograd.Function):
65
+
66
+ @staticmethod
67
+ def forward(ctx, values: torch.Tensor, indices: torch.Tensor,
68
+ first_axis_dim) -> torch.Tensor:
69
+ ctx.save_for_backward(indices)
70
+ assert indices.ndim == 1
71
+ assert values.ndim >= 2
72
+ output = torch.zeros(first_axis_dim,
73
+ *values.shape[1:],
74
+ device=values.device,
75
+ dtype=values.dtype)
76
+ output[indices] = values
77
+ return output
78
+
79
+ @staticmethod
80
+ def backward(ctx,
81
+ grad_output: torch.Tensor) -> Tuple[torch.Tensor, None, None]:
82
+ indices, = ctx.saved_tensors
83
+ grad_values = grad_output[indices]
84
+ return grad_values, None, None
85
+
86
+
87
+ index_put_first_axis = IndexPutFirstAxis.apply
88
+
89
+
90
+ def unpad_input(
91
+ hidden_states: torch.Tensor,
92
+ attention_mask: torch.Tensor,
93
+ ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, int]:
94
+ """Remove padding from input sequences.
95
+ Arguments:
96
+ hidden_states: (batch, seqlen, ...)
97
+ attention_mask: (batch, seqlen), bool / int, 1 means valid and 0 means not valid.
98
+ Returns:
99
+ hidden_states: (total_nnz, ...), where total_nnz = number of tokens selected in attention_mask.
100
+ indices: (total_nnz)
101
+ cu_seqlens: (batch + 1), the cumulative sequence lengths, used to index into hidden_states.
102
+ max_seqlen_in_batch: int ()
103
+ """
104
+ seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
105
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
106
+ max_seqlen_in_batch = int(seqlens_in_batch.max().item())
107
+ cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32),
108
+ (1, 0))
109
+ # TD [2022-03-04] We don't want to index with a bool mask, because Pytorch will expand the
110
+ # bool mask, then call nonzero to get the indices, then index with those. The indices is @dim
111
+ # times larger than it needs to be, wasting memory. It's faster and more memory-efficient to
112
+ # index with integer indices. Moreover, torch's index is a bit slower than it needs to be,
113
+ # so we write custom forward and backward to make it a bit faster.
114
+ hidden_states = cast(
115
+ torch.Tensor,
116
+ index_first_axis(rearrange(hidden_states, 'b s ... -> (b s) ...'),
117
+ indices))
118
+ return hidden_states, indices, cu_seqlens, max_seqlen_in_batch
119
+
120
+
121
+ def unpad_input_only(
122
+ hidden_states: torch.Tensor,
123
+ attention_mask: torch.Tensor,
124
+ ) -> torch.Tensor:
125
+ """Like unpad_input, but only return the unpadded first tensor.
126
+ Save a small amount of overhead.
127
+ Arguments:
128
+ hidden_states: (batch, seqlen, ...)
129
+ attention_mask: (batch, seqlen), bool / int, 1 means valid and 0 means not valid.
130
+ Returns:
131
+ hidden_states: (total_nnz, ...), where total_nnz = number of tokens selected in attention_mask.
132
+ """
133
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
134
+ return index_first_axis(rearrange(hidden_states, 'b s ... -> (b s) ...'),
135
+ indices)
136
+
137
+
138
+ def pad_input(hidden_states: torch.Tensor, indices: torch.Tensor, batch: int,
139
+ seqlen: int) -> torch.Tensor:
140
+ """Add padding to sequences.
141
+ Arguments:
142
+ hidden_states: (total_nnz, ...), where total_nnz = number of tokens selected in attention_mask.
143
+ indices: (total_nnz)
144
+ batch: int batch_size
145
+ seqlen: int max sequence length
146
+ Returns:
147
+ hidden_states: (batch, seqlen, ...)
148
+ """
149
+ output = index_put_first_axis(hidden_states, indices, batch * seqlen)
150
+ return rearrange(output, '(b s) ... -> b s ...', b=batch)
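roberta_padding.py is what lets the encoder drop pad tokens before attention and restore them afterwards. A small round-trip sketch with a toy batch, assuming the module is importable from the working directory:

import torch
from roberta_padding import pad_input, unpad_input

hidden = torch.randn(2, 4, 8)                       # (batch=2, seqlen=4, hidden=8)
mask = torch.tensor([[1, 1, 1, 0],
                     [1, 1, 0, 0]])                 # 1 = real token, 0 = padding

# Compress away the padded positions: 5 real tokens remain.
unpadded, indices, cu_seqlens, max_len = unpad_input(hidden, mask.bool())
print(unpadded.shape)    # torch.Size([5, 8])
print(cu_seqlens)        # tensor([0, 3, 5], dtype=torch.int32)

# Re-insert zeros at the padded positions to recover the original layout.
restored = pad_input(unpadded, indices, batch=2, seqlen=4)
print(restored.shape)    # torch.Size([2, 4, 8])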
special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,22 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "mask_token": {
+     "__type": "AddedToken",
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "unk_token": "<unk>"
+ }
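The tokenizer files describe a standard `RobertaTokenizer` with a 512-token `model_max_length`, which pairs with the masked-LM head defined in `roberta_layers.py`. A short fill-mask sketch; the repo id is again a placeholder and the example sentence is purely illustrative:

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

repo_id = "your-namespace/your-roberta-alibi-model"  # placeholder repo id (assumption)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMaskedLM.from_pretrained(repo_id, trust_remote_code=True)
model.eval()

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

# Locate the <mask> position and decode its highest-scoring token.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))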
vocab.json ADDED
The diff for this file is too large to render. See raw diff