SaulLu committed on
Commit
4cf987a
1 Parent(s): 77a4895

First version of the your-model-name model and tokenizer.

config.json ADDED
@@ -0,0 +1,72 @@
+ {
+ "activation_dropout": 0.1,
+ "activation_function": "gelu",
+ "add_bias_logits": false,
+ "add_final_layer_norm": false,
+ "architectures": [
+ "BartModel"
+ ],
+ "attention_dropout": 0.1,
+ "bos_token_id": 0,
+ "classif_dropout": 0.1,
+ "classifier_dropout": 0.0,
+ "d_model": 1024,
+ "decoder_attention_heads": 16,
+ "decoder_ffn_dim": 4096,
+ "decoder_layerdrop": 0.0,
+ "decoder_layers": 12,
+ "decoder_start_token_id": 2,
+ "dropout": 0.1,
+ "early_stopping": true,
+ "encoder_attention_heads": 16,
+ "encoder_ffn_dim": 4096,
+ "encoder_layerdrop": 0.0,
+ "encoder_layers": 12,
+ "eos_token_id": 2,
+ "forced_eos_token_id": 2,
+ "gradient_checkpointing": false,
+ "id2label": {
+ "0": "LABEL_0",
+ "1": "LABEL_1",
+ "2": "LABEL_2"
+ },
+ "init_std": 0.02,
+ "is_encoder_decoder": true,
+ "label2id": {
+ "LABEL_0": 0,
+ "LABEL_1": 1,
+ "LABEL_2": 2
+ },
+ "max_position_embeddings": 1024,
+ "model_type": "bart",
+ "no_repeat_ngram_size": 3,
+ "normalize_before": false,
+ "num_beams": 4,
+ "num_hidden_layers": 12,
+ "pad_token_id": 1,
+ "scale_embedding": false,
+ "task_specific_params": {
+ "summarization": {
+ "length_penalty": 1.0,
+ "max_length": 128,
+ "min_length": 12,
+ "num_beams": 4
+ },
+ "summarization_cnn": {
+ "length_penalty": 2.0,
+ "max_length": 142,
+ "min_length": 56,
+ "num_beams": 4
+ },
+ "summarization_xsum": {
+ "length_penalty": 1.0,
+ "max_length": 62,
+ "min_length": 11,
+ "num_beams": 6
+ }
+ },
+ "torch_dtype": "float32",
+ "transformers_version": "4.11.0.dev0",
+ "use_cache": true,
+ "vocab_size": 50265
+ }
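For reference, once the files in this commit are saved locally, the configuration above can be loaded with `BartConfig`. A minimal sketch, assuming a hypothetical local checkout directory named `your-model-name` (not part of the committed files):

from transformers import BartConfig

# hypothetical local path containing the config.json added by this commit
config = BartConfig.from_pretrained("./your-model-name")
print(config.model_type, config.d_model, config.vocab_size)  # bart 1024 50265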
merges.txt ADDED
The diff for this file is too large to render. See raw diff
modeling_bart.py ADDED
@@ -0,0 +1,1817 @@
+ # coding=utf-8
+ # Copyright 2021 The Fairseq Authors and The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ PyTorch BART model. """
+ import copy
+ import math
+ import random
+ import warnings
+ from typing import Optional, Tuple
+
+ import torch
+ import torch.utils.checkpoint
+ from torch import nn
+ from torch.nn import CrossEntropyLoss, MSELoss
+
+ from ...activations import ACT2FN
+ from ...file_utils import (
+ add_code_sample_docstrings,
+ add_end_docstrings,
+ add_start_docstrings,
+ add_start_docstrings_to_model_forward,
+ replace_return_docstrings,
+ )
+ from ...modeling_outputs import (
+ BaseModelOutput,
+ BaseModelOutputWithPastAndCrossAttentions,
+ CausalLMOutputWithCrossAttentions,
+ Seq2SeqLMOutput,
+ Seq2SeqModelOutput,
+ Seq2SeqQuestionAnsweringModelOutput,
+ Seq2SeqSequenceClassifierOutput,
+ )
+ from ...modeling_utils import PreTrainedModel
+ from ...utils import logging
+ from .configuration_bart import BartConfig
+
+
+ logger = logging.get_logger(__name__)
+
+ _CHECKPOINT_FOR_DOC = "facebook/bart-large"
+ _CONFIG_FOR_DOC = "BartConfig"
+ _TOKENIZER_FOR_DOC = "BartTokenizer"
+
+
+ BART_PRETRAINED_MODEL_ARCHIVE_LIST = [
+ "facebook/bart-large",
+ # See all BART models at https://huggingface.co/models?filter=bart
+ ]
+
+
+ def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
+ """
+ Shift input ids one token to the right.
+ """
+ shifted_input_ids = input_ids.new_zeros(input_ids.shape)
+ shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
+ shifted_input_ids[:, 0] = decoder_start_token_id
+
+ assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined."
+ # replace possible -100 values in labels by `pad_token_id`
+ shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
+
+ return shifted_input_ids
+
+
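To make the behavior of `shift_tokens_right` concrete, a minimal sketch; the token ids and the -100 loss-padding value are illustrative, while `pad_token_id=1` and `decoder_start_token_id=2` match the config.json above:

import torch

labels = torch.tensor([[0, 31414, 2, -100, -100]])
shifted = shift_tokens_right(labels, pad_token_id=1, decoder_start_token_id=2)
print(shifted)  # tensor([[2, 0, 31414, 2, 1]]): start token prepended, remaining -100 replaced by pad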
+ def _make_causal_mask(input_ids_shape: torch.Size, dtype: torch.dtype, past_key_values_length: int = 0):
+ """
+ Make causal mask used for uni-directional self-attention.
+ """
+ bsz, tgt_len = input_ids_shape
+ mask = torch.full((tgt_len, tgt_len), float("-inf"))
+ mask_cond = torch.arange(mask.size(-1))
+ mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
+ mask = mask.to(dtype)
+
+ if past_key_values_length > 0:
+ mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype), mask], dim=-1)
+ return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
+
+
+ def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
+ """
+ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
+ """
+ bsz, src_len = mask.size()
+ tgt_len = tgt_len if tgt_len is not None else src_len
+
+ expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
+
+ inverted_mask = 1.0 - expanded_mask
+
+ return inverted_mask.masked_fill(inverted_mask.bool(), torch.finfo(dtype).min)
+
+
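A small sketch of the mask `_make_causal_mask` builds; the shape and dtype are illustrative:

import torch

m = _make_causal_mask(torch.Size([1, 3]), torch.float32)
print(m.shape)  # torch.Size([1, 1, 3, 3])
print(m[0, 0])
# tensor([[0., -inf, -inf],
#         [0., 0., -inf],
#         [0., 0., 0.]])
# position i may attend to positions <= i; future positions receive -inf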
+ class BartLearnedPositionalEmbedding(nn.Embedding):
+ """
+ This module learns positional embeddings up to a fixed maximum size.
+ """
+
+ def __init__(self, num_embeddings: int, embedding_dim: int):
+ # Bart is set up so that if padding_idx is specified then offset the embedding ids by 2
+ # and adjust num_embeddings appropriately. Other models don't have this hack
+ self.offset = 2
+ super().__init__(num_embeddings + self.offset, embedding_dim)
+
+ def forward(self, input_ids_shape: torch.Size, past_key_values_length: int = 0):
+ """`input_ids_shape` is expected to be [bsz x seqlen]."""
+ bsz, seq_len = input_ids_shape[:2]
+ positions = torch.arange(
+ past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device
+ )
+ return super().forward(positions + self.offset)
+
+
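The offset-by-2 hack above means the embedding table holds two extra rows. A minimal sketch with illustrative sizes:

import torch

pos_emb = BartLearnedPositionalEmbedding(num_embeddings=1024, embedding_dim=16)
print(pos_emb.weight.shape)  # torch.Size([1026, 16]): 1024 positions plus the offset of 2

out = pos_emb(torch.Size([2, 5]))  # positions 0..4 are looked up at rows 2..6
print(out.shape)  # torch.Size([5, 16])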
+ class BartAttention(nn.Module):
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
+
+ def __init__(
+ self,
+ embed_dim: int,
+ num_heads: int,
+ dropout: float = 0.0,
+ is_decoder: bool = False,
+ bias: bool = True,
+ ):
+ super().__init__()
+ self.embed_dim = embed_dim
+ self.num_heads = num_heads
+ self.dropout = dropout
+ self.head_dim = embed_dim // num_heads
+ assert (
+ self.head_dim * num_heads == self.embed_dim
+ ), f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`: {num_heads})."
+ self.scaling = self.head_dim ** -0.5
+ self.is_decoder = is_decoder
+
+ self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
+ self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
+ self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
+ self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
+
+ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
+ return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ key_value_states: Optional[torch.Tensor] = None,
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ layer_head_mask: Optional[torch.Tensor] = None,
+ output_attentions: bool = False,
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+ """Input shape: Batch x Time x Channel"""
+
+ # if key_value_states are provided this layer is used as a cross-attention layer
+ # for the decoder
+ is_cross_attention = key_value_states is not None
+ bsz, tgt_len, embed_dim = hidden_states.size()
+
+ # get query proj
+ query_states = self.q_proj(hidden_states) * self.scaling
+ # get key, value proj
+ if is_cross_attention and past_key_value is not None:
+ # reuse k,v, cross_attentions
+ key_states = past_key_value[0]
+ value_states = past_key_value[1]
+ elif is_cross_attention:
+ # cross_attentions
+ key_states = self._shape(self.k_proj(key_value_states), -1, bsz)
+ value_states = self._shape(self.v_proj(key_value_states), -1, bsz)
+ elif past_key_value is not None:
+ # reuse k, v, self_attention
+ key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
+ value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
+ else:
+ # self_attention
+ key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
+ value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
+
+ if self.is_decoder:
+ # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.
+ # Further calls to cross_attention layer can then reuse all cross-attention
+ # key/value_states (first "if" case)
+ # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of
+ # all previous decoder key/value_states. Further calls to uni-directional self-attention
+ # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
+ # if encoder bi-directional self-attention `past_key_value` is always `None`
+ past_key_value = (key_states, value_states)
+
+ proj_shape = (bsz * self.num_heads, -1, self.head_dim)
+ query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape)
+ key_states = key_states.view(*proj_shape)
+ value_states = value_states.view(*proj_shape)
+
+ src_len = key_states.size(1)
+ attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
+
+ if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
+ raise ValueError(
+ f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is {attn_weights.size()}"
+ )
+
+ if attention_mask is not None:
+ if attention_mask.size() != (bsz, 1, tgt_len, src_len):
+ raise ValueError(
+ f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
+ )
+ attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
+ attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
+
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1)
+
+ if layer_head_mask is not None:
+ if layer_head_mask.size() != (self.num_heads,):
+ raise ValueError(
+ f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}"
+ )
+ attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
+ attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
+
+ if output_attentions:
+ # this operation is a bit awkward, but it's required to
+ # make sure that attn_weights keeps its gradient.
+ # In order to do so, attn_weights have to be reshaped
+ # twice and have to be reused in the following
+ attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
+ attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)
+ else:
+ attn_weights_reshaped = None
+
+ attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
+
+ attn_output = torch.bmm(attn_probs, value_states)
+
+ if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
+ raise ValueError(
+ f"`attn_output` should be of size {(bsz * self.num_heads, tgt_len, self.head_dim)}, but is {attn_output.size()}"
+ )
+
+ attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
+ attn_output = attn_output.transpose(1, 2)
+ attn_output = attn_output.reshape(bsz, tgt_len, embed_dim)
+
+ attn_output = self.out_proj(attn_output)
+
+ return attn_output, attn_weights_reshaped, past_key_value
+
+
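A minimal usage sketch for `BartAttention` as encoder self-attention; d_model=1024 and 16 heads match the config.json above, and the inputs are random placeholders:

import torch

attn = BartAttention(embed_dim=1024, num_heads=16, dropout=0.1)
hidden = torch.randn(2, 7, 1024)  # (batch, time, channel)

out, weights, past = attn(hidden, output_attentions=True)
print(out.shape)      # torch.Size([2, 7, 1024])
print(weights.shape)  # torch.Size([2, 16, 7, 7]), per-head attention weights
print(past)           # None: key/value caching only happens when is_decoder=True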
+ class BartEncoderLayer(nn.Module):
+ def __init__(self, config: BartConfig):
+ super().__init__()
+ self.embed_dim = config.d_model
+ self.self_attn = BartAttention(
+ embed_dim=self.embed_dim,
+ num_heads=config.encoder_attention_heads,
+ dropout=config.attention_dropout,
+ )
+ self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
+ self.dropout = config.dropout
+ self.activation_fn = ACT2FN[config.activation_function]
+ self.activation_dropout = config.activation_dropout
+ self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim)
+ self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim)
+ self.final_layer_norm = nn.LayerNorm(self.embed_dim)
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: torch.Tensor,
+ layer_head_mask: torch.Tensor,
+ output_attentions: bool = False,
+ ):
+ """
+ Args:
+ hidden_states (:obj:`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
+ attention_mask (:obj:`torch.FloatTensor`): attention mask of size
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
+ layer_head_mask (:obj:`torch.FloatTensor`): mask for attention heads in a given layer of size
+ `(encoder_attention_heads,)`.
+ output_attentions (:obj:`bool`, `optional`):
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
+ returned tensors for more detail.
+ """
+ residual = hidden_states
+ hidden_states, attn_weights, _ = self.self_attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ layer_head_mask=layer_head_mask,
+ output_attentions=output_attentions,
+ )
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
+ hidden_states = residual + hidden_states
+ hidden_states = self.self_attn_layer_norm(hidden_states)
+
+ residual = hidden_states
+ hidden_states = self.activation_fn(self.fc1(hidden_states))
+ hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
+ hidden_states = self.fc2(hidden_states)
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
+ hidden_states = residual + hidden_states
+ hidden_states = self.final_layer_norm(hidden_states)
+
+ if hidden_states.dtype == torch.float16 and (
+ torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()
+ ):
+ clamp_value = torch.finfo(hidden_states.dtype).max - 1000
+ hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
+
+ outputs = (hidden_states,)
+
+ if output_attentions:
+ outputs += (attn_weights,)
+
+ return outputs
+
+
+ class BartDecoderLayer(nn.Module):
+ def __init__(self, config: BartConfig):
+ super().__init__()
+ self.embed_dim = config.d_model
+
+ self.self_attn = BartAttention(
+ embed_dim=self.embed_dim,
+ num_heads=config.decoder_attention_heads,
+ dropout=config.attention_dropout,
+ is_decoder=True,
+ )
+ self.dropout = config.dropout
+ self.activation_fn = ACT2FN[config.activation_function]
+ self.activation_dropout = config.activation_dropout
+
+ self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
+ self.encoder_attn = BartAttention(
+ self.embed_dim,
+ config.decoder_attention_heads,
+ dropout=config.attention_dropout,
+ is_decoder=True,
+ )
+ self.encoder_attn_layer_norm = nn.LayerNorm(self.embed_dim)
+ self.fc1 = nn.Linear(self.embed_dim, config.decoder_ffn_dim)
+ self.fc2 = nn.Linear(config.decoder_ffn_dim, self.embed_dim)
+ self.final_layer_norm = nn.LayerNorm(self.embed_dim)
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ encoder_hidden_states: Optional[torch.Tensor] = None,
+ encoder_attention_mask: Optional[torch.Tensor] = None,
+ layer_head_mask: Optional[torch.Tensor] = None,
+ cross_attn_layer_head_mask: Optional[torch.Tensor] = None,
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
+ output_attentions: Optional[bool] = False,
+ use_cache: Optional[bool] = True,
+ ):
+ """
+ Args:
+ hidden_states (:obj:`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
+ attention_mask (:obj:`torch.FloatTensor`): attention mask of size
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
+ encoder_hidden_states (:obj:`torch.FloatTensor`): cross attention input to the layer of shape `(batch, seq_len, embed_dim)`
+ encoder_attention_mask (:obj:`torch.FloatTensor`): encoder attention mask of size
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
+ layer_head_mask (:obj:`torch.FloatTensor`): mask for attention heads in a given layer of size
+ `(decoder_attention_heads,)`.
+ cross_attn_layer_head_mask (:obj:`torch.FloatTensor`): mask for cross-attention heads in a given layer of
+ size `(decoder_attention_heads,)`.
+ past_key_value (:obj:`Tuple(torch.FloatTensor)`): cached past key and value projection states
+ output_attentions (:obj:`bool`, `optional`):
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
+ returned tensors for more detail.
+ """
+ residual = hidden_states
+
+ # Self Attention
+ # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
+ self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
+ # add present self-attn cache to positions 1,2 of present_key_value tuple
+ hidden_states, self_attn_weights, present_key_value = self.self_attn(
+ hidden_states=hidden_states,
+ past_key_value=self_attn_past_key_value,
+ attention_mask=attention_mask,
+ layer_head_mask=layer_head_mask,
+ output_attentions=output_attentions,
+ )
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
+ hidden_states = residual + hidden_states
+ hidden_states = self.self_attn_layer_norm(hidden_states)
+
+ # Cross-Attention Block
+ cross_attn_present_key_value = None
+ cross_attn_weights = None
+ if encoder_hidden_states is not None:
+ residual = hidden_states
+
+ # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple
+ cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
+ hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
+ hidden_states=hidden_states,
+ key_value_states=encoder_hidden_states,
+ attention_mask=encoder_attention_mask,
+ layer_head_mask=cross_attn_layer_head_mask,
+ past_key_value=cross_attn_past_key_value,
+ output_attentions=output_attentions,
+ )
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
+ hidden_states = residual + hidden_states
+ hidden_states = self.encoder_attn_layer_norm(hidden_states)
+
+ # add cross-attn to positions 3,4 of present_key_value tuple
+ present_key_value = present_key_value + cross_attn_present_key_value
+
+ # Fully Connected
+ residual = hidden_states
+ hidden_states = self.activation_fn(self.fc1(hidden_states))
+ hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
+ hidden_states = self.fc2(hidden_states)
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
+ hidden_states = residual + hidden_states
+ hidden_states = self.final_layer_norm(hidden_states)
+
+ outputs = (hidden_states,)
+
+ if output_attentions:
+ outputs += (self_attn_weights, cross_attn_weights)
+
+ if use_cache:
+ outputs += (present_key_value,)
+
+ return outputs
+
+
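To illustrate the 4-tuple cache layout the comments above describe, a minimal sketch; it assumes `config` is a `BartConfig` matching the config.json above, so d_model=1024, 16 decoder heads, head_dim=64:

import torch

layer = BartDecoderLayer(config)
hidden = torch.randn(2, 1, 1024)   # one new decoder token per generation step
enc_out = torch.randn(2, 9, 1024)  # encoder output for a source of length 9

outputs = layer(hidden, encoder_hidden_states=enc_out, use_cache=True)
present = outputs[-1]
# first two entries are self-attn k/v of shape (2, 16, 1, 64);
# last two are cross-attn k/v of shape (2, 16, 9, 64), computed once and reused
print([tuple(t.shape) for t in present])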
+ class BartClassificationHead(nn.Module):
+ """Head for sentence-level classification tasks."""
+
+ def __init__(
+ self,
+ input_dim: int,
+ inner_dim: int,
+ num_classes: int,
+ pooler_dropout: float,
+ ):
+ super().__init__()
+ self.dense = nn.Linear(input_dim, inner_dim)
+ self.dropout = nn.Dropout(p=pooler_dropout)
+ self.out_proj = nn.Linear(inner_dim, num_classes)
+
+ def forward(self, hidden_states: torch.Tensor):
+ hidden_states = self.dropout(hidden_states)
+ hidden_states = self.dense(hidden_states)
+ hidden_states = torch.tanh(hidden_states)
+ hidden_states = self.dropout(hidden_states)
+ hidden_states = self.out_proj(hidden_states)
+ return hidden_states
+
+
+ class BartPretrainedModel(PreTrainedModel):
+ config_class = BartConfig
+ base_model_prefix = "model"
+ _keys_to_ignore_on_load_unexpected = [r"encoder\.version", r"decoder\.version"]
+
+ def _init_weights(self, module):
+ std = self.config.init_std
+ if isinstance(module, nn.Linear):
+ module.weight.data.normal_(mean=0.0, std=std)
+ if module.bias is not None:
+ module.bias.data.zero_()
+ elif isinstance(module, nn.Embedding):
+ module.weight.data.normal_(mean=0.0, std=std)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ @property
+ def dummy_inputs(self):
+ pad_token = self.config.pad_token_id
+ input_ids = torch.tensor([[0, 6, 10, 4, 2], [0, 8, 12, 2, pad_token]], device=self.device)
+ dummy_inputs = {
+ "attention_mask": input_ids.ne(pad_token),
+ "input_ids": input_ids,
+ }
+ return dummy_inputs
+
+
+ class PretrainedBartModel(BartPretrainedModel):
+ def __init_subclass__(self):
+ warnings.warn(
+ "The class `PretrainedBartModel` has been deprecated, please use `BartPretrainedModel` instead.",
+ FutureWarning,
+ )
+
+
+ BART_START_DOCSTRING = r"""
+ This model inherits from :class:`~transformers.PreTrainedModel`. Check the superclass documentation for the generic
+ methods the library implements for all its models (such as downloading or saving, resizing the input embeddings,
+ pruning heads etc.)
+
+ This model is also a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`__
+ subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to
+ general usage and behavior.
+
+ Parameters:
+ config (:class:`~transformers.BartConfig`):
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
+ load the weights associated with the model, only the configuration. Check out the
+ :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights.
+ """
+
+ BART_GENERATION_EXAMPLE = r"""
+ Summarization example::
+
+ >>> from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
+
+ >>> model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
+ >>> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
+
+ >>> ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
+ >>> inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
+
+ >>> # Generate Summary
+ >>> summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True)
+ >>> print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
+
+ Mask filling example::
+
+ >>> from transformers import BartTokenizer, BartForConditionalGeneration
+ >>> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
+ >>> TXT = "My friends are <mask> but they eat too many carbs."
+
+ >>> model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
+ >>> input_ids = tokenizer([TXT], return_tensors='pt')['input_ids']
+ >>> logits = model(input_ids).logits
+
+ >>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
+ >>> probs = logits[0, masked_index].softmax(dim=0)
+ >>> values, predictions = probs.topk(5)
+
+ >>> tokenizer.decode(predictions).split()
+ """
+
+ BART_INPUTS_DOCSTRING = r"""
+ Args:
+ input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`):
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
+ it.
+
+ Indices can be obtained using :class:`~transformers.BartTokenizer`. See
+ :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
+ details.
+
+ `What are input IDs? <../glossary.html#input-ids>`__
+ attention_mask (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
+ Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
+
+ - 1 for tokens that are **not masked**,
+ - 0 for tokens that are **masked**.
+
+ `What are attention masks? <../glossary.html#attention-mask>`__
+ decoder_input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
+ Indices of decoder input sequence tokens in the vocabulary.
+
+ Indices can be obtained using :class:`~transformers.BartTokenizer`. See
+ :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
+ details.
+
+ `What are decoder input IDs? <../glossary.html#decoder-input-ids>`__
+
+ Bart uses the :obj:`eos_token_id` as the starting token for :obj:`decoder_input_ids` generation. If
+ :obj:`past_key_values` is used, optionally only the last :obj:`decoder_input_ids` have to be input (see
+ :obj:`past_key_values`).
+
+ For translation and summarization training, :obj:`decoder_input_ids` should be provided. If no
+ :obj:`decoder_input_ids` is provided, the model will create this tensor by shifting the :obj:`input_ids` to
+ the right for denoising pre-training following the paper.
+ decoder_attention_mask (:obj:`torch.LongTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
+ Default behavior: generate a tensor that ignores pad tokens in :obj:`decoder_input_ids`. Causal mask will
+ also be used by default.
+
+ If you want to change padding behavior, you should read :func:`modeling_bart._prepare_decoder_inputs` and
+ modify to your needs. See diagram 1 in `the paper <https://arxiv.org/abs/1910.13461>`__ for more
+ information on the default strategy.
+ head_mask (:obj:`torch.Tensor` of shape :obj:`(encoder_layers, encoder_attention_heads)`, `optional`):
+ Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in ``[0, 1]``:
+
+ - 1 indicates the head is **not masked**,
+ - 0 indicates the head is **masked**.
+
+ decoder_head_mask (:obj:`torch.Tensor` of shape :obj:`(decoder_layers, decoder_attention_heads)`, `optional`):
+ Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in ``[0, 1]``:
+
+ - 1 indicates the head is **not masked**,
+ - 0 indicates the head is **masked**.
+
+ cross_attn_head_mask (:obj:`torch.Tensor` of shape :obj:`(decoder_layers, decoder_attention_heads)`, `optional`):
+ Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in ``[0,
+ 1]``:
+
+ - 1 indicates the head is **not masked**,
+ - 0 indicates the head is **masked**.
+
+ encoder_outputs (:obj:`tuple(tuple(torch.FloatTensor)`, `optional`):
+ Tuple consists of (:obj:`last_hidden_state`, `optional`: :obj:`hidden_states`, `optional`:
+ :obj:`attentions`) :obj:`last_hidden_state` of shape :obj:`(batch_size, sequence_length, hidden_size)`,
+ `optional`) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the
+ cross-attention of the decoder.
+ past_key_values (:obj:`tuple(tuple(torch.FloatTensor))`, `optional`, returned when ``use_cache=True`` is passed or when ``config.use_cache=True``):
+ Tuple of :obj:`tuple(torch.FloatTensor)` of length :obj:`config.n_layers`, with each tuple having 2 tensors
+ of shape :obj:`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of
+ shape :obj:`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
+
+ Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
+ blocks) that can be used (see :obj:`past_key_values` input) to speed up sequential decoding.
+
+ If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
+ (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
+ instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
+ inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
+ Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
+ This is useful if you want more control over how to convert :obj:`input_ids` indices into associated
+ vectors than the model's internal embedding lookup matrix.
+ decoder_inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, target_sequence_length, hidden_size)`, `optional`):
+ Optionally, instead of passing :obj:`decoder_input_ids` you can choose to directly pass an embedded
+ representation. If :obj:`past_key_values` is used, optionally only the last :obj:`decoder_inputs_embeds`
+ have to be input (see :obj:`past_key_values`). This is useful if you want more control over how to convert
+ :obj:`decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
+
+ If :obj:`decoder_input_ids` and :obj:`decoder_inputs_embeds` are both unset, :obj:`decoder_inputs_embeds`
+ takes the value of :obj:`inputs_embeds`.
+ use_cache (:obj:`bool`, `optional`):
+ If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
+ decoding (see :obj:`past_key_values`).
+ output_attentions (:obj:`bool`, `optional`):
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
+ tensors for more detail.
+ output_hidden_states (:obj:`bool`, `optional`):
+ Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
+ more detail.
+ return_dict (:obj:`bool`, `optional`):
+ Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
+ """
+
+
+ class BartEncoder(BartPretrainedModel):
+ """
+ Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a
+ :class:`BartEncoderLayer`.
+
+ Args:
+ config: BartConfig
+ embed_tokens (nn.Embedding): output embedding
+ """
+
+ def __init__(self, config: BartConfig, embed_tokens: Optional[nn.Embedding] = None):
+ super().__init__(config)
+
+ self.dropout = config.dropout
+ self.layerdrop = config.encoder_layerdrop
+
+ embed_dim = config.d_model
+ self.padding_idx = config.pad_token_id
+ self.max_source_positions = config.max_position_embeddings
+ self.embed_scale = math.sqrt(embed_dim) if config.scale_embedding else 1.0
+
+ if embed_tokens is not None:
+ self.embed_tokens = embed_tokens
+ else:
+ self.embed_tokens = nn.Embedding(config.vocab_size, embed_dim, self.padding_idx)
+
+ self.embed_positions = BartLearnedPositionalEmbedding(
+ config.max_position_embeddings,
+ embed_dim,
+ )
+ self.layers = nn.ModuleList([BartEncoderLayer(config) for _ in range(config.encoder_layers)])
+ self.layernorm_embedding = nn.LayerNorm(embed_dim)
+
+ self.init_weights()
+
+ def forward(
+ self,
+ input_ids=None,
+ attention_mask=None,
+ head_mask=None,
+ inputs_embeds=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ return_dict=None,
+ ):
+ r"""
+ Args:
+ input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`):
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
+ provide it.
+
+ Indices can be obtained using :class:`~transformers.BartTokenizer`. See
+ :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__`
+ for details.
+
+ `What are input IDs? <../glossary.html#input-ids>`__
+ attention_mask (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
+ Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
+
+ - 1 for tokens that are **not masked**,
+ - 0 for tokens that are **masked**.
+
+ `What are attention masks? <../glossary.html#attention-mask>`__
+ head_mask (:obj:`torch.Tensor` of shape :obj:`(encoder_layers, encoder_attention_heads)`, `optional`):
+ Mask to nullify selected heads of the attention modules. Mask values selected in ``[0, 1]``:
+
+ - 1 indicates the head is **not masked**,
+ - 0 indicates the head is **masked**.
+
+ inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
+ Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded
+ representation. This is useful if you want more control over how to convert :obj:`input_ids` indices
+ into associated vectors than the model's internal embedding lookup matrix.
+ output_attentions (:obj:`bool`, `optional`):
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
+ returned tensors for more detail.
+ output_hidden_states (:obj:`bool`, `optional`):
+ Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors
+ for more detail.
+ return_dict (:obj:`bool`, `optional`):
+ Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
+ """
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ elif input_ids is not None:
+ input_shape = input_ids.size()
+ input_ids = input_ids.view(-1, input_shape[-1])
+ elif inputs_embeds is not None:
+ input_shape = inputs_embeds.size()[:-1]
+ else:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
+
+ embed_pos = self.embed_positions(input_shape)
+
+ hidden_states = inputs_embeds + embed_pos
+ hidden_states = self.layernorm_embedding(hidden_states)
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
+
+ # expand attention_mask
+ if attention_mask is not None:
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+ attention_mask = _expand_mask(attention_mask, inputs_embeds.dtype)
+
+ encoder_states = () if output_hidden_states else None
+ all_attentions = () if output_attentions else None
+
+ # check if head_mask has a correct number of layers specified if desired
+ if head_mask is not None:
+ assert head_mask.size()[0] == (
+ len(self.layers)
+ ), f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}."
+ for idx, encoder_layer in enumerate(self.layers):
+ if output_hidden_states:
+ encoder_states = encoder_states + (hidden_states,)
+ # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
+ dropout_probability = random.uniform(0, 1)
+ if self.training and (dropout_probability < self.layerdrop): # skip the layer
+ layer_outputs = (None, None)
+ else:
+ if getattr(self.config, "gradient_checkpointing", False) and self.training:
+
+ def create_custom_forward(module):
+ def custom_forward(*inputs):
+ return module(*inputs, output_attentions)
+
+ return custom_forward
+
+ layer_outputs = torch.utils.checkpoint.checkpoint(
+ create_custom_forward(encoder_layer),
+ hidden_states,
+ attention_mask,
+ (head_mask[idx] if head_mask is not None else None),
+ )
+ else:
+ layer_outputs = encoder_layer(
+ hidden_states,
+ attention_mask,
+ layer_head_mask=(head_mask[idx] if head_mask is not None else None),
+ output_attentions=output_attentions,
+ )
+
+ hidden_states = layer_outputs[0]
+
+ if output_attentions:
+ all_attentions = all_attentions + (layer_outputs[1],)
+
+ if output_hidden_states:
+ encoder_states = encoder_states + (hidden_states,)
+
+ if not return_dict:
+ return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
+ return BaseModelOutput(
+ last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
+ )
+
+
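As a standalone usage sketch for the encoder, again assuming `config` is a `BartConfig` built from the config.json above; the token ids are illustrative:

import torch

encoder = BartEncoder(config)
input_ids = torch.tensor([[0, 31414, 232, 2]])  # <s> ... </s>
attention_mask = torch.ones_like(input_ids)

enc_out = encoder(input_ids=input_ids, attention_mask=attention_mask, return_dict=True)
print(enc_out.last_hidden_state.shape)  # torch.Size([1, 4, 1024])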
+ class BartDecoder(BartPretrainedModel):
+ """
+ Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a :class:`BartDecoderLayer`
+
+ Args:
+ config: BartConfig
+ embed_tokens (nn.Embedding): output embedding
+ """
+
+ def __init__(self, config: BartConfig, embed_tokens: Optional[nn.Embedding] = None):
+ super().__init__(config)
+ self.dropout = config.dropout
+ self.layerdrop = config.decoder_layerdrop
+ self.padding_idx = config.pad_token_id
+ self.max_target_positions = config.max_position_embeddings
+ self.embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0
+
+ if embed_tokens is not None:
+ self.embed_tokens = embed_tokens
+ else:
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.d_model, self.padding_idx)
+
+ self.embed_positions = BartLearnedPositionalEmbedding(
+ config.max_position_embeddings,
+ config.d_model,
+ )
+ self.layers = nn.ModuleList([BartDecoderLayer(config) for _ in range(config.decoder_layers)])
+ self.layernorm_embedding = nn.LayerNorm(config.d_model)
+
+ self.init_weights()
+
+ def get_input_embeddings(self):
+ return self.embed_tokens
+
+ def set_input_embeddings(self, value):
+ self.embed_tokens = value
+
+ def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
+ # create causal mask
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+ combined_attention_mask = None
+ if input_shape[-1] > 1:
+ combined_attention_mask = _make_causal_mask(
+ input_shape, inputs_embeds.dtype, past_key_values_length=past_key_values_length
+ ).to(self.device)
+
+ if attention_mask is not None:
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+ expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
+ combined_attention_mask = (
+ expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
+ )
+
+ return combined_attention_mask
+
+ def forward(
+ self,
+ input_ids=None,
+ attention_mask=None,
+ encoder_hidden_states=None,
+ encoder_attention_mask=None,
+ head_mask=None,
+ cross_attn_head_mask=None,
+ past_key_values=None,
+ inputs_embeds=None,
+ use_cache=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ return_dict=None,
+ ):
+ r"""
+ Args:
+ input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`):
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
+ provide it.
+
+ Indices can be obtained using :class:`~transformers.BartTokenizer`. See
+ :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__`
+ for details.
+
+ `What are input IDs? <../glossary.html#input-ids>`__
+ attention_mask (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
+ Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
+
+ - 1 for tokens that are **not masked**,
+ - 0 for tokens that are **masked**.
+
+ `What are attention masks? <../glossary.html#attention-mask>`__
+ encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, encoder_sequence_length, hidden_size)`, `optional`):
+ Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
+ of the decoder.
+ encoder_attention_mask (:obj:`torch.LongTensor` of shape :obj:`(batch_size, encoder_sequence_length)`, `optional`):
+ Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values
+ selected in ``[0, 1]``:
+
+ - 1 for tokens that are **not masked**,
+ - 0 for tokens that are **masked**.
+
+ `What are attention masks? <../glossary.html#attention-mask>`__
+ head_mask (:obj:`torch.Tensor` of shape :obj:`(decoder_layers, decoder_attention_heads)`, `optional`):
+ Mask to nullify selected heads of the attention modules. Mask values selected in ``[0, 1]``:
+
+ - 1 indicates the head is **not masked**,
+ - 0 indicates the head is **masked**.
+
+ cross_attn_head_mask (:obj:`torch.Tensor` of shape :obj:`(decoder_layers, decoder_attention_heads)`, `optional`):
+ Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing
+ cross-attention on hidden heads. Mask values selected in ``[0, 1]``:
+
+ - 1 indicates the head is **not masked**,
+ - 0 indicates the head is **masked**.
+
+ past_key_values (:obj:`tuple(tuple(torch.FloatTensor))`, `optional`, returned when ``use_cache=True`` is passed or when ``config.use_cache=True``):
+ Tuple of :obj:`tuple(torch.FloatTensor)` of length :obj:`config.n_layers`, with each tuple having 2
+ tensors of shape :obj:`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional
+ tensors of shape :obj:`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
+
+ Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
+ cross-attention blocks) that can be used (see :obj:`past_key_values` input) to speed up sequential
+ decoding.
+
+ If :obj:`past_key_values` are used, the user can optionally input only the last
+ :obj:`decoder_input_ids` (those that don't have their past key value states given to this model) of
+ shape :obj:`(batch_size, 1)` instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size,
+ sequence_length)`.
+ inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
+ Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded
+ representation. This is useful if you want more control over how to convert :obj:`input_ids` indices
+ into associated vectors than the model's internal embedding lookup matrix.
+ output_attentions (:obj:`bool`, `optional`):
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
+ returned tensors for more detail.
+ output_hidden_states (:obj:`bool`, `optional`):
+ Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors
+ for more detail.
+ return_dict (:obj:`bool`, `optional`):
+ Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
+ """
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
+ elif input_ids is not None:
+ input_shape = input_ids.size()
+ input_ids = input_ids.view(-1, input_shape[-1])
+ elif inputs_embeds is not None:
+ input_shape = inputs_embeds.size()[:-1]
+ else:
+ raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
+
+ # past_key_values_length
+ past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
+
+ attention_mask = self._prepare_decoder_attention_mask(
+ attention_mask, input_shape, inputs_embeds, past_key_values_length
+ )
+
+ # expand encoder attention mask
+ if encoder_hidden_states is not None and encoder_attention_mask is not None:
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+ encoder_attention_mask = _expand_mask(encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
+
+ # embed positions
+ positions = self.embed_positions(input_shape, past_key_values_length)
+
+ hidden_states = inputs_embeds + positions
+ hidden_states = self.layernorm_embedding(hidden_states)
+
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
+
+ # decoder layers
+ all_hidden_states = () if output_hidden_states else None
+ all_self_attns = () if output_attentions else None
+ all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None
+ next_decoder_cache = () if use_cache else None
+
+ # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired
+ for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]):
+ if attn_mask is not None:
+ assert attn_mask.size()[0] == (
+ len(self.layers)
+ ), f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for {attn_mask.size()[0]}."
+ for idx, decoder_layer in enumerate(self.layers):
+ # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+ dropout_probability = random.uniform(0, 1)
+ if self.training and (dropout_probability < self.layerdrop):
+ continue
+
+ past_key_value = past_key_values[idx] if past_key_values is not None else None
+
+ if getattr(self.config, "gradient_checkpointing", False) and self.training:
+
+ if use_cache:
+ logger.warning(
+ "`use_cache=True` is incompatible with `config.gradient_checkpointing=True`. Setting "
+ "`use_cache=False`..."
+ )
+ use_cache = False
+
+ def create_custom_forward(module):
+ def custom_forward(*inputs):
+ # None for past_key_value
+ return module(*inputs, output_attentions, use_cache)
+
+ return custom_forward
+
+ layer_outputs = torch.utils.checkpoint.checkpoint(
+ create_custom_forward(decoder_layer),
+ hidden_states,
+ attention_mask,
+ encoder_hidden_states,
+ encoder_attention_mask,
+ head_mask[idx] if head_mask is not None else None,
+ cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None,
+ None,
+ )
+ else:
+
+ layer_outputs = decoder_layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ encoder_hidden_states=encoder_hidden_states,
+ encoder_attention_mask=encoder_attention_mask,
+ layer_head_mask=(head_mask[idx] if head_mask is not None else None),
+ cross_attn_layer_head_mask=(
+ cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None
+ ),
+ past_key_value=past_key_value,
+ output_attentions=output_attentions,
+ use_cache=use_cache,
+ )
+ hidden_states = layer_outputs[0]
+
+ if use_cache:
+ next_decoder_cache += (layer_outputs[3 if output_attentions else 1],)
+
+ if output_attentions:
+ all_self_attns += (layer_outputs[1],)
+
+ if encoder_hidden_states is not None:
+ all_cross_attentions += (layer_outputs[2],)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ next_cache = next_decoder_cache if use_cache else None
+ if not return_dict:
+ return tuple(
+ v
+ for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_cross_attentions]
+ if v is not None
+ )
+ return BaseModelOutputWithPastAndCrossAttentions(
+ last_hidden_state=hidden_states,
+ past_key_values=next_cache,
+ hidden_states=all_hidden_states,
+ attentions=all_self_attns,
+ cross_attentions=all_cross_attentions,
+ )
+
+
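The cache wiring above enables incremental decoding: feed one new token per step and pass the previous `past_key_values` back in. A minimal sketch under the same `config` assumption, with placeholder encoder states:

import torch

decoder = BartDecoder(config)
enc_out = torch.randn(1, 6, 1024)  # placeholder encoder output

step1 = decoder(input_ids=torch.tensor([[2]]), encoder_hidden_states=enc_out,
                use_cache=True, return_dict=True)

# second step: only the newest token is fed; the cache covers the prefix
step2 = decoder(input_ids=torch.tensor([[0]]), encoder_hidden_states=enc_out,
                past_key_values=step1.past_key_values, use_cache=True, return_dict=True)
print(step2.last_hidden_state.shape)  # torch.Size([1, 1, 1024])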
+ @add_start_docstrings(
+     "The bare BART Model outputting raw hidden-states without any specific head on top.",
+     BART_START_DOCSTRING,
+ )
+ class BartModel(BartPretrainedModel):
+     def __init__(self, config: BartConfig):
+         super().__init__(config)
+
+         padding_idx, vocab_size = config.pad_token_id, config.vocab_size
+         self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)
+
+         self.encoder = BartEncoder(config, self.shared)
+         self.decoder = BartDecoder(config, self.shared)
+
+         self.init_weights()
+
+     def get_input_embeddings(self):
+         return self.shared
+
+     def set_input_embeddings(self, value):
+         self.shared = value
+         self.encoder.embed_tokens = self.shared
+         self.decoder.embed_tokens = self.shared
+
+     def get_encoder(self):
+         return self.encoder
+
+     def get_decoder(self):
+         return self.decoder
+
+     @add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING)
+     @add_code_sample_docstrings(
+         tokenizer_class=_TOKENIZER_FOR_DOC,
+         checkpoint=_CHECKPOINT_FOR_DOC,
+         output_type=Seq2SeqModelOutput,
+         config_class=_CONFIG_FOR_DOC,
+     )
+     def forward(
+         self,
+         input_ids=None,
+         attention_mask=None,
+         decoder_input_ids=None,
+         decoder_attention_mask=None,
+         head_mask=None,
+         decoder_head_mask=None,
+         cross_attn_head_mask=None,
+         encoder_outputs=None,
+         past_key_values=None,
+         inputs_embeds=None,
+         decoder_inputs_embeds=None,
+         use_cache=None,
+         output_attentions=None,
+         output_hidden_states=None,
+         return_dict=None,
+     ):
+
+         # Unlike other models, Bart automatically creates decoder_input_ids from
+         # input_ids if no decoder_input_ids are provided
+         if decoder_input_ids is None and decoder_inputs_embeds is None:
+             decoder_input_ids = shift_tokens_right(
+                 input_ids, self.config.pad_token_id, self.config.decoder_start_token_id
+             )
+
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if encoder_outputs is None:
+             encoder_outputs = self.encoder(
+                 input_ids=input_ids,
+                 attention_mask=attention_mask,
+                 head_mask=head_mask,
+                 inputs_embeds=inputs_embeds,
+                 output_attentions=output_attentions,
+                 output_hidden_states=output_hidden_states,
+                 return_dict=return_dict,
+             )
+         # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True
+         elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
+             encoder_outputs = BaseModelOutput(
+                 last_hidden_state=encoder_outputs[0],
+                 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
+                 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
+             )
+
+         # decoder outputs consist of (dec_features, past_key_value, dec_hidden, dec_attn)
+         decoder_outputs = self.decoder(
+             input_ids=decoder_input_ids,
+             attention_mask=decoder_attention_mask,
+             encoder_hidden_states=encoder_outputs[0],
+             encoder_attention_mask=attention_mask,
+             head_mask=decoder_head_mask,
+             cross_attn_head_mask=cross_attn_head_mask,
+             past_key_values=past_key_values,
+             inputs_embeds=decoder_inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         if not return_dict:
+             return decoder_outputs + encoder_outputs
+
+         return Seq2SeqModelOutput(
+             last_hidden_state=decoder_outputs.last_hidden_state,
+             past_key_values=decoder_outputs.past_key_values,
+             decoder_hidden_states=decoder_outputs.hidden_states,
+             decoder_attentions=decoder_outputs.attentions,
+             cross_attentions=decoder_outputs.cross_attentions,
+             encoder_last_hidden_state=encoder_outputs.last_hidden_state,
+             encoder_hidden_states=encoder_outputs.hidden_states,
+             encoder_attentions=encoder_outputs.attentions,
+         )
+
+
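The automatic decoder_input_ids fallback above is the detail most callers rely on without noticing: forward() derives them from input_ids via shift_tokens_right, prepending decoder_start_token_id (2 in this config.json). A minimal sketch, assuming a checkpoint compatible with facebook/bart-large:

    from transformers import BartModel, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
    model = BartModel.from_pretrained("facebook/bart-large")

    inputs = tokenizer("Hello world", return_tensors="pt")
    outputs = model(**inputs)  # decoder_input_ids omitted on purpose
    print(outputs.last_hidden_state.shape)  # (1, seq_len, 1024); 1024 is d_model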
+ @add_start_docstrings(
+     "The BART Model with a language modeling head. Can be used for summarization.", BART_START_DOCSTRING
+ )
+ class BartForConditionalGeneration(BartPretrainedModel):
+     base_model_prefix = "model"
+     _keys_to_ignore_on_load_missing = [r"final_logits_bias", r"lm_head\.weight"]
+
+     def __init__(self, config: BartConfig):
+         super().__init__(config)
+         self.model = BartModel(config)
+         self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings)))
+         self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False)
+
+         self.init_weights()
+
+     def get_encoder(self):
+         return self.model.get_encoder()
+
+     def get_decoder(self):
+         return self.model.get_decoder()
+
+     def resize_token_embeddings(self, new_num_tokens: int) -> nn.Embedding:
+         new_embeddings = super().resize_token_embeddings(new_num_tokens)
+         self._resize_final_logits_bias(new_num_tokens)
+         return new_embeddings
+
+     def _resize_final_logits_bias(self, new_num_tokens: int) -> None:
+         old_num_tokens = self.final_logits_bias.shape[-1]
+         if new_num_tokens <= old_num_tokens:
+             new_bias = self.final_logits_bias[:, :new_num_tokens]
+         else:
+             extra_bias = torch.zeros((1, new_num_tokens - old_num_tokens), device=self.final_logits_bias.device)
+             new_bias = torch.cat([self.final_logits_bias, extra_bias], dim=1)
+         self.register_buffer("final_logits_bias", new_bias)
+
+     def get_output_embeddings(self):
+         return self.lm_head
+
+     def set_output_embeddings(self, new_embeddings):
+         self.lm_head = new_embeddings
+
+     @add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING)
+     @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC)
+     @add_end_docstrings(BART_GENERATION_EXAMPLE)
+     def forward(
+         self,
+         input_ids=None,
+         attention_mask=None,
+         decoder_input_ids=None,
+         decoder_attention_mask=None,
+         head_mask=None,
+         decoder_head_mask=None,
+         cross_attn_head_mask=None,
+         encoder_outputs=None,
+         past_key_values=None,
+         inputs_embeds=None,
+         decoder_inputs_embeds=None,
+         labels=None,
+         use_cache=None,
+         output_attentions=None,
+         output_hidden_states=None,
+         return_dict=None,
+     ):
+         r"""
+         labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
+             Labels for computing the masked language modeling loss. Indices should either be in ``[0, ...,
+             config.vocab_size]`` or -100 (see ``input_ids`` docstring). Tokens with indices set to ``-100`` are ignored
+             (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]``.
+
+         Returns:
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if labels is not None:
+             if decoder_input_ids is None:
+                 decoder_input_ids = shift_tokens_right(
+                     labels, self.config.pad_token_id, self.config.decoder_start_token_id
+                 )
+
+         outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             decoder_input_ids=decoder_input_ids,
+             encoder_outputs=encoder_outputs,
+             decoder_attention_mask=decoder_attention_mask,
+             head_mask=head_mask,
+             decoder_head_mask=decoder_head_mask,
+             cross_attn_head_mask=cross_attn_head_mask,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             decoder_inputs_embeds=decoder_inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+         lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias
+
+         masked_lm_loss = None
+         if labels is not None:
+             loss_fct = CrossEntropyLoss()
+             masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+         if not return_dict:
+             output = (lm_logits,) + outputs[1:]
+             return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
+
+         return Seq2SeqLMOutput(
+             loss=masked_lm_loss,
+             logits=lm_logits,
+             past_key_values=outputs.past_key_values,
+             decoder_hidden_states=outputs.decoder_hidden_states,
+             decoder_attentions=outputs.decoder_attentions,
+             cross_attentions=outputs.cross_attentions,
+             encoder_last_hidden_state=outputs.encoder_last_hidden_state,
+             encoder_hidden_states=outputs.encoder_hidden_states,
+             encoder_attentions=outputs.encoder_attentions,
+         )
+
+     def prepare_inputs_for_generation(
+         self,
+         decoder_input_ids,
+         past=None,
+         attention_mask=None,
+         head_mask=None,
+         decoder_head_mask=None,
+         cross_attn_head_mask=None,
+         use_cache=None,
+         encoder_outputs=None,
+         **kwargs
+     ):
+         # cut decoder_input_ids if past is used
+         if past is not None:
+             decoder_input_ids = decoder_input_ids[:, -1:]
+
+         return {
+             "input_ids": None,  # encoder_outputs is defined. input_ids not needed
+             "encoder_outputs": encoder_outputs,
+             "past_key_values": past,
+             "decoder_input_ids": decoder_input_ids,
+             "attention_mask": attention_mask,
+             "head_mask": head_mask,
+             "decoder_head_mask": decoder_head_mask,
+             "cross_attn_head_mask": cross_attn_head_mask,
+             "use_cache": use_cache,  # change this to avoid caching (presumably for debugging)
+         }
+
+     def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
+         return shift_tokens_right(labels, self.config.pad_token_id, self.config.decoder_start_token_id)
+
+     @staticmethod
+     def _reorder_cache(past, beam_idx):
+         reordered_past = ()
+         for layer_past in past:
+             # cached cross_attention states don't have to be reordered -> they are always the same
+             reordered_past += (
+                 tuple(past_state.index_select(0, beam_idx) for past_state in layer_past[:2]) + layer_past[2:],
+             )
+         return reordered_past
+
+
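The labels path above (shift_tokens_right to build decoder inputs, then CrossEntropyLoss over the vocabulary) is the entire fine-tuning recipe for this head. A hedged sketch of one training step, again assuming a facebook/bart-large-compatible checkpoint; the article and summary strings are stand-ins:

    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

    src = tokenizer("A long article to condense ...", return_tensors="pt")
    labels = tokenizer("A short summary.", return_tensors="pt").input_ids
    # Set pad positions to -100 so the loss ignores them; a no-op for a single
    # unpadded sequence, but essential once you pad a batch.
    labels[labels == tokenizer.pad_token_id] = -100

    out = model(**src, labels=labels)  # decoder_input_ids built internally from labels
    out.loss.backward()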
+ @add_start_docstrings(
+     """
+     Bart model with a sequence classification head on top (a linear layer on top of the pooled output) e.g. for GLUE
+     tasks.
+     """,
+     BART_START_DOCSTRING,
+ )
+ class BartForSequenceClassification(BartPretrainedModel):
+     def __init__(self, config: BartConfig, **kwargs):
+         super().__init__(config, **kwargs)
+         self.model = BartModel(config)
+         self.classification_head = BartClassificationHead(
+             config.d_model,
+             config.d_model,
+             config.num_labels,
+             config.classifier_dropout,
+         )
+         self.model._init_weights(self.classification_head.dense)
+         self.model._init_weights(self.classification_head.out_proj)
+
+     @add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING)
+     @add_code_sample_docstrings(
+         tokenizer_class=_TOKENIZER_FOR_DOC,
+         checkpoint=_CHECKPOINT_FOR_DOC,
+         output_type=Seq2SeqSequenceClassifierOutput,
+         config_class=_CONFIG_FOR_DOC,
+     )
+     def forward(
+         self,
+         input_ids=None,
+         attention_mask=None,
+         decoder_input_ids=None,
+         decoder_attention_mask=None,
+         head_mask=None,
+         decoder_head_mask=None,
+         cross_attn_head_mask=None,
+         encoder_outputs=None,
+         inputs_embeds=None,
+         decoder_inputs_embeds=None,
+         labels=None,
+         use_cache=None,
+         output_attentions=None,
+         output_hidden_states=None,
+         return_dict=None,
+     ):
+         r"""
+         labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
+             Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ...,
+             config.num_labels - 1]`. If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+         if labels is not None:
+             use_cache = False
+
+         if input_ids is None and inputs_embeds is not None:
+             raise NotImplementedError(
+                 f"Passing input embeddings is currently not supported for {self.__class__.__name__}"
+             )
+
+         outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             decoder_input_ids=decoder_input_ids,
+             decoder_attention_mask=decoder_attention_mask,
+             head_mask=head_mask,
+             decoder_head_mask=decoder_head_mask,
+             cross_attn_head_mask=cross_attn_head_mask,
+             encoder_outputs=encoder_outputs,
+             inputs_embeds=inputs_embeds,
+             decoder_inputs_embeds=decoder_inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+         hidden_states = outputs[0]  # last hidden state
+
+         eos_mask = input_ids.eq(self.config.eos_token_id)
+
+         if len(torch.unique(eos_mask.sum(1))) > 1:
+             raise ValueError("All examples must have the same number of <eos> tokens.")
+         sentence_representation = hidden_states[eos_mask, :].view(hidden_states.size(0), -1, hidden_states.size(-1))[
+             :, -1, :
+         ]
+         logits = self.classification_head(sentence_representation)
+
+         loss = None
+         if labels is not None:
+             if self.config.num_labels == 1:
+                 # regression
+                 loss_fct = MSELoss()
+                 loss = loss_fct(logits.view(-1), labels.view(-1))
+             else:
+                 loss_fct = CrossEntropyLoss()
+                 loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1))
+
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return ((loss,) + output) if loss is not None else output
+
+         return Seq2SeqSequenceClassifierOutput(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             decoder_hidden_states=outputs.decoder_hidden_states,
+             decoder_attentions=outputs.decoder_attentions,
+             cross_attentions=outputs.cross_attentions,
+             encoder_last_hidden_state=outputs.encoder_last_hidden_state,
+             encoder_hidden_states=outputs.encoder_hidden_states,
+             encoder_attentions=outputs.encoder_attentions,
+         )
+
+
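The pooling above takes the decoder's hidden state at each sequence's final </s> token (eos_token_id 2 in config.json) as the sentence representation, which is why every example in a batch must contain the same number of <eos> tokens. A toy illustration of just the indexing, with made-up shapes:

    import torch

    hidden_states = torch.randn(2, 5, 4)  # (batch, seq_len, d_model)
    input_ids = torch.tensor([[0, 10, 11, 2, 1],    # 2 = </s>, 1 = <pad>
                              [0, 12, 13, 14, 2]])
    eos_mask = input_ids.eq(2)
    sentence_representation = hidden_states[eos_mask, :].view(
        hidden_states.size(0), -1, hidden_states.size(-1)
    )[:, -1, :]
    print(sentence_representation.shape)  # torch.Size([2, 4]), one vector per example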
+ @add_start_docstrings(
+     """
+     BART Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
+     layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
+     """,
+     BART_START_DOCSTRING,
+ )
+ class BartForQuestionAnswering(BartPretrainedModel):
+     def __init__(self, config):
+         super().__init__(config)
+
+         config.num_labels = 2
+         self.num_labels = config.num_labels
+
+         self.model = BartModel(config)
+         self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
+
+         self.model._init_weights(self.qa_outputs)
+
+     @add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING)
+     @add_code_sample_docstrings(
+         tokenizer_class=_TOKENIZER_FOR_DOC,
+         checkpoint=_CHECKPOINT_FOR_DOC,
+         output_type=Seq2SeqQuestionAnsweringModelOutput,
+         config_class=_CONFIG_FOR_DOC,
+     )
+     def forward(
+         self,
+         input_ids=None,
+         attention_mask=None,
+         decoder_input_ids=None,
+         decoder_attention_mask=None,
+         head_mask=None,
+         decoder_head_mask=None,
+         cross_attn_head_mask=None,
+         encoder_outputs=None,
+         start_positions=None,
+         end_positions=None,
+         inputs_embeds=None,
+         decoder_inputs_embeds=None,
+         use_cache=None,
+         output_attentions=None,
+         output_hidden_states=None,
+         return_dict=None,
+     ):
+         r"""
+         start_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
+             Labels for position (index) of the start of the labelled span for computing the token classification loss.
+             Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
+             are not taken into account for computing the loss.
+         end_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
+             Labels for position (index) of the end of the labelled span for computing the token classification loss.
+             Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
+             are not taken into account for computing the loss.
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+         if start_positions is not None and end_positions is not None:
+             use_cache = False
+
+         outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             decoder_input_ids=decoder_input_ids,
+             decoder_attention_mask=decoder_attention_mask,
+             head_mask=head_mask,
+             decoder_head_mask=decoder_head_mask,
+             cross_attn_head_mask=cross_attn_head_mask,
+             encoder_outputs=encoder_outputs,
+             inputs_embeds=inputs_embeds,
+             decoder_inputs_embeds=decoder_inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         sequence_output = outputs[0]
+
+         logits = self.qa_outputs(sequence_output)
+         start_logits, end_logits = logits.split(1, dim=-1)
+         start_logits = start_logits.squeeze(-1).contiguous()
+         end_logits = end_logits.squeeze(-1).contiguous()
+
+         total_loss = None
+         if start_positions is not None and end_positions is not None:
+             # If we are on multi-GPU, split adds a dimension
+             if len(start_positions.size()) > 1:
+                 start_positions = start_positions.squeeze(-1)
+             if len(end_positions.size()) > 1:
+                 end_positions = end_positions.squeeze(-1)
+             # sometimes the start/end positions are outside our model inputs, we ignore these terms
+             ignored_index = start_logits.size(1)
+             start_positions = start_positions.clamp(0, ignored_index)
+             end_positions = end_positions.clamp(0, ignored_index)
+
+             loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
+             start_loss = loss_fct(start_logits, start_positions)
+             end_loss = loss_fct(end_logits, end_positions)
+             total_loss = (start_loss + end_loss) / 2
+
+         if not return_dict:
+             output = (
+                 start_logits,
+                 end_logits,
+             ) + outputs[1:]
+             return ((total_loss,) + output) if total_loss is not None else output
+
+         return Seq2SeqQuestionAnsweringModelOutput(
+             loss=total_loss,
+             start_logits=start_logits,
+             end_logits=end_logits,
+             past_key_values=outputs.past_key_values,
+             decoder_hidden_states=outputs.decoder_hidden_states,
+             decoder_attentions=outputs.decoder_attentions,
+             cross_attentions=outputs.cross_attentions,
+             encoder_last_hidden_state=outputs.encoder_last_hidden_state,
+             encoder_hidden_states=outputs.encoder_hidden_states,
+             encoder_attentions=outputs.encoder_attentions,
+         )
+
+
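Downstream, the two logit tensors above are typically decoded by taking an argmax per example and slicing the answer span out of the input. A deliberately simplified sketch; real QA pipelines also enforce start <= end and score the top-k span pairs:

    import torch

    start_logits = torch.tensor([[0.1, 2.5, 0.3, 0.2]])  # made-up scores, seq_len 4
    end_logits = torch.tensor([[0.0, 0.4, 3.1, 0.1]])
    start = start_logits.argmax(dim=-1).item()  # 1
    end = end_logits.argmax(dim=-1).item()      # 2
    print((start, end))  # answer tokens would be input_ids[0, start : end + 1]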
+ class BartDecoderWrapper(BartPretrainedModel):
+     """
+     This wrapper class is a helper class to correctly load pretrained checkpoints when the causal language model is
+     used in combination with the :class:`~transformers.EncoderDecoderModel` framework.
+     """
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.decoder = BartDecoder(config)
+
+     def forward(self, *args, **kwargs):
+         return self.decoder(*args, **kwargs)
+
+
+ class BartForCausalLM(BartPretrainedModel):
+     def __init__(self, config):
+         super().__init__(config)
+         config = copy.deepcopy(config)
+         config.is_decoder = True
+         config.is_encoder_decoder = False
+         self.model = BartDecoderWrapper(config)
+
+         self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+         self.init_weights()
+
+     def get_input_embeddings(self):
+         return self.model.decoder.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.model.decoder.embed_tokens = value
+
+     def get_output_embeddings(self):
+         return self.lm_head
+
+     def set_output_embeddings(self, new_embeddings):
+         self.lm_head = new_embeddings
+
+     def set_decoder(self, decoder):
+         self.model.decoder = decoder
+
+     def get_decoder(self):
+         return self.model.decoder
+
+     @replace_return_docstrings(output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC)
+     def forward(
+         self,
+         input_ids=None,
+         attention_mask=None,
+         encoder_hidden_states=None,
+         encoder_attention_mask=None,
+         head_mask=None,
+         cross_attn_head_mask=None,
+         past_key_values=None,
+         inputs_embeds=None,
+         labels=None,
+         use_cache=None,
+         output_attentions=None,
+         output_hidden_states=None,
+         return_dict=None,
+     ):
+         r"""
+         Args:
+             input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`):
+                 Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
+                 provide it.
+
+                 Indices can be obtained using :class:`~transformers.BartTokenizer`. See
+                 :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__`
+                 for details.
+
+                 `What are input IDs? <../glossary.html#input-ids>`__
+             attention_mask (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
+                 Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
+
+                 - 1 for tokens that are **not masked**,
+                 - 0 for tokens that are **masked**.
+
+                 `What are attention masks? <../glossary.html#attention-mask>`__
+             encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
+                 Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
+                 if the model is configured as a decoder.
+             encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
+                 Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
+                 in the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
+
+                 - 1 for tokens that are **not masked**,
+                 - 0 for tokens that are **masked**.
+             head_mask (:obj:`torch.Tensor` of shape :obj:`(decoder_layers, decoder_attention_heads)`, `optional`):
+                 Mask to nullify selected heads of the attention modules. Mask values selected in ``[0, 1]``:
+
+                 - 1 indicates the head is **not masked**,
+                 - 0 indicates the head is **masked**.
+
+             cross_attn_head_mask (:obj:`torch.Tensor` of shape :obj:`(decoder_layers, decoder_attention_heads)`, `optional`):
+                 Mask to nullify selected heads of the cross-attention modules. Mask values selected in ``[0, 1]``:
+
+                 - 1 indicates the head is **not masked**,
+                 - 0 indicates the head is **masked**.
+
+             past_key_values (:obj:`tuple(tuple(torch.FloatTensor))`, `optional`, returned when ``use_cache=True`` is passed or when ``config.use_cache=True``):
+                 Tuple of :obj:`tuple(torch.FloatTensor)` of length :obj:`config.n_layers`, with each tuple having 2
+                 tensors of shape :obj:`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional
+                 tensors of shape :obj:`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. The two
+                 additional tensors are only required when the model is used as a decoder in a Sequence to Sequence
+                 model.
+
+                 Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
+                 cross-attention blocks) that can be used (see :obj:`past_key_values` input) to speed up sequential
+                 decoding.
+
+                 If :obj:`past_key_values` are used, the user can optionally input only the last ``decoder_input_ids``
+                 (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
+                 instead of all ``decoder_input_ids`` of shape :obj:`(batch_size, sequence_length)`.
+             labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
+                 Labels for computing the masked language modeling loss. Indices should either be in ``[0, ...,
+                 config.vocab_size]`` or -100 (see ``input_ids`` docstring). Tokens with indices set to ``-100`` are
+                 ignored (masked), the loss is only computed for the tokens with labels in ``[0, ...,
+                 config.vocab_size]``.
+             use_cache (:obj:`bool`, `optional`):
+                 If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
+                 decoding (see :obj:`past_key_values`).
+             output_attentions (:obj:`bool`, `optional`):
+                 Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
+                 returned tensors for more detail.
+             output_hidden_states (:obj:`bool`, `optional`):
+                 Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors
+                 for more detail.
+             return_dict (:obj:`bool`, `optional`):
+                 Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
+
+         Returns:
+
+         Example::
+
+             >>> from transformers import BartTokenizer, BartForCausalLM
+
+             >>> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
+             >>> model = BartForCausalLM.from_pretrained('facebook/bart-large', add_cross_attention=False)
+             >>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
+             >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
+             >>> outputs = model(**inputs)
+
+             >>> logits = outputs.logits
+             >>> list(logits.shape) == [1, inputs.input_ids.shape[-1], model.config.vocab_size]
+             True
+         """
+
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         # decoder outputs consist of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.model.decoder(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             encoder_hidden_states=encoder_hidden_states,
+             encoder_attention_mask=encoder_attention_mask,
+             head_mask=head_mask,
+             cross_attn_head_mask=cross_attn_head_mask,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         logits = self.lm_head(outputs[0])
+
+         loss = None
+         if labels is not None:
+             loss_fct = CrossEntropyLoss()
+             loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return (loss,) + output if loss is not None else output
+
+         return CausalLMOutputWithCrossAttentions(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+             cross_attentions=outputs.cross_attentions,
+         )
+
+     def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, use_cache=None, **kwargs):
+         # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
+         if attention_mask is None:
+             attention_mask = input_ids.new_ones(input_ids.shape)
+
+         if past:
+             input_ids = input_ids[:, -1:]
+         # first step, decoder_cached_states are empty
+         return {
+             "input_ids": input_ids,
+             "attention_mask": attention_mask,
+             "past_key_values": past,
+             "use_cache": use_cache,
+         }
+
+     @staticmethod
+     def _reorder_cache(past, beam_idx):
+         reordered_past = ()
+         for layer_past in past:
+             reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),)
+         return reordered_past
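prepare_inputs_for_generation above is what lets generate() feed only the newest token once a cache exists. A hedged sketch of the greedy loop it supports, using the conditional generation head from this same file and a facebook/bart-large-compatible checkpoint (no beam search or early stopping, unlike the defaults in config.json):

    import torch
    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").eval()

    enc = tokenizer("UN Chief Says There Is No Military Solution in Syria", return_tensors="pt")
    encoder_outputs = model.get_encoder()(**enc)  # the encoder runs exactly once
    ids = torch.tensor([[model.config.decoder_start_token_id]])
    past = None
    with torch.no_grad():
        for _ in range(10):
            out = model(
                encoder_outputs=encoder_outputs,
                attention_mask=enc.attention_mask,
                decoder_input_ids=ids if past is None else ids[:, -1:],
                past_key_values=past,
                use_cache=True,
            )
            past = out.past_key_values  # cache grows; only the new token is fed next step
            ids = torch.cat([ids, out.logits[:, -1].argmax(-1, keepdim=True)], dim=-1)
    print(tokenizer.decode(ids[0], skip_special_tokens=True))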
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cbf1fe21b6cab81d0ea766b485d4293958b90812c9fc967b59578451475e5b7
+ size 1625343555
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "add_prefix_space": false, "errors": "replace", "sep_token": "</s>", "cls_token": "<s>", "pad_token": "<pad>", "mask_token": "<mask>", "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "facebook/bart-large", "tokenizer_class": "BartTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff