MBart¶
DISCLAIMER: If you see something strange, file a Github Issue and assign @sshleifer
Overview¶
The MBart model was presented in Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer.
According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text.
The authors' code can be found here.
Training¶
MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for translation tasks.
Because the model is multilingual, it expects sequences in a different format: a special language id token is added to both the source and the target text. The source text format is X [eos, src_lang_code], where X is the source text. The target text format is [tgt_lang_code] X [eos]. bos is never used.
The prepare_seq2seq_batch() method handles this automatically and should be used to encode the sequences for sequence-to-sequence fine-tuning.
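As a quick check of this format, the following sketch (assuming the facebook/mbart-large-en-ro checkpoint used in the examples below) inspects the special tokens that prepare_seq2seq_batch() appends to the encoder input:

from transformers import MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
batch = tokenizer.prepare_seq2seq_batch(
    "UN Chief Says There Is No Military Solution in Syria",
    src_lang="en_XX",
)
# Per the format described above, the encoder input ends with </s> followed by the source language code.
print(tokenizer.convert_ids_to_tokens(batch["input_ids"][0][-2:].tolist()))  # ['</s>', 'en_XX']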
Supervised training
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
batch = tokenizer.prepare_seq2seq_batch(example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian)
input_ids = batch["input_ids"]
target_ids = batch["decoder_input_ids"]
# Teacher forcing: the decoder sees the target shifted right by one position,
# and the loss is computed against the next token at each position.
decoder_input_ids = target_ids[:, :-1].contiguous()
labels = target_ids[:, 1:].clone()
model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)  # forward
Generation
While generating the target text, set decoder_start_token_id to the target language id. The following example shows how to translate English to Romanian using the facebook/mbart-large-en-ro model.
from transformers import MBartForConditionalGeneration, MBartTokenizer
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
article = "UN Chief Says There Is No Military Solution in Syria"
batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], src_lang="en_XX")
translated_tokens = model.generate(**batch, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
assert translation == "Şeful ONU declară că nu există o soluţie militară în Siria"
MBartConfig¶
class transformers.MBartConfig(activation_dropout=0.0, extra_pos_embeddings=2, activation_function='gelu', vocab_size=50265, d_model=1024, encoder_ffn_dim=4096, encoder_layers=12, encoder_attention_heads=16, decoder_ffn_dim=4096, decoder_layers=12, decoder_attention_heads=16, encoder_layerdrop=0.0, decoder_layerdrop=0.0, attention_dropout=0.0, dropout=0.1, max_position_embeddings=1024, init_std=0.02, classifier_dropout=0.0, num_labels=3, is_encoder_decoder=True, pad_token_id=1, bos_token_id=0, eos_token_id=2, normalize_before=False, add_final_layer_norm=False, do_blenderbot_90_layernorm=False, scale_embedding=False, normalize_embedding=True, static_position_embeddings=False, add_bias_logits=False, force_bos_token_to_be_generated=False, **common_kwargs)[source]¶

This is the configuration class to store the configuration of a MBartForConditionalGeneration. It is used to instantiate an MBART model according to the specified arguments, defining the model architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Parameters
- vocab_size (int, optional, defaults to 250027) – Vocabulary size of the MBART model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling MBartForConditionalGeneration.
- d_model (int, optional, defaults to 1024) – Dimensionality of the layers and the pooler layer.
- encoder_layers (int, optional, defaults to 12) – Number of encoder layers.
- decoder_layers (int, optional, defaults to 12) – Number of decoder layers.
- encoder_attention_heads (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer encoder.
- decoder_attention_heads (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer decoder.
- decoder_ffn_dim (int, optional, defaults to 4096) – Dimensionality of the "intermediate" (i.e., feed-forward) layer in the decoder.
- encoder_ffn_dim (int, optional, defaults to 4096) – Dimensionality of the "intermediate" (i.e., feed-forward) layer in the encoder.
- activation_function (str or function, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "swish" and "gelu_new" are supported.
- dropout (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_dropout (float, optional, defaults to 0.0) – The dropout ratio for the attention probabilities.
- activation_dropout (float, optional, defaults to 0.0) – The dropout ratio for activations inside the fully connected layer.
- classifier_dropout (float, optional, defaults to 0.0) – The dropout ratio for the classifier.
- max_position_embeddings (int, optional, defaults to 1024) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- init_std (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- add_bias_logits (bool, optional, defaults to False) – Not used by MBART; specific to Marian.
- normalize_before (bool, optional, defaults to True) – Call layernorm before attention ops.
- normalize_embedding (bool, optional, defaults to True) – Call layernorm after embeddings. Only True for Bart.
- static_position_embeddings (bool, optional, defaults to False) – Don't learn positional embeddings, use sinusoidal ones instead.
- add_final_layer_norm (bool, optional, defaults to True) – Whether to add a final layernorm after the last encoder and decoder blocks.
- scale_embedding (bool, optional, defaults to False) – Scale embeddings by a factor of sqrt(d_model).
- eos_token_id (int, optional, defaults to 2) – End of stream token id.
- pad_token_id (int, optional, defaults to 1) – Padding token id.
- bos_token_id (int, optional, defaults to 0) – Beginning of stream token id.
- encoder_layerdrop (float, optional, defaults to 0.0) – The LayerDrop probability for the encoder. See the LayerDrop paper for more details.
- decoder_layerdrop (float, optional, defaults to 0.0) – The LayerDrop probability for the decoder. See the LayerDrop paper for more details.
- extra_pos_embeddings (int, optional, defaults to 2) – How many extra learned positional embeddings to use. Should be equal to pad_token_id + 1.
- is_encoder_decoder (bool, optional, defaults to True) – Whether this is an encoder/decoder model.
- force_bos_token_to_be_generated (bool, optional, defaults to False) – Whether or not to force the BOS token to be generated at step 1 (after decoder_start_token_id).
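As an illustration only (the small hyperparameter values below are hypothetical and do not correspond to any released checkpoint), a configuration object can be used to build a randomly initialized model:

>>> from transformers import MBartConfig, MBartForConditionalGeneration
>>> # Hypothetical small configuration for illustration; released mBART checkpoints use the larger defaults above.
>>> config = MBartConfig(
...     vocab_size=250027,
...     d_model=256,
...     encoder_layers=2,
...     decoder_layers=2,
...     encoder_attention_heads=4,
...     decoder_attention_heads=4,
...     encoder_ffn_dim=512,
...     decoder_ffn_dim=512,
... )
>>> model = MBartForConditionalGeneration(config)  # randomly initialized weights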
MBartTokenizer¶
class transformers.MBartTokenizer(*args, tokenizer_file=None, **kwargs)[source]¶

Construct an MBART tokenizer. MBartTokenizer is a subclass of XLMRobertaTokenizer and adds a new prepare_seq2seq_batch() method.

Refer to the superclass XLMRobertaTokenizer for usage examples and documentation concerning the initialization parameters and other methods.

Warning

prepare_seq2seq_batch should be used to encode inputs. Other tokenizer methods like encode do not work properly.

The tokenization method is <tokens> <eos> <language code> for source language documents, and <language code> <tokens> <eos> for target language documents.

Examples:

>>> from transformers import MBartTokenizer
>>> tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro')
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> batch: dict = tokenizer.prepare_seq2seq_batch(
...     example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian
... )
build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]¶

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An MBART sequence has the following format, where X represents the sequence:

- input_ids (for encoder): X [eos, src_lang_code]
- decoder_input_ids (for decoder): [tgt_lang_code] X [eos]

BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator.

Parameters

- token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of input IDs with the appropriate special tokens.

Return type

List[int]
prepare_seq2seq_batch(src_texts: List[str], src_lang: str = 'en_XX', tgt_texts: Optional[List[str]] = None, tgt_lang: str = 'ro_RO', max_length: Optional[int] = None, max_target_length: Optional[int] = None, truncation: bool = True, padding: str = 'longest', return_tensors: str = 'pt', add_prefix_space: bool = False, **kwargs) → transformers.tokenization_utils_base.BatchEncoding[source]¶

Prepare model inputs for translation. For best performance, translate one sentence at a time.

Parameters

- src_texts (List[str]) – List of documents to summarize or source language texts.
- tgt_texts (list, optional) – List of summaries or target language texts.
- max_length (int, optional) – Controls the maximum length for encoder inputs (documents to summarize or source language texts). If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
- max_target_length (int, optional) – Controls the maximum length of decoder inputs (target language texts or summaries). If left unset or set to None, this will use the max_length value.
- padding (bool, str or PaddingStrategy, optional, defaults to 'longest') – Activates and controls padding. Accepts the following values:
  - True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).
- return_tensors (str or TensorType, optional, defaults to 'pt') – If set, will return tensors instead of lists of python integers. Acceptable values are:
  - 'tf': Return TensorFlow tf.constant objects.
  - 'pt': Return PyTorch torch.Tensor objects.
  - 'np': Return Numpy np.ndarray objects.
- truncation (bool, str or TruncationStrategy, optional, defaults to True) – Activates and controls truncation. Accepts the following values:
  - True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - False or 'do_not_truncate': No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
- **kwargs – Additional keyword arguments passed along to self.__call__.

Returns

A BatchEncoding with the following fields:

- input_ids – List of token ids to be fed to the encoder.
- attention_mask – List of indices specifying which tokens should be attended to by the model.
- decoder_input_ids – List of token ids to be fed to the decoder.
- decoder_attention_mask – List of indices specifying which tokens should be attended to by the decoder. This does not include the causal mask, which is built by the model.

The full set of keys [input_ids, attention_mask, decoder_input_ids, decoder_attention_mask] will only be returned if tgt_texts is passed. Otherwise, input_ids and attention_mask will be the only keys.

Return type

BatchEncoding
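To make the returned keys concrete, here is a short sketch (using the facebook/mbart-large-en-ro checkpoint from the examples on this page; the printed keys follow the description above):

>>> from transformers import MBartTokenizer
>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
>>> src = ["UN Chief Says There Is No Military Solution in Syria"]
>>> tgt = ["Şeful ONU declară că nu există o soluţie militară în Siria"]
>>> # Without tgt_texts, only the encoder-side keys are returned.
>>> sorted(tokenizer.prepare_seq2seq_batch(src_texts=src, src_lang="en_XX").keys())
['attention_mask', 'input_ids']
>>> # With tgt_texts, the decoder-side keys are returned as well.
>>> batch = tokenizer.prepare_seq2seq_batch(src_texts=src, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=tgt)
>>> sorted(batch.keys())
['attention_mask', 'decoder_attention_mask', 'decoder_input_ids', 'input_ids']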
MBartForConditionalGeneration¶
class transformers.MBartForConditionalGeneration(config: transformers.configuration_bart.BartConfig)[source]¶

This class overrides BartForConditionalGeneration. Please check the superclass for the appropriate documentation alongside usage examples.

Examples:

>>> from transformers import MBartForConditionalGeneration, MBartTokenizer
>>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
>>> article = "UN Chief Says There Is No Military Solution in Syria"
>>> batch = tokenizer.prepare_seq2seq_batch(src_texts=[article])
>>> translated_tokens = model.generate(**batch)
>>> translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
>>> assert translation == "Şeful ONU declară că nu există o soluţie militară în Siria"
forward(input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, encoder_outputs=None, past_key_values=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **unused)¶

The BartForConditionalGeneration forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using BartTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) – Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids to the right, following the paper.
- decoder_attention_mask (torch.BoolTensor of shape (batch_size, tgt_seq_len), optional) – Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. A causal mask will also be used by default. If you want to change padding behavior, you should read modeling_bart._prepare_decoder_inputs() and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
- encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) – Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state (of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) – Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
- use_cache (bool, optional) – If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
- output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns

A Seq2SeqLMOutput (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (BartConfig) and inputs.

- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Language modeling loss.
- logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.
- decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder of the model.
- encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
Return type

Seq2SeqLMOutput or tuple(torch.FloatTensor)

Conditional generation example:

>>> # Mask filling only works for bart-large
>>> from transformers import BartTokenizer, BartForConditionalGeneration
>>> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
>>> TXT = "My friends are <mask> but they eat too many carbs."
>>> model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
>>> input_ids = tokenizer([TXT], return_tensors='pt')['input_ids']
>>> logits = model(input_ids).logits
>>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
>>> probs = logits[0, masked_index].softmax(dim=0)
>>> values, predictions = probs.topk(5)
>>> tokenizer.decode(predictions).split()
>>> # ['good', 'great', 'all', 'really', 'very']

Summarization example:

>>> from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
>>> # see ``examples/summarization/bart/run_eval.py`` for a longer example
>>> model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
>>> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
>>> ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
>>> # Generate Summary
>>> summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True)
>>> print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
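Translation fine-tuning loss sketch (a minimal example following the supervised-training pattern from the Training section above; the -100 masking of padding positions illustrates the labels convention documented for the forward method and is an addition to that pattern, not a requirement):

>>> from transformers import MBartForConditionalGeneration, MBartTokenizer
>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
>>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
>>> batch = tokenizer.prepare_seq2seq_batch(
...     ["UN Chief Says There Is No Military Solution in Syria"],
...     src_lang="en_XX", tgt_lang="ro_RO",
...     tgt_texts=["Şeful ONU declară că nu există o soluţie militară în Siria"],
... )
>>> target_ids = batch["decoder_input_ids"]
>>> decoder_input_ids = target_ids[:, :-1].contiguous()
>>> labels = target_ids[:, 1:].clone()
>>> labels[labels == tokenizer.pad_token_id] = -100  # ignore padding positions in the loss
>>> outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"],
...                 decoder_input_ids=decoder_input_ids, labels=labels, return_dict=True)
>>> outputs.loss.backward()  # scalar loss; backpropagate as usual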