# MBart

DISCLAIMER: If you see something strange, file a Github Issue and assign @patrickvonplaten

## Overview

The MBart model was presented in Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer.

According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text.

The authors' code can be found here.

### Examples

• Examples and scripts for fine-tuning mBART and other models for sequence to sequence tasks can be found in examples/seq2seq/.

• Given the large embeddings table, mBART consumes a large amount of GPU RAM, especially for fine-tuning. MarianMTModel is usually a better choice for bilingual machine translation.

## Training

MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for translation tasks. Because the model is multilingual, it expects sequences in a particular format: a special language id token is added to both the source and the target text. The source text format is X [eos, src_lang_code], where X is the source text. The target text format is [tgt_lang_code] X [eos]. bos is never used.

prepare_seq2seq_batch() handles this automatically and should be used to encode sequences for sequence-to-sequence fine-tuning.
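For intuition, the two layouts can be sketched in plain Python. The token ids and language-code ids below are made-up placeholders, not the real SentencePiece vocabulary ids:

```python
# Sketch of the mBART sequence layout with hypothetical token ids.
EOS = 2                                     # </s>
LANG = {"en_XX": 250004, "ro_RO": 250020}   # hypothetical language-code ids

def build_source(token_ids, src_lang):
    """Source format: X [eos, src_lang_code]."""
    return token_ids + [EOS, LANG[src_lang]]

def build_target(token_ids, tgt_lang):
    """Target format: [tgt_lang_code] X [eos]."""
    return [LANG[tgt_lang]] + token_ids + [EOS]

src = build_source([17, 42, 99], "en_XX")   # → [17, 42, 99, 2, 250004]
tgt = build_target([23, 56], "ro_RO")       # → [250020, 23, 56, 2]
```

Note that the language code sits at the end of the source sequence but at the start of the target sequence.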

• Supervised training

from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
batch = tokenizer.prepare_seq2seq_batch(example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian, return_tensors="pt")
model(input_ids=batch["input_ids"], labels=batch["labels"])  # forward pass

• Generation

While generating the target text set the decoder_start_token_id to the target language id. The following example shows how to translate English to Romanian using the facebook/mbart-large-en-ro model.

from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
article = "UN Chief Says There Is No Military Solution in Syria"
batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], src_lang="en_XX", return_tensors="pt")
translated_tokens = model.generate(**batch, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
assert translation == "Şeful ONU declară că nu există o soluţie militară în Siria"


## MBartConfig

class transformers.MBartConfig(activation_dropout=0.0, extra_pos_embeddings=2, activation_function='gelu', vocab_size=50265, d_model=1024, encoder_ffn_dim=4096, encoder_layers=12, encoder_attention_heads=16, decoder_ffn_dim=4096, decoder_layers=12, decoder_attention_heads=16, encoder_layerdrop=0.0, decoder_layerdrop=0.0, attention_dropout=0.0, dropout=0.1, max_position_embeddings=1024, init_std=0.02, classifier_dropout=0.0, num_labels=3, is_encoder_decoder=True, normalize_before=False, add_final_layer_norm=False, do_blenderbot_90_layernorm=False, scale_embedding=False, normalize_embedding=True, static_position_embeddings=False, add_bias_logits=False, force_bos_token_to_be_generated=False, use_cache=True, pad_token_id=1, bos_token_id=0, eos_token_id=2, **common_kwargs)[source]

This is the configuration class to store the configuration of a MBartForConditionalGeneration. It is used to instantiate an MBART model according to the specified arguments, defining the model architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Parameters
• vocab_size (int, optional, defaults to 250027) – Vocabulary size of the MBART model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling MBartForConditionalGeneration.

• d_model (int, optional, defaults to 1024) – Dimensionality of the layers and the pooler layer.

• encoder_layers (int, optional, defaults to 12) – Number of encoder layers.

• decoder_layers (int, optional, defaults to 12) – Number of decoder layers.

• encoder_attention_heads (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer encoder.

• decoder_attention_heads (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer decoder.

• decoder_ffn_dim (int, optional, defaults to 4096) – Dimensionality of the “intermediate” (i.e., feed-forward) layer in the decoder.

• encoder_ffn_dim (int, optional, defaults to 4096) – Dimensionality of the “intermediate” (i.e., feed-forward) layer in the encoder.

• activation_function (str or function, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.

• dropout (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

• attention_dropout (float, optional, defaults to 0.0) – The dropout ratio for the attention probabilities.

• activation_dropout (float, optional, defaults to 0.0) – The dropout ratio for activations inside the fully connected layer.

• classifier_dropout (float, optional, defaults to 0.0) – The dropout ratio for classifier.

• max_position_embeddings (int, optional, defaults to 1024) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).

• init_std (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

• add_bias_logits (bool, optional, defaults to False) – Whether to add a bias to the logits; specific to Marian and unused by mBART.

• normalize_before (bool, optional, defaults to False) – Whether to apply layer normalization before the attention and feed-forward operations (pre-norm). True for the released mBART checkpoints.

• normalize_embedding (bool, optional, defaults to True) – Whether to apply a layer normalization to the embedding output.

• static_position_embeddings (bool, optional, defaults to False) – Don’t learn positional embeddings, use sinusoidal.

• add_final_layer_norm (bool, optional, defaults to False) – Whether to add a layer normalization after the last encoder and decoder layers. True for the released mBART checkpoints.

• scale_embedding (bool, optional, defaults to False) – Scale embeddings by dividing by sqrt(d_model).

• eos_token_id (int, optional, defaults to 2) – End of stream token id.

• pad_token_id (int, optional, defaults to 1) – Padding token id.

• bos_token_id (int, optional, defaults to 0) – Beginning of stream token id.

• encoder_layerdrop (float, optional, defaults to 0.0) – The LayerDrop probability for the encoder. See the LayerDrop paper for more details.

• decoder_layerdrop (float, optional, defaults to 0.0) – The LayerDrop probability for the decoder. See the LayerDrop paper for more details.

• extra_pos_embeddings (int, optional, defaults to 2) – How many extra learned positional embeddings to use. Should be equal to pad_token_id + 1.

• is_encoder_decoder (bool, optional, defaults to True) – Whether this is an encoder/decoder model.

• force_bos_token_to_be_generated (bool, optional, defaults to False) – Whether or not to force BOS token to be generated at step 1 (after decoder_start_token_id).
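To make the relationship between extra_pos_embeddings and pad_token_id concrete, here is a rough sketch of the BART-style positional-embedding offset. The helper names are hypothetical, not the library's actual API:

```python
def position_ids(seq_len, pad_token_id=1):
    """Positions fed to the learned positional-embedding table are
    shifted by an offset of pad_token_id + 1 (= extra_pos_embeddings),
    so the first `offset` rows are reserved."""
    offset = pad_token_id + 1
    return [i + offset for i in range(seq_len)]

def embedding_table_rows(max_position_embeddings, pad_token_id=1):
    """The table therefore needs extra rows beyond max_position_embeddings."""
    return max_position_embeddings + pad_token_id + 1

# A 4-token sequence uses table rows 2..5; a 1024-position model
# needs a 1026-row table.
```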

## MBartTokenizer

class transformers.MBartTokenizer(*args, tokenizer_file=None, **kwargs)[source]

Construct an MBART tokenizer.

MBartTokenizer is a subclass of XLMRobertaTokenizer and adds a new prepare_seq2seq_batch() method.

Refer to superclass XLMRobertaTokenizer for usage examples and documentation concerning the initialization parameters and other methods.

Warning

prepare_seq2seq_batch should be used to encode inputs. Other tokenizer methods like encode do not work properly.

The tokenization method is <tokens> <eos> <language code> for source language documents, and <language code> <tokens> <eos> for target language documents.

Examples:

>>> from transformers import MBartTokenizer
>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
>>> example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> batch: dict = tokenizer.prepare_seq2seq_batch(
...     example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian, return_tensors="pt"
... )

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An MBART sequence has the following format, where X represents the sequence:

• input_ids (for encoder) X [eos, src_lang_code]

• decoder_input_ids: (for decoder) [tgt_lang_code] X [eos]

BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator.

Parameters
• token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

• token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of input IDs with the appropriate special tokens.

Return type

List[int]
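The special-token layout this method produces can be sketched in plain Python. The ids below are hypothetical placeholders, not the real SentencePiece ids:

```python
# Sketch of MBartTokenizer.build_inputs_with_special_tokens, with
# hypothetical token ids.
EOS = 2
SRC_LANG_CODE = 250004  # hypothetical id for the current source language

def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None):
    """Single sequence: X [eos, src_lang_code].
    Pairs are concatenated with no separator before the suffix tokens."""
    if token_ids_1 is None:
        return token_ids_0 + [EOS, SRC_LANG_CODE]
    return token_ids_0 + token_ids_1 + [EOS, SRC_LANG_CODE]
```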

prepare_seq2seq_batch(src_texts: List[str], src_lang: str = 'en_XX', tgt_texts: Optional[List[str]] = None, tgt_lang: str = 'ro_RO', max_length: Optional[int] = None, max_target_length: Optional[int] = None, truncation: bool = True, padding: str = 'longest', return_tensors: Optional[str] = None, add_prefix_space: bool = False, **kwargs) → transformers.tokenization_utils_base.BatchEncoding[source]

Prepare model inputs for translation. For best performance, translate one sentence at a time.

Parameters
• src_texts (List[str]) – List of documents to summarize or source language texts.

• tgt_texts (list, optional) – List of summaries or target language texts.

• max_length (int, optional) – Controls the maximum length for encoder inputs (documents to summarize or source language texts). If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.

• max_target_length (int, optional) – Controls the maximum length of decoder inputs (target language texts or summaries). If left unset or set to None, this will use the max_length value.

• padding (bool, str or PaddingStrategy, optional, defaults to 'longest') –

Activates and controls padding. Accepts the following values:

• True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

• 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

• False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).

• return_tensors (str or TensorType, optional) –

If set, will return tensors instead of list of python integers. Acceptable values are:

• 'tf': Return TensorFlow tf.constant objects.

• 'pt': Return PyTorch torch.Tensor objects.

• 'np': Return Numpy np.ndarray objects.

• truncation (bool, str or TruncationStrategy, optional, defaults to True) –

Activates and controls truncation. Accepts the following values:

• True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

• 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

• 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

• False or 'do_not_truncate': No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).

• **kwargs – Additional keyword arguments passed along to self.__call__.

Returns

A BatchEncoding with the following fields:

• input_ids – List of token ids to be fed to the encoder.

• attention_mask – List of indices specifying which tokens should be attended to by the model.

• labels – List of token ids for tgt_texts.

The full set of keys [input_ids, attention_mask, labels] will only be returned if tgt_texts is passed. Otherwise, input_ids and attention_mask will be the only keys.

Return type

BatchEncoding
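For intuition, the 'longest_first' truncation strategy described above can be sketched as follows. This is a simplified stand-in, not the library's actual implementation:

```python
def truncate_longest_first(ids_0, ids_1, max_length):
    """Sketch of 'longest_first' truncation for a pair of sequences:
    remove one token at a time from whichever sequence is currently
    longer, until the combined length fits max_length."""
    ids_0, ids_1 = list(ids_0), list(ids_1)
    while len(ids_0) + len(ids_1) > max_length:
        if len(ids_0) >= len(ids_1):
            ids_0.pop()
        else:
            ids_1.pop()
    return ids_0, ids_1

# Only the longer (first) sequence loses tokens here:
# truncate_longest_first([1, 2, 3, 4, 5], [6, 7], 5) → ([1, 2, 3], [6, 7])
```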

## MBartForConditionalGeneration

class transformers.MBartForConditionalGeneration(config: transformers.models.bart.configuration_bart.BartConfig)[source]

This class overrides BartForConditionalGeneration. Please check the superclass for the appropriate documentation alongside usage examples.

Examples:
>>> from transformers import MBartForConditionalGeneration, MBartTokenizer
>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
>>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
>>> article = "UN Chief Says There Is No Military Solution in Syria"
>>> batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors="pt")
>>> translated_tokens = model.generate(**batch)
>>> translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
>>> assert translation == "Şeful ONU declară că nu există o soluţie militară în Siria"

config_class

alias of transformers.models.mbart.configuration_mbart.MBartConfig

## TFMBartForConditionalGeneration

class transformers.TFMBartForConditionalGeneration(*args, **kwargs)[source]

mBART (multilingual BART) model for machine translation

This model inherits from TFBartForConditionalGeneration. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

• having all inputs as keyword arguments (like PyTorch models), or

• having all inputs as a list, tuple or dict in the first positional arguments.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

• a single Tensor with input_ids only and nothing else: model(input_ids)

• a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

• a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})

Parameters

config (MBartConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

config_class

alias of transformers.models.mbart.configuration_mbart.MBartConfig