Bart

DISCLAIMER: If you see something strange, file a GitHub issue and assign @sshleifer.

Overview

The Bart model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,

  • Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).

  • The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.

  • BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.

The Authors’ code can be found here

Implementation Notes

  • Bart doesn’t use token_type_ids for sequence classification. Use BartTokenizer.encode to get the proper splitting.

  • The forward pass of BartModel will create decoder inputs (using the helper function transformers.modeling_bart._prepare_bart_decoder_inputs) if they are not passed. This is different than some other modeling APIs.

  • Model predictions are intended to be identical to the original implementation. This only works, however, if the string you pass to fairseq.encode starts with a space.

  • BartForConditionalGeneration.generate should be used for conditional generation tasks like summarization; see the example in that method's docstring.

  • Models that load the "facebook/bart-large-cnn" weights will not have a mask_token_id and cannot perform mask-filling tasks.

  • For training/forward passes that don’t involve beam search, pass use_cache=False (see the sketch below).
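
A minimal sketch of these conventions (the checkpoint and input text are only illustrative):

from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')

# The tokenizer adds the special tokens itself and produces no token_type_ids
batch = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors='pt')

# Plain forward pass without beam search: disable the generation cache
outputs = model(input_ids=batch['input_ids'], attention_mask=batch['attention_mask'], use_cache=False)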

BartForConditionalGeneration

class transformers.BartForConditionalGeneration(config: transformers.configuration_bart.BartConfig)[source]

The BART Model with a language modeling head. Can be used for summarization.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (BartConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids, attention_mask=None, encoder_outputs=None, decoder_input_ids=None, decoder_attention_mask=None, past_key_values=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **unused)[source]

The BartForConditionalGeneration forward method overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Use BartTokenizer.encode to produce them. Padding will be ignored by default should you provide it. Indices can be obtained using transformers.BartTokenizer.encode(text).

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices in input_ids. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • encoder_outputs (tuple(torch.FloatTensor), optional, defaults to None) – Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state, of shape (batch_size, sequence_length, hidden_size), is a sequence of hidden-states at the output of the last layer of the encoder and is used in the cross-attention of the decoder.

  • decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional, defaults to None) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids right, following the paper.

  • decoder_attention_mask (torch.BoolTensor of shape (batch_size, tgt_seq_len), optional, defaults to None) – Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read _prepare_decoder_inputs() and modify. See diagram 1 in the paper for more info on the default strategy

  • past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) – Contains pre-computed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

  • use_cache (bool, optional, defaults to True) – If use_cache is True, past_key_values are returned and can be used to speed up decoding (see past_key_values).

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional, defaults to None) – If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional, defaults to None) – If set to True, the model will return a ModelOutput instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

Returns

A Seq2SeqLMOutput (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (BartConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Language modeling loss.

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.

  • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.

  • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

  • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder of the model.

  • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.

  • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

Conditional generation example:

# Mask filling only works for bart-large
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
TXT = "My friends are <mask> but they eat too many carbs."

model = BartForConditionalGeneration.from_pretrained('facebook/bart-large', return_dict=True)
input_ids = tokenizer([TXT], return_tensors='pt')['input_ids']
logits = model(input_ids).logits

masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)

tokenizer.decode(predictions).split()
# ['good', 'great', 'all', 'really', 'very']

Return type

Seq2SeqLMOutput or tuple(torch.FloatTensor)

Summarization example:

from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig

# see ``examples/summarization/bart/run_eval.py`` for a longer example
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')

ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')

# Generate Summary
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
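
As a complement to generation, the sketch below shows a training-style forward pass with labels. It assumes the module-level helper transformers.modeling_bart.shift_tokens_right (used by the seq2seq example scripts of the same era) to build decoder_input_ids; the texts are illustrative.

from transformers import BartTokenizer, BartForConditionalGeneration
from transformers.modeling_bart import shift_tokens_right

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')

batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["My friends are cool but they eat too many carbs."],
    tgt_texts=["My friends eat too many carbs."],
    return_tensors='pt',
)
target_ids = batch['decoder_input_ids']

# Teacher forcing: the decoder sees the target shifted right, labels are the target itself
decoder_input_ids = shift_tokens_right(target_ids, model.config.pad_token_id)

outputs = model(
    input_ids=batch['input_ids'],
    attention_mask=batch['attention_mask'],
    decoder_input_ids=decoder_input_ids,
    labels=target_ids,
    use_cache=False,
    return_dict=True,
)
loss = outputs.loss  # cross-entropy over the target tokens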

BartConfig

class transformers.BartConfig(activation_dropout=0.0, extra_pos_embeddings=2, activation_function='gelu', vocab_size=50265, d_model=1024, encoder_ffn_dim=4096, encoder_layers=12, encoder_attention_heads=16, decoder_ffn_dim=4096, decoder_layers=12, decoder_attention_heads=16, encoder_layerdrop=0.0, decoder_layerdrop=0.0, attention_dropout=0.0, dropout=0.1, max_position_embeddings=1024, init_std=0.02, classifier_dropout=0.0, num_labels=3, is_encoder_decoder=True, pad_token_id=1, bos_token_id=0, eos_token_id=2, normalize_before=False, add_final_layer_norm=False, scale_embedding=False, normalize_embedding=True, static_position_embeddings=False, add_bias_logits=False, force_bos_token_to_be_generated=False, **common_kwargs)[source]

Configuration class for Bart. Parameters are renamed from the fairseq implementation.

Parameters
  • vocab_size (int, optional, defaults to 50265) – Vocabulary size of the BART model; defines the number of different tokens that can be represented by the input_ids passed to the forward method.

  • d_model (int, optional, defaults to 1024) – Dimensionality of the layers and the pooler layer.

  • encoder_layers (int, optional, defaults to 12) – Number of encoder layers, 16 for pegasus, 6 for bart-base and marian

  • decoder_layers (int, optional, defaults to 12) – Number of decoder layers, 16 for pegasus, 6 for bart-base and marian

  • encoder_attention_heads (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer encoder.

  • decoder_attention_heads (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer decoder.

  • decoder_ffn_dim (int, optional, defaults to 4096) – Dimensionality of the “intermediate” (i.e., feed-forward) layer in decoder.

  • encoder_ffn_dim (int, optional, defaults to 4096) – Dimensionality of the “intermediate” (i.e., feed-forward) layer in encoder.

  • activation_function (str or function, optional, defaults to “gelu”) – The non-linear activation function (function or string) in the encoder and pooler. If string, “gelu”, “relu”, “swish” and “gelu_new” are supported.

  • dropout (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • attention_dropout (float, optional, defaults to 0.0) – The dropout ratio for the attention probabilities.

  • activation_dropout (float, optional, defaults to 0.0) – The dropout ratio for activations inside the fully connected layer.

  • classifier_dropout (float, optional, defaults to 0.0) – The dropout ratio for classifier.

  • max_position_embeddings (int, optional, defaults to 1024) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).

  • init_std (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • add_bias_logits (bool, optional, defaults to False) – True for marian only.

  • normalize_before (bool, optional, defaults to False) – Call layernorm before attention ops. True for pegasus and mbart; False for bart.

  • normalize_embedding (bool, optional, defaults to True) – Call layernorm after embeddings. Only True for Bart.

  • static_position_embeddings (bool, optional, defaults to False) – Don’t learn positional embeddings, use sinusoidal. True for marian, pegasus.

  • add_final_layer_norm (bool, optional, defaults to False) – Whether to apply a final layernorm to the encoder and decoder outputs. Only True for mbart.

  • scale_embedding (bool, optional, defaults to False) – Scale embeddings by dividing by sqrt(d_model).

  • eos_token_id (int, optional, defaults to 2) – End of stream token id.

  • pad_token_id (int, optional, defaults to 1) – Padding token id.

  • bos_token_id (int, optional, defaults to 0) – Beginning of stream token id.

  • encoder_layerdrop (float, optional, defaults to 0.0) – The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.

  • decoder_layerdrop (float, optional, defaults to 0.0) – The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.

  • extra_pos_embeddings (int, optional, defaults to 2) – How many extra learned positional embeddings to use. Should be pad_token_id + 1 for bart.

  • num_labels (int, optional, defaults to 3) – The number of labels to use in BartForSequenceClassification.

  • is_encoder_decoder (bool, optional, defaults to True) – Whether this is an encoder/decoder model.

  • force_bos_token_to_be_generated (bool, optional, defaults to False) – Whether or not to force BOS token to be generated at step 1 (after decoder_start_token_id), only true for bart-large-cnn.

is_valid_mbart() → bool[source]

Returns True if the configuration is aligned with the MBART paper.
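
For illustration, a configuration can be instantiated directly and used to build a randomly initialized model (the values below simply restate the defaults):

from transformers import BartConfig, BartModel

config = BartConfig(
    d_model=1024,
    encoder_layers=12,
    decoder_layers=12,
    encoder_attention_heads=16,
    decoder_attention_heads=16,
)

# Initializing from a config gives random weights; use from_pretrained() for trained ones
model = BartModel(config)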

BartTokenizer

class transformers.BartTokenizer(vocab_file, merges_file, errors='replace', bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', add_prefix_space=False, **kwargs)[source]

Constructs a BART tokenizer, which is identical to RobertaTokenizer and uses byte-level Byte-Pair-Encoding.

prepare_seq2seq_batch(src_texts: List[str], tgt_texts: Optional[List[str]] = None, max_length: Optional[int] = None, max_target_length: Optional[int] = None, padding: str = 'longest', return_tensors: str = 'None', truncation=True, **kwargs) → transformers.tokenization_utils_base.BatchEncoding[source]

Prepare a batch that can be passed directly to an instance of BartModel.

Parameters
  • src_texts – (List[str]): List of documents to summarize or source language texts.

  • tgt_texts – (List[str], optional): List of summaries or target language texts.

  • max_length (int, optional) – Controls the maximum length for encoder inputs (documents to summarize or source language texts). If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.

  • max_target_length (int, optional) – Controls the maximum length of decoder inputs (target language texts or summaries). If left unset or set to None, this will use the max_length value.

  • padding (bool, str or PaddingStrategy, optional, defaults to 'longest') –

    Activates and controls padding. Accepts the following values:

    • True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).

  • return_tensors (str or TensorType, optional, defaults to “pt”) –

    If set, will return tensors instead of list of python integers. Acceptable values are:

    • 'tf': Return TensorFlow tf.constant objects.

    • 'pt': Return PyTorch torch.Tensor objects.

    • 'np': Return Numpy np.ndarray objects.

  • truncation (bool, str or TruncationStrategy, optional, defaults to True) –

    Activates and controls truncation. Accepts the following values:

    • True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • False or 'do_not_truncate': No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

  • **kwargs – Additional keyword arguments passed along to self.__call__.

Returns

A BatchEncoding with the following fields:

  • input_ids – List of token ids to be fed to the encoder.

  • attention_mask – List of indices specifying which tokens should be attended to by the model.

  • decoder_input_ids – List of token ids to be fed to the decoder.

  • decoder_attention_mask – List of indices specifying which tokens should be attended to by the decoder.

    This does not include causal mask, which is built by the model.

The full set of keys [input_ids, attention_mask, decoder_input_ids, decoder_attention_mask] will only be returned if tgt_texts is passed. Otherwise, input_ids and attention_mask will be the only keys.

Return type

BatchEncoding
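
A short usage sketch (the texts are illustrative). Because tgt_texts is passed, the batch contains all four keys and can be unpacked straight into the model:

from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')

batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["My friends are cool but they eat too many carbs."],
    tgt_texts=["My friends eat too many carbs."],
    max_length=1024,
    return_tensors='pt',
)
# batch keys: input_ids, attention_mask, decoder_input_ids, decoder_attention_mask
outputs = model(**batch, use_cache=False)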

BartModel

class transformers.BartModel(config: transformers.configuration_bart.BartConfig)[source]

The bare BART Model outputting raw hidden-states without any specific head on top.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (BartConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids, attention_mask=None, decoder_input_ids=None, encoder_outputs: Optional[Tuple] = None, decoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **kwargs)[source]

The BartModel forward method overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Use BartTokenizer.encode to produce them. Padding will be ignored by default should you provide it. Indices can be obtained using transformers.BartTokenizer.encode(text).

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices in input_ids. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • encoder_outputs (tuple(torch.FloatTensor), optional, defaults to None) – Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state, of shape (batch_size, sequence_length, hidden_size), is a sequence of hidden-states at the output of the last layer of the encoder and is used in the cross-attention of the decoder.

  • decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional, defaults to None) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids right, following the paper.

  • decoder_attention_mask (torch.BoolTensor of shape (batch_size, tgt_seq_len), optional, defaults to None) – Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read _prepare_decoder_inputs() and modify. See diagram 1 in the paper for more info on the default strategy

  • past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) – Contains pre-computed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

  • use_cache (bool, optional, defaults to True) – If use_cache is True, past_key_values are returned and can be used to speed up decoding (see past_key_values).

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional, defaults to None) – If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional, defaults to None) – If set to True, the model will return a ModelOutput instead of a plain tuple.

Returns

A BaseModelOutputWithPast (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (BartConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

    If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.

  • past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

BaseModelOutputWithPast or tuple(torch.FloatTensor)

Example:

>>> from transformers import BartTokenizer, BartModel
>>> import torch

>>> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
>>> model = BartModel.from_pretrained('facebook/bart-large', return_dict=True)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state

transformers.modeling_bart._prepare_bart_decoder_inputs(config, input_ids, decoder_input_ids=None, decoder_padding_mask=None, causal_mask_dtype=torch.float32)[source]

Prepare masks that ignore padding tokens in the decoder and a causal mask for the decoder if none are provided. This mimics the default behavior in fairseq. To override it, pass in masks. Note: this is not called during generation.
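
A small sketch of this default (checkpoint and text are illustrative): omitting decoder_input_ids lets the model derive them by shifting input_ids right, while passing them explicitly overrides that behavior.

from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# Default path: decoder inputs are created internally from input_ids
default_out = model(**inputs)

# Explicit decoder inputs skip the default shifting logic
explicit_out = model(input_ids=inputs['input_ids'],
                     decoder_input_ids=inputs['input_ids'])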

BartForSequenceClassification

class transformers.BartForSequenceClassification(config: transformers.configuration_bart.BartConfig, **kwargs)[source]

Bart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (BartConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids, attention_mask=None, encoder_outputs=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The BartForSequenceClassification forward method overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Use BartTokenizer.encode to produce them. Padding will be ignored by default should you provide it. Indices can be obtained using transformers.BartTokenizer.encode(text).

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices in input_ids. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • encoder_outputs (tuple(torch.FloatTensor), optional, defaults to None) – Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state, of shape (batch_size, sequence_length, hidden_size), is a sequence of hidden-states at the output of the last layer of the encoder and is used in the cross-attention of the decoder.

  • decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional, defaults to None) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids right, following the paper.

  • decoder_attention_mask (torch.BoolTensor of shape (batch_size, tgt_seq_len), optional, defaults to None) – Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read _prepare_decoder_inputs() and modify. See diagram 1 in the paper for more info on the default strategy

  • past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) – Contains pre-computed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

  • use_cache (bool, optional, defaults to True) – If use_cache is True, past_key_values are returned and can be used to speed up decoding (see past_key_values).

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional, defaults to None) – If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional, defaults to None) – If set to True, the model will return a ModelOutput instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

A Seq2SeqSequenceClassifierOutput (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (BartConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Classification (or regression if config.num_labels==1) loss.

  • logits (torch.FloatTensor of shape (batch_size, config.num_labels)) – Classification (or regression if config.num_labels==1) scores (before SoftMax).

  • past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.

  • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.

  • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

  • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder of the model.

  • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.

  • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import BartTokenizer, BartForSequenceClassification
>>> import torch

>>> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
>>> model = BartForSequenceClassification.from_pretrained('facebook/bart-large', return_dict=True)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits

BartForQuestionAnswering

class transformers.BartForQuestionAnswering(config)[source]

BART Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (BartConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids, attention_mask=None, encoder_outputs=None, decoder_input_ids=None, decoder_attention_mask=None, start_positions=None, end_positions=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The BartForQuestionAnswering forward method overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Use BartTokenizer.encode to produce them. Padding will be ignored by default should you provide it. Indices can be obtained using transformers.BartTokenizer.encode(text).

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices in input_ids. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • encoder_outputs (tuple(torch.FloatTensor), optional, defaults to None) – Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state, of shape (batch_size, sequence_length, hidden_size), is a sequence of hidden-states at the output of the last layer of the encoder and is used in the cross-attention of the decoder.

  • decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional, defaults to None) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids right, following the paper.

  • decoder_attention_mask (torch.BoolTensor of shape (batch_size, tgt_seq_len), optional, defaults to None) – Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read _prepare_decoder_inputs() and modify. See diagram 1 in the paper for more info on the default strategy

  • past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) – Contains pre-computed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

  • use_cache (bool, optional, defaults to True) – If use_cache is True, past_key_values are returned and can be used to speed up decoding (see past_key_values).

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional, defaults to None) – If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional, defaults to None) – If set to True, the model will return a ModelOutput instead of a plain tuple.

  • start_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

  • end_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

Returns

A Seq2SeqQuestionAnsweringModelOutput (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (BartConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.

  • start_logits (torch.FloatTensor of shape (batch_size, sequence_length,)) – Span-start scores (before SoftMax).

  • end_logits (torch.FloatTensor of shape (batch_size, sequence_length,)) – Span-end scores (before SoftMax).

  • past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.

  • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.

  • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

  • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder of the model.

  • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.

  • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import BartTokenizer, BartForQuestionAnswering
>>> import torch

>>> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
>>> model = BartForQuestionAnswering.from_pretrained('facebook/bart-large', return_dict=True)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([3])

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> loss = outputs.loss
>>> start_logits = outputs.start_logits
>>> end_logits = outputs.end_logits