# RoBERTa¶

## Overview¶

The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. It is based on Google’s BERT model released in 2018.

It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates.

The abstract from the paper is the following:

Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.

Tips:

• This implementation is the same as BertModel, with a small tweak to the embeddings and a setup for the RoBERTa pretrained models.

• RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a different pretraining scheme.

• RoBERTa doesn’t have token_type_ids, so you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or </s>); see the example after these tips.

• CamemBERT is a wrapper around RoBERTa. Refer to this page for usage examples.
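
A minimal sketch of the second point above: when you pass two segments, the tokenizer joins them with the separator tokens on its own, with no token_type_ids involved (the decoded string below is shown for illustration):

>>> from transformers import RobertaTokenizer

>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> encoded = tokenizer("How old are you?", "I'm 6 years old")
>>> tokenizer.decode(encoded["input_ids"])
"<s>How old are you?</s></s>I'm 6 years old</s>"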

This model was contributed by julien-c. The original code can be found here.

## RobertaConfig¶

class transformers.RobertaConfig(pad_token_id=1, bos_token_id=0, eos_token_id=2, **kwargs)[source]

This is the configuration class to store the configuration of a RobertaModel or a TFRobertaModel. It is used to instantiate a RoBERTa model according to the specified arguments, defining the model architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

The RobertaConfig class directly inherits from BertConfig and reuses the same defaults. Please check the parent class for more information.

Examples:

>>> from transformers import RobertaConfig, RobertaModel

>>> # Initializing a RoBERTa configuration
>>> configuration = RobertaConfig()

>>> # Initializing a model from the configuration
>>> model = RobertaModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
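
Since the defaults are shared with BertConfig, individual hyperparameters can be overridden at construction time. A hedged sketch (the values below are illustrative, not the pretrained roberta-base settings):

>>> from transformers import RobertaConfig, RobertaModel

>>> # A smaller, randomly initialized RoBERTa (illustrative hyperparameters)
>>> small_config = RobertaConfig(num_hidden_layers=6, num_attention_heads=8, hidden_size=512, intermediate_size=2048)
>>> model = RobertaModel(small_config)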


## RobertaTokenizer¶

class transformers.RobertaTokenizer(vocab_file, merges_file, errors='replace', bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', add_prefix_space=False, **kwargs)[source]

Constructs a RoBERTa tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a preceding space) or not:

>>> from transformers import RobertaTokenizer
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> tokenizer("Hello world")['input_ids']
[0, 31414, 232, 2]
>>> tokenizer(" Hello world")['input_ids']
[0, 20920, 232, 2]


You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
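
For example, a hedged sketch of the add_prefix_space=True behavior (token ids shown for illustration):

>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
>>> tokenizer("Hello world")['input_ids']
[0, 20920, 232, 2]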

Note

When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).

This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Parameters
• vocab_file (str) – Path to the vocabulary file.

• merges_file (str) – Path to the merges file.

• errors (str, optional, defaults to "replace") – Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

• bos_token (str, optional, defaults to "<s>") –

The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

Note

When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

• eos_token (str, optional, defaults to "</s>") –

The end of sequence token.

Note

When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

• sep_token (str, optional, defaults to "</s>") – The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

• cls_token (str, optional, defaults to "<s>") – The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

• unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

• pad_token (str, optional, defaults to "<pad>") – The token used for padding, for example when batching sequences of different lengths.

• mask_token (str, optional, defaults to "<mask>") – The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

• add_prefix_space (bool, optional, defaults to False) – Whether or not to add an initial space to the input. This allows treating the leading word just like any other word (the RoBERTa tokenizer detects the beginning of words by the preceding space).

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A RoBERTa sequence has the following format:

• single sequence: <s> X </s>

• pair of sequences: <s> A </s></s> B </s>

Parameters
• token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

• token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of input IDs with the appropriate special tokens.

Return type

List[int]
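
For instance, a minimal sketch continuing with the roberta-base tokenizer from above, using two already-converted token id lists (<s> is id 0 and </s> is id 2 for roberta-base; output shown for illustration):

>>> tokenizer.build_inputs_with_special_tokens([31414], [232])
[0, 31414, 2, 2, 232, 2]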

create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not make use of token type ids, therefore a list of zeros is returned.

Parameters
• token_ids_0 (List[int]) – List of IDs.

• token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of zeros.

Return type

List[int]
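
For instance, a minimal sketch continuing with the roberta-base tokenizer from above (output shown for illustration; RoBERTa returns only zeros):

>>> tokenizer.create_token_type_ids_from_sequences([31414], [232])
[0, 0, 0, 0, 0, 0]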

get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int][source]

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.

Parameters
• token_ids_0 (List[int]) – List of IDs.

• token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

• already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.

Returns

A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type

List[int]
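
For instance, a minimal sketch continuing with the roberta-base tokenizer from above (output shown for illustration; 1 marks the positions where special tokens would be added):

>>> tokenizer.get_special_tokens_mask([31414], [232])
[1, 0, 1, 1, 0, 1]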

save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str]

Save only the vocabulary of the tokenizer (vocabulary + added tokens).

This method won’t save the configuration and special token mappings of the tokenizer. Use _save_pretrained() to save the whole state of the tokenizer.

Parameters
• save_directory (str) – The directory in which to save the vocabulary.

• filename_prefix (str, optional) – An optional prefix to add to the names of the saved files.

Returns

Paths to the files saved.

Return type

Tuple[str]
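
A minimal sketch of saving the vocabulary (the directory name is hypothetical; for this tokenizer the returned paths point at the saved vocab.json and merges.txt files, shown for illustration):

>>> import os
>>> os.makedirs("./my-roberta-tokenizer", exist_ok=True)  # the directory must already exist
>>> tokenizer.save_vocabulary("./my-roberta-tokenizer")
('./my-roberta-tokenizer/vocab.json', './my-roberta-tokenizer/merges.txt')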

## RobertaTokenizerFast¶

class transformers.RobertaTokenizerFast(vocab_file, merges_file, tokenizer_file=None, errors='replace', bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', add_prefix_space=False, **kwargs)[source]

Construct a “fast” RoBERTa tokenizer (backed by HuggingFace’s tokenizers library), derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a preceding space) or not:

>>> from transformers import RobertaTokenizerFast
>>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
>>> tokenizer("Hello world")['input_ids']
[0, 31414, 232, 2]
>>> tokenizer(" Hello world")['input_ids']
[0, 20920, 232, 2]


You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

Note

When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
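
A minimal sketch of that requirement (token ids shown for illustration):

>>> from transformers import RobertaTokenizerFast

>>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)
>>> tokenizer(["Hello", "world"], is_split_into_words=True)['input_ids']
[0, 20920, 232, 2]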

This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Parameters
• vocab_file (str) – Path to the vocabulary file.

• merges_file (str) – Path to the merges file.

• errors (str, optional, defaults to "replace") – Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

• bos_token (str, optional, defaults to "<s>") –

The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

Note

When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

• eos_token (str, optional, defaults to "</s>") –

The end of sequence token.

Note

When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

• sep_token (str, optional, defaults to "</s>") – The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

• cls_token (str, optional, defaults to "<s>") – The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

• unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

• pad_token (str, optional, defaults to "<pad>") – The token used for padding, for example when batching sequences of different lengths.

• mask_token (str, optional, defaults to "<mask>") – The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

• add_prefix_space (bool, optional, defaults to False) – Whether or not to add an initial space to the input. This allows treating the leading word just like any other word (the RoBERTa tokenizer detects the beginning of words by the preceding space).

• trim_offsets (bool, optional, defaults to True) – Whether the post processing step should trim offsets to avoid including whitespaces.

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A RoBERTa sequence has the same format as for the slow tokenizer: <s> X </s> for a single sequence and <s> A </s></s> B </s> for a pair of sequences.

Parameters
• token_ids_0 (List[int]) – The first tokenized sequence.

• token_ids_1 (List[int], optional) – The second tokenized sequence.

Returns

The model input with special tokens.

Return type

List[int]

## RobertaModel¶

class transformers.RobertaModel(config, add_pooling_layer=True)[source]

The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.

To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
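
A minimal configuration sketch for the decoder use case (the weights are still the roberta-base encoder weights; the cross-attention layers are newly initialized):

>>> from transformers import RobertaConfig, RobertaModel

>>> config = RobertaConfig.from_pretrained("roberta-base")
>>> config.is_decoder = True
>>> config.add_cross_attention = True
>>> decoder = RobertaModel.from_pretrained("roberta-base", config=config)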

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The RobertaModel forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

• encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

• encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

• past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) –

Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.

If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

• use_cache (bool, optional) – If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

Returns

A BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

• pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

• cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

Return type

BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaModel
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaModel.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
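
Building on the example above, a hedged sketch of requesting the optional outputs described under Returns (shapes shown for illustration, assuming the same inputs and roberta-base):

>>> outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
>>> len(outputs.hidden_states)  # embedding output + one per layer
13
>>> outputs.attentions[0].shape  # (batch_size, num_heads, sequence_length, sequence_length)
torch.Size([1, 12, 8, 8])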


## RobertaForCausalLM¶

class transformers.RobertaForCausalLM(config)[source]

RoBERTa Model with a language modeling head on top for CLM fine-tuning.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The RobertaForCausalLM forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

• encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

• encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

• labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

• past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) –

Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.

If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

• use_cache (bool, optional) – If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

Returns

A CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Language modeling loss (for next-token prediction).

• logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

• cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.

• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True.

Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

Return type

CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaForCausalLM, RobertaConfig
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> config = RobertaConfig.from_pretrained("roberta-base")
>>> config.is_decoder = True
>>> model = RobertaForCausalLM.from_pretrained('roberta-base', config=config)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> prediction_logits = outputs.logits
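
Building on the example above, a hedged sketch of reading a next-token prediction out of the logits (roberta-base was not pretrained as a decoder, so without fine-tuning the prediction is not meaningful):

>>> next_token_id = prediction_logits[:, -1, :].argmax(dim=-1)
>>> next_token = tokenizer.decode(next_token_id)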

## RobertaForMaskedLM¶

class transformers.RobertaForMaskedLM(config)[source]

RoBERTa Model with a language modeling head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The RobertaForMaskedLM forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

• labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

• kwargs (Dict[str, any], optional, defaults to {}) – Used to hide legacy arguments that have been deprecated.

Returns

A MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Masked language modeling (MLM) loss.

• logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

MaskedLMOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaForMaskedLM
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaForMaskedLM.from_pretrained('roberta-base')

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]

>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
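
A related hedged sketch: the same masked-language-modeling head is what the fill-mask pipeline uses under the hood (the predictions depend on the checkpoint):

>>> from transformers import pipeline

>>> unmasker = pipeline("fill-mask", model="roberta-base")
>>> unmasker("The capital of France is <mask>.")  # returns a list of candidate fillings with scores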


## RobertaForSequenceClassification¶

class transformers.RobertaForSequenceClassification(config)[source]

RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The RobertaForSequenceClassification forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

• labels (torch.LongTensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

A SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Classification (or regression if config.num_labels==1) loss.

• logits (torch.FloatTensor of shape (batch_size, config.num_labels)) – Classification (or regression if config.num_labels==1) scores (before SoftMax).

• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

SequenceClassifierOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaForSequenceClassification
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaForSequenceClassification.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
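
Building on the example above, a hedged sketch of turning the logits into a predicted label id (the classification head is randomly initialized here, so the prediction is not meaningful until the model is fine-tuned):

>>> predicted_class_id = logits.argmax(dim=-1).item()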


## RobertaForMultipleChoice¶

class transformers.RobertaForMultipleChoice(config)[source]

Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, token_type_ids=None, attention_mask=None, labels=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The RobertaForMultipleChoice forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
• input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

• labels (torch.LongTensor of shape (batch_size,), optional) – Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1] where num_choices is the size of the second dimension of the input tensors (see input_ids above).

Returns

A MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Classification loss.

• logits (torch.FloatTensor of shape (batch_size, num_choices)) – Classification scores (before SoftMax). num_choices is the second dimension of the input tensors (see input_ids above).

• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

MultipleChoiceModelOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaForMultipleChoice
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaForMultipleChoice.from_pretrained('roberta-base')

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)  # choice0 is correct (according to Wikipedia ;)), batch size 1

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k,v in encoding.items()}, labels=labels)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits


## RobertaForTokenClassification¶

class transformers.RobertaForTokenClassification(config)[source]

Roberta Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The RobertaForTokenClassification forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

• labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].

Returns

A TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Classification loss.

• logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) – Classification scores (before SoftMax).

• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

TokenClassifierOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaForTokenClassification
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaForTokenClassification.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1] * inputs["input_ids"].size(1)).unsqueeze(0)  # Batch size 1

>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits


## RobertaForQuestionAnswering¶

class transformers.RobertaForQuestionAnswering(config)[source]

Roberta Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The RobertaForQuestionAnswering forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

• start_positions (torch.LongTensor of shape (batch_size,), optional) – Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

• end_positions (torch.LongTensor of shape (batch_size,), optional) – Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

Returns

A QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.

• start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) – Span-start scores (before SoftMax).

• end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) – Span-end scores (before SoftMax).

• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

QuestionAnsweringModelOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaForQuestionAnswering
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaForQuestionAnswering.from_pretrained('roberta-base')

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors='pt')
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([3])

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> loss = outputs.loss
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
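
At inference time, the most likely answer span can be read off the start and end logits. A minimal sketch, reusing the tensors from the example above (the decoding helpers are standard tokenizer methods, not part of the original example):

>>> all_tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
>>> start_index = int(torch.argmax(start_scores, dim=1)[0])
>>> end_index = int(torch.argmax(end_scores, dim=1)[0])
>>> answer = tokenizer.convert_tokens_to_string(all_tokens[start_index : end_index + 1])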


## TFRobertaModel¶

class transformers.TFRobertaModel(*args, **kwargs)[source]

The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

• having all inputs as keyword arguments (like PyTorch models), or

• having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument (a short sketch of the three calls follows this list):

• a single Tensor with input_ids only and nothing else: model(input_ids)

• a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

• a dictionary with one or several input Tensors associated with the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
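
As a minimal, hedged sketch of the three calling conventions above (the variable names are illustrative only):

>>> from transformers import RobertaTokenizer, TFRobertaModel
>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = TFRobertaModel.from_pretrained('roberta-base')
>>> batch = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(batch["input_ids"])                                          # a single tensor
>>> outputs = model([batch["input_ids"], batch["attention_mask"]])               # a list, in docstring order
>>> outputs = model({"input_ids": batch["input_ids"], "attention_mask": batch["attention_mask"]})  # a dict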

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, **kwargs)[source]

The TFRobertaModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.

What are input IDs?

• attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.

• training (bool, optional, defaults to False) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).

Returns

A TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

• pooler_output (tf.Tensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

This output is usually not a good summary of the semantic content of the input; you are often better off averaging or pooling the sequence of hidden-states for the whole input sequence.

• hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

TFBaseModelOutputWithPooling or tuple(tf.Tensor)

Example:

>>> from transformers import RobertaTokenizer, TFRobertaModel
>>> import tensorflow as tf

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = TFRobertaModel.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)

>>> last_hidden_states = outputs.last_hidden_state
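
Since pooler_output is often not the best sentence summary (see the return documentation above), a common alternative is masked mean pooling over last_hidden_state. A minimal sketch continuing the example (this pooling recipe is an illustration, not part of the model):

>>> mask = tf.cast(tf.expand_dims(inputs["attention_mask"], -1), outputs.last_hidden_state.dtype)
>>> sentence_embedding = tf.reduce_sum(outputs.last_hidden_state * mask, axis=1) / tf.reduce_sum(mask, axis=1)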


## TFRobertaForMaskedLM¶

class transformers.TFRobertaForMaskedLM(*args, **kwargs)[source]

RoBERTa Model with a language modeling head on top.

This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

• having all inputs as keyword arguments (like PyTorch models), or

• having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

• a single Tensor with input_ids only and nothing else: model(input_ids)

• a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

• a dictionary with one or several input Tensors associated with the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs)[source]

The TFRobertaForMaskedLM forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.

What are input IDs?

• attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.

• training (bool, optional, defaults to False) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).

• labels (tf.Tensor of shape (batch_size, sequence_length), optional) – Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

Returns

A TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) – Masked language modeling (MLM) loss.

• logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

• hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

TFMaskedLMOutput or tuple(tf.Tensor)

Example:

>>> from transformers import RobertaTokenizer, TFRobertaForMaskedLM
>>> import tensorflow as tf

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = TFRobertaForMaskedLM.from_pretrained('roberta-base')

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
>>> inputs["labels"] = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]

>>> outputs = model(inputs)
>>> loss = outputs.loss
>>> logits = outputs.logits
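
To inspect what the model predicts for the masked position, one can take the argmax of the logits at that position. A minimal sketch continuing the example (RoBERTa's mask token is <mask>, exposed as tokenizer.mask_token_id; these extra lines are not part of the original example):

>>> mask_index = int(tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0][0])
>>> predicted_id = int(tf.math.argmax(logits[0, mask_index]))
>>> print(tokenizer.decode([predicted_id]))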


## TFRobertaForSequenceClassification¶

class transformers.TFRobertaForSequenceClassification(*args, **kwargs)[source]

RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

• having all inputs as keyword arguments (like PyTorch models), or

• having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

• a single Tensor with input_ids only and nothing else: model(input_ids)

• a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

• a dictionary with one or several input Tensors associated with the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs)[source]

The TFRobertaForSequenceClassification forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.

What are input IDs?

• attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.

• training (bool, optional, defaults to False) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).

• labels (tf.Tensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

A TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) – Classification (or regression if config.num_labels==1) loss.

• logits (tf.Tensor of shape (batch_size, config.num_labels)) – Classification (or regression if config.num_labels==1) scores (before SoftMax).

• hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

TFSequenceClassifierOutput or tuple(tf.Tensor)

Example:

>>> from transformers import RobertaTokenizer, TFRobertaForSequenceClassification
>>> import tensorflow as tf

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = TFRobertaForSequenceClassification.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1

>>> outputs = model(inputs)
>>> loss = outputs.loss
>>> logits = outputs.logits
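
The logits can be turned into class probabilities with a softmax. A minimal sketch continuing the example (note that the classification head of a freshly loaded 'roberta-base' checkpoint is randomly initialized, so the prediction is only meaningful after fine-tuning):

>>> probabilities = tf.nn.softmax(logits, axis=-1)
>>> predicted_class = int(tf.math.argmax(probabilities, axis=-1)[0])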


## TFRobertaForMultipleChoice¶

class transformers.TFRobertaForMultipleChoice(*args, **kwargs)[source]

Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

• having all inputs as keyword arguments (like PyTorch models), or

• having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

• a single Tensor with input_ids only and nothing else: model(input_ids)

• a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

• a dictionary with one or several input Tensors associated with the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs)[source]

The TFRobertaForMultipleChoice forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.

What are input IDs?

• attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.

• training (bool, optional, defaults to False) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).

• labels (tf.Tensor of shape (batch_size,), optional) – Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above)

Returns

A TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) – Classification loss.

• logits (tf.Tensor of shape (batch_size, num_choices)) – num_choices is the second dimension of the input tensors. (see input_ids above).

Classification scores (before SoftMax).

• hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

TFMultipleChoiceModelOutput or tuple(tf.Tensor)

Example:

>>> from transformers import RobertaTokenizer, TFRobertaForMultipleChoice
>>> import tensorflow as tf

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = TFRobertaForMultipleChoice.from_pretrained('roberta-base')

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."

>>> encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='tf', padding=True)
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
>>> outputs = model(inputs)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> logits = outputs.logits
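
The logits have shape (batch_size, num_choices), so the model's preferred choice is the argmax over that dimension. A minimal sketch (again, only meaningful once the classification head has been trained):

>>> predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])  # 0 -> choice0, 1 -> choice1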


## TFRobertaForTokenClassification¶

class transformers.TFRobertaForTokenClassification(*args, **kwargs)[source]

RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

• having all inputs as keyword arguments (like PyTorch models), or

• having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

• a single Tensor with input_ids only and nothing else: model(input_ids)

• a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

• a dictionary with one or several input Tensors associated with the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs)[source]

The TFRobertaForTokenClassification forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.

What are input IDs?

• attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.

• training (bool, optional, defaults to False) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).

• labels (tf.Tensor of shape (batch_size, sequence_length), optional) – Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].

Returns

A TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) – Classification loss.

• logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) – Classification scores (before SoftMax).

• hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

TFTokenClassifierOutput or tuple(tf.Tensor)

Example:

>>> from transformers import RobertaTokenizer, TFRobertaForTokenClassification
>>> import tensorflow as tf

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = TFRobertaForTokenClassification.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> input_ids = inputs["input_ids"]
>>> inputs["labels"] = tf.reshape(tf.constant([1] * tf.size(input_ids).numpy()), (-1, tf.size(input_ids))) # Batch size 1

>>> outputs = model(inputs)
>>> loss = outputs.loss
>>> logits = outputs.logits
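
A minimal sketch of reading out the per-token predictions from the example above (label names would come from a fine-tuned checkpoint's config; only the raw label indices are shown here):

>>> predicted_label_ids = tf.math.argmax(logits, axis=-1)  # shape (1, sequence_length)
>>> tokens = tokenizer.convert_ids_to_tokens(input_ids[0].numpy().tolist())
>>> token_predictions = list(zip(tokens, predicted_label_ids[0].numpy().tolist()))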


## TFRobertaForQuestionAnswering¶

class transformers.TFRobertaForQuestionAnswering(*args, **kwargs)[source]

RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).

This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

• having all inputs as keyword arguments (like PyTorch models), or

• having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

• a single Tensor with input_ids only and nothing else: model(input_ids)

• a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

• a dictionary with one or several input Tensors associated with the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, start_positions=None, end_positions=None, training=False, **kwargs)[source]

The TFRobertaForQuestionAnswering forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.

What are input IDs?

• attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) –

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

What are position IDs?

• head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) –

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

• 1 indicates the head is not masked,

• 0 indicates the head is masked.

• inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.

• training (bool, optional, defaults to False) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).

• start_positions (tf.Tensor of shape (batch_size,), optional) – Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

• end_positions (tf.Tensor of shape (batch_size,), optional) – Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

Returns

A TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) – Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.

• start_logits (tf.Tensor of shape (batch_size, sequence_length)) – Span-start scores (before SoftMax).

• end_logits (tf.Tensor of shape (batch_size, sequence_length)) – Span-end scores (before SoftMax).

• hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

TFQuestionAnsweringModelOutput or tuple(tf.Tensor)

Example:

>>> from transformers import RobertaTokenizer, TFRobertaForQuestionAnswering
>>> import tensorflow as tf

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = TFRobertaForQuestionAnswering.from_pretrained('roberta-base')

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> input_dict = tokenizer(question, text, return_tensors='tf')
>>> outputs = model(input_dict)
>>> start_logits = outputs.start_logits
>>> end_logits = outputs.end_logits

>>> all_tokens = tokenizer.convert_ids_to_tokens(input_dict["input_ids"].numpy()[0])
>>> answer = ' '.join(all_tokens[tf.math.argmax(start_logits, 1)[0] : tf.math.argmax(end_logits, 1)[0]+1])
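
Because RoBERTa uses a byte-level BPE, joining tokens with spaces leaves encoding artifacts (such as the 'Ġ' prefix). A slightly cleaner variant of the last line, as a sketch, uses the tokenizer's own detokenizer:

>>> answer = tokenizer.convert_tokens_to_string(all_tokens[int(tf.math.argmax(start_logits, 1)[0]) : int(tf.math.argmax(end_logits, 1)[0]) + 1])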


## FlaxRobertaModel¶

class transformers.FlaxRobertaModel(config: transformers.models.roberta.configuration_roberta.RobertaConfig, input_shape: Tuple = (1, 1), seed: int = 0, dtype: numpy.dtype = jax.numpy.float32, **kwargs)[source]

The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models)

This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization and Parallelization.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

__call__(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, params: dict = None, dropout_rng: jax._src.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None)

The FlaxRobertaPreTrainedModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (numpy.ndarray of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) – Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A FlaxBaseModelOutputWithPooling or a tuple of jax_xla.DeviceArray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• last_hidden_state (jax_xla.DeviceArray of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

• pooler_output (jax_xla.DeviceArray of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

• hidden_states (tuple(jax_xla.DeviceArray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of jax_xla.DeviceArray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(jax_xla.DeviceArray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of jax_xla.DeviceArray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

FlaxBaseModelOutputWithPooling or tuple(jax_xla.DeviceArray)

Example:

>>> from transformers import RobertaTokenizer, FlaxRobertaModel

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = FlaxRobertaModel.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors='jax')
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state


## FlaxRobertaForMaskedLM¶

class transformers.FlaxRobertaForMaskedLM(config: transformers.models.roberta.configuration_roberta.RobertaConfig, input_shape: Tuple = (1, 1), seed: int = 0, dtype: numpy.dtype = jax.numpy.float32, **kwargs)[source]

RoBERTa Model with a language modeling head on top.

This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models)

This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization and Parallelization.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

__call__(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, params: dict = None, dropout_rng: jax._src.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None)

The FlaxRobertaPreTrainedModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (numpy.ndarray of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) – Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A FlaxMaskedLMOutput or a tuple of jax_xla.DeviceArray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• logits (jax_xla.DeviceArray of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

• hidden_states (tuple(jax_xla.DeviceArray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of jax_xla.DeviceArray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(jax_xla.DeviceArray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of jax_xla.DeviceArray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

FlaxBaseModelOutputWithPooling or tuple(jax_xla.DeviceArray)

Example:

>>> from transformers import RobertaTokenizer, FlaxRobertaForMaskedLM

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = FlaxRobertaForMaskedLM.from_pretrained('roberta-base')

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors='jax')

>>> outputs = model(**inputs)
>>> logits = outputs.logits
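
The example above stops at the raw prediction scores. As a minimal follow-up sketch (not part of the library's documented example; the names mask_index and predicted_id are illustrative only), the highest-scoring token for the masked position can be recovered with jax.numpy:

>>> import jax.numpy as jnp

>>> # position of the <mask> token in the (single) input sequence
>>> mask_index = int(jnp.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
>>> # vocabulary id with the highest score at that position, decoded back to text
>>> predicted_id = int(jnp.argmax(logits[0, mask_index]))
>>> tokenizer.decode([predicted_id])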


## FlaxRobertaForSequenceClassification¶

class transformers.FlaxRobertaForSequenceClassification(config: transformers.models.roberta.configuration_roberta.RobertaConfig, input_shape: Tuple = (1, 1), seed: int = 0, dtype: numpy.dtype = <class 'jax._src.numpy.lax_numpy.float32'>, **kwargs)[source]

Roberta Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization, and Parallelization.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

__call__(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, params: dict = None, dropout_rng: jax._src.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None)

The FlaxRobertaPreTrainedModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (numpy.ndarray of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A FlaxSequenceClassifierOutput or a tuple of jax_xla.DeviceArray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• logits (jax_xla.DeviceArray of shape (batch_size, config.num_labels)) – Classification (or regression if config.num_labels==1) scores (before SoftMax).

• hidden_states (tuple(jax_xla.DeviceArray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of jax_xla.DeviceArray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(jax_xla.DeviceArray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of jax_xla.DeviceArray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

FlaxSequenceClassifierOutput or tuple(jax_xla.DeviceArray)

Example:

>>> from transformers import RobertaTokenizer, FlaxRobertaForSequenceClassification

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = FlaxRobertaForSequenceClassification.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors='jax')

>>> outputs = model(**inputs)
>>> logits = outputs.logits
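
As a minimal follow-up sketch (assuming a checkpoint with a fine-tuned classification head; the plain 'roberta-base' head loaded above is randomly initialized, so its predictions are not meaningful), the predicted class can be read off the logits:

>>> import jax.numpy as jnp

>>> # index of the highest-scoring class for the first (and only) example in the batch
>>> predicted_class_id = int(jnp.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]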


## FlaxRobertaForMultipleChoice¶

class transformers.FlaxRobertaForMultipleChoice(config: transformers.models.roberta.configuration_roberta.RobertaConfig, input_shape: Tuple = (1, 1), seed: int = 0, dtype: numpy.dtype = <class 'jax._src.numpy.lax_numpy.float32'>, **kwargs)[source]

Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization, and Parallelization.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

__call__(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, params: dict = None, dropout_rng: jax._src.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None)

The FlaxRobertaPreTrainedModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A FlaxMultipleChoiceModelOutput or a tuple of jax_xla.DeviceArray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• logits (jax_xla.DeviceArray of shape (batch_size, num_choices)) – num_choices is the second dimension of the input tensors (see input_ids above).

Classification scores (before SoftMax).

• hidden_states (tuple(jax_xla.DeviceArray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of jax_xla.DeviceArray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(jax_xla.DeviceArray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of jax_xla.DeviceArray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

FlaxMultipleChoiceModelOutput or tuple(jax_xla.DeviceArray)

Example:

>>> from transformers import RobertaTokenizer, FlaxRobertaForMultipleChoice

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = FlaxRobertaForMultipleChoice.from_pretrained('roberta-base')

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='jax', padding=True)
>>> outputs = model(**{k: v[None, :] for k,v in encoding.items()})

>>> logits = outputs.logits
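
As a hedged follow-up sketch (again assuming a fine-tuned multiple-choice head rather than the freshly initialized one loaded above), the most likely choice is the argmax over the num_choices dimension, where index 0 corresponds to choice0 and index 1 to choice1:

>>> import jax.numpy as jnp

>>> # 0 -> choice0, 1 -> choice1
>>> predicted_choice = int(jnp.argmax(logits, axis=-1)[0])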


## FlaxRobertaForTokenClassification¶

class transformers.FlaxRobertaForTokenClassification(config: transformers.models.roberta.configuration_roberta.RobertaConfig, input_shape: Tuple = (1, 1), seed: int = 0, dtype: numpy.dtype = <class 'jax._src.numpy.lax_numpy.float32'>, **kwargs)[source]

Roberta Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization, and Parallelization.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

__call__(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, params: dict = None, dropout_rng: jax._src.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None)

The FlaxRobertaPreTrainedModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (numpy.ndarray of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A FlaxTokenClassifierOutput or a tuple of jax_xla.DeviceArray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• logits (jax_xla.DeviceArray of shape (batch_size, sequence_length, config.num_labels)) – Classification scores (before SoftMax).

• hidden_states (tuple(jax_xla.DeviceArray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of jax_xla.DeviceArray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(jax_xla.DeviceArray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of jax_xla.DeviceArray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

FlaxTokenClassifierOutput or tuple(jax_xla.DeviceArray)

Example:

>>> from transformers import RobertaTokenizer, FlaxRobertaForTokenClassification

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = FlaxRobertaForTokenClassification.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors='jax')

>>> outputs = model(**inputs)
>>> logits = outputs.logits
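
A minimal follow-up sketch (assuming a checkpoint fine-tuned for token classification; with plain 'roberta-base' the id2label entries are placeholders) that pairs each token with its predicted label:

>>> import jax.numpy as jnp

>>> # per-token label ids for the first example, mapped back to tokens and label names
>>> predicted_label_ids = jnp.argmax(logits, axis=-1)[0]
>>> tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
>>> [(token, model.config.id2label[int(label_id)]) for token, label_id in zip(tokens, predicted_label_ids)]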


## FlaxRobertaForQuestionAnswering¶

class transformers.FlaxRobertaForQuestionAnswering(config: transformers.models.roberta.configuration_roberta.RobertaConfig, input_shape: Tuple = (1, 1), seed: int = 0, dtype: numpy.dtype = <class 'jax._src.numpy.lax_numpy.float32'>, **kwargs)[source]

Roberta Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).

This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization, and Parallelization.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

__call__(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, params: dict = None, dropout_rng: jax._src.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None)

The FlaxRobertaPreTrainedModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
• input_ids (numpy.ndarray of shape (batch_size, sequence_length)) –

Indices of input sequence tokens in the vocabulary.

Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

What are input IDs?

• attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

• 1 for tokens that are not masked,

• 0 for tokens that are masked.

What are attention masks?

• token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) –

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

• 0 corresponds to a sentence A token,

• 1 corresponds to a sentence B token.

What are token type IDs?

• position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A FlaxQuestionAnsweringModelOutput or a tuple of jax_xla.DeviceArray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

• start_logits (jax_xla.DeviceArray of shape (batch_size, sequence_length)) – Span-start scores (before SoftMax).

• end_logits (jax_xla.DeviceArray of shape (batch_size, sequence_length)) – Span-end scores (before SoftMax).

• hidden_states (tuple(jax_xla.DeviceArray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of jax_xla.DeviceArray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

• attentions (tuple(jax_xla.DeviceArray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of jax_xla.DeviceArray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

FlaxQuestionAnsweringModelOutput or tuple(jax_xla.DeviceArray)

Example:

>>> from transformers import RobertaTokenizer, FlaxRobertaForQuestionAnswering

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = FlaxRobertaForQuestionAnswering.from_pretrained('roberta-base')

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors='jax')

>>> outputs = model(**inputs)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
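
As a hedged follow-up sketch (assuming a checkpoint fine-tuned for extractive question answering; the span head loaded above from 'roberta-base' is untrained), the answer text can be decoded from the most likely start and end positions:

>>> import jax.numpy as jnp

>>> # most likely start and end token positions for the first example
>>> start_index = int(jnp.argmax(start_scores, axis=-1)[0])
>>> end_index = int(jnp.argmax(end_scores, axis=-1)[0])
>>> answer_ids = inputs["input_ids"][0][start_index : end_index + 1].tolist()
>>> tokenizer.decode(answer_ids)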