XLM-RoBERTa

Overview

The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook’s RoBERTa model released in 2019. It is a large multilingual language model trained on 2.5 TB of filtered CommonCrawl data.

The abstract from the paper is the following:

This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.

Tips:

  • XLM-R is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require lang tensors to understand which language is used, and should be able to determine the correct language from the input ids.

  • This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples and for details on the inputs and outputs.

The original code can be found here.
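
As noted in the tips, no language tensors are needed; the same checkpoint handles text in any of its pretraining languages. A minimal sketch, assuming the publicly released xlm-roberta-base checkpoint:

    import torch
    from transformers import XLMRobertaTokenizer, XLMRobertaModel

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
    model = XLMRobertaModel.from_pretrained('xlm-roberta-base')

    # French input, but no lang ids are passed; the model determines the language from the tokens
    input_ids = tokenizer.encode("Bonjour, je suis un modèle multilingue.", return_tensors='pt')
    outputs = model(input_ids)
    last_hidden_states = outputs[0]  # (batch_size, sequence_length, hidden_size)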

XLMRobertaConfig

class transformers.XLMRobertaConfig(pad_token_id=1, bos_token_id=0, eos_token_id=2, **kwargs)[source]

This class overrides RobertaConfig. Please check the superclass for the appropriate documentation alongside usage examples.
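
As the configuration only changes the default special token ids shown in the signature, it can be used exactly like RobertaConfig. A minimal sketch (the attribute values are library defaults, not those of a released checkpoint):

    from transformers import XLMRobertaConfig, XLMRobertaModel

    # Build a default configuration (pad/bos/eos token ids 1/0/2, RoBERTa defaults otherwise)
    config = XLMRobertaConfig()

    # Initializing a model from this configuration yields randomly initialized weights
    model = XLMRobertaModel(config)

    # The configuration is always reachable from the model
    assert model.config.pad_token_id == 1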

XLMRobertaTokenizer

class transformers.XLMRobertaTokenizer(vocab_file, bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', **kwargs)[source]

Adapted from RobertaTokenizer and XLNetTokenizer. Based on SentencePiece.

This tokenizer inherits from PreTrainedTokenizer which contains most of the methods. Users should refer to the superclass for more information regarding methods.

Parameters
  • vocab_file (str) – Path to the vocabulary file.

  • bos_token (string, optional, defaults to “<s>”) –

    The beginning of sequence token that was used during pre-training. Can be used as a sequence classifier token.

    Note

    When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

  • eos_token (string, optional, defaults to “</s>”) –

    The end of sequence token.

    Note

    When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

  • sep_token (string, optional, defaults to “</s>”) – The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

  • cls_token (string, optional, defaults to “<s>”) – The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

  • unk_token (string, optional, defaults to “<unk>”) – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

  • pad_token (string, optional, defaults to “<pad>”) – The token used for padding, for example when batching sequences of different lengths.

  • mask_token (string, optional, defaults to “<mask>”) – The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

  • additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) – Additional special tokens used by the tokenizer.

sp_model

The SentencePiece processor that is used for every conversion (string, tokens and IDs).

Type

SentencePieceProcessor

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM-R sequence has the following format:

  • single sequence: <s> X </s>

  • pair of sequences: <s> A </s></s> B </s>

Parameters
  • token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional, defaults to None) – Optional second list of IDs for sequence pairs.

Returns

list of input IDs with the appropriate special tokens.

Return type

List[int]
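
A short sketch of the two formats above, assuming the publicly released xlm-roberta-base vocabulary:

    from transformers import XLMRobertaTokenizer

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')

    ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
    ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

    # Single sequence: <s> A </s>
    single = tokenizer.build_inputs_with_special_tokens(ids_a)

    # Pair of sequences: <s> A </s></s> B </s>
    pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)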

create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Creates a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-R does not make use of token type ids, therefore a list of zeros is returned.

Parameters
  • token_ids_0 (List[int]) – List of ids.

  • token_ids_1 (List[int], optional, defaults to None) – Optional second list of IDs for sequence pairs.

Returns

List of zeros.

Return type

List[int]
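
Since XLM-R ignores token type ids, the returned mask is all zeros for both single sequences and pairs; a quick sketch (same tokenizer assumption as above):

    from transformers import XLMRobertaTokenizer

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
    ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
    ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

    # One zero per position of the full <s> A </s></s> B </s> sequence
    token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
    assert all(t == 0 for t in token_type_ids)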

get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int][source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's prepare_for_model method.

Parameters
  • token_ids_0 (List[int]) – List of ids.

  • token_ids_1 (List[int], optional, defaults to None) – Optional second list of IDs for sequence pairs.

  • already_has_special_tokens (bool, optional, defaults to False) – Set to True if the token list is already formatted with special tokens for the model.

Returns

A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type

List[int]
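
A sketch of both calling conventions (same tokenizer assumption as above):

    from transformers import XLMRobertaTokenizer

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
    ids = tokenizer.encode("Hello world", add_special_tokens=False)

    # Default: describes the sequence as it will look once <s> and </s> are added
    mask = tokenizer.get_special_tokens_mask(ids)  # [1, 0, ..., 0, 1]

    # For a list that already contains special tokens, flag it explicitly
    ids_with_special = tokenizer.build_inputs_with_special_tokens(ids)
    mask_existing = tokenizer.get_special_tokens_mask(ids_with_special, already_has_special_tokens=True)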

save_vocabulary(save_directory)[source]

Save the SentencePiece vocabulary (copies the original model file) and the special tokens file to a directory.

Parameters

save_directory (str) – The directory in which to save the vocabulary.

Returns

Paths to the files saved.

Return type

Tuple[str]

XLMRobertaModel

class transformers.XLMRobertaModel(config)[source]

The bare XLM-RoBERTa Model transformer outputting raw hidden-states without any specific head on top.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaModel. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig

XLMRobertaForMaskedLM

class transformers.XLMRobertaForMaskedLM(config)[source]

XLM-RoBERTa Model with a language modeling head on top.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaForMaskedLM. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig
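
A fill-mask sketch, assuming the publicly released xlm-roberta-base checkpoint and the <mask> token documented above (the exact prediction depends on the checkpoint):

    import torch
    from transformers import XLMRobertaTokenizer, XLMRobertaForMaskedLM

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
    model = XLMRobertaForMaskedLM.from_pretrained('xlm-roberta-base')

    input_ids = tokenizer.encode("The capital of France is <mask>.", return_tensors='pt')
    outputs = model(input_ids)
    prediction_scores = outputs[0]  # (batch_size, sequence_length, vocab_size)

    # Highest-scoring token at the masked position
    mask_token_id = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
    mask_index = (input_ids[0] == mask_token_id).nonzero()[0].item()
    predicted_id = prediction_scores[0, mask_index].argmax().item()
    print(tokenizer.decode([predicted_id]))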

XLMRobertaForSequenceClassification

class transformers.XLMRobertaForSequenceClassification(config)[source]

XLM-RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaForSequenceClassification. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig
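
A single forward-pass sketch. Note that the base pretrained checkpoint only contains the encoder, so the classification head loaded below is randomly initialized and needs fine-tuning before its logits are meaningful:

    import torch
    from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
    model = XLMRobertaForSequenceClassification.from_pretrained('xlm-roberta-base')  # head newly initialized

    input_ids = tokenizer.encode("This movie was great!", return_tensors='pt')
    labels = torch.tensor([1])  # shape (batch_size,)

    outputs = model(input_ids, labels=labels)
    loss, logits = outputs[:2]  # logits: (batch_size, num_labels)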

XLMRobertaForMultipleChoice

class transformers.XLMRobertaForMultipleChoice(config)[source]

XLM-RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaForMultipleChoice. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig
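
The multiple-choice head expects input_ids of shape (batch_size, num_choices, sequence_length); a shape-oriented sketch with a randomly initialized head on top of the base checkpoint:

    import torch
    from transformers import XLMRobertaTokenizer, XLMRobertaForMultipleChoice

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
    model = XLMRobertaForMultipleChoice.from_pretrained('xlm-roberta-base')  # choice head newly initialized

    prompt = "The weather today is"
    choices = ["sunny and warm.", "a kind of pasta."]

    # Encode each (prompt, choice) pair, pad to a common length, then add the num_choices dimension
    encoded = [tokenizer.encode(prompt, choice) for choice in choices]
    max_len = max(len(ids) for ids in encoded)
    encoded = [ids + [tokenizer.pad_token_id] * (max_len - len(ids)) for ids in encoded]

    input_ids = torch.tensor(encoded).unsqueeze(0)             # (1, num_choices, seq_len)
    attention_mask = (input_ids != tokenizer.pad_token_id).long()

    outputs = model(input_ids, attention_mask=attention_mask)
    choice_logits = outputs[0]  # (1, num_choices)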

XLMRobertaForTokenClassification

class transformers.XLMRobertaForTokenClassification(config)[source]

XLM-RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaForTokenClassification. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig
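
A token-classification sketch; num_labels=9 (a CoNLL-style tag set) is an illustrative choice, and the head is randomly initialized until fine-tuned:

    import torch
    from transformers import XLMRobertaTokenizer, XLMRobertaForTokenClassification

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
    model = XLMRobertaForTokenClassification.from_pretrained('xlm-roberta-base', num_labels=9)

    input_ids = tokenizer.encode("George Washington lived in Virginia.", return_tensors='pt')
    outputs = model(input_ids)

    logits = outputs[0]                  # (batch_size, sequence_length, num_labels)
    predictions = logits.argmax(dim=-1)  # one predicted label id per sub-word token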

XLMRobertaForQuestionAnswering

class transformers.XLMRobertaForQuestionAnswering(config)[source]

XLM-RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaForQuestionAnswering. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig
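
An extractive QA sketch. The base checkpoint has no trained QA head, so a fine-tuned checkpoint would be needed for sensible spans; xlm-roberta-base is used here only as a stand-in:

    import torch
    from transformers import XLMRobertaTokenizer, XLMRobertaForQuestionAnswering

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
    model = XLMRobertaForQuestionAnswering.from_pretrained('xlm-roberta-base')  # QA head newly initialized

    question = "Who wrote the paper?"
    context = "The paper was written by Conneau et al. and released together with the pretrained models."
    input_ids = tokenizer.encode(question, context, return_tensors='pt')

    outputs = model(input_ids)
    start_logits, end_logits = outputs[0], outputs[1]

    # Most likely answer span (end index is inclusive)
    start = start_logits.argmax().item()
    end = end_logits.argmax().item()
    print(tokenizer.decode(input_ids[0, start:end + 1].tolist()))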

TFXLMRobertaModel

class transformers.TFXLMRobertaModel(*args, **kwargs)[source]

The bare XLM-RoBERTa Model transformer outputting raw hidden-states without any specific head on top.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated with the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides TFRobertaModel. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig
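
A sketch of the three input formats described in the note above, assuming TensorFlow 2 and that TF weights are available for the xlm-roberta-base checkpoint:

    import tensorflow as tf
    from transformers import XLMRobertaTokenizer, TFXLMRobertaModel

    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
    model = TFXLMRobertaModel.from_pretrained('xlm-roberta-base')

    input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # batch size 1
    attention_mask = tf.ones_like(input_ids)

    # 1) keyword arguments, as with the PyTorch models
    outputs = model(input_ids, attention_mask=attention_mask)

    # 2) a list of tensors in the order given in the docstring
    outputs = model([input_ids, attention_mask])

    # 3) a dictionary keyed by input name
    outputs = model({'input_ids': input_ids, 'attention_mask': attention_mask})

    last_hidden_states = outputs[0]  # (batch_size, sequence_length, hidden_size)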

TFXLMRobertaForMaskedLM

class transformers.TFXLMRobertaForMaskedLM(*args, **kwargs)[source]

XLM-RoBERTa Model with a language modeling head on top.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated with the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides TFRobertaForMaskedLM. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig

TFXLMRobertaForSequenceClassification

class transformers.TFXLMRobertaForSequenceClassification(*args, **kwargs)[source]

XLM-RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated with the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides TFRobertaForSequenceClassification. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig

TFXLMRobertaForMultipleChoice

class transformers.TFXLMRobertaForMultipleChoice(*args, **kwargs)[source]

XLM-RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated with the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides TFRobertaForMultipleChoice. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig

TFXLMRobertaForTokenClassification

class transformers.TFXLMRobertaForTokenClassification(*args, **kwargs)[source]

XLM-RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated with the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides TFRobertaForTokenClassification. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig

TFXLMRobertaForQuestionAnswering

class transformers.TFXLMRobertaForQuestionAnswering(*args, **kwargs)[source]

XLM-RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated with the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides TFRobertaForQuestionAnsweringSimple. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.configuration_xlm_roberta.XLMRobertaConfig