XLNet

Overview

The XLNet model was proposed in XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLNet is an extension of the Transformer-XL model, pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order.

The abstract from the paper is the following:

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.

Tips:

  • The specific attention pattern can be controlled at training and test time using the perm_mask input.

  • Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, which are selected with the target_mapping input.

  • To use XLNet for sequential decoding (i.e. not in a fully bidirectional setting), use the perm_mask and target_mapping inputs to control the attention span and the outputs (see examples in examples/text-generation/run_generation.py and the sketch after these tips).

  • XLNet is one of the few models that has no sequence length limit.
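
The following is a minimal sketch of how perm_mask and target_mapping are typically combined to predict a single next token with bidirectional context; the sentence and checkpoint chosen here are arbitrary, and a full worked example is given under XLNetLMHeadModel below.

from transformers import XLNetTokenizer, XLNetLMHeadModel
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-base-cased')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is <mask>", add_special_tokens=False)).unsqueeze(0)

# perm_mask[k, i, j] = 1 means token i may not attend to token j in batch k.
# Here no token is allowed to see the last (masked) position.
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0

# target_mapping selects which positions are predicted: a single prediction, on the last token.
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs[0]  # shape (1, 1, vocab_size)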

The original code can be found here.

XLNetConfig

class transformers.XLNetConfig(vocab_size=32000, d_model=1024, n_layer=24, n_head=16, d_inner=4096, ff_activation='gelu', untie_r=True, attn_type='bi', initializer_range=0.02, layer_norm_eps=1e-12, dropout=0.1, mem_len=None, reuse_len=None, bi_data=False, clamp_len=- 1, same_length=False, summary_type='last', summary_use_proj=True, summary_activation='tanh', summary_last_dropout=0.1, start_n_top=5, end_n_top=5, pad_token_id=5, bos_token_id=1, eos_token_id=2, **kwargs)[source]

This is the configuration class to store the configuration of an XLNetModel. It is used to instantiate an XLNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the xlnet-large-cased architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Parameters
  • vocab_size (int, optional, defaults to 32000) – Vocabulary size of the XLNet model. Defines the different tokens that can be represented by the input_ids passed to the forward method of XLNetModel.

  • d_model (int, optional, defaults to 1024) – Dimensionality of the encoder layers and the pooler layer.

  • n_layer (int, optional, defaults to 24) – Number of hidden layers in the Transformer encoder.

  • n_head (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer encoder.

  • d_inner (int, optional, defaults to 4096) – Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.

  • ff_activation (string, optional, defaults to “gelu”) – The non-linear activation function (function or string) in the encoder and pooler. If string, “gelu”, “relu” and “swish” are supported.

  • untie_r (boolean, optional, defaults to True) – Whether to untie relative position biases.

  • attn_type (string, optional, defaults to “bi”) – The attention type used by the model. Set ‘bi’ for XLNet, ‘uni’ for Transformer-XL.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps (float, optional, defaults to 1e-12) – The epsilon used by the layer normalization layers.

  • dropout (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • mem_len (int or None, optional, defaults to None) – The number of tokens to cache. The key/value pairs that have already been pre-computed in a previous forward pass won’t be re-computed. See the quickstart for more information.

  • reuse_len (int or None, optional, defaults to None) – The number of tokens in the current batch to be cached and reused in the future.

  • bi_data (boolean, optional, defaults to False) – Whether to use bidirectional input pipeline. Usually set to True during pretraining and False during finetuning.

  • clamp_len (int, optional, defaults to -1) – Clamp all relative distances larger than clamp_len. Setting this attribute to -1 means no clamping.

  • same_length (boolean, optional, defaults to False) – Whether to use the same attention length for each token.

  • summary_type (string, optional, defaults to “last”) –

    Argument used when doing sequence summary. Used for the sequence classification and multiple choice heads in XLNetForSequenceClassification and XLNetForMultipleChoice. Is one of the following options:

    • ’last’ => take the last token hidden state (like XLNet)

    • ’first’ => take the first token hidden state (like Bert)

    • ’mean’ => take the mean of all tokens hidden states

    • ’cls_index’ => supply a Tensor of classification token position (GPT/GPT-2)

    • ’attn’ => Not implemented now, use multi-head attention

  • summary_use_proj (boolean, optional, defaults to True) – Argument used when doing sequence summary. Used for the sequence classification and multiple choice heads in XLNetForSequenceClassification and XLNetForMultipleChoice. Whether to add a projection after the vector extraction.

  • summary_activation (string or None, optional, defaults to “tanh”) – Argument used when doing sequence summary. Used for the sequence classification and multiple choice heads in XLNetForSequenceClassification and XLNetForMultipleChoice. ‘tanh’ adds a tanh activation to the output; any other value results in no activation.

  • summary_proj_to_labels (boolean, optional, defaults to True) – Argument used when doing sequence summary. Used for the sequence classification and multiple choice heads in XLNetForSequenceClassification and XLNetForMultipleChoice. If True, the projection outputs to config.num_labels classes (otherwise to hidden_size).

  • summary_last_dropout (float, optional, defaults to 0.1) – Argument used when doing sequence summary. Used for the sequence classification and multiple choice heads in XLNetForSequenceClassification and XLNetForMultipleChoice. The dropout ratio to be used after the projection and activation.

  • start_n_top (int, optional, defaults to 5) – Used in the SQuAD evaluation script for XLM and XLNet.

  • end_n_top (int, optional, defaults to 5) – Used in the SQuAD evaluation script for XLM and XLNet.

Example:

from transformers import XLNetConfig, XLNetModel

# Initializing a XLNet configuration
configuration = XLNetConfig()

# Initializing a model from the configuration
model = XLNetModel(configuration)

# Accessing the model configuration
configuration = model.config
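
The defaults can also be overridden to define a custom architecture; the values below are purely illustrative and do not correspond to a released checkpoint:

# A smaller, illustrative configuration (hypothetical values)
small_configuration = XLNetConfig(d_model=512, n_layer=6, n_head=8, d_inner=2048)
small_model = XLNetModel(small_configuration)
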
pretrained_config_archive_map

A dictionary containing all the available pre-trained checkpoints.

Type

Dict[str, str]

XLNetTokenizer

class transformers.XLNetTokenizer(vocab_file, do_lower_case=False, remove_space=True, keep_accents=False, bos_token='<s>', eos_token='</s>', unk_token='<unk>', sep_token='<sep>', pad_token='<pad>', cls_token='<cls>', mask_token='<mask>', additional_special_tokens=['<eop>', '<eod>'], **kwargs)[source]

Constructs an XLNet tokenizer, based on SentencePiece.

This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods. Users should refer to the superclass for more information regarding those methods.

Parameters
  • vocab_file (string) – SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.

  • do_lower_case (bool, optional, defaults to False) – Whether to lowercase the input when tokenizing.

  • remove_space (bool, optional, defaults to True) – Whether to strip the text when tokenizing (removing excess spaces before and after the string).

  • keep_accents (bool, optional, defaults to False) – Whether to keep accents when tokenizing.

  • bos_token (string, optional, defaults to “<s>”) –

    The beginning of sequence token that was used during pre-training. Can be used as a sequence classifier token.

    Note

    When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

  • eos_token (string, optional, defaults to “</s>”) –

    The end of sequence token.

    Note

    When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

  • unk_token (string, optional, defaults to “<unk>”) – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

  • sep_token (string, optional, defaults to “<sep>”) – The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

  • pad_token (string, optional, defaults to “<pad>”) – The token used for padding, for example when batching sequences of different lengths.

  • cls_token (string, optional, defaults to “<cls>”) – The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

  • mask_token (string, optional, defaults to “<mask>”) – The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

  • additional_special_tokens (List[str], optional, defaults to ["<eop>", "<eod>"]) – Additional special tokens used by the tokenizer.

sp_model

The SentencePiece processor that is used for every conversion (string, tokens and IDs).

Type

SentencePieceProcessor

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLNet sequence has the following format:

  • single sequence: X <sep> <cls>

  • pair of sequences: A <sep> B <sep> <cls>

Parameters
  • token_ids_0 (List[int]) – List of IDs to which the special tokens will be added

  • token_ids_1 (List[int], optional, defaults to None) – Optional second list of IDs for sequence pairs.

Returns

list of input IDs with the appropriate special tokens.

Return type

List[int]
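
As a brief illustration of the format above (the sentences and checkpoint are arbitrary):

from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')

ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

single = tokenizer.build_inputs_with_special_tokens(ids_a)        # X <sep> <cls>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)   # A <sep> B <sep> <cls>

print(tokenizer.convert_ids_to_tokens(single)[-2:])  # ['<sep>', '<cls>']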

create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 2
| first sequence    | second sequence     | CLS segment ID

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
  • token_ids_0 (List[int]) – List of ids.

  • token_ids_1 (List[int], optional, defaults to None) – Optional second list of IDs for sequence pairs.

Returns

List of token type IDs according to the given sequence(s).

Return type

List[int]
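
For illustration, the token type IDs produced for an arbitrary pair of sequences (reusing the tokenizer and ID lists from the previous example):

token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# 0s cover the first sequence and its <sep>, 1s the second sequence and its <sep>, 2 the final <cls>
print(token_type_ids)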

get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int][source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.

Parameters
  • token_ids_0 (List[int]) – List of ids.

  • token_ids_1 (List[int], optional, defaults to None) – Optional second list of IDs for sequence pairs.

  • already_has_special_tokens (bool, optional, defaults to False) – Set to True if the token list is already formatted with special tokens for the model

Returns

A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type

List[int]

save_vocabulary(save_directory)[source]

Save the sentencepiece vocabulary (copy original file) and special tokens file to a directory.

Parameters

save_directory (str) – The directory in which to save the vocabulary.

Returns

Paths to the files saved.

Return type

Tuple(str)

XLNetModel

class transformers.XLNetModel(config)[source]

The bare XLNet Model transformer outputting raw hidden-states without any specific head on top.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

create_mask(qlen, mlen)[source]

Creates a causal attention mask. Float mask where 1.0 indicates masked and 0.0 indicates not masked.

Parameters
  • qlen – Sequence length

  • mlen – Mask length

      same_length=False:      same_length=True:
      <mlen > <  qlen >       <mlen > <  qlen >
   ^ [0 0 0 0 0 1 1 1 1]     [0 0 0 0 0 1 1 1 1]
     [0 0 0 0 0 0 1 1 1]     [1 0 0 0 0 0 1 1 1]
qlen [0 0 0 0 0 0 0 1 1]     [1 1 0 0 0 0 0 1 1]
     [0 0 0 0 0 0 0 0 1]     [1 1 1 0 0 0 0 0 1]
   v [0 0 0 0 0 0 0 0 0]     [1 1 1 1 0 0 0 0 0]
forward(input_ids=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, token_type_ids=None, input_mask=None, head_mask=None, inputs_embeds=None, use_cache=True)[source]

The XLNetModel forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[torch.FloatTensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed. use_cache has to be set to True to make use of mems.

  • perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. The classifier token should be represented by a 2.

    What are token type IDs?

  • input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding; kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

Returns

last_hidden_state (torch.FloatTensor of shape (batch_size, num_predict, hidden_size)):

Sequence of hidden-states at the last layer of the model. num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict corresponds to sequence_length.

mems (List[torch.FloatTensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

from transformers import XLNetTokenizer, XLNetModel
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetModel.from_pretrained('xlnet-large-cased')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=False)).unsqueeze(0)  # Batch size 1

outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
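
A sketch of reusing mems to process a follow-up segment without recomputing the first one; mem_len is set explicitly here because it defaults to None, and the sentence split is arbitrary:

from transformers import XLNetTokenizer, XLNetModel
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetModel.from_pretrained('xlnet-base-cased', mem_len=1024)

# First segment: run the model and keep the returned memory
first_ids = torch.tensor(tokenizer.encode("Hello, my dog", add_special_tokens=False)).unsqueeze(0)
hidden_states, mems = model(first_ids, use_cache=True)[:2]

# Second segment: pass only the new tokens together with the cached mems
second_ids = torch.tensor(tokenizer.encode("is cute", add_special_tokens=False)).unsqueeze(0)
hidden_states, mems = model(second_ids, mems=mems, use_cache=True)[:2]
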
get_input_embeddings()[source]

Returns the model’s input embeddings.

Returns

A torch module mapping vocabulary to hidden states.

Return type

nn.Module

set_input_embeddings(new_embeddings)[source]

Set the model’s input embeddings.

Parameters

new_embeddings (nn.Module) – A module mapping vocabulary to hidden states.
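
For illustration (assuming the XLNetModel instantiated in the example above), the input embeddings can be read and set back:

embeddings = model.get_input_embeddings()  # an nn.Embedding mapping vocabulary ids to hidden states
print(embeddings.weight.shape)             # (vocab_size, d_model)
model.set_input_embeddings(embeddings)     # swap in an embedding module (a no-op here)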

XLNetLMHeadModel

class transformers.XLNetLMHeadModel(config)[source]

XLNet Model with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, token_type_ids=None, input_mask=None, head_mask=None, inputs_embeds=None, use_cache=True, labels=None)[source]

The XLNetLMHeadModel forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[torch.FloatTensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed. use_cache has to be set to True to make use of mems.

  • perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. The classifier token should be represented by a 2.

    What are token type IDs?

  • input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding; kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

  • labels (torch.LongTensor of shape (batch_size, num_predict), optional, defaults to None) – Labels for masked language modeling. num_predict corresponds to target_mapping.shape[1]; if target_mapping is None, num_predict corresponds to sequence_length. The labels should correspond to the masked input words that should be predicted and depend on target_mapping. Note that in order to perform standard auto-regressive language modeling, a <mask> token has to be added to the input_ids (see the prepare_inputs_for_generation function and the examples below). Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored; the loss is only computed for labels in [0, ..., config.vocab_size].

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Language modeling loss.

prediction_scores (torch.FloatTensor of shape (batch_size, num_predict, config.vocab_size)):

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict corresponds to sequence_length.

mems (List[torch.FloatTensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see past input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

from transformers import XLNetTokenizer, XLNetLMHeadModel
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-large-cased')

# We show how to setup inputs to predict a next token using a bi-directional context.
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)).unsqueeze(0)  # We will predict the masked token
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0  # Previous tokens don't see last token
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)  # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0  # Our first (and only) prediction will be the last token of the sequence (the masked token)

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs[0]  # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]

# XLNetLMHeadModel can be trained in the same way with standard auto-regressive language modeling.
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)).unsqueeze(0)  # We will predict the masked token
labels = torch.tensor(tokenizer.encode("cute", add_special_tokens=False)).unsqueeze(0)
assert labels.shape[0] == 1, 'only one word will be predicted'
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0  # Previous tokens don't see last token as is done in standard auto-regressive lm training
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)  # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0  # Our first (and only) prediction will be the last token of the sequence (the masked token)

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping, labels=labels)
loss, next_token_logits = outputs[:2]  # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A torch module mapping hidden states to vocabulary.

Return type

nn.Module

XLNetForSequenceClassification

class transformers.XLNetForSequenceClassification(config)[source]

XLNet Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, token_type_ids=None, input_mask=None, head_mask=None, inputs_embeds=None, use_cache=True, labels=None)[source]

The XLNetForSequenceClassification forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[torch.FloatTensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed. use_cache has to be set to True to make use of mems.

  • perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. The classifier token should be represented by a 2.

    What are token type IDs?

  • input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding; kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

  • labels (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Classification (or regression if config.num_labels==1) loss.

logits (torch.FloatTensor of shape (batch_size, config.num_labels)):

Classification (or regression if config.num_labels==1) scores (before SoftMax).

mems (List[torch.FloatTensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see past input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

from transformers import XLNetTokenizer, XLNetForSequenceClassification
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetForSequenceClassification.from_pretrained('xlnet-large-cased')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]

XLNetForTokenClassification

class transformers.XLNetForTokenClassification(config)[source]

XLNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, token_type_ids=None, input_mask=None, head_mask=None, inputs_embeds=None, use_cache=True, labels=None)[source]

The XLNetForTokenClassification forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[torch.FloatTensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed. use_cache has to be set to True to make use of mems.

  • perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. The classifier token should be represented by a 2.

    What are token type IDs?

  • input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding; kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Classification loss.

logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)):

Classification scores (before SoftMax).

mems (List[torch.FloatTensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see past input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

from transformers import XLNetTokenizer, XLNetForTokenClassification
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetForTokenClassification.from_pretrained('xlnet-large-cased')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=labels)

scores = outputs[0]

XLNetForMultipleChoice

class transformers.XLNetForMultipleChoice(config)[source]

XLNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RACE/SWAG tasks.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, token_type_ids=None, input_mask=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, head_mask=None, inputs_embeds=None, use_cache=True, labels=None)[source]

The XLNetForMultipleChoice forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[torch.FloatTensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed. use_cache has to be set to True to make use of mems.

  • perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. The classifier token should be represented by a 2.

    What are token type IDs?

  • input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding; kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

  • labels (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1], where num_choices is the size of the second dimension of the input tensors (see input_ids above).

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Classification loss.

classification_scores (torch.FloatTensor of shape (batch_size, num_choices)):

num_choices is the second dimension of the input tensors. (see input_ids above).

Classification scores (before SoftMax).

mems (List[torch.FloatTensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see past input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

from transformers import XLNetTokenizer, XLNetForMultipleChoice
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForMultipleChoice.from_pretrained('xlnet-base-cased')

choices = ["Hello, my dog is cute", "Hello, my cat is amazing"]
input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0)  # Batch size 1, 2 choices
labels = torch.tensor(1).unsqueeze(0)  # Batch size 1

outputs = model(input_ids, labels=labels)
loss, classification_scores = outputs[:2]

XLNetForQuestionAnsweringSimple

class transformers.XLNetForQuestionAnsweringSimple(config)[source]

XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits).

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, token_type_ids=None, input_mask=None, head_mask=None, inputs_embeds=None, use_cache=True, start_positions=None, end_positions=None)[source]

The XLNetForQuestionAnsweringSimple forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[torch.FloatTensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed. use_cache has to be set to True to make use of mems.

  • perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. The classifier token should be represented by a 2.

    What are token type IDs?

  • input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding; kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

  • start_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.

  • end_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when start_positions and end_positions are provided):

Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.

start_scores (torch.FloatTensor of shape (batch_size, sequence_length,)):

Span-start scores (before SoftMax).

end_scores (torch.FloatTensor of shape (batch_size, sequence_length,)):

Span-end scores (before SoftMax).

mems (List[torch.FloatTensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see past input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

from transformers import XLNetTokenizer, XLNetForQuestionAnsweringSimple
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForQuestionAnsweringSimple.from_pretrained('xlnet-base-cased')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions)
loss = outputs[0]
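
The example above exercises the training path. As an illustrative, non-authoritative sketch of inference, the snippet below reuses the tokenizer and model from above and relies on the output order documented in the Returns section (start_scores first, end_scores second, when no positions are provided); the question/context strings are placeholders, and meaningful answers require a checkpoint fine-tuned on a QA dataset:

question, context = "Who is cute?", "Hello, my dog is cute"
encoding = tokenizer.encode_plus(question, context, add_special_tokens=True, return_tensors='pt')
outputs = model(encoding['input_ids'], token_type_ids=encoding['token_type_ids'])
start_scores, end_scores = outputs[0], outputs[1]         # span scores (before SoftMax)
answer_start = torch.argmax(start_scores, dim=1).item()   # most likely start index
answer_end = torch.argmax(end_scores, dim=1).item()       # most likely end index
answer_ids = encoding['input_ids'][0, answer_start:answer_end + 1]
print(tokenizer.decode(answer_ids.tolist()))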

XLNetForQuestionAnswering

class transformers.XLNetForQuestionAnswering(config)[source]

XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, token_type_ids=None, input_mask=None, head_mask=None, inputs_embeds=None, use_cache=True, start_positions=None, end_positions=None, is_impossible=None, cls_index=None, p_mask=None)[source]

The XLNetForQuestionAnswering forward method, overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[torch.FloatTensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed. use_cache has to be set to True to make use of mems.

  • perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: If perm_mask[k, i, j] = 0, token i attends to token j in batch k; if perm_mask[k, i, j] = 1, token i does not attend to token j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. The classifier token should be represented by a 2.

    What are token type IDs?

  • input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding. Kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

  • start_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

  • end_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

  • is_impossible (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels whether a question has an answer or no answer (SQuAD 2.0).

  • cls_index (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the classification token to use as input for computing plausibility of the answer.

  • p_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Optional mask of tokens which can’t be in answers (e.g. [CLS], [PAD], …). 1.0 means the token should be masked, 0.0 means the token is not masked.

Returns

loss (torch.FloatTensor of shape (1,), optional, returned if both start_positions and end_positions are provided):

Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses.

start_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided):

Log probabilities for the top config.start_n_top start token possibilities (beam-search).

start_top_index (torch.LongTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided):

Indices for the top config.start_n_top start token possibilities (beam-search).

end_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided):

Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).

end_top_index (torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided):

Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).

cls_logits (torch.FloatTensor of shape (batch_size,), optional, returned if start_positions or end_positions is not provided):

Log probabilities for the is_impossible label of the answers.

mems (List[torch.FloatTensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems input) to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

from transformers import XLNetTokenizer, XLNetForQuestionAnswering
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForQuestionAnswering.from_pretrained('xlnet-base-cased')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions)
loss = outputs[0]
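
When start_positions and end_positions are omitted, the call returns the beam-search style outputs listed above instead of a loss. A minimal, hedged sketch that reuses the model and input_ids from the example above, with indices following the order of the Returns section:

outputs = model(input_ids)
start_top_log_probs, start_top_index = outputs[0], outputs[1]  # shape (batch_size, config.start_n_top)
end_top_log_probs, end_top_index = outputs[2], outputs[3]      # shape (batch_size, config.start_n_top * config.end_n_top)
cls_logits = outputs[4]                                        # plausibility of the "no answer" label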

TFXLNetModel

class transformers.TFXLNetModel(*args, **kwargs)[source]

The bare XLNet Model transformer outputting raw hidden-states without any specific head on top.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})
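
As an illustration of the three options above (assuming input_ids, attention_mask and token_type_ids are tf.Tensor objects built with XLNetTokenizer, as in the examples further down):

outputs = model(input_ids)                                                    # single Tensor
outputs = model([input_ids, attention_mask, token_type_ids])                  # list, in docstring order
outputs = model({'input_ids': input_ids, 'token_type_ids': token_type_ids})   # dict keyed by input name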

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs, **kwargs)[source]

The TFXLNetModel forward method, overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[tf.Tensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed.

  • perm_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: If perm_mask[k, i, j] = 0, token i attends to token j in batch k; if perm_mask[k, i, j] = 1, token i does not attend to token j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (tf.Tensor or Numpy array of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

    What are token type IDs?

  • input_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding. Kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

Returns

last_hidden_state (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size)):

Sequence of hidden-states at the last layer of the model.

mems (List[tf.Tensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems input) to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(tf.Tensor), optional, returned when config.output_hidden_states=True):

Tuple of tf.Tensor or Numpy array (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when config.output_attentions=True):

Tuple of tf.Tensor or Numpy array (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

import tensorflow as tf
from transformers import XLNetTokenizer, TFXLNetModel

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = TFXLNetModel.from_pretrained('xlnet-large-cased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
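
A hedged sketch of re-using the returned mems for sequential decoding (use_cache defaults to True, and outputs[1] is the list of mems per the Returns section above; splitting the sentence in two calls is purely illustrative):

first_ids = tf.constant(tokenizer.encode("Hello, my dog", add_special_tokens=False))[None, :]
outputs = model(first_ids)
mems = outputs[1]                      # cached key/value states for each layer
next_ids = tf.constant(tokenizer.encode("is cute", add_special_tokens=False))[None, :]
outputs = model(next_ids, mems=mems)   # the cached states stand in for the earlier tokens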

TFXLNetLMHeadModel

class transformers.TFXLNetLMHeadModel(*args, **kwargs)[source]

XLNet Model with a language modeling head on top (linear layer with weights tied to the input embeddings).

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs, **kwargs)[source]

The TFXLNetLMHeadModel forward method, overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[tf.Tensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed.

  • perm_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: If perm_mask[k, i, j] = 0, token i attends to token j in batch k; if perm_mask[k, i, j] = 1, token i does not attend to token j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (tf.Tensor or Numpy array of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

    What are token type IDs?

  • input_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding. Kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

Returns

prediction_scores (tf.Tensor or Numpy array of shape (batch_size, sequence_length, config.vocab_size)):

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

mems (List[tf.Tensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems input) to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(tf.Tensor), optional, returned when config.output_hidden_states=True):

Tuple of tf.Tensor or Numpy array (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when config.output_attentions=True):

Tuple of tf.Tensor or Numpy array (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

import tensorflow as tf
import numpy as np
from transformers import XLNetTokenizer, TFXLNetLMHeadModel

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = TFXLNetLMHeadModel.from_pretrained('xlnet-large-cased')

# We show how to set up inputs to predict a next token using a bi-directional context.
input_ids = tf.constant(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=True))[None, :]  # We will predict the masked token
perm_mask = np.zeros((1, input_ids.shape[1], input_ids.shape[1]))
perm_mask[:, :, -1] = 1.0  # Previous tokens don't see last token
target_mapping = np.zeros((1, 1, input_ids.shape[1]))  # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0  # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(input_ids, perm_mask=tf.constant(perm_mask, dtype=tf.float32), target_mapping=tf.constant(target_mapping, dtype=tf.float32))

next_token_logits = outputs[0]  # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
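
A short, hedged continuation of the example above, turning next_token_logits into an actual predicted token (the variable names reuse the snippet above):

predicted_index = int(tf.argmax(next_token_logits[0, 0]))  # most likely id for the single prediction
predicted_token = tokenizer.decode([predicted_index])
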
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A tf.keras layer mapping hidden states to the vocabulary.

Return type

tf.keras.layers.Layer

TFXLNetForSequenceClassification

class transformers.TFXLNetForSequenceClassification(*args, **kwargs)[source]

XLNet Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs, **kwargs)[source]

The TFXLNetForSequenceClassification forward method, overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[tf.Tensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed.

  • perm_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: If perm_mask[k, i, j] = 0, token i attends to token j in batch k; if perm_mask[k, i, j] = 1, token i does not attend to token j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (tf.Tensor or Numpy array of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

    What are token type IDs?

  • input_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding. Kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

Returns

logits (tf.Tensor or Numpy array of shape (batch_size, config.num_labels)):

Classification (or regression if config.num_labels==1) scores (before SoftMax).

mems (List[tf.Tensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems input) to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(tf.Tensor), optional, returned when config.output_hidden_states=True):

Tuple of tf.Tensor or Numpy array (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when config.output_attentions=True):

Tuple of tf.Tensor or Numpy array (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

import tensorflow as tf
from transformers import XLNetTokenizer, TFXLNetForSequenceClassification

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = TFXLNetForSequenceClassification.from_pretrained('xlnet-large-cased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]  # Batch size 1
outputs = model(input_ids)
logits = outputs[0]
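
As a hedged follow-up to the example, the logits can be turned into a predicted class id (or used directly as a regression value when config.num_labels == 1):

predicted_class = int(tf.argmax(logits, axis=-1)[0])  # index of the highest-scoring label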

TFXLNetForQuestionAnsweringSimple

class transformers.TFXLNetForQuestionAnsweringSimple(*args, **kwargs)[source]

XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs, **kwargs)[source]

The TFXLNetForQuestionAnsweringSimple forward method, overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.XLNetTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • mems (List[tf.Tensor] of length config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed.

  • perm_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length, sequence_length), optional, defaults to None) – Mask to indicate the attention pattern for each input token with values selected in [0, 1]: If perm_mask[k, i, j] = 0, token i attends to token j in batch k; if perm_mask[k, i, j] = 1, token i does not attend to token j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

  • target_mapping (tf.Tensor or Numpy array of shape (batch_size, num_predict, sequence_length), optional, defaults to None) – Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

  • token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

    What are token type IDs?

  • input_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding. Kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

  • head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool) – If use_cache is True, mems are returned and can be used to speed up decoding (see mems). Defaults to True.

Returns

loss (tf.Tensor or Numpy array of shape (1,), optional, returned when labels is provided):

Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.

start_scores (tf.Tensor or Numpy array of shape (batch_size, sequence_length,)):

Span-start scores (before SoftMax).

end_scores (tf.Tensor or Numpy array of shape (batch_size, sequence_length,)):

Span-end scores (before SoftMax).

mems (List[tf.Tensor] of length config.n_layers):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems input) to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(tf.Tensor), optional, returned when config.output_hidden_states=True):

Tuple of tf.Tensor or Numpy array (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when config.output_attentions=True):

Tuple of tf.Tensor or Numpy array (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (XLNetConfig) and inputs

Examples:

import tensorflow as tf
from transformers import XLNetTokenizer, TFXLNetForQuestionAnsweringSimple

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = TFXLNetForQuestionAnsweringSimple.from_pretrained('xlnet-base-cased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]  # Batch size 1
outputs = model(input_ids)
start_scores, end_scores = outputs[:2]
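
A hedged follow-up to the example, selecting the most likely span boundaries from the scores above (meaningful answers of course require a checkpoint fine-tuned on a QA dataset):

answer_start = int(tf.argmax(start_scores, axis=1)[0])
answer_end = int(tf.argmax(end_scores, axis=1)[0])
answer_ids = input_ids[0, answer_start:answer_end + 1]
print(tokenizer.decode(answer_ids.numpy().tolist()))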