XLNet

XLNetConfig

class pytorch_transformers.XLNetConfig(vocab_size_or_config_json_file=32000, d_model=1024, n_layer=24, n_head=16, d_inner=4096, ff_activation='gelu', untie_r=True, attn_type='bi', initializer_range=0.02, layer_norm_eps=1e-12, dropout=0.1, mem_len=None, reuse_len=None, bi_data=False, clamp_len=-1, same_length=False, finetuning_task=None, num_labels=2, summary_type='last', summary_use_proj=True, summary_activation='tanh', summary_last_dropout=0.1, start_n_top=5, end_n_top=5, **kwargs)[source]

Configuration class to store the configuration of an XLNetModel. A short usage sketch follows the parameter list.

Parameters
  • vocab_size_or_config_json_file – Vocabulary size of input_ids in XLNetModel.

  • d_model – Size of the encoder layers and the pooler layer.

  • n_layer – Number of hidden layers in the Transformer encoder.

  • n_head – Number of attention heads for each attention layer in the Transformer encoder.

  • d_inner – The size of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.

  • ff_activation – The non-linear activation function (function or string) in the encoder and pooler. If string, “gelu”, “relu” and “swish” are supported.

  • untie_r – Whether to untie relative position biases.

  • attn_type – ‘bi’ for XLNet, ‘uni’ for Transformer-XL

  • dropout – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • dropatt – The dropout ratio for the attention probabilities.

  • initializer_range – The stddev of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps – The epsilon used by LayerNorm.

  • init – str, the initialization scheme, either “normal” or “uniform”.

  • init_range – float, initialize the parameters with a uniform distribution in [-init_range, init_range]. Only effective when init=”uniform”.

  • init_std – float, initialize the parameters with a normal distribution with mean 0 and stddev init_std. Only effective when init=”normal”.

  • mem_len – int, the number of tokens to cache.

  • reuse_len – int, the number of tokens in the current batch to be cached and reused in the future.

  • bi_data – bool, whether to use bidirectional input pipeline. Usually set to True during pretraining and False during finetuning.

  • clamp_len – int, clamp all relative distances larger than clamp_len. -1 means no clamping.

  • same_length – bool, whether to use the same attention length for each token.

  • finetuning_task – Name of the GLUE task on which the model was fine-tuned, if any.
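
A minimal usage sketch for the configuration class (the hyper-parameter values below mirror the defaults and are illustrative, not recommendations):

from pytorch_transformers import XLNetConfig, XLNetModel

# Build a configuration from scratch (all values shown are illustrative)
config = XLNetConfig(vocab_size_or_config_json_file=32000, d_model=1024,
                     n_layer=24, n_head=16, d_inner=4096)

# Initialize a model (with randomly initialized weights) from that configuration
model = XLNetModel(config)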

XLNetTokenizer

class pytorch_transformers.XLNetTokenizer(vocab_file, max_len=None, do_lower_case=False, remove_space=True, keep_accents=False, bos_token='<s>', eos_token='</s>', unk_token='<unk>', sep_token='<sep>', pad_token='<pad>', cls_token='<cls>', mask_token='<mask>', additional_special_tokens=['<eop>', '<eod>'], **kwargs)[source]

SentencePiece-based tokenizer. Peculiarities: requires the SentencePiece library (https://github.com/google/sentencepiece) to be installed.

add_special_tokens_sentences_pair(token_ids_0, token_ids_1)[source]

Adds special tokens to a sequence pair for sequence classification tasks. An XLNet sequence pair has the following format: A [SEP] B [SEP][CLS]

add_special_tokens_single_sentence(token_ids)[source]

Adds special tokens to a sequence for sequence classification tasks. An XLNet sequence has the following format: X [SEP][CLS]
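
A minimal sketch of the two helpers above (the sentences are illustrative):

from pytorch_transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))

single = tokenizer.add_special_tokens_single_sentence(ids_a)       # X [SEP][CLS]
pair = tokenizer.add_special_tokens_sentences_pair(ids_a, ids_b)   # A [SEP] B [SEP][CLS]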

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (strings for sub-words) into a single string.

save_vocabulary(save_directory)[source]

Save the sentencepiece vocabulary (copy original file) and special tokens file to a directory.

property vocab_size

Size of the base vocabulary (without the added tokens)

XLNetModel

class pytorch_transformers.XLNetModel(config)[source]

The bare XLNet Model transformer outputting raw hidden-states without any specific head on top. The XLNet model was proposed in XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLNet is an extension of the Transformer-XL model, pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order.

The specific attention pattern can be controlled at training and test time using the perm_mask input.

Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, which are selected with the target_mapping input.

To use XLNet for sequential decoding (i.e., not in a fully bidirectional setting), use the perm_mask and target_mapping inputs to control the attention span and the outputs (see examples in examples/run_generation.py).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. Indices can be obtained using pytorch_transformers.XLNetTokenizer. See pytorch_transformers.PreTrainedTokenizer.encode() and pytorch_transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

token_type_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

A parallel sequence of tokens (can be used to indicate various portions of the inputs). The embeddings from these tokens will be summed with the respective token embeddings. Indices are selected in the vocabulary (unlike BERT which has a specific vocabulary for segment indices).

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

input_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding. Kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

mems: (optional)

list of torch.FloatTensor (one for each layer) that contains pre-computed hidden-states (keys and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding and attend to a longer context.

perm_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length, sequence_length):

Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

target_mapping: (optional) torch.FloatTensor of shape (batch_size, num_predict, sequence_length):

Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
last_hidden_state: torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)

Sequence of hidden-states at the last layer of the model.

mems:

list of torch.FloatTensor (one for each layer) that contains pre-computed hidden-states (keys and values in the attention blocks) as computed by the model (see mems input above). Can be used to speed up sequential decoding and attend to a longer context.

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from pytorch_transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetModel.from_pretrained('xlnet-large-cased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
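
A minimal sketch of reusing mems across two consecutive segments, as described for the mems input/output above. This assumes the model is configured with a non-zero mem_len so hidden states are actually cached; the segment texts are illustrative, and the exact keyword handling of from_pretrained may differ slightly between library versions:

import torch
from pytorch_transformers import XLNetConfig, XLNetModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
config = XLNetConfig.from_pretrained('xlnet-large-cased')
config.mem_len = 1024  # assumption: enable caching of hidden states across forward passes
model = XLNetModel.from_pretrained('xlnet-large-cased', config=config)

ids_1 = torch.tensor(tokenizer.encode("The first segment of a long document.")).unsqueeze(0)
ids_2 = torch.tensor(tokenizer.encode("The second segment continues it.")).unsqueeze(0)

hidden_1, mems = model(ids_1)[:2]             # mems: one cached tensor per layer
hidden_2, mems = model(ids_2, mems=mems)[:2]  # the second segment attends to the cached context
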
cache_mem(curr_out, prev_mem)[source]

Cache hidden states into memory.

create_mask(qlen, mlen)[source]

Creates causal attention mask. Float mask where 1.0 indicates masked, 0.0 indicates not-masked.

Parameters
  • qlen – length of the current query segment (the number of tokens being processed).

  • mlen – length of the cached memory (the number of tokens kept in mems).

      same_length=False:      same_length=True:
      <mlen > <  qlen >       <mlen > <  qlen >
   ^ [0 0 0 0 0 1 1 1 1]     [0 0 0 0 0 1 1 1 1]
     [0 0 0 0 0 0 1 1 1]     [1 0 0 0 0 0 1 1 1]
qlen [0 0 0 0 0 0 0 1 1]     [1 1 0 0 0 0 0 1 1]
     [0 0 0 0 0 0 0 0 1]     [1 1 1 0 0 0 0 0 1]
   v [0 0 0 0 0 0 0 0 0]     [1 1 1 1 0 0 0 0 0]
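
A minimal sketch of the diagram above, reusing the model instance from the example (qlen=5 and mlen=4 match the matrices shown):

mask = model.create_mask(qlen=5, mlen=4)  # float mask laid out as in the left-hand (same_length=False) matrix; 1.0 = masked
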
forward(input_ids, token_type_ids=None, input_mask=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, head_mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

relative_positional_encoding(qlen, klen, bsz=None)[source]

Create the relative positional encoding.

XLNetLMHeadModel

class pytorch_transformers.XLNetLMHeadModel(config)[source]

XLNet Model with a language modeling head on top (linear layer with weights tied to the input embeddings). The XLNet model was proposed in XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLNet is an extension of the Transformer-XL model, pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order.

The specific attention pattern can be controlled at training and test time using the perm_mask input.

Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, which are selected with the target_mapping input.

To use XLNet for sequential decoding (i.e., not in a fully bidirectional setting), use the perm_mask and target_mapping inputs to control the attention span and the outputs (see examples in examples/run_generation.py).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. Indices can be obtained using pytorch_transformers.XLNetTokenizer. See pytorch_transformers.PreTrainedTokenizer.encode() and pytorch_transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

token_type_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

A parallel sequence of tokens (can be used to indicate various portions of the inputs). The embeddings from these tokens will be summed with the respective token embeddings. Indices are selected in the vocabulary (unlike BERT which has a specific vocabulary for segment indices).

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

input_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding. Kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

mems: (optional)

list of torch.FloatTensor (one for each layer) that contains pre-computed hidden-states (keys and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding and attend to a longer context.

perm_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length, sequence_length):

Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

target_mapping: (optional) torch.FloatTensor of shape (batch_size, num_predict, sequence_length):

Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

labels: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-1, 0, ..., config.vocab_size]. All labels set to -1 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when labels is provided) torch.FloatTensor of shape (1,):

Language modeling loss.

prediction_scores: torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

mems:

list of torch.FloatTensor (one for each layer) that contains pre-computed hidden-states (keys and values in the attention blocks) as computed by the model (see mems input above). Can be used to speed up sequential decoding and attend to a longer context.

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from pytorch_transformers import XLNetTokenizer, XLNetLMHeadModel

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-large-cased')
# We show how to set up inputs to predict a next token using a bi-directional context.
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very <mask>")).unsqueeze(0)  # We will predict the masked token
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0  # Previous tokens don't see the last token
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)  # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0  # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs[0]  # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
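
Continuing the example, a minimal sketch of turning the logits into a predicted token. Greedy argmax is used here for brevity; sampling or top-k filtering would normally be used instead, as in examples/run_generation.py:

predicted_index = torch.argmax(next_token_logits[0, 0, :]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
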
forward(input_ids, token_type_ids=None, input_mask=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, labels=None, head_mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

tie_weights()[source]

Make sure we are sharing the input embeddings and the language modeling head weights.

XLNetForSequenceClassification

class pytorch_transformers.XLNetForSequenceClassification(config)[source]

XLNet Model with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks. The XLNet model was proposed in XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLNet is an extension of the Transformer-XL model, pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order.

The specific attention pattern can be controlled at training and test time using the perm_mask input.

Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, which are selected with the target_mapping input.

To use XLNet for sequential decoding (i.e., not in a fully bidirectional setting), use the perm_mask and target_mapping inputs to control the attention span and the outputs (see examples in examples/run_generation.py).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. Indices can be obtained using pytorch_transformers.XLNetTokenizer. See pytorch_transformers.PreTrainedTokenizer.encode() and pytorch_transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

token_type_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

A parallel sequence of tokens (can be used to indicate various portions of the inputs). The embeddings from these tokens will be summed with the respective token embeddings. Indices are selected in the vocabulary (unlike BERT which has a specific vocabulary for segment indices).

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

input_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding. Kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

mems: (optional)

list of torch.FloatTensor (one for each layer) that contains pre-computed hidden-states (keys and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding and attend to a longer context.

perm_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length, sequence_length):

Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

target_mapping: (optional) torch.FloatTensor of shape (batch_size, num_predict, sequence_length):

Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

labels: (optional) torch.LongTensor of shape (batch_size,):

Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when labels is provided) torch.FloatTensor of shape (1,):

Classification (or regression if config.num_labels==1) loss.

logits: torch.FloatTensor of shape (batch_size, config.num_labels)

Classification (or regression if config.num_labels==1) scores (before SoftMax).

mems:

list of torch.FloatTensor (one for each layer) that contains pre-computed hidden-states (keys and values in the attention blocks) as computed by the model (see mems input above). Can be used to speed up sequential decoding and attend to a longer context.

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from pytorch_transformers import XLNetTokenizer, XLNetForSequenceClassification

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetForSequenceClassification.from_pretrained('xlnet-large-cased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
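
A minimal sketch of the regression case described above (config.num_labels == 1 gives a Mean-Square loss). The float label is illustrative, and the exact way of overriding the configuration may differ slightly between library versions:

import torch
from pytorch_transformers import XLNetConfig, XLNetTokenizer, XLNetForSequenceClassification

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
config = XLNetConfig.from_pretrained('xlnet-large-cased')
config.num_labels = 1  # a single output => regression head
model = XLNetForSequenceClassification.from_pretrained('xlnet-large-cased', config=config)

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
labels = torch.tensor([2.5]).unsqueeze(0)  # float target, e.g. an illustrative similarity score
loss, logits = model(input_ids, labels=labels)[:2]
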
forward(input_ids, token_type_ids=None, input_mask=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, labels=None, head_mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

XLNetForQuestionAnswering

class pytorch_transformers.XLNetForQuestionAnswering(config)[source]

XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). The XLNet model was proposed in XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLNet is an extension of the Transformer-XL model, pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order.

The specific attention pattern can be controlled at training and test time using the perm_mask input.

Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, which are selected with the target_mapping input.

To use XLNet for sequential decoding (i.e., not in a fully bidirectional setting), use the perm_mask and target_mapping inputs to control the attention span and the outputs (see examples in examples/run_generation.py).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLNetConfig) – Model configuration class with all the parameters of the model.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. Indices can be obtained using pytorch_transformers.XLNetTokenizer. See pytorch_transformers.PreTrainedTokenizer.encode() and pytorch_transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

token_type_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

A parallel sequence of tokens (can be used to indicate various portions of the inputs). The embeddings from these tokens will be summed with the respective token embeddings. Indices are selected in the vocabulary (unlike BERT which has a specific vocabulary for segment indices).

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

input_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for real tokens and 1 for padding. Kept for compatibility with the original code base. You can only use one of input_mask and attention_mask. Mask values selected in [0, 1]: 1 for tokens that are MASKED, 0 for tokens that are NOT MASKED.

mems: (optional)

list of torch.FloatTensor (one for each layer) that contains pre-computed hidden-states (keys and values in the attention blocks) as computed by the model (see mems output below). Can be used to speed up sequential decoding and attend to a longer context.

perm_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length, sequence_length):

Mask to indicate the attention pattern for each input token with values selected in [0, 1]: if perm_mask[k, i, j] = 0, i attends to j in batch k; if perm_mask[k, i, j] = 1, i does not attend to j in batch k. If None, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

target_mapping: (optional) torch.FloatTensor of shape (batch_size, num_predict, sequence_length):

Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

start_positions: (optional) torch.LongTensor of shape (batch_size,):

Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

end_positions: (optional) torch.LongTensor of shape (batch_size,):

Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

is_impossible: (optional) torch.LongTensor of shape (batch_size,):

Labels indicating whether a question has an answer or no answer (SQuAD 2.0).

cls_index: (optional) torch.LongTensor of shape (batch_size,):

Labels for position (index) of the classification token to use as input for computing plausibility of the answer.

p_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Optional mask of tokens which can’t be in answers (e.g. [CLS], [PAD], …). 1.0 means the token should be masked; 0.0 means the token is not masked.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned if both start_positions and end_positions are provided) torch.FloatTensor of shape (1,):

Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses.

start_top_log_probs: (optional, returned if start_positions or end_positions is not provided)

torch.FloatTensor of shape (batch_size, config.start_n_top) Log probabilities for the top config.start_n_top start token possibilities (beam-search).

start_top_index: (optional, returned if start_positions or end_positions is not provided)

torch.LongTensor of shape (batch_size, config.start_n_top) Indices for the top config.start_n_top start token possibilities (beam-search).

end_top_log_probs: (optional, returned if start_positions or end_positions is not provided)

torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top) Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).

end_top_index: (optional, returned if start_positions or end_positions is not provided)

torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top) Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).

cls_logits: (optional, returned if start_positions or end_positions is not provided)

torch.FloatTensor of shape (batch_size,) Log probabilities for the is_impossible label of the answers.

mems:

list of torch.FloatTensor (one for each layer) that contains pre-computed hidden-states (keys and values in the attention blocks) as computed by the model (see mems input above). Can be used to speed up sequential decoding and attend to a longer context.

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from pytorch_transformers import XLNetTokenizer, XLNetForQuestionAnswering

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetForQuestionAnswering.from_pretrained('xlnet-large-cased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions)
loss = outputs[0]  # when start/end positions are provided, the first output is the total span loss
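
Continuing the example, a minimal sketch of the inference path described in the outputs above: when start_positions/end_positions are not provided, the model returns the top candidate start/end positions and the answerability logits instead of a loss:

outputs = model(input_ids)
(start_top_log_probs, start_top_index,
 end_top_log_probs, end_top_index, cls_logits) = outputs[:5]
# start_top_*: shape (batch_size, config.start_n_top)
# end_top_*:   shape (batch_size, config.start_n_top * config.end_n_top)
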
forward(input_ids, token_type_ids=None, input_mask=None, attention_mask=None, mems=None, perm_mask=None, target_mapping=None, start_positions=None, end_positions=None, cls_index=None, is_impossible=None, p_mask=None, head_mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.