Longformer
DISCLAIMER: This model is still a work in progress; if you see something strange, file a GitHub issue.
Overview
The Longformer model was presented in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, and Arman Cohan.
The abstract from the paper is the following:
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer’s attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA.
Tips:
- Since the Longformer is based on RoBERTa, it doesn’t have token_type_ids. You don’t need to indicate which token belongs to which segment; just separate your segments with the separation token tokenizer.sep_token (or </s>).
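As a quick illustration, here is a minimal sketch (using the standard allenai/longformer-base-4096 checkpoint) showing that passing a pair of segments to the tokenizer inserts the separator tokens for you:

from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
question = 'What does Longformer replace?'
context = 'Longformer replaces standard self-attention with a windowed attention mechanism.'
inputs = tokenizer(question, context)  # segments are joined as <s> question </s></s> context </s>
print(tokenizer.decode(inputs['input_ids']))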
This model was contributed by beltagy. The authors’ code can be found here.
Longformer Self Attention
Longformer self attention employs self attention on both a “local” context and a “global” context. Most tokens only attend “locally” to each other, meaning that each token attends to its \(\frac{1}{2} w\) previous tokens and \(\frac{1}{2} w\) succeeding tokens, with \(w\) being the window length as defined in config.attention_window. Note that config.attention_window can be of type List to define a different \(w\) for each layer. A selected few tokens attend “globally” to all other tokens, as is conventionally done for all tokens in BertSelfAttention.
Note that “locally” and “globally” attending tokens are projected by different query, key and value matrices. Also note that every “locally” attending token not only attends to tokens within its window, but also to all “globally” attending tokens so that global attention is symmetric.
The user can define which tokens attend “locally” and which tokens attend “globally” by setting the tensor global_attention_mask at run-time appropriately. All Longformer models employ the following logic for global_attention_mask:
- 0: the token attends “locally”,
- 1: the token attends “globally”.
For more information please also refer to the forward() method.
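For example, a minimal sketch (the token ids are made up purely for illustration) that puts global attention on the first token, as one would typically do for classification:

import torch

input_ids = torch.tensor([[0, 9064, 16, 10, 1296, 2]])  # <s> ... </s>; illustrative ids
global_attention_mask = torch.zeros_like(input_ids)  # 0: every token attends locally
global_attention_mask[:, 0] = 1  # 1: the <s> token attends globally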
Using Longformer self attention, the memory and time complexity of the query-key matmul operation, which usually represents the memory and time bottleneck, can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times w)\), with \(n_s\) being the sequence length and \(w\) being the average window size. It is assumed that the number of “globally” attending tokens is insignificant as compared to the number of “locally” attending tokens.
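As a rough sanity check of that reduction (plain arithmetic, not a figure from the paper), take a sequence length of 4,096 and a window size of 512:

n_s, w = 4096, 512
full = n_s * n_s  # 16,777,216 query-key scores per head with standard self-attention
windowed = n_s * w  # 2,097,152 scores per head with windowed attention
print(full // windowed)  # 8, i.e. an 8x reduction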
For more information, please refer to the official paper.
Training
LongformerForMaskedLM is trained the exact same way RobertaForMaskedLM is trained and should be used as follows (note that the mask token of the underlying RoBERTa-style tokenizer is <mask>, not [MASK]):

from transformers import LongformerForMaskedLM, LongformerTokenizer
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
model = LongformerForMaskedLM.from_pretrained('allenai/longformer-base-4096')
input_ids = tokenizer.encode('This is a sentence from <mask> training data', return_tensors='pt')
mlm_labels = tokenizer.encode('This is a sentence from the training data', return_tensors='pt')
loss = model(input_ids, labels=mlm_labels).loss
LongformerConfig
( attention_window: typing.Union[typing.List[int], int] = 512 sep_token_id: int = 2 **kwargs )
This is the configuration class to store the configuration of a LongformerModel or a TFLongformerModel. It is used to instantiate a Longformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the RoBERTa roberta-base architecture with a sequence length of 4,096.
The LongformerConfig class directly inherits RobertaConfig. It reuses the same defaults. Please check the parent class for more information.
Example:
>>> from transformers import LongformerConfig, LongformerModel
>>> # Initializing a Longformer configuration
>>> configuration = LongformerConfig()
>>> # Initializing a model from the configuration
>>> model = LongformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
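Because attention_window accepts either a single int or one value per layer, a per-layer window can be sketched as follows (the window values here are purely illustrative):

>>> # Hypothetical per-layer attention windows for the 12-layer base architecture
>>> configuration = LongformerConfig(attention_window=[64, 64, 64, 64, 128, 128, 128, 128, 256, 256, 256, 256])
>>> model = LongformerModel(configuration)
LongformerTokenizer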
( vocab_file merges_file errors = 'replace' bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' add_prefix_space = False **kwargs )
Construct a Longformer tokenizer.
LongformerTokenizer is identical to RobertaTokenizer. Refer to the superclass for usage examples and documentation concerning parameters.
LongformerTokenizerFast
( vocab_file = None merges_file = None tokenizer_file = None errors = 'replace' bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' add_prefix_space = False **kwargs )
Construct a “fast” Longformer tokenizer (backed by HuggingFace’s tokenizers library).
LongformerTokenizerFast is identical to RobertaTokenizerFast. Refer to the superclass for usage examples and documentation concerning parameters.
Longformer specific outputs
( last_hidden_state: FloatTensor hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
-
last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. -
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for Longformer’s outputs, with potential hidden states, local and global attentions.
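To make the indexing scheme described above concrete, here is a hedged sketch of reading a single token's self-attention weight out of attentions, assuming model is a LongformerModel and outputs comes from a forward pass run with output_attentions=True:

w = model.config.attention_window
w = w[0] if isinstance(w, list) else w  # window size of the first layer
attn = outputs.attentions[0]  # layer 0: (batch_size, num_heads, sequence_length, x + w + 1)
x = attn.shape[-1] - w - 1  # number of tokens with global attention
self_weight = attn[0, 0, 10, x + w // 2]  # token 10's attention weight to itself (head 0)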
( last_hidden_state: FloatTensor pooler_output: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
-
last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. -
pooler_output (
torch.FloatTensorof shape(batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. -
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for Longformer’s outputs that also contains a pooling of the last hidden states.
( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
-
loss (
torch.FloatTensorof shape(1,), optional, returned whenlabelsis provided) — Masked language modeling (MLM) loss. -
logits (
torch.FloatTensorof shape(batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). -
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for masked language models outputs.
( loss: typing.Optional[torch.FloatTensor] = None start_logits: FloatTensor = None end_logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
-
loss (
torch.FloatTensorof shape(1,), optional, returned whenlabelsis provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. -
start_logits (
torch.FloatTensorof shape(batch_size, sequence_length)) — Span-start scores (before SoftMax). -
end_logits (
torch.FloatTensorof shape(batch_size, sequence_length)) — Span-end scores (before SoftMax). -
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for outputs of question answering Longformer models.
( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
-
loss (
torch.FloatTensorof shape(1,), optional, returned whenlabelsis provided) — Classification (or regression if config.num_labels==1) loss. -
logits (
torch.FloatTensorof shape(batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). -
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for outputs of sentence classification models.
( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
-
loss (
torch.FloatTensorof shape (1,), optional, returned whenlabelsis provided) — Classification loss. -
logits (
torch.FloatTensorof shape(batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).Classification scores (before SoftMax).
-
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for outputs of multiple choice Longformer models.
( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
-
loss (
torch.FloatTensorof shape(1,), optional, returned whenlabelsis provided) — Classification loss. -
logits (
torch.FloatTensorof shape(batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). -
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for outputs of token classification models.
( last_hidden_state: Tensor = None hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None global_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None )
Parameters
-
last_hidden_state (
tf.Tensorof shape(batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. -
hidden_states (
tuple(tf.Tensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for Longformer’s outputs, with potential hidden states, local and global attentions.
( last_hidden_state: Tensor = None pooler_output: Tensor = None hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None global_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None )
Parameters
-
last_hidden_state (
tf.Tensorof shape(batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. -
pooler_output (
tf.Tensorof shape(batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. -
hidden_states (
tuple(tf.Tensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for Longformer’s outputs that also contains a pooling of the last hidden states.
( loss: typing.Optional[tensorflow.python.framework.ops.Tensor] = None logits: Tensor = None hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None global_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None )
Parameters
-
loss (
tf.Tensorof shape(1,), optional, returned whenlabelsis provided) — Masked language modeling (MLM) loss. -
logits (
tf.Tensorof shape(batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). -
hidden_states (
tuple(tf.Tensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for masked language models outputs.
( loss: typing.Optional[tensorflow.python.framework.ops.Tensor] = None start_logits: Tensor = None end_logits: Tensor = None hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None global_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None )
Parameters
-
loss (
tf.Tensorof shape(1,), optional, returned whenlabelsis provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. -
start_logits (
tf.Tensorof shape(batch_size, sequence_length)) — Span-start scores (before SoftMax). -
end_logits (
tf.Tensorof shape(batch_size, sequence_length)) — Span-end scores (before SoftMax). -
hidden_states (
tuple(tf.Tensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for outputs of question answering Longformer models.
( loss: typing.Optional[tensorflow.python.framework.ops.Tensor] = None logits: Tensor = None hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None global_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None )
Parameters
-
loss (
tf.Tensorof shape(1,), optional, returned whenlabelsis provided) — Classification (or regression if config.num_labels==1) loss. -
logits (
tf.Tensorof shape(batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). -
hidden_states (
tuple(tf.Tensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for outputs of sentence classification models.
( loss: typing.Optional[tensorflow.python.framework.ops.Tensor] = None logits: Tensor = None hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None global_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None )
Parameters
-
loss (
tf.Tensorof shape (1,), optional, returned whenlabelsis provided) — Classification loss. -
logits (
tf.Tensorof shape(batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).Classification scores (before SoftMax).
-
hidden_states (
tuple(tf.Tensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for outputs of multiple choice models.
( loss: typing.Optional[tensorflow.python.framework.ops.Tensor] = None logits: Tensor = None hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None global_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None )
Parameters
-
loss (
tf.Tensorof shape(1,), optional, returned whenlabelsis provided) — Classification loss. -
logits (
tf.Tensorof shape(batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). -
hidden_states (
tuple(tf.Tensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(tf.Tensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
Base class for outputs of token classification models.
LongformerModel
( config add_pooling_layer = True )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Longformer Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
This class copied code from RobertaModel and overwrote standard self-attention with Longformer self-attention to provide the ability to process long sequences following the self-attention approach described in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer self-attention combines a local (sliding window) and global attention to extend to long documents without the O(n^2) increase in memory and compute.
The self-attention module LongformerSelfAttention implemented here supports the combination of local and global attention, but it lacks support for autoregressive attention and dilated attention. Autoregressive and dilated attention are more relevant for autoregressive language modeling than finetuning on downstream tasks. A future release will add support for autoregressive attention, but support for dilated attention requires a custom CUDA kernel to be memory and compute efficient.
(
input_ids = None
attention_mask = None
global_attention_mask = None
head_mask = None
token_type_ids = None
position_ids = None
inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
LongformerBaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
-
input_ids (
torch.LongTensorof shape(batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
-
attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
-
global_attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional) — Mask to decide the attention given on each token, local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the <s> token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in[0, 1]:- 0 for local attention (a sliding window attention),
- 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
-
head_mask (
torch.Tensorof shape(num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in[0, 1]:- 1 indicates the head is not masked,
- 0 indicates the head is masked.
-
token_type_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0, 1]:- 0 corresponds to a sentence A token,
- 1 corresponds to a sentence B token.
-
position_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.max_position_embeddings - 1]. -
inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_idsindices into associated vectors than the model’s internal embedding lookup matrix. -
output_attentions (
bool, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returned tensors for more detail. -
output_hidden_states (
bool, optional) — Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors for more detail. -
return_dict (
bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
LongformerBaseModelOutputWithPooling or tuple(torch.FloatTensor)
A LongformerBaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising
various elements depending on the configuration (LongformerConfig) and inputs.
-
last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. -
pooler_output (
torch.FloatTensorof shape(batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. -
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x + attention_window + 1), wherexis the number of tokens with global attention mask.Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first
xvalues) and to every token in the attention window (remainingattention_window + 1values). Note that the firstxvalues refer to tokens with fixed positions in the text, but the remainingattention_window + 1values refer to tokens with relative positions: the attention weight of a token to itself is located at indexx + attention_window / 2and theattention_window / 2preceding (succeeding) values are the attention weights to theattention_window / 2preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the firstxattention weights. If a token has global attention, the attention weights to all other tokens inattentionsis set to 0, the values should be accessed fromglobal_attentions. -
global_attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, x), wherexis the number of tokens with global attention mask.Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
The LongformerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post
processing steps while the latter silently ignores them.
Examples:
>>> import torch
>>> from transformers import LongformerModel, LongformerTokenizer
>>> model = LongformerModel.from_pretrained('allenai/longformer-base-4096')
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> SAMPLE_TEXT = ' '.join(['Hello world! '] * 1000) # long input document
>>> input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1
>>> attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention
>>> global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize global attention as deactivated for all tokens
>>> global_attention_mask[:, [1, 4, 21,]] = 1 # Set global attention to random tokens for the sake of this example
... # Usually, set global attention based on the task. For example,
... # classification: the <s> token
... # QA: question tokens
... # LM: potentially on the beginning of sentences and paragraphs
>>> outputs = model(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)
>>> sequence_output = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output
LongformerForMaskedLM
( config )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
(
input_ids = None
attention_mask = None
global_attention_mask = None
head_mask = None
token_type_ids = None
position_ids = None
inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
LongformerMaskedLMOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to decide the attention given to each token, local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the <s> token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in [0, 1]:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- head_mask (torch.Tensor of shape (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
- kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated.
Returns
LongformerMaskedLMOutput or tuple(torch.FloatTensor)
A LongformerMaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising
various elements depending on the configuration (LongformerConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
- logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first x values) and to every token in the attention window (remaining attention_window + 1 values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first x attention weights. If a token has global attention, the attention weights to all other tokens in attentions are set to 0; the values should be accessed from global_attentions.
- global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
The LongformerForMaskedLM forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling forward() directly, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> import torch
>>> from transformers import LongformerForMaskedLM, LongformerTokenizer
>>> model = LongformerForMaskedLM.from_pretrained('allenai/longformer-base-4096')
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> SAMPLE_TEXT = ' '.join(['Hello world! '] * 1000) # long input document
>>> input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1
>>> attention_mask = None # default is local attention everywhere, which is a good choice for MaskedLM
...                      # check `LongformerModel.forward` for more details on how to set `attention_mask`
>>> outputs = model(input_ids, attention_mask=attention_mask, labels=input_ids)
>>> loss = outputs.loss
>>> prediction_logits = outputs.logits
LongformerForSequenceClassification
( config )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
(
input_ids = None
attention_mask = None
global_attention_mask = None
head_mask = None
token_type_ids = None
position_ids = None
inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
LongformerSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to decide the attention given to each token, local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the <s> token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in [0, 1]:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- head_mask (torch.Tensor of shape (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
LongformerSequenceClassifierOutput or tuple(torch.FloatTensor)
A LongformerSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising
various elements depending on the configuration (LongformerConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
- logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first x values) and to every token in the attention window (remaining attention_window + 1 values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first x attention weights. If a token has global attention, the attention weights to all other tokens in attentions are set to 0; the values should be accessed from global_attentions.
- global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
The LongformerForSequenceClassification forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling forward() directly, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:
>>> from transformers import LongformerTokenizer, LongformerForSequenceClassification
>>> import torch
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = LongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
Example of multi-label classification:
>>> from transformers import LongformerTokenizer, LongformerForSequenceClassification
>>> import torch
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = LongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([[1, 1]], dtype=torch.float) # need dtype=float for BCEWithLogitsLoss
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
LongformerForMultipleChoice
( config )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
(
input_ids = None
token_type_ids = None
attention_mask = None
global_attention_mask = None
head_mask = None
labels = None
position_ids = None
inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
LongformerMultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- global_attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to decide the attention given to each token, local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the <s> token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in [0, 1]:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- head_mask (torch.Tensor of shape (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1], where num_choices is the size of the second dimension of the input tensors (see input_ids above).
Returns
LongformerMultipleChoiceModelOutput or tuple(torch.FloatTensor)
A LongformerMultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising
various elements depending on the configuration (LongformerConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
- logits (torch.FloatTensor of shape (batch_size, num_choices)) — Classification scores (before SoftMax). num_choices is the second dimension of the input tensors (see input_ids above).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first x values) and to every token in the attention window (remaining attention_window + 1 values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first x attention weights. If a token has global attention, the attention weights to all other tokens in attentions are set to 0; the values should be accessed from global_attentions.
- global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
The LongformerForMultipleChoice forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling forward() directly, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import LongformerTokenizer, LongformerForMultipleChoice
>>> import torch
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = LongformerForMultipleChoice.from_pretrained('allenai/longformer-base-4096')
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k,v in encoding.items()}, labels=labels) # batch size is 1
>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
LongformerForTokenClassification
( config )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
(
input_ids = None
attention_mask = None
global_attention_mask = None
head_mask = None
token_type_ids = None
position_ids = None
inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
LongformerTokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to decide the attention given to each token, local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the <s> token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in [0, 1]:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- head_mask (torch.Tensor of shape (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
LongformerTokenClassifierOutput or tuple(torch.FloatTensor)
A LongformerTokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising
various elements depending on the configuration (LongformerConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
- logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first x values) and to every token in the attention window (remaining attention_window + 1 values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first x attention weights. If a token has global attention, the attention weights to all other tokens in attentions are set to 0; the values should be accessed from global_attentions.
- global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
The LongformerForTokenClassification forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling forward() directly, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import LongformerTokenizer, LongformerForTokenClassification
>>> import torch
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = LongformerForTokenClassification.from_pretrained('allenai/longformer-base-4096')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1] * inputs["input_ids"].size(1)).unsqueeze(0) # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
LongformerForQuestionAnswering
( config )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a span classification head on top for extractive question-answering tasks like SQuAD / TriviaQA (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
(
input_ids = None
attention_mask = None
global_attention_mask = None
head_mask = None
token_type_ids = None
position_ids = None
inputs_embeds = None
start_positions = None
end_positions = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
LongformerQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to decide the attention given to each token, local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the <s> token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in [0, 1]:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- head_mask (torch.Tensor of shape (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
- end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
Returns
LongformerQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A LongformerQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising
various elements depending on the configuration (LongformerConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
- start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
- end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first x values) and to every token in the attention window (remaining attention_window + 1 values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first x attention weights. If a token has global attention, the attention weights to all other tokens in attentions are set to 0; the values should be accessed from global_attentions.
- global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence.
The LongformerForQuestionAnswering forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling forward() directly, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from transformers import LongformerTokenizer, LongformerForQuestionAnswering
>>> import torch
>>> tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
>>> model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> encoding = tokenizer(question, text, return_tensors="pt")
>>> input_ids = encoding["input_ids"]
>>> # default is local attention everywhere
>>> # the forward method will automatically set global attention on question tokens
>>> attention_mask = encoding["attention_mask"]
>>> outputs = model(input_ids, attention_mask=attention_mask)
>>> start_logits = outputs.start_logits
>>> end_logits = outputs.end_logits
>>> all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
>>> answer_tokens = all_tokens[torch.argmax(start_logits) : torch.argmax(end_logits) + 1]
>>> answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))  # decoding strips the space that is prepended to the answer tokens
TFLongformerModel
( *args **kwargs )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Longformer Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TF 2.0 models accept two formats as inputs:
- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).
If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
- a single Tensor with input_ids only and nothing else: model(input_ids)
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
- a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
This class copies code from TFRobertaModel and overwrites standard self-attention with Longformer self-attention to provide the ability to process long sequences, following the self-attention approach described in Longformer: the Long-Document Transformer by Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer self-attention combines local (sliding window) and global attention to extend to long documents without the O(n^2) increase in memory and compute.
The self-attention module TFLongformerSelfAttention implemented here supports the combination of local and global attention, but it lacks support for autoregressive attention and dilated attention. Autoregressive and dilated attention are more relevant for autoregressive language modeling than for finetuning on downstream tasks. A future release will add support for autoregressive attention; support for dilated attention requires a custom CUDA kernel to be memory- and compute-efficient.
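As a rough back-of-the-envelope illustration of that saving (illustrative numbers, not measurements): with a 4096-token sequence and a 512-token attention window, the number of attention score entries per head drops by roughly 8x:

>>> seq_len, window = 4096, 512
>>> full_self_attention = seq_len * seq_len  # 16777216 entries per head with O(n^2) self-attention
>>> longformer_local = seq_len * (window + 1)  # 2101248 entries per head with sliding window attention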
( input_ids = None attention_mask = None head_mask = None global_attention_mask = None token_type_ids = None position_ids = None inputs_embeds = None output_attentions = None output_hidden_states = None return_dict = None training = False **kwargs )
Parameters
- input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.
- attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- global_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to decide the attention given to each token, local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the <s> token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in [0, 1]:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- token_type_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
- training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
The TFLongformerModel forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling forward() directly, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
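Example (a minimal usage sketch, mirroring the PyTorch example above; the allenai/longformer-base-4096 checkpoint is assumed, and a global_attention_mask can be passed exactly as shown there):

>>> from transformers import TFLongformerModel, LongformerTokenizer
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = TFLongformerModel.from_pretrained('allenai/longformer-base-4096')
>>> SAMPLE_TEXT = ' '.join(['Hello world! '] * 1000)  # long input document
>>> inputs = tokenizer(SAMPLE_TEXT, return_tensors='tf')
>>> outputs = model(inputs)  # local attention everywhere by default
>>> sequence_output = outputs.last_hidden_state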
TFLongformerForMaskedLM
( *args **kwargs )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TF 2.0 models accept two formats as inputs:
- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).
If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
- a single Tensor with input_ids only and nothing else: model(input_ids)
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
- a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
(
input_ids = None
attention_mask = None
head_mask = None
global_attention_mask = None
token_type_ids = None
position_ids = None
inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
labels = None
training = False
**kwargs
)
→
TFLongformerMaskedLMOutput or tuple(tf.Tensor)
Parameters
- input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.
- attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- global_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to decide the attention given to each token, local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the <s> token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in [0, 1]:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- token_type_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
- training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
- labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns

TFLongformerMaskedLMOutput or tuple(tf.Tensor)

A TFLongformerMaskedLMOutput or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (LongformerConfig) and inputs.

- `loss` (`tf.Tensor` of shape `(1,)`, optional, returned when `labels` is provided) — Masked language modeling (MLM) loss.
- `logits` (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- `hidden_states` (`tuple(tf.Tensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- `attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + attention_window + 1)`, where `x` is the number of tokens with global attention. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token in the sequence to every token with global attention (first `x` values) and to every token in the attention window (remaining `attention_window + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, while the remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a token to itself is located at index `x + attention_window / 2`, and the `attention_window / 2` preceding (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first `x` attention weights instead. If a token has global attention, its attention weights to all other tokens in `attentions` are set to 0; the values should be accessed from `global_attentions`. See the indexing sketch after this list.
- `global_attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token with global attention to every token in the sequence.
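To make the indexing concrete, here is a small sketch (an illustration under the layout described above, not official API usage) of reading a token's attention weight to itself from one layer of `attentions`; `outputs` is assumed to come from a forward pass with `output_attentions=True`:

x = 1  # assumption: only <s> has global attention
local = outputs.attentions[0]                      # (batch, heads, seq_len, x + attention_window + 1)
attention_window = int(local.shape[-1]) - 1 - x    # recover the window size from the last dimension
t = 10                                             # an arbitrary token position
self_weight = local[0, 0, t, x + attention_window // 2]  # token t's weight to itself (head 0)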
The TFLongformerForMaskedLM forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import LongformerTokenizer, TFLongformerForMaskedLM
>>> import tensorflow as tf
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = TFLongformerForMaskedLM.from_pretrained('allenai/longformer-base-4096')
>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
>>> inputs["labels"] = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
>>> outputs = model(inputs)
>>> loss = outputs.loss
>>> logits = outputs.logits

TFLongformerForQuestionAnswering
( *args **kwargs )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a span classification head on top for extractive question-answering tasks like SQuAD / TriviaQA (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TF 2.0 models accept two formats as inputs:
- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).
If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
(
input_ids = None
attention_mask = None
head_mask = None
global_attention_mask = None
token_type_ids = None
position_ids = None
inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
start_positions = None
end_positions = None
training = False
**kwargs
)
→ TFLongformerQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters

- `input_ids` (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.
- `attention_mask` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- `head_mask` (`tf.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, optional) — Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- `global_attention_mask` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Mask to decide the attention pattern for each token: local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the `<s>` token should be given global attention. For QA, all question tokens should also have global attention; a sketch follows this parameter list. Please refer to the Longformer paper for more details. Mask values selected in `[0, 1]`:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- `token_type_ids` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- `position_ids` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.
- `inputs_embeds` (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, optional) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- `output_attentions` (`bool`, optional) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- `output_hidden_states` (`bool`, optional) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- `return_dict` (`bool`, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- `training` (`bool`, optional, defaults to `False`) — Whether or not to use the model in training mode (some modules, like dropout, behave differently during training and evaluation).
- `start_positions` (`tf.Tensor` of shape `(batch_size,)`, optional) — Labels for the position (index) of the start of the labelled span, used to compute the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`); positions outside of the sequence are not taken into account for computing the loss.
- `end_positions` (`tf.Tensor` of shape `(batch_size,)`, optional) — Labels for the position (index) of the end of the labelled span, used to compute the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`); positions outside of the sequence are not taken into account for computing the loss.
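As a concrete illustration of the QA convention above, a minimal sketch (an assumption-based illustration, not the model's internal code) that marks every token up to and including the first `</s>` separator as global; `input_ids` is assumed to come from `tokenizer(question, text, return_tensors='tf')`:

import tensorflow as tf

# input_ids: (batch, seq_len); tokenizer is a LongformerTokenizer assumed in scope
is_sep = tf.cast(tf.equal(input_ids, tokenizer.sep_token_id), tf.int32)
first_sep = tf.cast(tf.argmax(is_sep, axis=1), tf.int32)    # position of the first </s> per row
positions = tf.range(tf.shape(input_ids)[1])[None, :]       # (1, seq_len)
global_attention_mask = tf.cast(positions <= first_sep[:, None], tf.int32)  # 1 on question tokens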
Returns

TFLongformerQuestionAnsweringModelOutput or tuple(tf.Tensor)

A TFLongformerQuestionAnsweringModelOutput or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (LongformerConfig) and inputs.

- `loss` (`tf.Tensor` of shape `(1,)`, optional, returned when `labels` is provided) — Total span extraction loss: the sum of the Cross-Entropy losses for the start and end positions.
- `start_logits` (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Span-start scores (before SoftMax).
- `end_logits` (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Span-end scores (before SoftMax).
- `hidden_states` (`tuple(tf.Tensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- `attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + attention_window + 1)`, where `x` is the number of tokens with global attention. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token in the sequence to every token with global attention (first `x` values) and to every token in the attention window (remaining `attention_window + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, while the remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a token to itself is located at index `x + attention_window / 2`, and the `attention_window / 2` preceding (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first `x` attention weights instead. If a token has global attention, its attention weights to all other tokens in `attentions` are set to 0; the values should be accessed from `global_attentions`.
- `global_attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token with global attention to every token in the sequence.
The TFLongformerForQuestionAnswering forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import LongformerTokenizer, TFLongformerForQuestionAnswering
>>> import tensorflow as tf
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-large-4096-finetuned-triviaqa')
>>> model = TFLongformerForQuestionAnswering.from_pretrained('allenai/longformer-large-4096-finetuned-triviaqa')
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> input_dict = tokenizer(question, text, return_tensors='tf')
>>> outputs = model(input_dict)
>>> start_logits = outputs.start_logits
>>> end_logits = outputs.end_logits
>>> all_tokens = tokenizer.convert_ids_to_tokens(input_dict["input_ids"].numpy()[0])
>>> answer = ' '.join(all_tokens[tf.math.argmax(start_logits, 1)[0] : tf.math.argmax(end_logits, 1)[0]+1])

TFLongformerForSequenceClassification
( *args **kwargs )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TF 2.0 models accept two formats as inputs:
- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).
If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
(
input_ids = None
attention_mask = None
head_mask = None
token_type_ids = None
position_ids = None
global_attention_mask = None
inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
labels = None
training = False
**kwargs
)
→ TFLongformerSequenceClassifierOutput or tuple(tf.Tensor)
Parameters

- `input_ids` (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.
- `attention_mask` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- `head_mask` (`tf.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, optional) — Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- `global_attention_mask` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Mask to decide the attention pattern for each token: local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the `<s>` token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in `[0, 1]`:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- `token_type_ids` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- `position_ids` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.
- `inputs_embeds` (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, optional) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- `output_attentions` (`bool`, optional) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- `output_hidden_states` (`bool`, optional) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- `return_dict` (`bool`, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- `training` (`bool`, optional, defaults to `False`) — Whether or not to use the model in training mode (some modules, like dropout, behave differently during training and evaluation).
- `labels` (`tf.Tensor` of shape `(batch_size,)`, optional) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
Returns

TFLongformerSequenceClassifierOutput or tuple(tf.Tensor)

A TFLongformerSequenceClassifierOutput or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (LongformerConfig) and inputs.

- `loss` (`tf.Tensor` of shape `(1,)`, optional, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- `logits` (`tf.Tensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- `hidden_states` (`tuple(tf.Tensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- `attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + attention_window + 1)`, where `x` is the number of tokens with global attention. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token in the sequence to every token with global attention (first `x` values) and to every token in the attention window (remaining `attention_window + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, while the remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a token to itself is located at index `x + attention_window / 2`, and the `attention_window / 2` preceding (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first `x` attention weights instead. If a token has global attention, its attention weights to all other tokens in `attentions` are set to 0; the values should be accessed from `global_attentions`.
- `global_attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token with global attention to every token in the sequence.
The TFLongformerForSequenceClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import LongformerTokenizer, TFLongformerForSequenceClassification
>>> import tensorflow as tf
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
>>> outputs = model(inputs)
>>> loss = outputs.loss
>>> logits = outputs.logits

TFLongformerForTokenClassification
( *args **kwargs )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TF 2.0 models accept two formats as inputs:
- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).
If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
(
input_ids = None
attention_mask = None
head_mask = None
token_type_ids = None
position_ids = None
global_attention_mask = None
inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
labels = None
training = False
**kwargs
)
→ TFLongformerTokenClassifierOutput or tuple(tf.Tensor)
Parameters

- `input_ids` (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.
- `attention_mask` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- `head_mask` (`tf.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, optional) — Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- `global_attention_mask` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Mask to decide the attention pattern for each token: local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the `<s>` token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in `[0, 1]`:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- `token_type_ids` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- `position_ids` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.
- `inputs_embeds` (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, optional) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- `output_attentions` (`bool`, optional) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- `output_hidden_states` (`bool`, optional) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- `return_dict` (`bool`, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- `training` (`bool`, optional, defaults to `False`) — Whether or not to use the model in training mode (some modules, like dropout, behave differently during training and evaluation).
- `labels` (`tf.Tensor` of shape `(batch_size, sequence_length)`, optional) — Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
Returns

TFLongformerTokenClassifierOutput or tuple(tf.Tensor)

A TFLongformerTokenClassifierOutput or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (LongformerConfig) and inputs.

- `loss` (`tf.Tensor` of shape `(1,)`, optional, returned when `labels` is provided) — Classification loss.
- `logits` (`tf.Tensor` of shape `(batch_size, sequence_length, config.num_labels)`) — Classification scores (before SoftMax).
- `hidden_states` (`tuple(tf.Tensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- `attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + attention_window + 1)`, where `x` is the number of tokens with global attention. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token in the sequence to every token with global attention (first `x` values) and to every token in the attention window (remaining `attention_window + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, while the remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a token to itself is located at index `x + attention_window / 2`, and the `attention_window / 2` preceding (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first `x` attention weights instead. If a token has global attention, its attention weights to all other tokens in `attentions` are set to 0; the values should be accessed from `global_attentions`.
- `global_attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token with global attention to every token in the sequence.
The TFLongformerForTokenClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import LongformerTokenizer, TFLongformerForTokenClassification
>>> import tensorflow as tf
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = TFLongformerForTokenClassification.from_pretrained('allenai/longformer-base-4096')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> input_ids = inputs["input_ids"]
>>> inputs["labels"] = tf.reshape(tf.constant([1] * tf.size(input_ids).numpy()), (-1, tf.size(input_ids))) # Batch size 1
>>> outputs = model(inputs)
>>> loss = outputs.loss
>>> logits = outputs.logits

TFLongformerForMultipleChoice
( *args **kwargs )
Parameters
- config (LongformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TF 2.0 models accept two formats as inputs:
- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).
If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
(
input_ids = None
attention_mask = None
head_mask = None
token_type_ids = None
position_ids = None
global_attention_mask = None
inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
labels = None
training = False
**kwargs
)
→ TFLongformerMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters

- `input_ids` (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using LongformerTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.
- `attention_mask` (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- `head_mask` (`tf.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, optional) — Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- `global_attention_mask` (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, optional) — Mask to decide the attention pattern for each token: local attention or global attention. Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is important for task-specific finetuning because it makes the model more flexible at representing the task. For example, for classification, the `<s>` token should be given global attention. For QA, all question tokens should also have global attention. Please refer to the Longformer paper for more details. Mask values selected in `[0, 1]`:
  - 0 for local attention (a sliding window attention),
  - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
- `token_type_ids` (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- `position_ids` (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.
- `inputs_embeds` (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length, hidden_size)`, optional) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- `output_attentions` (`bool`, optional) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- `output_hidden_states` (`bool`, optional) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- `return_dict` (`bool`, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- `training` (`bool`, optional, defaults to `False`) — Whether or not to use the model in training mode (some modules, like dropout, behave differently during training and evaluation).
- `labels` (`tf.Tensor` of shape `(batch_size,)`, optional) — Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices - 1]`, where `num_choices` is the size of the second dimension of the input tensors (see `input_ids` above).
Returns

TFLongformerMultipleChoiceModelOutput or tuple(tf.Tensor)

A TFLongformerMultipleChoiceModelOutput or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (LongformerConfig) and inputs.

- `loss` (`tf.Tensor` of shape `(1,)`, optional, returned when `labels` is provided) — Classification loss.
- `logits` (`tf.Tensor` of shape `(batch_size, num_choices)`) — Classification scores (before SoftMax); `num_choices` is the second dimension of the input tensors (see `input_ids` above).
- `hidden_states` (`tuple(tf.Tensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- `attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + attention_window + 1)`, where `x` is the number of tokens with global attention. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token in the sequence to every token with global attention (first `x` values) and to every token in the attention window (remaining `attention_window + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, while the remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a token to itself is located at index `x + attention_window / 2`, and the `attention_window / 2` preceding (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first `x` attention weights instead. If a token has global attention, its attention weights to all other tokens in `attentions` are set to 0; the values should be accessed from `global_attentions`.
- `global_attentions` (`tuple(tf.Tensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. These are the attention weights from every token with global attention to every token in the sequence.
The TFLongformerForMultipleChoice forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import LongformerTokenizer, TFLongformerForMultipleChoice
>>> import tensorflow as tf
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = TFLongformerForMultipleChoice.from_pretrained('allenai/longformer-base-4096')
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='tf', padding=True)
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
>>> outputs = model(inputs) # batch size is 1
>>> # the linear classifier still needs to be trained
>>> logits = outputs.logits
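The predicted choice can then be read off the logits; a short follow-up (an addition to the original example):

>>> predicted_choice = int(tf.math.argmax(outputs.logits, axis=-1)[0])  # index of the more plausible choice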