Reformer
Overview
The Reformer model was proposed in the paper Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
The abstract from the paper is the following:
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(Llog(L)), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
This model was contributed by patrickvonplaten. The authors' code can be found here.
Usage tips
- Reformer does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035.
- Use axial position encoding (see below for more details). It is a mechanism that avoids having a huge positional encoding matrix (when the sequence length is very big) by factorizing it into smaller matrices.
- Replace traditional attention with LSH (locality-sensitive hashing) attention (see below for more details). It is a technique that avoids computing the full query-key product in the attention layers.
- Avoid storing the intermediate results of each layer by using reversible transformer layers: the activations are recovered during the backward pass (subtracting the residuals from the input of the next layer gives them back) or recomputed for results inside a given layer (less efficient than storing them, but saves memory).
- Compute the feed-forward operations in chunks rather than over the whole batch (see the sketch after this list).
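The last point maps to the chunking options of ReformerConfig. As a minimal sketch (the values below are illustrative assumptions, not recommended settings), feed-forward chunking can be enabled through chunk_size_feed_forward, inherited from PretrainedConfig, and chunk_size_lm_head, documented further down this page:

>>> from transformers import ReformerConfig, ReformerModelWithLMHead

>>> # Illustrative values: chunking trades a little compute for a lower memory peak.
>>> config = ReformerConfig(
...     is_decoder=True,             # required by ReformerModelWithLMHead
...     chunk_size_feed_forward=64,  # process 64 positions at a time in the feed-forward layers
...     chunk_size_lm_head=64,       # chunk the final language modeling head the same way
... )
>>> model = ReformerModelWithLMHead(config)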
Axial Positional Encodings
Axial Positional Encodings were first implemented in Google’s trax library
and developed by the authors of this model's paper. In models that process very long input sequences, the conventional position id encodings store an embeddings vector of size \(d\), the config.hidden_size, for every position \(1, \ldots, n_s\), with \(n_s\) being config.max_embedding_size. This means that a sequence length of \(n_s = 2^{19} \approx 0.5M\) and a config.hidden_size of \(d = 2^{10} \approx 1000\) would result in a position encoding matrix:

\[X_{i,j}, \text{ with } i \in [1, \ldots, d] \text{ and } j \in [1, \ldots, n_s],\]

which alone has over 500M parameters to store. Axial positional encodings factorize \(X_{i,j}\) into two matrices:

\[X^{1}_{i,j}, \text{ with } i \in [1, \ldots, d^1] \text{ and } j \in [1, \ldots, n_s^1]\]

and

\[X^{2}_{i,j}, \text{ with } i \in [1, \ldots, d^2] \text{ and } j \in [1, \ldots, n_s^2],\]

with:

\[d = d^1 + d^2 \quad \text{and} \quad n_s = n_s^1 \times n_s^2 .\]

Therefore the following holds:

\[X_{i,j} = \begin{cases} X^{1}_{i, k}, & \text{if } i < d^1, \text{ with } k = j \bmod n_s^1 \\ X^{2}_{i - d^1, l}, & \text{if } i \ge d^1, \text{ with } l = \lfloor \tfrac{j}{n_s^1} \rfloor \end{cases}\]

Intuitively, this means that a position embedding vector \(x_j \in \mathbb{R}^{d}\) is now the concatenation of two factorized embedding vectors, \(x^1_{k}\) and \(x^2_{l}\), where the config.max_embedding_size dimension \(j\) is factorized into \(k\) and \(l\). This design ensures that each position embedding vector \(x_j\) is unique.
Using the above example again, axial position encoding with \(d^1 = 2^9, d^2 = 2^9, n_s^1 = 2^9, n_s^2 = 2^{10}\) drastically reduces the number of parameters from roughly 500,000,000 to \(2^{18} + 2^{19} \approx 780{,}000\), i.e. over 99% less memory for the position encodings.
In practice, the parameter config.axial_pos_embds_dim is set to a tuple \((d^1, d^2)\) whose sum has to be equal to config.hidden_size, and config.axial_pos_shape is set to a tuple \((n_s^1, n_s^2)\) whose product has to be equal to config.max_embedding_size, which during training has to be equal to the sequence length of the input_ids.
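Concretely, these constraints can be satisfied as in the following sketch (a hypothetical setup for a training sequence length of 4096 tokens; the values are illustrative and happen to match the defaults documented below):

>>> from transformers import ReformerConfig, ReformerModel

>>> # Training sequence length: 64 * 64 = 4096 tokens
>>> config = ReformerConfig(
...     hidden_size=256,
...     axial_pos_embds=True,
...     axial_pos_embds_dim=[64, 192],  # sum: 64 + 192 == hidden_size
...     axial_pos_shape=[64, 64],       # product: 64 * 64 == training sequence length
...     max_position_embeddings=4096,
... )
>>> model = ReformerModel(config)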
LSH Self Attention
In Locality sensitive hashing (LSH) self attention the key and query projection weights are tied. Therefore, the key
query embedding vectors are also tied. LSH self attention uses the locality sensitive hashing mechanism proposed in
Practical and Optimal LSH for Angular Distance to assign each of the tied key
query embedding vectors to one of config.num_buckets
possible buckets. The premise is that the more “similar”
key query embedding vectors (in terms of cosine similarity) are to each other, the more likely they are assigned to
the same bucket.
The accuracy of the LSH mechanism can be improved by increasing config.num_hashes, or directly the num_hashes argument of the forward function, so that the output of the LSH self attention better approximates the output of "normal" full self attention. The buckets are then sorted and chunked into query-key embedding vector chunks, each of length config.lsh_attn_chunk_length. For each chunk, the query embedding vectors attend to their key vectors (which are tied to themselves) and to the key embedding vectors of config.lsh_num_chunks_before previous neighboring chunks and config.lsh_num_chunks_after following neighboring chunks.
For more information, see the original paper or this great blog post.
Note that config.num_buckets can also be factorized into a list \((n_{\text{buckets}}^1, n_{\text{buckets}}^2)\). This way, instead of assigning the query-key embedding vectors to one of \((1, \ldots, n_{\text{buckets}})\), they are assigned to one of the bucket pairs \((1, 1), \ldots, (n_{\text{buckets}}^1, n_{\text{buckets}}^2)\). This is crucial for very long sequences to save memory.
When training a model from scratch, it is recommended to leave config.num_buckets=None
, so that depending on the
sequence length a good value for num_buckets
is calculated on the fly. This value will then automatically be
saved in the config and should be reused for inference.
Using LSH self attention, the memory and time complexity of the query-key matmul operation can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times \log(n_s))\), which usually represents the memory and time bottleneck in a transformer model, with \(n_s\) being the sequence length.
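As a small, purely illustrative sketch, an LSH-only Reformer for a training sequence length of 4096 tokens (so that the default axial_pos_shape=[64, 64] still fits) could be configured as follows; the concrete values are assumptions, not tuned recommendations:

>>> from transformers import ReformerConfig, ReformerModel

>>> config = ReformerConfig(
...     attn_layers=["lsh"] * 6,   # LSH self attention in every layer
...     lsh_attn_chunk_length=64,  # chunk length applied after sorting the buckets
...     lsh_num_chunks_before=1,   # each chunk also attends to one previous chunk
...     lsh_num_chunks_after=0,
...     num_hashes=2,              # more hashing rounds -> better approximation, more memory
...     num_buckets=[8, 8],        # factorized buckets; 8 * 8 = 64 ≈ 4096 / lsh_attn_chunk_length
... )
>>> model = ReformerModel(config)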
Local Self Attention
Local self attention is essentially a "normal" self attention layer with key, query and value projections, but chunked so that in each chunk of length config.local_attn_chunk_length the query embedding vectors only attend to the key embedding vectors in their chunk and to the key embedding vectors of config.local_num_chunks_before previous neighboring chunks and config.local_num_chunks_after following neighboring chunks.
Using local self attention, the memory and time complexity of the query-key matmul operation can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times c)\), with \(n_s\) being the sequence length and \(c\) the chunk length (config.local_attn_chunk_length); this operation usually represents the memory and time bottleneck in a transformer model.
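Analogously, a local-attention-only Reformer can be sketched as follows (again with illustrative values only):

>>> from transformers import ReformerConfig, ReformerModel

>>> config = ReformerConfig(
...     attn_layers=["local"] * 6,   # local self attention in every layer
...     local_attn_chunk_length=64,  # each query attends within its chunk of 64 tokens
...     local_num_chunks_before=1,   # plus the previous neighboring chunk
...     local_num_chunks_after=0,
... )
>>> model = ReformerModel(config)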
Training
During training, we must ensure that the sequence length is set to a value that is divisible by the least common multiple of config.lsh_attn_chunk_length and config.local_attn_chunk_length, and that the parameters of the axial positional encodings are correctly set as described above. Reformer is very memory efficient, so the model can easily be trained on sequences as long as 64000 tokens.
For training, the ReformerModelWithLMHead should be used as follows:
input_ids = tokenizer.encode("This is a sentence from the training data", return_tensors="pt")
loss = model(input_ids, labels=input_ids)[0]
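For a self-contained picture, the following sketch trains a small Reformer language model from scratch for a single step; the configuration and the random batch are purely illustrative assumptions that satisfy the chunk-length and axial-position-shape constraints described above:

import torch
from transformers import ReformerConfig, ReformerModelWithLMHead

sequence_length = 64 * 64  # 4096: a multiple of both chunk lengths and equal to the product of axial_pos_shape

config = ReformerConfig(
    is_decoder=True,                # required for causal LM training with ReformerModelWithLMHead
    axial_pos_shape=[64, 64],       # product must equal the training sequence length
    axial_pos_embds_dim=[64, 192],  # sum must equal hidden_size (256)
)
model = ReformerModelWithLMHead(config)

# Dummy batch of token ids with the required length; replace with real tokenized data.
input_ids = torch.randint(0, config.vocab_size, (1, sequence_length))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(input_ids, labels=input_ids).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()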
Resources
- Text classification task guide
- Question answering task guide
- Causal language modeling task guide
- Masked language modeling task guide
ReformerConfig
class transformers.ReformerConfig
< source >( attention_head_size = 64 attn_layers = ['local', 'lsh', 'local', 'lsh', 'local', 'lsh'] axial_norm_std = 1.0 axial_pos_embds = True axial_pos_shape = [64, 64] axial_pos_embds_dim = [64, 192] chunk_size_lm_head = 0 eos_token_id = 2 feed_forward_size = 512 hash_seed = None hidden_act = 'relu' hidden_dropout_prob = 0.05 hidden_size = 256 initializer_range = 0.02 is_decoder = False layer_norm_eps = 1e-12 local_num_chunks_before = 1 local_num_chunks_after = 0 local_attention_probs_dropout_prob = 0.05 local_attn_chunk_length = 64 lsh_attn_chunk_length = 64 lsh_attention_probs_dropout_prob = 0.0 lsh_num_chunks_before = 1 lsh_num_chunks_after = 0 max_position_embeddings = 4096 num_attention_heads = 12 num_buckets = None num_hashes = 1 pad_token_id = 0 vocab_size = 320 tie_word_embeddings = False use_cache = True classifier_dropout = None **kwargs )
Parameters
- attention_head_size (
int
, optional, defaults to 64) — Dimensionality of the projected key, query and value vectors - attn_layers (
List[str]
, optional, defaults to["local", "lsh", "local", "lsh", "local", "lsh"]
) — List of attention layer types in ascending order. It can be chosen between a LSHSelfAttention layer ("lsh"
) and a LocalSelfAttention layer ("local"
).For more information on LSHSelfAttention layer, see LSH Self Attention. For more information on LocalSelfAttention layer, see Local Self Attention.
- axial_pos_embds (
bool
, optional, defaults toTrue
) — Whether or not to use axial position embeddings. For more information on how axial position embeddings work, see Axial Position Encodings. - axial_norm_std (
float
, optional, defaults to 1.0) — The standard deviation of the normal_initializer for initializing the weight matrices of the axial positional encodings. - axial_pos_shape (
List[int]
, optional, defaults to[64, 64]
) — The position dims of the axial position encodings. During training, the product of the position dims has to be equal to the sequence length.For more information on how axial position embeddings work, see Axial Position Encodings.
- axial_pos_embds_dim (
List[int]
, optional, defaults to[64, 192]
) — The embedding dims of the axial position encodings. The sum of the embedding dims has to be equal to the hidden size.For more information on how axial position embeddings work, see Axial Position Encodings.
- chunk_size_lm_head (
int
, optional, defaults to 0) — The chunk size of the final language model feed forward head layer. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time.For more information on feed forward chunking, see How does Feed Forward Chunking work?.
- eos_token_id (
int
, optional, defaults to 2) — The token id for the end-of-sentence token. - feed_forward_size (
int
, optional, defaults to 512) — Dimensionality of the feed_forward layer in the residual attention block. - hash_seed (
int
, optional) — Seed that can be used to make locality-sensitive hashing in LSHSelfAttention deterministic. This should only be set for testing purposes. For evaluation and training purposes, hash_seed should be left as None to ensure fully random rotations in the locality-sensitive hashing scheme. - hidden_act (
str
orCallable
, optional, defaults to"relu"
) — The non-linear activation function (function or string) in the feed forward layer in the residual attention block. If string,"gelu"
,"relu"
,"silu"
and"gelu_new"
are supported. - hidden_dropout_prob (
float
, optional, defaults to 0.05) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - hidden_size (
int
, optional, defaults to 256) — Dimensionality of the output hidden states of the residual attention blocks. - initializer_range (
float
, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - is_decoder (
bool
, optional, defaults toFalse
) — Whether or not to use a causal mask in addition to theattention_mask
passed to ReformerModel. When using the Reformer for causal language modeling, this argument should be set toTrue
. - layer_norm_eps (
float
, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. - local_chunk_length (
int
, optional, defaults to 64) — Length of chunk which attends to itself inLocalSelfAttention
. Chunking reduces memory complexity from sequence length x sequence length (self attention) to chunk length x chunk length x sequence length / chunk length (chunked self attention). - local_num_chunks_before (
int
, optional, defaults to 1) — Number of previous neighbouring chunks to attend to inLocalSelfAttention
layer in addition to itself. - local_num_chunks_after (
int
, optional, defaults to 0) — Number of following neighbouring chunks to attend to inLocalSelfAttention
layer in addition to itself. - local_attention_probs_dropout_prob (
float
, optional, defaults to 0.05) — The dropout ratio for the attention probabilities in LocalSelfAttention
. - lsh_attn_chunk_length (
int
, optional, defaults to 64) — Length of chunk which attends to itself inLSHSelfAttention
. Chunking reduces memory complexity from sequence length x sequence length (self attention) to chunk length x chunk length x sequence length / chunk length (chunked self attention). - lsh_num_chunks_before (
int
, optional, defaults to 1) — Number of previous neighbouring chunks to attend to inLSHSelfAttention
layer in addition to itself. - lsh_num_chunks_after (
int
, optional, defaults to 0) — Number of following neighbouring chunks to attend to inLSHSelfAttention
layer in addition to itself. - lsh_attention_probs_dropout_prob (
float
, optional, defaults to 0.0) — The dropout ratio for the attention probabilities in LSHSelfAttention
. - max_position_embeddings (
int
, optional, defaults to 4096) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). - num_attention_heads (
int
, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. - num_buckets (
int
orList[int]
, optional) — Number of buckets, the key query vectors can be “hashed into” using the locality sensitive hashing scheme. Each query key vector is hashed into a hash in1, ..., num_buckets
. The number of buckets can also be factorized into a list for improved memory complexity. In this case, each query key vector is hashed into a hash in1-1, 1-2, ..., num_buckets[0]-1, ..., num_buckets[0]-num_buckets[1]
ifnum_buckets
is factorized into two factors. The number of buckets (or the product of the factors) should approximately equal sequence length / lsh_attn_chunk_length. If num_buckets
is not set, a good value is calculated on the fly. - num_hashes (
int
, optional, defaults to 1) — Number of hashing rounds (e.g., number of random rotations) in Local Sensitive Hashing scheme. The highernum_hashes
, the more accurate theLSHSelfAttention
becomes, but also the more memory and time intensive the hashing becomes. - pad_token_id (
int
, optional, defaults to 0) — The token id for the padding token. - vocab_size (
int
, optional, defaults to 320) — Vocabulary size of the Reformer model. Defines the number of different tokens that can be represented by the inputs_ids
passed when calling ReformerModel. - tie_word_embeddings (
bool
, optional, defaults toFalse
) — Whether to tie input and output embeddings. - use_cache (
bool
, optional, defaults toTrue
) — Whether or not the model should return the last key/values attentions (not used by all models). - classifier_dropout (
float
, optional) — The dropout ratio for the classification head.
This is the configuration class to store the configuration of a ReformerModel. It is used to instantiate a Reformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Reformer google/reformer-crime-and-punishment architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
>>> from transformers import ReformerConfig, ReformerModel
>>> # Initializing a Reformer configuration
>>> configuration = ReformerConfig()
>>> # Initializing a Reformer model (with random weights)
>>> model = ReformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
ReformerTokenizer
class transformers.ReformerTokenizer
< source >( vocab_file eos_token = '</s>' unk_token = '<unk>' additional_special_tokens = [] sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs )
Parameters
- vocab_file (
str
) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. - eos_token (
str
, optional, defaults to"</s>"
) — The end of sequence token.When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the
sep_token
. - unk_token (
str
, optional, defaults to"<unk>"
) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. - additional_special_tokens (
List[str]
, optional, defaults to[]
) — Additional special tokens used by the tokenizer. - sp_model_kwargs (
dict
, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:
- enable_sampling: Enable subword regularization.
- nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
  - nbest_size = {0,1}: No sampling is performed.
  - nbest_size > 1: samples from the nbest_size results.
  - nbest_size < 0: assuming that nbest_size is infinite, samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
- alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.

Construct a Reformer tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
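For example (an illustrative sketch, with arbitrary subword-regularization values), the tokenizer can be loaded with sp_model_kwargs to enable sampling-based tokenization:

>>> from transformers import ReformerTokenizer

>>> tokenizer = ReformerTokenizer.from_pretrained(
...     "google/reformer-crime-and-punishment",
...     sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
... )
>>> input_ids = tokenizer("Hello, my dog is cute").input_ids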
ReformerTokenizerFast
class transformers.ReformerTokenizerFast
< source >( vocab_file = None tokenizer_file = None eos_token = '</s>' unk_token = '<unk>' additional_special_tokens = [] **kwargs )
Parameters
- vocab_file (
str
) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. - eos_token (
str
, optional, defaults to"</s>"
) — The end of sequence token.When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the
sep_token
. - unk_token (
str
, optional, defaults to"<unk>"
) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. - pad_token (
str
, optional, defaults to"<pad>"
) — The token used for padding, for example when batching sequences of different lengths. - additional_special_tokens (
List[str]
, optional) — Additional special tokens used by the tokenizer.
Construct a “fast” Reformer tokenizer (backed by HuggingFace’s tokenizers library). Based on Unigram.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
ReformerModel
class transformers.ReformerModel
< source >( config )
Parameters
- config (ReformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Reformer Model transformer outputting raw hidden-states without any specific head on top. Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None num_hashes: typing.Optional[int] = None past_buckets_states: typing.Optional[typing.List[typing.Tuple[torch.Tensor]]] = None use_cache: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.reformer.modeling_reformer.ReformerModelOutput
or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) — Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices are automatically padded to be a multiple of the chunk length.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.max_position_embeddings - 1]
. - head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]
:- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix. - num_hashes (
int
, optional) — The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites the default defined inconfig.num_hashes
.For more information, see
num_hashes
in ReformerConfig. - past_buckets_states (
List[Tuple(torch.LongTensor, torch.FloatTensor)]
, optional) — List ofTuple(torch.LongTensor, torch.FloatTensor
of lengthconfig.n_layers
, with the first element being the previous buckets of shape(batch_size, num_heads, num_hashes, sequence_length)
) and the second being the previous hidden_states of shape(batch_size, sequence_length, hidden_size)
).Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed up sequential decoding.
- use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.reformer.modeling_reformer.ReformerModelOutput
or tuple(torch.FloatTensor)
A transformers.models.reformer.modeling_reformer.ReformerModelOutput
or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
-
last_hidden_state (
torch.FloatTensor
of shape(batch_size, num_predict, hidden_size)
) — Sequence of hidden-states at the last layer of the model.num_predict
corresponds totarget_mapping.shape[1]
. Iftarget_mapping
isNone
, thennum_predict
corresponds tosequence_length
. -
past_buckets_states (
List[Tuple(torch.LongTensor, torch.FloatTensor)]
, optional, returned whenuse_cache=True
is passed or whenconfig.use_cache=True
) — List ofTuple(torch.LongTensor, torch.FloatTensor
of lengthconfig.n_layers
, with the first element being the previous buckets of shape(batch_size, num_heads, num_hashes, sequence_length)
) and the second being the previous hidden_states of shape(batch_size, sequence_length, hidden_size)
).Contains precomputed buckets and hidden-states that can be used (see
past_buckets_states
input) to speed up sequential decoding. -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings and one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ReformerModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, ReformerModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
>>> model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
ReformerModelWithLMHead
class transformers.ReformerModelWithLMHead
< source >( config )
Parameters
- config (ReformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Reformer Model with a language modeling
head on top.
Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev,
Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None num_hashes: typing.Optional[int] = None past_buckets_states: typing.Optional[typing.List[typing.Tuple[torch.Tensor]]] = None use_cache: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) — Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices are automatically padded to be a multiple of the chunk length.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.max_position_embeddings - 1]
. - head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]
:- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix. - num_hashes (
int
, optional) — The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites the default defined inconfig.num_hashes
.For more information, see
num_hashes
in ReformerConfig. - past_buckets_states (
List[Tuple(torch.LongTensor, torch.FloatTensor)]
, optional) — List ofTuple(torch.LongTensor, torch.FloatTensor
of lengthconfig.n_layers
, with the first element being the previous buckets of shape(batch_size, num_heads, num_hashes, sequence_length)
) and the second being the previous hidden_states of shape(batch_size, sequence_length, hidden_size)
).Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed up sequential decoding.
- use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. - labels (
torch.LongTensor
of shape (batch_size, sequence_length), optional) — Labels for computing the causal language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size - 1]
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Language modeling loss (for next-token prediction). -
logits (
torch.FloatTensor
of shape(batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ReformerModelWithLMHead forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> import torch
>>> from transformers import AutoTokenizer, ReformerModelWithLMHead
>>> tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
>>> model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
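Because this checkpoint is a causal language model, it can also generate text with the standard generate() API. The snippet below is an illustrative sketch; the prompt and sampling settings are arbitrary:

>>> from transformers import AutoTokenizer, ReformerModelWithLMHead

>>> tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
>>> model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")
>>> input_ids = tokenizer("A few months later", return_tensors="pt").input_ids
>>> generated = model.generate(input_ids, do_sample=True, max_length=100)
>>> print(tokenizer.decode(generated[0]))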
ReformerForMaskedLM
class transformers.ReformerForMaskedLM
< source >( config )
Parameters
- config (ReformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Reformer Model with a language modeling
head on top.
Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev,
Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None num_hashes: typing.Optional[int] = None labels: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) — Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices are automatically padded to be a multiple of the chunk length.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.max_position_embeddings - 1]
. - head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]
:- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix. - num_hashes (
int
, optional) — The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites the default defined inconfig.num_hashes
.For more information, see
num_hashes
in ReformerConfig. - past_buckets_states (
List[Tuple(torch.LongTensor, torch.FloatTensor)]
, optional) — List ofTuple(torch.LongTensor, torch.FloatTensor
of lengthconfig.n_layers
, with the first element being the previous buckets of shape(batch_size, num_heads, num_hashes, sequence_length)
) and the second being the previous hidden_states of shape(batch_size, sequence_length, hidden_size)
).Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed up sequential decoding.
- use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. - labels (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Labels for computing the masked language modeling loss. Indices should be in[-100, 0, ..., config.vocab_size]
(seeinput_ids
docstring) Tokens with indices set to-100
are ignored (masked), the loss is only computed for the tokens with labels
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Masked language modeling (MLM) loss. -
logits (
torch.FloatTensor
of shape(batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ReformerForMaskedLM forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
This example uses a small random (untrained) checkpoint, since no pretrained model is available for the masked language modeling task with the Reformer architecture.
Example:
>>> import torch
>>> from transformers import AutoTokenizer, ReformerForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-reformer")
>>> model = ReformerForMaskedLM.from_pretrained("hf-internal-testing/tiny-random-reformer")
>>> # add mask_token
>>> tokenizer.add_special_tokens({"mask_token": "[MASK]"})
>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> # resize model's embedding matrix
>>> model.resize_token_embeddings(new_num_tokens=model.config.vocab_size + 1)
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # retrieve index of [MASK]
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
>>> predicted_token = tokenizer.decode(predicted_token_id)
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> # mask labels of non-[MASK] tokens
>>> labels = torch.where(
... inputs.input_ids == tokenizer.mask_token_id, labels[:, : inputs["input_ids"].shape[-1]], -100
... )
>>> outputs = model(**inputs, labels=labels)
>>> loss = round(outputs.loss.item(), 2)
ReformerForSequenceClassification
class transformers.ReformerForSequenceClassification
< source >( config )
Parameters
- config (ReformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Reformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None num_hashes: typing.Optional[int] = None labels: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) — Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices are automatically padded to be a multiple of the chunk length.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.max_position_embeddings - 1]
. - head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]
:- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix. - num_hashes (
int
, optional) — The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites the default defined inconfig.num_hashes
.For more information, see
num_hashes
in ReformerConfig. - past_buckets_states (
List[Tuple(torch.LongTensor, torch.FloatTensor)]
, optional) — List ofTuple(torch.LongTensor, torch.FloatTensor
of lengthconfig.n_layers
, with the first element being the previous buckets of shape(batch_size, num_heads, num_hashes, sequence_length)
) and the second being the previous hidden_states of shape(batch_size, sequence_length, hidden_size)
).Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed up sequential decoding.
- use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. - labels (
torch.LongTensor
of shape(batch_size,)
, optional) — Labels for computing the sequence classification/regression loss. Indices should be in[0, ..., config.num_labels - 1]
. Ifconfig.num_labels == 1
a regression loss is computed (Mean-Square loss), Ifconfig.num_labels > 1
a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Classification (or regression if config.num_labels==1) loss. -
logits (
torch.FloatTensor
of shape(batch_size, config.num_labels)
) — Classification (or regression if config.num_labels==1) scores (before SoftMax). -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ReformerForSequenceClassification forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, ReformerForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
>>> model = ReformerForSequenceClassification.from_pretrained("google/reformer-crime-and-punishment")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> label = model.config.id2label[predicted_class_id]
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = ReformerForSequenceClassification.from_pretrained(
... "google/reformer-crime-and-punishment", num_labels=num_labels
... )
>>> labels = torch.tensor(1)
>>> loss = model(**inputs, labels=labels).loss
ReformerForQuestionAnswering
class transformers.ReformerForQuestionAnswering
< source >( config )
Parameters
- config (ReformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Reformer Model with a span classification head on top for extractive question-answering tasks like SQuAD / TriviaQA
(a linear layer on top of the hidden-states output to compute span start logits and span end logits).
Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None num_hashes: typing.Optional[int] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) — Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices are automatically padded to be a multiple of the chunk length.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.max_position_embeddings - 1]
. - head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]
:- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix. - num_hashes (
int
, optional) — The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites the default defined inconfig.num_hashes
.For more information, see
num_hashes
in ReformerConfig. - past_buckets_states (
List[Tuple(torch.LongTensor, torch.FloatTensor)]
, optional) — List ofTuple(torch.LongTensor, torch.FloatTensor
of lengthconfig.n_layers
, with the first element being the previous buckets of shape(batch_size, num_heads, num_hashes, sequence_length)
) and the second being the previous hidden_states of shape(batch_size, sequence_length, hidden_size)
).Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed up sequential decoding.
- use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. - start_positions (
torch.LongTensor
of shape(batch_size,)
, optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length
). Position outside of the sequence are not taken into account for computing the loss. - end_positions (
torch.LongTensor
of shape(batch_size,)
, optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length
). Position outside of the sequence are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. -
start_logits (
torch.FloatTensor
of shape(batch_size, sequence_length)
) — Span-start scores (before SoftMax). -
end_logits (
torch.FloatTensor
of shape(batch_size, sequence_length)
) — Span-end scores (before SoftMax). -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ReformerForQuestionAnswering forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, ReformerForQuestionAnswering
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
>>> model = ReformerForQuestionAnswering.from_pretrained("google/reformer-crime-and-punishment")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss