Reformer

DISCLAIMER: This model is still a work in progress. If you see something strange, file a GitHub issue.

Overview

The Reformer model was presented in Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser and Anselm Levskaya. Here is the abstract from the paper:

Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(Llog(L)), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.

The authors’ code can be found here.

Axial Positional Encodings

Axial Positional Encodings were first implemented in Google’s trax library and developed by the authors of this model’s paper. In models that process very long input sequences, the conventional position id encodings store an embedding vector of size \(d\) (the config.hidden_size) for every position \(i = 1, \ldots, n_s\), with \(n_s\) being config.max_position_embeddings. E.g., having a sequence length of \(n_s = 2^{19} \approx 0.5M\) and a config.hidden_size of \(d = 2^{10} \approx 1000\) would result in a position encoding matrix:

\[X_{i,j}, \text{ with } i \in \left[1,\ldots, d\right] \text{ and } j \in \left[1,\ldots, n_s\right]\]

which alone has over 500M parameters to store. Axial positional encodings factorize \(X_{i,j}\) into two matrices:

\[X^{1}_{i,j}, \text{ with } i \in \left[1,\ldots, d^1\right] \text{ and } j \in \left[1,\ldots, n_s^1\right]\]

and

\[X^{2}_{i,j}, \text{ with } i \in \left[1,\ldots, d^2\right] \text{ and } j \in \left[1,\ldots, n_s^2\right]\]

with:

\[d = d^1 + d^2 \text{ and } n_s = n_s^1 \times n_s^2 .\]

Therefore the following holds:

\[\begin{split}X_{i,j} = \begin{cases} X^{1}_{i, k}, & \text{if }\ i < d^1 \text{ with } k = j \mod n_s^1 \\ X^{2}_{i - d^1, l}, & \text{if } i \ge d^1 \text{ with } l = \lfloor\frac{j}{n_s^1}\rfloor \end{cases}\end{split}\]

Intuitively, this means that a position embedding vector \(x_j \in \mathbb{R}^{d}\) is now the concatenation of two factorized embedding vectors, \(x^1_{k} \in \mathbb{R}^{d^1}\) and \(x^2_{l} \in \mathbb{R}^{d^2}\), where the config.max_position_embeddings dimension \(j\) is factorized into \(k \text{ and } l\). This design ensures that each position embedding vector \(x_j\) is unique.
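For example, with \(n_s^1 = 512\), position \(j = 1234\) is mapped to \(k = 1234 \bmod 512 = 210\) and \(l = \lfloor 1234 / 512 \rfloor = 2\). A minimal sketch of this index mapping (the values are purely illustrative):

n_s_1 = 512      # n_s^1, the first axial dimension
j = 1234         # position in the full sequence
k = j % n_s_1    # column index into X^1
l = j // n_s_1   # column index into X^2
print(k, l)      # 210 2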

Using the above example again, axial position encoding with \(d^1 = 2^9, d^2 = 2^9, n_s^1 = 2^9, n_s^2 = 2^{10}\) can drastically reduce the number of parameters to \(2^{18} + 2^{19} \approx 780000\) parameters.

In practice, the parameter config.axial_pos_embds_dim is set to a list \((d^1, d^2)\) whose sum has to be equal to config.hidden_size, and config.axial_pos_shape is set to a list \((n_s^1, n_s^2)\) whose product has to be equal to config.max_position_embeddings, which during training has to be equal to the sequence length of the input_ids.
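A configuration for the example above could be set up as follows (a minimal sketch; the concrete values are illustrative and only the sum and product constraints matter):

from transformers import ReformerConfig

config = ReformerConfig(
    hidden_size=1024,                    # d
    axial_pos_embds=True,
    axial_pos_embds_dim=[512, 512],      # (d^1, d^2), must sum to hidden_size
    axial_pos_shape=[512, 1024],         # (n_s^1, n_s^2), product must equal the training sequence length
    max_position_embeddings=512 * 1024,  # n_s
)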

LSH Self Attention

In locality sensitive hashing (LSH) self attention, the key and query projection weights are tied. Therefore, the key and query embedding vectors are also tied. LSH self attention uses the locality sensitive hashing mechanism proposed in Practical and Optimal LSH for Angular Distance to assign each of the tied key query embedding vectors to one of config.num_buckets possible buckets. The premise is that the more “similar” (in terms of cosine similarity) key query embedding vectors are to each other, the more likely they are to be assigned to the same bucket. The accuracy of the LSH mechanism can be improved by increasing config.num_hashes or directly the num_hashes argument of the forward function, so that the output of the LSH self attention better approximates the output of “normal” full self attention. The buckets are then sorted and chunked into query key embedding vector chunks, each of length config.lsh_attn_chunk_length. Within each chunk, the query embedding vectors attend to their key vectors (which are tied to themselves) and to the key embedding vectors of config.lsh_num_chunks_before previous neighboring chunks and config.lsh_num_chunks_after following neighboring chunks. For more information, see the original paper or this great blog post.

Note that config.num_buckets can also be factorized into a list \((n_{\text{buckets}}^1, n_{\text{buckets}}^2)\). This way, instead of assigning the query key embedding vectors to one of \((1,\ldots, n_{\text{buckets}})\), they are assigned to one of \((1-1,\ldots, n_{\text{buckets}}^1-1, \ldots, 1-n_{\text{buckets}}^2, \ldots, n_{\text{buckets}}^1-n_{\text{buckets}}^2)\), i.e. to one of \(n_{\text{buckets}}^1 \times n_{\text{buckets}}^2\) bucket pairs. This is crucial for very long sequences to save memory.

When training a model from scratch, it is recommended to leave config.num_buckets=None, so that depending on the sequence length a good value for num_buckets is calculated on the fly. This value will then automatically be saved in the config and should be reused for inference.
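A minimal sketch of an LSH-only Reformer (the layer layout, sequence length and hash counts are illustrative; num_buckets and num_hashes behave as described above):

import torch
from transformers import ReformerConfig, ReformerModel

config = ReformerConfig(
    attn_layers=["lsh", "lsh"],   # use only LSH self attention layers
    lsh_attn_chunk_length=64,
    num_hashes=1,
    num_buckets=None,             # a good value is computed on the fly and saved in the config
    is_decoder=True,
)
model = ReformerModel(config)
model.eval()

# 4096 tokens: a multiple of lsh_attn_chunk_length and equal to the product of the default axial_pos_shape
input_ids = torch.randint(0, config.vocab_size, (1, 4096))

# num_hashes passed to forward overrides config.num_hashes for this call
with torch.no_grad():
    outputs = model(input_ids, num_hashes=4)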

Using LSH self attention, the memory and time complexity of the query-key matmul operation can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times \log(n_s))\), which usually represents the memory and time bottleneck in a transformer model, with \(n_s\) being the sequence length.

Local Self Attention

Local self attention is essentially a “normal” self attention layer with key, query and value projections, but chunked so that, in each chunk of length config.local_attn_chunk_length, each query embedding vector only attends to the key embedding vectors in its chunk and to the key embedding vectors of config.local_num_chunks_before previous neighboring chunks and config.local_num_chunks_after following neighboring chunks.

Using local self attention, the memory and time complexity of the query-key matmul operation, which usually represents the memory and time bottleneck in a transformer model, can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times c)\), with \(n_s\) being the sequence length and \(c\) being the chunk length config.local_attn_chunk_length.
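A minimal sketch of a local-attention-only Reformer (illustrative values): with a chunk length of 64, one preceding chunk and no following chunk, each query attends to at most 128 key positions regardless of the sequence length.

from transformers import ReformerConfig, ReformerModel

config = ReformerConfig(
    attn_layers=["local", "local"],   # use only local self attention layers
    local_attn_chunk_length=64,
    local_num_chunks_before=1,
    local_num_chunks_after=0,
    is_decoder=True,
)
model = ReformerModel(config)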

Training

During training, we must ensure that the sequence length is set to a value that is divisible by the least common multiple of config.lsh_attn_chunk_length and config.local_attn_chunk_length, and that the parameters of the axial positional encodings are correctly set as described above. Reformer is very memory efficient, so the model can easily be trained on sequences as long as 64000 tokens. For training, the ReformerModelWithLMHead should be used as follows:

from transformers import ReformerModelWithLMHead, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerModelWithLMHead.from_pretrained('google/reformer-crime-and-punishment')

input_ids = tokenizer.encode('This is a sentence from the training data', return_tensors='pt')
loss = model(input_ids, labels=input_ids)[0]
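The constraints on the sequence length can be checked up front. A minimal sketch (the values are illustrative; math.lcm requires Python 3.9+):

import math

lsh_chunk_length = 64
local_chunk_length = 64
axial_pos_shape = (256, 256)   # (n_s^1, n_s^2)

sequence_length = 65536
# must be divisible by the least common multiple of the chunk lengths ...
assert sequence_length % math.lcm(lsh_chunk_length, local_chunk_length) == 0
# ... and equal to the product of the axial position encoding shape during training
assert sequence_length == axial_pos_shape[0] * axial_pos_shape[1]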

ReformerConfig

class transformers.ReformerConfig(attention_head_size=64, attn_layers=['local', 'lsh', 'local', 'lsh', 'local', 'lsh'], axial_norm_std=1.0, axial_pos_embds=True, axial_pos_shape=[64, 64], axial_pos_embds_dim=[64, 192], chunk_size_lm_head=0, chunk_size_feed_forward=0, eos_token_id=2, feed_forward_size=512, hash_seed=None, hidden_act='relu', hidden_dropout_prob=0.05, hidden_size=256, initializer_range=0.02, is_decoder=False, layer_norm_eps=1e-12, local_num_chunks_before=1, local_num_chunks_after=0, local_attention_probs_dropout_prob=0.05, local_attn_chunk_length=64, lsh_attn_chunk_length=64, lsh_attention_probs_dropout_prob=0.0, lsh_num_chunks_before=1, lsh_num_chunks_after=0, max_position_embeddings=4096, num_attention_heads=2, num_buckets=None, num_hashes=1, pad_token_id=0, vocab_size=320, **kwargs)[source]

This is the configuration class to store the configuration of a ReformerModel. It is used to instantiate a Reformer model according to the specified arguments, defining the model architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Parameters
  • attention_head_size (int, optional, defaults to 64) – Dimensionality of the projected key, query and value vectors

  • attn_layers (list(str), optional, defaults to [“local”, “lsh”, “local”, “lsh”, “local”, “lsh”]) – List of attention layer types in ascending order. It can be chosen between an LSHSelfAttention layer (“lsh”) and a LocalSelfAttention layer (“local”). For more information on the LSHSelfAttention layer, see LSH Self Attention. For more information on the LocalSelfAttention layer, see Local Self Attention.

  • axial_pos_embds (bool, optional, defaults to True) – Whether to use axial position embeddings. For more information on how axial position embeddings work, see Axial Positional Encodings.

  • axial_norm_std (float, optional, defaults to 1.0) – The standard deviation of the normal_initializer for initializing the weight matrices of the axial positional encodings.

  • axial_pos_shape (list(int), optional, defaults to [64, 64]) – The position dims of the axial position encodings. During training the product of the position dims has to equal the sequence length. For more information on how axial position embeddings work, see Axial Positional Encodings.

  • axial_pos_embds_dim (list(int), optional, defaults to [64, 192]) – The embedding dims of the axial position encodings. The sum of the embedding dims has to equal the hidden size. For more information on how axial position embeddings work, see Axial Positional Encodings.

  • chunk_size_lm_head (int, optional, defaults to 0) – The chunk size of the final language model feed forward head layer. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work? .

  • chunk_size_feed_forward (int, optional, defaults to 0) – The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work? .

  • eos_token_id (int, optional, defaults to 2) – The token id for the <EOS> token.

  • feed_forward_size (int, optional, defaults to 512) – Dimensionality of the “feed_forward” (i.e., feed-forward) layer in the residual attention block.

  • hash_seed (int, optional, defaults to None) – Seed that can be used to make the locality sensitive hashing in LSHSelfAttention deterministic. This should only be set for testing purposes. For evaluation and training, hash_seed should be left as None to ensure fully random rotations in the locality sensitive hashing scheme.

  • hidden_act (str or function, optional, defaults to “relu”) – The non-linear activation function (function or string) in the feed forward layer in the residual attention block. If string, “gelu”, “relu”, “swish”, “gelu_new” and “gelu_fast” are supported.

  • hidden_dropout_prob (float, optional, defaults to 0.05) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • hidden_size (int, optional, defaults to 256) – Dimensionality of the output hidden states of the residual attention blocks.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • is_decoder (bool, optional, defaults to False) – If is_decoder is True, a causal mask is used in addition to attention_mask. When using the Reformer for causal language modeling, is_decoder is set to True.

  • layer_norm_eps (float, optional, defaults to 1e-12) – The epsilon used by the layer normalization layers.

  • local_attn_chunk_length (int, optional, defaults to 64) – Length of chunk which attends to itself in LocalSelfAttention. Chunking reduces memory complexity from sequence length x sequence length (self attention) to chunk length x chunk length x sequence length / chunk length (chunked self attention).

  • local_num_chunks_before (int, optional, defaults to 1) – Number of previous neighbouring chunks to attend to in LocalSelfAttention layer in addition to itself.

  • local_num_chunks_after (int, optional, defaults to 0) – Number of following neighbouring chunks to attend to in LocalSelfAttention layer in addition to itself.

  • local_attention_probs_dropout_prob (float, optional, defaults to 0.05) – The dropout ratio for the attention probabilities in LocalSelfAttention.

  • lsh_attn_chunk_length (int, optional, defaults to 64) – Length of chunk which attends to itself in LSHSelfAttention. Chunking reduces memory complexity from sequence length x sequence length (self attention) to chunk length x chunk length x sequence length / chunk length (chunked self attention).

  • lsh_num_chunks_before (int, optional, defaults to 1) – Number of previous neighbouring chunks to attend to in LSHSelfAttention layer in addition to itself.

  • lsh_num_chunks_after (int, optional, defaults to 0) – Number of following neighbouring chunks to attend to in LSHSelfAttention layer in addition to itself.

  • lsh_attention_probs_dropout_prob (float, optional, defaults to 0.0) – The dropout ratio for the attention probabilities in LSHSelfAttention.

  • max_position_embeddings (int, optional, defaults to 4096) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).

  • num_attention_heads (int, optional, defaults to 2) – Number of attention heads for each attention layer in the Transformer encoder.

  • num_buckets (int or list(int), optional, defaults to None) – Number of buckets the key query vectors can be “hashed into” using the locality sensitive hashing scheme. Each query key vector is hashed into a hash in 1, …, num_buckets. The number of buckets can also be factorized into a list for improved memory complexity. In this case, each query key vector is hashed into a hash in 1-1, 1-2, …, num_buckets[0]-1, …, num_buckets[0]-num_buckets[1] if num_buckets is factorized into two factors. The number of buckets (or the product of the factors) should approximately equal sequence length / lsh_attn_chunk_length. If num_buckets is set to None, a good value for num_buckets is calculated on the fly.

  • num_hashes (int, optional, defaults to 1) – Number of hashing rounds (e.g. number of random rotations) in the locality sensitive hashing scheme. The higher num_hashes, the more accurate the LSHSelfAttention becomes, but also the more memory and time intensive the hashing becomes.

  • pad_token_id (int, optional, defaults to 0) – The token id for the <PAD> token.

  • vocab_size (int, optional, defaults to 320) – Vocabulary size of the Reformer model. Defines the different tokens that can be represented by the input_ids passed to the forward method of ReformerModel.

Example:

>>> from transformers import ReformerModel, ReformerConfig

>>> # Initializing a Reformer configuration
>>> configuration = ReformerConfig()

>>> # Initializing a Reformer model
>>> model = ReformerModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

ReformerTokenizer

class transformers.ReformerTokenizer(vocab_file, eos_token='</s>', unk_token='<unk>', pad_token='<pad>', additional_special_tokens=[], **kwargs)[source]

Constructs a Reformer tokenizer. Based on SentencePiece.

This tokenizer inherits from PreTrainedTokenizer which contains most of the methods. Users should refer to the superclass for more information regarding methods.

Parameters
  • vocab_file (string) – SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.

  • eos_token (string, optional, defaults to “</s>”) –

    The end of sequence token.

    Note

    When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

  • unk_token (string, optional, defaults to “<unk>”) – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

  • pad_token (string, optional, defaults to “<pad>”) – The token used for padding, for example when batching sequences of different lengths.

  • additional_special_tokens (List[str], optional, defaults to None) – Additional special tokens used by the tokenizer.
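Example (a minimal usage sketch, reusing the checkpoint from the model examples on this page):

>>> from transformers import ReformerTokenizer

>>> tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
>>> ids = tokenizer.encode("Hello, my dog is cute")
>>> tokenizer.decode(ids)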

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (string) into a single string.

get_vocab()[source]

Returns the vocabulary as a dict of {token: index} pairs. tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

save_vocabulary(save_directory)[source]

Save the sentencepiece vocabulary (copy original file) and special tokens file to a directory.

property vocab_size

Size of the base vocabulary (without the added tokens)

ReformerModel

class transformers.ReformerModel(config)[source]

The bare Reformer Model transformer outputting raw hidden-states without any specific head on top. Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (ReformerConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, position_ids=None, head_mask=None, inputs_embeds=None, num_hashes=None, output_hidden_states=None, output_attentions=None)[source]

The ReformerModel forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices are automatically padded to be a multiple of the chunk length.

    Indices can be obtained using transformers.ReformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • num_hashes (int, optional, defaults to None) – num_hashes is the number of hashing rounds that should be performed during bucketing. Setting num_hashes overwrites the default num_hashes defined in config.num_hashes. For more information, see num_hashes in transformers.ReformerConfig.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

Returns

last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)):

Sequence of hidden-states at the output of the last layer of the model.

all_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

all_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ReformerConfig) and inputs

Example:

>>> from transformers import ReformerTokenizer, ReformerModel
>>> import torch

>>> tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
>>> model = ReformerModel.from_pretrained('google/reformer-crime-and-punishment')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
get_input_embeddings()[source]

Returns the model’s input embeddings.

Returns

A torch module mapping vocabulary to hidden states.

Return type

nn.Module

set_input_embeddings(value)[source]

Set model’s input embeddings

Parameters

value (nn.Module) – A module mapping vocabulary to hidden states.

ReformerModelWithLMHead

class transformers.ReformerModelWithLMHead(config)[source]

Reformer Model with a language modeling head on top. Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (ReformerConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, position_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None, num_hashes=None, labels=None, output_hidden_states=None, output_attentions=None)[source]

The ReformerModelWithLMHead forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices are automatically padded to be a multiple of the chunk length.

    Indices can be obtained using transformers.ReformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • num_hashes (int, optional, defaults to None) – num_hashes is the number of hashing rounds that should be performed during bucketing. Setting num_hashes overwrites the default num_hashes defined in config.num_hashes. For more information, see num_hashes in transformers.ReformerConfig.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for computing the language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size - 1].

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Language modeling loss (cross entropy).

prediction_scores (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size))

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

all_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

all_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ReformerConfig) and inputs

Example:

>>> import torch
>>> from transformers import ReformerTokenizer, ReformerModelWithLMHead

>>> tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
>>> model = ReformerModelWithLMHead.from_pretrained('google/reformer-crime-and-punishment')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss, logits = outputs[:2]
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A torch module mapping hidden states to vocabulary.

Return type

nn.Module

tie_weights()[source]

Tie the weights between the input embeddings and the output embeddings. If the torchscript flag is set in the configuration, parameter sharing cannot be handled, so we clone the weights instead.

ReformerForMaskedLM

class transformers.ReformerForMaskedLM(config)[source]

Reformer Model with a language modeling head on top. Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (ReformerConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, position_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None, num_hashes=None, labels=None, output_hidden_states=None, output_attentions=None)[source]

The ReformerForMaskedLM forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices are automatically padded to be a multiple of the chunk length.

    Indices can be obtained using transformers.ReformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • num_hashes (int, optional, defaults to None) – num_hashes is the number of hashing rounds that should be performed during bucketing. Setting num_hashes overwrites the default num_hashes defined in config.num_hashes. For more information, see num_hashes in transformers.ReformerConfig.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size - 1].

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Masked language modeling loss (cross entropy).

prediction_scores (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size))

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

all_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

all_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ReformerConfig) and inputs

Example:

>>> from transformers import ReformerTokenizer, ReformerForMaskedLM
>>> import torch

>>> tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
>>> model = ReformerForMaskedLM.from_pretrained('google/reformer-crime-and-punishment')

>>> input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt")["input_ids"]

>>> outputs = model(input_ids, labels=input_ids)
>>> loss, prediction_scores = outputs[:2]
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A torch module mapping hidden states to vocabulary.

Return type

nn.Module

tie_weights()[source]

Tie the weights between the input embeddings and the output embeddings. If the torchscript flag is set in the configuration, parameter sharing cannot be handled, so we clone the weights instead.

ReformerForQuestionAnswering

class transformers.ReformerForQuestionAnswering(config)[source]

Reformer Model with a span classification head on top for extractive question-answering tasks like SQuAD / TriviaQA (a linear layer on top of the hidden-states output to compute span start logits and span end logits). Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (ReformerConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, position_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None, num_hashes=None, start_positions=None, end_positions=None, output_hidden_states=None, output_attentions=None)[source]

The ReformerForQuestionAnswering forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices are automatically padded to be a multiple of the chunk length.

    Indices can be obtained using transformers.ReformerTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • num_hashes (int, optional, defaults to None) – num_hashes is the number of hashing rounds that should be performed during bucketing. Setting num_hashes overwrites the default num_hashes defined in config.num_hashes. For more information, see num_hashes in transformers.ReformerConfig.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • start_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for the position (index) of the start of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

  • end_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for the position (index) of the end of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when start_positions and end_positions are provided):

Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.

start_scores (torch.FloatTensor of shape (batch_size, sequence_length,)):

Span-start scores (before SoftMax).

end_scores (torch.FloatTensor of shape (batch_size, sequence_length,)):

Span-end scores (before SoftMax).

all_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

all_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ReformerConfig) and inputs

Example:

>>> from transformers import ReformerTokenizer, ReformerForQuestionAnswering
>>> import torch

>>> tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
>>> model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([3])

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> loss, start_scores, end_scores = outputs[:3]
tie_weights()[source]

Tie the weights between the input embeddings and the output embeddings. If the torchscript flag is set in the configuration, can’t handle parameter sharing so we are cloning the weights instead.