
RWKV

Overview

The RWKV model was proposed in this repo.

It suggests a tweak to the traditional Transformer attention to make it linear. This way, the model can be used as a recurrent network: passing inputs for timestamp 0 and timestamp 1 together is the same as passing inputs at timestamp 0, then inputs at timestamp 1 along with the state of timestamp 0 (see the example below).

This can be more efficient than a regular Transformer and can deal with sentences of any length (even if the model uses a fixed context length for training).

This model was contributed by sgugger. The original code can be found here.

Usage example

import torch
from transformers import AutoTokenizer, RwkvConfig, RwkvModel

model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")

inputs = tokenizer("This is an example.", return_tensors="pt")
# Feed everything to the model
outputs = model(inputs["input_ids"])
output_whole = outputs.last_hidden_state

outputs = model(inputs["input_ids"][:, :2])
output_one = outputs.last_hidden_state

# Using the state computed on the first inputs, we will get the same output
outputs = model(inputs["input_ids"][:, 2:], state=outputs.state)
output_two = outputs.last_hidden_state

torch.allclose(torch.cat([output_one, output_two], dim=1), output_whole, atol=1e-5)
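
Because the state carries everything the model needs to know about the previous tokens, you can also process an arbitrarily long input in chunks, feeding the state of one chunk to the next. This is a minimal sketch reusing the imports, model and tokenizer from the snippet above; the chunk size is arbitrary and not a model constraint:

# Stream a long input through the model in fixed-size chunks, carrying the
# recurrent state from one chunk to the next.
long_inputs = tokenizer("This is an example. " * 200, return_tensors="pt")
input_ids = long_inputs["input_ids"]

chunk_size = 128  # arbitrary; not a model constraint
state = None
with torch.no_grad():
    for start in range(0, input_ids.shape[1], chunk_size):
        outputs = model(input_ids[:, start : start + chunk_size], state=state)
        state = outputs.state  # pass the updated state to the next chunk

# `outputs.last_hidden_state` now holds the hidden states of the final chunk only.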

If you want to make sure the model stops generating when '\n\n' is detected, we recommend using the following stopping criteria:

import torch
from transformers import StoppingCriteria

class RwkvStoppingCriteria(StoppingCriteria):
    def __init__(self, eos_sequence=[187, 187], eos_token_id=537):
        # [187, 187] are the ids of "\n\n" for the GPT-NeoX tokenizer used by the RWKV-4 Pile models
        self.eos_sequence = eos_sequence
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Stop as soon as the last two generated tokens match the eos sequence
        last_2_ids = input_ids[:, -2:].tolist()
        return self.eos_sequence in last_2_ids


output = model.generate(inputs["input_ids"], max_new_tokens=64, stopping_criteria=[RwkvStoppingCriteria()])
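
The hard-coded ids above correspond to the GPT-NeoX tokenizer used by the RWKV-4 Pile checkpoints. If you are using a different tokenizer, a small sketch like the following (reusing the tokenizer loaded earlier) derives the sequence instead of hard-coding it:

# Derive the token ids that encode "\n\n" for the tokenizer in use,
# instead of hard-coding them.
eos_sequence = tokenizer.encode("\n\n")  # assumes "\n\n" maps to exactly two tokens
stopping_criteria = [RwkvStoppingCriteria(eos_sequence=eos_sequence)]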

RwkvConfig

class transformers.RwkvConfig


( vocab_size = 50277 context_length = 1024 hidden_size = 4096 num_hidden_layers = 32 attention_hidden_size = None intermediate_size = None layer_norm_epsilon = 1e-05 bos_token_id = 0 eos_token_id = 0 rescale_every = 6 tie_word_embeddings = False use_cache = True **kwargs )

Parameters

  • vocab_size (int, optional, defaults to 50277) — Vocabulary size of the RWKV model. Defines the number of different tokens that can be represented by the input_ids passed when calling RwkvModel.
  • context_length (int, optional, defaults to 1024) — The maximum sequence length this model can be used with in a single forward pass (using it in RNN mode lets you use any sequence length).
  • hidden_size (int, optional, defaults to 4096) — Dimensionality of the embeddings and hidden states.
  • num_hidden_layers (int, optional, defaults to 32) — Number of hidden layers in the model.
  • attention_hidden_size (int, optional) — Dimensionality of the attention hidden states. Will default to hidden_size if unset.
  • intermediate_size (int, optional) — Dimensionality of the inner feed-forward layers. Will default to 4 times hidden_size if unset.
  • layer_norm_epsilon (float, optional, defaults to 1e-05) — The epsilon to use in the layer normalization layers.
  • bos_token_id (int, optional, defaults to 0) — The id of the beginning of sentence token in the vocabulary. Defaults to 0 as RWKV uses the same tokenizer as GPTNeoX.
  • eos_token_id (int, optional, defaults to 0) — The id of the end of sentence token in the vocabulary. Defaults to 0 as RWKV uses the same tokenizer as GPTNeoX.
  • rescale_every (int, optional, defaults to 6) — At inference, the hidden states (and weights of the corresponding output layers) are divided by 2 every rescale_every layers. If set to 0 or a negative number, no rescaling is done.
  • tie_word_embeddings (bool, optional, defaults to False) — Whether or not to tie the output word embeddings to the input token embeddings.
  • use_cache (bool, optional, defaults to True) — Whether or not the model should return the last state.

This is the configuration class to store the configuration of a RwkvModel. It is used to instantiate an RWKV model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the RWKV-4 RWKV/rwkv-4-169m-pile architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import RwkvConfig, RwkvModel

>>> # Initializing a Rwkv configuration
>>> configuration = RwkvConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = RwkvModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

RwkvModel

class transformers.RwkvModel


( config )

Parameters

  • config (RwkvConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare RWKV Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None state: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.models.rwkv.modeling_rwkv.RwkvOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — Indices of input sequence tokens in the vocabulary.

    If a state is passed along, only the input_ids whose state has not yet been computed should be passed as input_ids.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    This is currently not used by RwkvModel, but will be supported in the future.

    What are attention masks?

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • state (tuple of five torch.FloatTensor of shape (batch_size, hidden_size, num_hidden_layers), optional) — If passed along, the model uses the previous state in all the blocks (which will give the output for the provided input_ids as if the model received state_input_ids + input_ids as context).
  • use_cache (bool, optional) — If set to True, the last state is returned and can be used to quickly generate the next logits.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.models.rwkv.modeling_rwkv.RwkvOutput or tuple(torch.FloatTensor)

A transformers.models.rwkv.modeling_rwkv.RwkvOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RwkvConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • state (list of five torch.FloatTensor of shape (batch_size, hidden_size, num_hidden_layers)) — The state of the model at the last time step. Can be used in a forward method with the next input_ids to avoid providing the old input_ids.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The RwkvModel forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, RwkvModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
>>> model = RwkvModel.from_pretrained("RWKV/rwkv-4-169m-pile")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state

RwkvForCausalLM

class transformers.RwkvForCausalLM


( config )

Parameters

  • config (RwkvConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The RWKV Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None state: typing.Optional[typing.List[torch.FloatTensor]] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.models.rwkv.modeling_rwkv.RwkvCausalLMOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — Indices of input sequence tokens in the vocabulary.

    If a state is passed along, only the input_ids whose state has not yet been computed should be passed as input_ids.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    This is currently not used by RwkvModel, but will be supported in the future.

    What are attention masks?

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • state (tuple of five torch.FloatTensor of shape (batch_size, hidden_size, num_hidden_layers), optional) — If passed along, the model uses the previous state in all the blocks (which will give the output for the provided input_ids as if the model received state_input_ids + input_ids as context).
  • use_cache (bool, optional) — If set to True, the last state is returned and can be used to quickly generate the next logits.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].

Returns

transformers.models.rwkv.modeling_rwkv.RwkvCausalLMOutput or tuple(torch.FloatTensor)

A transformers.models.rwkv.modeling_rwkv.RwkvCausalLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RwkvConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • state (list of five torch.FloatTensor of shape (batch_size, hidden_size, num_hidden_layers)) — The state of the model at the last time step. Can be used in a forward method with the next input_ids to avoid providing the old input_ids.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The RwkvForCausalLM forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> import torch
>>> from transformers import AutoTokenizer, RwkvForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
>>> model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
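
For open-ended text generation, the same checkpoint can also be used with generate(). A short sketch, with an arbitrary prompt and generation length:

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> generated_ids = model.generate(inputs["input_ids"], max_new_tokens=20)
>>> text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)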

RWKV attention and the recurrent formulas

In a traditional auto-regressive Transformer, attention is written as

$$O = \hbox{softmax}(QK^{T} / \sqrt{d}) V$$

with \(Q\), \(K\) and \(V\) matrices of shape seq_len x hidden_size named query, key and value (they are actually bigger matrices with a batch dimension and an attention head dimension, but we're only interested in the last two, which is where the matrix product is taken, so for the sake of simplicity we only consider those two). The product \(QK^{T}\) then has shape seq_len x seq_len and we can take the matrix product with \(V\) to get the output \(O\) of the same shape as the others.

Replacing the softmax by its value gives:

$$O_{i} = \frac{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}} V_{j}}{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}}}$$

Note that the entries in \(QK^{T}\) corresponding to \(j > i\) are masked (the sum stops at \(j = i\)) because the attention is not allowed to look at future tokens (only past ones).

In comparison, the RWKV attention is given by

$$O_{i} = \sigma(R_{i}) \frac{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}} V_{j}}{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}}}$$

where \(R\) is a new matrix called receptance by the author, \(K\) and \(V\) are still the key and value (\(\sigma\) here is the sigmoid function). \(W\) is a new vector that represents the position of the token and is given by

$$W_{0} = u \hbox{ and } W_{k} = (k-1)w \hbox{ for } k \geq 1$$

with \(u\) and \(w\) learnable parameters called time_first and time_decay in the code, respectively. The numerator and denominator can both be expressed recursively. Naming them \(N_{i}\) and \(D_{i}\), we have:

$$N_{i} = e^{u + K_{i}} V_{i} + \hat{N}_{i} \hbox{ where } \hat{N}_{i} = e^{K_{i-1}} V_{i-1} + e^{w + K_{i-2}} V_{i-2} \cdots + e^{(i-2)w + K_{1}} V_{1}$$

so \(\hat{N}_{i}\) (called numerator_state in the code) satisfies

$$\hat{N}_{0} = 0 \hbox{ and } \hat{N}_{j+1} = e^{K_{j}} V_{j} + e^{w} \hat{N}_{j}$$

and

$$D_{i} = e^{u + K_{i}} + \hat{D}_{i} \hbox{ where } \hat{D}_{i} = e^{K_{i-1}} + e^{w + K_{i-2}} \cdots + e^{(i-2)w + K_{1}}$$

so \(\hat{D}_{i}\) (called denominator_state in the code) satisfies

$$\hat{D}_{0} = 0 \hbox{ and } \hat{D}_{j+1} = e^{K_{j}} + e^{w} \hat{D}_{j}$$
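
To make the recursion concrete, here is a didactic sketch in PyTorch of the unstabilized recurrence above for a single sequence and a single head, with all tensors chosen at random (this is not the kernel the model actually uses):

import torch

seq_len, hidden = 5, 8
K = torch.randn(seq_len, hidden)
V = torch.randn(seq_len, hidden)
R = torch.randn(seq_len, hidden)
u = torch.randn(hidden)    # time_first
w = -torch.rand(hidden)    # time_decay (kept negative here so e^w < 1)

num_state = torch.zeros(hidden)  # \hat{N}, the numerator_state
den_state = torch.zeros(hidden)  # \hat{D}, the denominator_state
outputs = []
for i in range(seq_len):
    N_i = torch.exp(u + K[i]) * V[i] + num_state
    D_i = torch.exp(u + K[i]) + den_state
    outputs.append(torch.sigmoid(R[i]) * N_i / D_i)
    # \hat{N}_{i+1} = e^{K_i} V_i + e^{w} \hat{N}_i, and similarly for \hat{D}
    num_state = torch.exp(K[i]) * V[i] + torch.exp(w) * num_state
    den_state = torch.exp(K[i]) + torch.exp(w) * den_state
O = torch.stack(outputs)  # shape (seq_len, hidden)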

The actual recurrent formulas used are a tiny bit more complex, as for numerical stability we don't want to compute exponentials of big numbers. Usually the softmax is not computed as is: the exponential of the maximum term is divided out of the numerator and denominator:

$$\frac{e^{x_{i}}}{\sum_{j=1}^{n} e^{x_{j}}} = \frac{e^{x_{i} - M}}{\sum_{j=1}^{n} e^{x_{j} - M}}$$

with \(M\) the maximum of all \(x_{j}\). So here, on top of saving the numerator state (\(\hat{N}\)) and the denominator state (\(\hat{D}\)), we also keep track of the maximum of all terms encountered in the exponentials, and actually use

$$\tilde{N}_{i} = e^{-M_{i}} \hat{N}_{i} \hbox{ and } \tilde{D}_{i} = e^{-M_{i}} \hat{D}_{i}$$

defined by the following recurrent formulas:

$$\tilde{N}_{0} = 0 \hbox{ and } \tilde{N}_{j+1} = e^{K_{j} - q} V_{j} + e^{w + M_{j} - q} \tilde{N}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$

and

$$\tilde{D}_{0} = 0 \hbox{ and } \tilde{D}_{j+1} = e^{K_{j} - q} + e^{w + M_{j} - q} \tilde{D}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$

and \(M_{j+1} = q\). With those, we can then compute

$$N_{i} = e^{u + K_{i} - q} V_{i} + e^{M_{i} - q} \tilde{N}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$

and

$$D_{i} = e^{u + K_{i} - q} + e^{M_{i} - q} \tilde{D}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$

which finally gives us

$$O_{i} = \sigma(R_{i}) \frac{N_{i}}{D_{i}}$$
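
The stabilized recurrence can be sketched the same way, carrying the running maximum \(M\) along with the two rescaled states. Again, this is a didactic sketch of the formulas above, not the actual kernel used by the model:

import torch

def rwkv_recurrent_step(k, v, r, u, w, num_state, den_state, max_state):
    # Output at the current step: N_i and D_i, both rescaled by e^{-q}
    q = torch.maximum(u + k, max_state)
    N = torch.exp(u + k - q) * v + torch.exp(max_state - q) * num_state
    D = torch.exp(u + k - q) + torch.exp(max_state - q) * den_state
    out = torch.sigmoid(r) * N / D

    # State update: \tilde{N}, \tilde{D} and M for the next step
    q = torch.maximum(k, w + max_state)
    num_state = torch.exp(k - q) * v + torch.exp(w + max_state - q) * num_state
    den_state = torch.exp(k - q) + torch.exp(w + max_state - q) * den_state
    max_state = q
    return out, num_state, den_state, max_state

Starting from num_state and den_state filled with zeros and max_state filled with -inf, iterating this step over the keys, values and receptance of a sequence reproduces the attention output token by token.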
