FalconMamba

Overview

The FalconMamba model was proposed by TII UAE (Technology Innovation Institute) in their release.

The abstract from the paper is the following:

We present FalconMamba, a new base large language model based on the novel Mamba architecture. FalconMamba is trained on 5.8 trillion tokens with carefully selected data mixtures. As a pure Mamba-based model, FalconMamba surpasses leading open-weight models based on Transformers, such as Mistral 7B, Llama3 8B, and Falcon2 11B. It is on par with Gemma 7B and outperforms models with different architecture designs, such as RecurrentGemma 9B. Currently, FalconMamba is the best-performing Mamba model in the literature at this scale, surpassing both existing Mamba and hybrid Mamba-Transformer models. Due to its architecture, FalconMamba is significantly faster at inference and requires substantially less memory for long sequence generation. Despite recent studies suggesting that hybrid Mamba-Transformer models outperform pure architecture designs, we argue and demonstrate that the pure Mamba design can achieve similar, even superior results compared to the hybrid design. We make the weights of our implementation of FalconMamba publicly available under a permissive license.

Tips:

The model was trained on approximately 6T tokens consisting of a mixture of many data sources, such as RefinedWeb, Cosmopedia, and math data.

For more details about the training procedure and the architecture, have a look at the technical paper of FalconMamba (coming soon).

Usage

Below we demonstrate how to use the model:

from transformers import FalconMambaForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b")
model = FalconMambaForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b")

# Tokenize the prompt and generate a short completion
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))

The architecture is also compatible with torch.compile for faster generation:

from transformers import FalconMambaForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b")
model = FalconMambaForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b", torch_dtype=torch.bfloat16).to(0)
model = torch.compile(model)

# Inputs must be on the same device as the model
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"].to(0)

out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
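
Note that the first call to the compiled model triggers compilation and is therefore slow; subsequent generations reuse the compiled graph and run faster.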

If you have access to a GPU that is compatible with bitsandbytes, you can also quantize the model in 4-bit precision:

from transformers import FalconMambaForCausalLM, AutoTokenizer, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b")
# Load the weights in 4-bit precision via bitsandbytes
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model = FalconMambaForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b", quantization_config=quantization_config)

input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
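
To check the effect of quantization you can, for example, inspect the model's memory footprint (get_memory_footprint() is a generic PreTrainedModel utility):

# Reported size of the loaded weights, in GB
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")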

You can also play with the instruction fine-tuned model:

from transformers import FalconMambaForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = FalconMambaForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
# apply_chat_template returns the token ids directly when return_tensors="pt" is set
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))

FalconMambaConfig

class transformers.FalconMambaConfig

( vocab_size = 50280 hidden_size = 768 state_size = 16 num_hidden_layers = 32 layer_norm_epsilon = 1e-05 pad_token_id = 0 bos_token_id = 0 eos_token_id = 0 expand = 2 conv_kernel = 4 use_bias = False use_conv_bias = True hidden_act = 'silu' initializer_range = 0.1 residual_in_fp32 = True time_step_rank = 'auto' time_step_scale = 1.0 time_step_min = 0.001 time_step_max = 0.1 time_step_init_scheme = 'random' time_step_floor = 0.0001 rescale_prenorm_residual = False use_cache = True use_mambapy = False mixer_rms_eps = 1e-06 **kwargs )

Parameters

  • vocab_size (int, optional, defaults to 50280) — Vocabulary size of the FALCON_MAMBA model. Defines the number of different tokens that can be represented by the input_ids passed when calling FalconMambaModel.
  • hidden_size (int, optional, defaults to 768) — Dimensionality of the embeddings and hidden states.
  • state_size (int, optional, defaults to 16) — Shape of the state space latents.
  • num_hidden_layers (int, optional, defaults to 32) — Number of hidden layers in the model.
  • layer_norm_epsilon (float, optional, defaults to 1e-05) — The epsilon to use in the layer normalization layers.
  • pad_token_id (int, optional, defaults to 0) — Padding token id.
  • bos_token_id (int, optional, defaults to 0) — The id of the beginning of sentence token in the vocabulary.
  • eos_token_id (int, optional, defaults to 0) — The id of the end of sentence token in the vocabulary.
  • expand (int, optional, defaults to 2) — Expanding factor used to determine the intermediate size.
  • conv_kernel (int, optional, defaults to 4) — Size of the convolution kernel.
  • use_bias (bool, optional, defaults to False) — Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block.
  • use_conv_bias (bool, optional, defaults to True) — Whether or not to use bias in the convolution layer of the mixer block.
  • hidden_act (str, optional, defaults to "silu") — The non-linear activation function (function or string) in the decoder.
  • initializer_range (float, optional, defaults to 0.1) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • residual_in_fp32 (bool, optional, defaults to True) — Whether or not residuals should be in float32. If set to False, residuals will keep the same dtype as the rest of the model.
  • time_step_rank (Union[int,str], optional, defaults to "auto") — Rank of the discretization projection matrix. "auto" means that it will default to math.ceil(self.hidden_size / 16) (see the sketch after this list).
  • time_step_scale (float, optional, defaults to 1.0) — Scale used to scale dt_proj.bias.
  • time_step_min (float, optional, defaults to 0.001) — Minimum time_step used to bound dt_proj.bias.
  • time_step_max (float, optional, defaults to 0.1) — Maximum time_step used to bound dt_proj.bias.
  • time_step_init_scheme (str, optional, defaults to "random") — Init scheme used for dt_proj.weight. Should be one of ["random","uniform"].
  • time_step_floor (float, optional, defaults to 0.0001) — Minimum clamping value of the dt_proj.bias layer initialization.
  • rescale_prenorm_residual (bool, optional, defaults to False) — Whether or not to rescale out_proj weights when initializing.
  • use_cache (bool, optional, defaults to True) — Whether or not the cache should be used.
  • use_mambapy (bool, optional, defaults to False) — Determines the fallback strategy during training if the CUDA-based official implementation of FalconMamba is not available. If True, the mamba.py implementation is used. If False, the naive and slower implementation is used. Consider switching to the naive version if memory is limited.
  • mixer_rms_eps (float, optional, defaults to 1e-06) — The RMS norm epsilon value that is used in the Mixer RMS norm for B, C and dt states.
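
As a quick illustration of how some of these values interact, the snippet below is a sketch: the derived attribute names intermediate_size and time_step_rank follow the Mamba-style configuration and should be treated as assumptions here.

import math
from transformers import FalconMambaConfig

config = FalconMambaConfig(hidden_size=1024, expand=2)

# "auto" resolves to math.ceil(hidden_size / 16), as documented above
print(config.time_step_rank)               # 64
print(math.ceil(config.hidden_size / 16))  # 64

# The expanding factor determines the intermediate size
print(config.intermediate_size)            # 2048 = expand * hidden_size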

This is the configuration class to store the configuration of a FalconMambaModel. It is used to instantiate a FALCON_MAMBA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FALCON_MAMBA tiiuae/falcon-mamba-7b architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import FalconMambaConfig, FalconMambaModel

>>> # Initializing a FalconMamba configuration
>>> configuration = FalconMambaConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = FalconMambaModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

FalconMambaModel

class transformers.FalconMambaModel

( config )

Parameters

  • config (FalconMambaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare FALCONMAMBA Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: Optional = None inputs_embeds: Optional = None cache_params: Optional = None use_cache: Optional = None output_hidden_states: Optional = None return_dict: Optional = None cache_position: Optional = None attention_mask: Optional = None ) → transformers.models.falcon_mamba.modeling_falcon_mamba.FalconMambaOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — Indices of input sequence tokens in the vocabulary.

    If cache_params.seqlen_offset > 0, only input_ids that do not have their past calculated should be passed as input_ids.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • cache_params (MambaCache, optional) — If passed along, the model uses the previous state in all the blocks, which will produce the output for the provided input_ids as if the model had state_input_ids + input_ids as context.
  • use_cache (bool, optional) — If set to True, the cache_params is returned and can be used to quickly generate the next logits.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.models.falcon_mamba.modeling_falcon_mamba.FalconMambaOutput or tuple(torch.FloatTensor)

A transformers.models.falcon_mamba.modeling_falcon_mamba.FalconMambaOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FalconMambaConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • cache_params (MambaCache) — The state of the model at the last time step. Can be used in a forward method with the next input_ids to avoid providing the old input_ids.

    Includes both the state space model state matrices after the selective scan and the convolutional states.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

The FalconMambaModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, FalconMambaModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b")
>>> model = FalconMambaModel.from_pretrained("tiiuae/falcon-mamba-7b")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
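
Since the forward method can also return per-layer states, a minimal sketch (reusing the model and inputs from the example above) requests them via output_hidden_states:

>>> outputs = model(**inputs, output_hidden_states=True)
>>> len(outputs.hidden_states)  # embeddings output + one entry per layer
>>> outputs.hidden_states[-1].shape  # (batch_size, sequence_length, hidden_size)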

FalconMambaForCausalLM

class transformers.FalconMambaForCausalLM

( config )

Parameters

  • config (FalconMambaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The FALCONMAMBA Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: Optional = None attention_mask: Optional = None inputs_embeds: Optional = None cache_params: Optional = None labels: Optional = None output_hidden_states: Optional = None return_dict: Optional = None use_cache: Optional = None cache_position: Optional = None **kwargs ) → transformers.models.falcon_mamba.modeling_falcon_mamba.FalconMambaCausalLMOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — Indices of input sequence tokens in the vocabulary.

    If cache_params.seqlen_offset > 0, only input_ids that do not have their past calculated should be passed as input_ids.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • cache_params (MambaCache, optional) — If passed along, the model uses the previous state in all the blocks, which will produce the output for the provided input_ids as if the model had state_input_ids + input_ids as context.
  • use_cache (bool, optional) — If set to True, the cache_params is returned and can be used to quickly generate the next logits.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size] (see the masking sketch after this list).
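
As a hedged illustration of the -100 convention (the pattern is generic next-token-loss masking, not FalconMamba-specific; model and inputs are assumed to exist as in the example further below):

# Copy the input ids and mask out padding positions so they are
# ignored by the language-modeling loss
labels = inputs["input_ids"].clone()
labels[inputs["attention_mask"] == 0] = -100
outputs = model(**inputs, labels=labels)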

Returns

transformers.models.falcon_mamba.modeling_falcon_mamba.FalconMambaCausalLMOutput or tuple(torch.FloatTensor)

A transformers.models.falcon_mamba.modeling_falcon_mamba.FalconMambaCausalLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FalconMambaConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • cache_params (MambaCache) — The state of the model at the last time step. Can be used in a forward method with the next input_ids to avoid providing the old input_ids.

    Includes both the state space model state matrices after the selective scan and the convolutional states.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

The FalconMambaForCausalLM forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> import torch
>>> from transformers import AutoTokenizer, FalconMambaForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b")
>>> model = FalconMambaForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
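
Because labels produces the next-token prediction loss, a minimal training step can be sketched as follows (the optimizer choice and learning rate are illustrative assumptions, not a recommended recipe):

>>> optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # illustrative hyperparameters
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> outputs.loss.backward()  # backpropagate the language-modeling loss
>>> optimizer.step()
>>> optimizer.zero_grad()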