---
datasets:
- tiiuae/falcon-refinedweb
- HuggingFaceFW/fineweb-edu
language:
- en
license:
- other
license_name: falcon-mamba-7b-license
license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html
base_model: tiiuae/falcon-mamba-7b
---

<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/>

**Make sure to install `bitsandbytes` and have a GPU compatible with `bitsandbytes` to run this model**
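For reference, a minimal environment setup might look like the following (package names as published on PyPI; the exact versions you need depend on your CUDA setup):

```bash
pip install -U transformers accelerate bitsandbytes
```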


#  Table of Contents

0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)


# TL;DR

# Model Details

## Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Architecture:** Mamba
- **Language(s) (NLP):** Mainly English
- **License:** TII Falcon-Mamba License 2.0

### Model Source

- **Paper:** [Falcon Mamba: The First Competitive Attention-free 7B Language Model](https://arxiv.org/abs/2410.05355)

# Usage

Below are some example scripts showing how to use the model with the `transformers` library (make sure to install the latest version of `transformers`, or build it from source):

## Using the PyTorch model

This checkpoint can only run on a GPU device with `bitsandbytes` installed. See below for more details on how to load it.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-4bit")
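# the 4-bit quantization config stored in this checkpoint is applied automatically at load time
# (requires a bitsandbytes-compatible GPU)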
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-4bit")

input_text = "Question: How many hours in one day? Answer: "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>
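The snippet above uses the default generation settings. If you want to control device placement and the generation length explicitly, a possible variant is sketched below (`device_map="auto"` and `max_new_tokens` are standard `transformers` arguments; the value of 30 tokens is just an example):

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-4bit")
# device_map="auto" lets accelerate place the quantized weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-4bit", device_map="auto")

input_text = "Question: How many hours in one day? Answer: "
# move the input ids to the same device as the model
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(model.device)

# cap the number of newly generated tokens instead of relying on the default
outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>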

You can also dequantize the model with the `model.dequantize()` method:


<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-4bit")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-4bit")
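# convert the 4-bit weights back into a dequantized, higher-precision model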
model = model.dequantize()

input_text = "Question: How many hours in one day? Answer: "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>
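Alternatively, if you prefer to start from the full-precision [tiiuae/falcon-mamba-7b](https://huggingface.co/tiiuae/falcon-mamba-7b) checkpoint and quantize it yourself at load time, a sketch using `BitsAndBytesConfig` is shown below (the NF4 settings are illustrative and not necessarily the ones used to produce this repository):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# example 4-bit quantization settings; adjust to your needs
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-mamba-7b",
    quantization_config=quantization_config,
    device_map="auto",
)
```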


# Training Details

## Training Data

Falcon-Mamba was trained on ~5,500 GT (gigatokens) of data, mainly coming from [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large-volume web-only dataset that has been filtered and deduplicated.
Similar to the other models of the [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite, Falcon-Mamba was trained with a multi-stage strategy that increased the training context length from 2,048 up to 8,192 tokens.
Note that at inference time the context length is not a limiting factor, as the Mamba architecture has no inherent limit on long-range dependencies.
During the last training stage, a small portion of high-quality curated data was used to further enhance performance.

Overall, the data sources included RefinedWeb-English, high-quality technical data, code data, and conversational data extracted from public sources.
In particular, we used samples coming from [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7B)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer.

## Training Procedure
Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO.

#### Training Hyperparameters

| **Hyperparameter** | **Value**  | **Comment**                               |
|--------------------|------------|-------------------------------------------|
| Precision          | `bfloat16` |                                           |
| Optimizer          | AdamW      |                                           |
| Max learning rate  | 6.4e-4     | Following a WSD (warmup-stable-decay) learning rate schedule |
| Weight decay       | 1e-1       |                                           |
| Batch size         | 2048       |                                           |


The model was trained with the AdamW optimizer and a WSD (warmup-stable-decay) learning rate schedule, with the batch size ramped up from \\(b_{\mathrm{min}}=128\\) to \\(b_{\mathrm{max}}=2048\\) during the first 50 GT of training.
In the stable phase we used a maximal learning rate of \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\), and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\frac{\eta_{\mathrm{max}}}{256}\\) with an exponential schedule over 500 GT.
In addition, we applied *BatchScaling* during the ramp-up, rescaling the learning rate \\(\eta\\) so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\frac{\eta}{\sqrt{b}}\\) is kept constant.
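As an illustration, keeping \\(T_{\mathrm{noise}}\\) constant means the learning rate scales with the square root of the batch size during the ramp-up. A toy sketch of this rule (variable names are ours, not taken from the training code):

```python
def batch_scaled_lr(batch_size: int, eta_max: float = 6.4e-4, b_max: int = 2048) -> float:
    """Rescale the learning rate so that the Adam noise temperature eta / sqrt(b) stays constant."""
    return eta_max * (batch_size / b_max) ** 0.5

# during the ramp-up from b_min=128 to b_max=2048, the learning rate grows accordingly
for b in (128, 256, 512, 1024, 2048):
    print(b, batch_scaled_lr(b))
```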

#### Speeds, Sizes, Times

The model training took roughly two months. 

# Evaluation

## Benchmarks

We evaluate our model on all benchmarks of the second version of the Open LLM Leaderboard using the `lm-evaluation-harness` package, and on the benchmarks of the first version using `lighteval`.


| `model name`              |`IFEval`| `BBH` |`MATH LvL5`| `GPQA`| `MUSR`|`MMLU-PRO`|`Average`| 
|:--------------------------|:------:|:-----:|:---------:|:-----:|:-----:|:--------:|:-------:|
| ***Pure SSM models***     |        |       |           |       |       |          |         |
| `Falcon-Mamba-7B`         | 33.36  | 19.88 |    3.63   | 8.05  | 10.86 | 14.47    |**15.04**|
| `TRI-ML/mamba-7b-rw`      | 22.46  | 6.71  | 0.45      | 1.12  | 5.51  | 1.69     | 6.25    |
|***Hybrid SSM-attention models***|        |       |           |       |       |          |         |
| `Zamba-7B-v1`             | 24.06  | 21.12 | 3.32      | 3.03  | 7.74  | 16.02    | 12.55   |
|`recurrentgemma-9b`        | 30.76  | 14.80 | 4.83      | 4.70  | 6.60  | 17.88    |  13.20  |
|***Transformer models***   |        |       |           |       |       |          |         |
| `Falcon2-11B`             | 32.61  | 21.94 |    2.34   | 2.80  | 7.53  | 15.44    |  13.78  |
| `Meta-Llama-3-8B`         | 14.55  | 24.50 |    3.25   | 7.38  | 6.24  | 24.55    |  13.41  |
| `gemma-7B`                | 26.59  | 21.12 |    6.42   | 4.92  | 10.98 | 21.64    |**15.28**|
| `Mistral-7B-v0.1`         | 23.86  | 22.02 |    2.49   | 5.59  | 10.68 | 22.36    |  14.50  |
| `Mistral-Nemo-Base`       | 16.83  | 29.37 |    4.98   | 5.82  | 6.52  | 27.46    |  15.08  |



| `model name`                 |`ARC`|`HellaSwag`   |`MMLU` |`Winogrande`|`TruthfulQA`|`GSM8K`|`Average`         | 
|:-----------------------------|:------:|:---------:|:-----:|:----------:|:----------:|:-----:|:----------------:|
| ***Pure SSM models***        |        |           |       |            |            |       |                  |
| `Falcon-Mamba-7B`            |62.03   |   80.82   | 62.11 |   73.64    |   53.42    | 52.54 |  **64.09**       |
| `TRI-ML/mamba-7b-rw`         | 46.48  | 80.24     | 57.72 | 76.40      | -          | 4.70  | -                |
|***Hybrid SSM-attention models***|     |           |       |            |            |       |                  |
| `recurrentgemma-9b`          |52.00   |   80.40   | 60.50 |   73.60    |   38.60    | 42.60 |  57.95           |
| `Zyphra/Zamba-7B-v1`         | 46.48  | 80.24     | 57.72 | 76.40      | -          | 30.78 | -                |
|***Transformer models***      |        |           |       |            |            |       |                  |
| `Falcon2-11B`                | 59.73  | 82.91     | 58.37 | 78.30      | 52.56      | 53.83 | **64.28**        |
| `Meta-Llama-3-8B`            | 60.24  | 82.23     | 66.70 | 78.45      | 42.93      | 45.19 | 62.62            |
| `gemma-7B`                   | 61.09  |   82.20   | 64.56 |   79.01    |   44.79    | 50.87 |  63.75           |
| `Mistral-7B-v0.1`            | 59.98  | 83.31     | 64.16 | 78.37      | 42.15      | 37.83 | 60.97            |

## Throughput

This model can achieve throughput and performance comparable to other transformer-based models that use optimized kernels such as Flash Attention 2. Make sure to install the optimized Mamba kernels with the following command:

```bash
pip install "causal-conv1d>=1.4.0" mamba-ssm
```

Refer to our [FalconMamba blogpost](https://huggingface.co/blog/falconmamba) for more details about performance evaluation.



# Technical Specifications 

## Model Architecture and Objective

Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The model is based on the Mamba architecture ([Gu et al., 2023](https://arxiv.org/abs/2312.00752)).

| **Hyperparameter** | **Value** | **Comment**                            |
|--------------------|-----------|----------------------------------------|
| Layers             | 64        | Number of layers                       |
| `d_model`          | 4096      | Hidden dimension                       |
| `d_state`          | 16        | The SSM state dimension                |
| Vocabulary         | 65024     | Vocabulary Size                        |
| Sequence length    | 8192      | During stage 4 and the LR decay stage  |
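These values can be cross-checked against the configuration shipped with the checkpoint. A quick sketch (the attribute names follow the Mamba-style config in `transformers` and may differ slightly between library versions):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tiiuae/falcon-mamba-7b-4bit")
# number of layers, hidden dimension, SSM state dimension, and vocabulary size
print(config.num_hidden_layers, config.hidden_size, config.state_size, config.vocab_size)
```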

## Compute Infrastructure

### Hardware

Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs across 32 p5 instances.

### Software

Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, as well as high-performance Triton kernels.

# Citation

You can cite this work using the following BibTeX entry:
```
@misc{zuo2024falconmambacompetitiveattentionfree,
      title={Falcon Mamba: The First Competitive Attention-free 7B Language Model}, 
      author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid},
      year={2024},
      eprint={2410.05355},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.05355}, 
}
```