Files changed (1)
README.md (+119 -75)
@@ -1,79 +1,137 @@
  ---
  language:
  - en
- - fr
  - es
  - pt
  tags:
  - falcon3
- license: other
- license_name: falcon-llm-license
  license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
- library_name: transformers
  ---
 
- <div align="center">
- <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
- </div>
 
- # Falcon3-1B-Base
 
- **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
 
- This repository contains the **Falcon3-1B-Base**. It achieves strong results on reasoning, language understanding, instruction following, code and mathematics tasks.
- Falcon3-1B-Base supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 4K.
- It was pruned in terms of depth, width, number of heads, and embedding channels from a larger 3B Falcon model, and was efficiently trained on only 80 GT using a knowledge distillation objective.
 
- ⚠️ **This is a raw, pretrained model, which should be further finetuned using SFT, RLHF, continued pretraining, etc. for most use cases.**
 
- ## Model Details
- - Architecture
-   - Transformer-based causal decoder-only architecture
-   - 18 decoder blocks
-   - Grouped Query Attention (GQA) for faster inference: 8 query heads and 4 key-value heads
-   - Wider head dimension: 256
-   - High RoPE value to support long context understanding: 1000042
-   - Uses SwiGLU and RMSNorm
-   - 4K context length
-   - 131K vocab size
-   - Pruned and healed using larger Falcon models (3B and 7B respectively) on only 80 Gigatokens of datasets comprising of web, code, STEM, high quality and multilingual data using 256 H100 GPU chips
- - Supports EN, FR, ES, PT
- - Developed by [Technology Innovation Institute](https://www.tii.ae)
- - License: TII Falcon-LLM License 2.0
- - Model Release Date: December 2024
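Two figures in the architecture list above lend themselves to a quick illustration: the GQA layout (8 query heads sharing 4 key-value heads) and the unusually high RoPE base (1000042). The sketch below is illustrative only, not the model's actual implementation; all names are hypothetical.

```python
# Illustrative sketch (not the model's code) of two architecture numbers above.

# GQA: 8 query heads share 4 key-value heads, so each KV head serves
# 8 // 4 = 2 consecutive query heads.
NUM_QUERY_HEADS = 8
NUM_KV_HEADS = 4
GROUP_SIZE = NUM_QUERY_HEADS // NUM_KV_HEADS  # 2 query heads per KV head

def kv_head_for_query(q_head: int) -> int:
    """Return the KV head index attended to by a given query head."""
    return q_head // GROUP_SIZE

# RoPE: inverse frequencies theta_i = base^(-2i/d) for head dimension d.
# A larger base (here 1000042) slows the rotation of high-index dimension
# pairs, which is what supports longer-context position encoding.
ROPE_BASE = 1000042
HEAD_DIM = 256

def rope_inv_freq(i: int) -> float:
    """Inverse frequency for rotary dimension pair i (0 <= i < HEAD_DIM // 2)."""
    return ROPE_BASE ** (-2 * i / HEAD_DIM)

print([kv_head_for_query(q) for q in range(NUM_QUERY_HEADS)])
print(rope_inv_freq(0), rope_inv_freq(HEAD_DIM // 2 - 1))
```

Note how the inverse frequencies decay monotonically from 1.0 toward roughly 1/base across the dimension pairs; the GQA grouping is what shrinks the KV cache relative to full multi-head attention.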
 
 
 
- ## Getting started
 
  <details>
  <summary> Click to expand </summary>
 
  ```python
  import torch
- from transformers import pipeline
- 
- pipe = pipeline(
-     "text-generation",
-     model="tiiuae/Falcon3-1B-Base",
-     torch_dtype=torch.bfloat16,
-     device_map="auto"
- )
- response = pipe("Question: How many hours in one day? Answer: ")
- print(response[0]['generated_text'])
  ```
 
  </details>
 
- <br>
 
- ## Benchmarks
- We report in the following table our internal pipeline benchmarks.
- - We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
- - We report **raw scores**.
- - We use same batch-size across all models.
 
  <table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
  <colgroup>
  <col style="width: 10%;">
@@ -81,6 +139,7 @@ We report in the following table our internal pipeline benchmarks.
  <col style="width: 7%;">
  <col style="width: 7%;">
  <col style="width: 7%;">
  <col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
  </colgroup>
  <thead>
@@ -98,21 +157,21 @@ We report in the following table our internal pipeline benchmarks.
  <td rowspan="3">General</td>
  <td>MMLU (5-shot)</td>
  <td>31.1</td>
- <td><b>61.0</b></td>
  <td>50.1</td>
  <td>42.5</td>
  </tr>
  <tr>
  <td>MMLU-PRO (5-shot)</td>
  <td>11.7</td>
- <td><b>28.4</b></td>
  <td>21.3</td>
  <td>16.1</td>
  </tr>
  <tr>
  <td>IFEval</td>
  <td>14.8</td>
- <td><b>26.0</b></td>
  <td>24.2</td>
  <td>25.2</td>
  </tr>
@@ -120,14 +179,14 @@ We report in the following table our internal pipeline benchmarks.
  <td rowspan="2">Math</td>
  <td>GSM8K (5-shot)</td>
  <td>6.6</td>
- <td><b>62.2</b></td>
  <td>31.0</td>
  <td>34.3</td>
  </tr>
  <tr>
  <td>MATH Lvl-5 (4-shot)</td>
  <td>0.2</td>
- <td><b>6.7</b></td>
  <td>1.4</td>
  <td>2.2</td>
  </tr>
@@ -135,7 +194,7 @@ We report in the following table our internal pipeline benchmarks.
  <td rowspan="4">Reasoning</td>
  <td>Arc Challenge (25-shot)</td>
  <td>40.2</td>
- <td><b>54.8</b></td>
  <td>54.1</td>
  <td>48.1</td>
  </tr>
@@ -143,7 +202,7 @@ We report in the following table our internal pipeline benchmarks.
  <td>GPQA (0-shot)</td>
  <td>24.2</td>
  <td>28.1</td>
- <td><b>28.9</b></td>
  <td>28.1</td>
  </tr>
  <tr>
@@ -151,12 +210,12 @@ We report in the following table our internal pipeline benchmarks.
  <td>34.5</td>
  <td>35.5</td>
  <td>34.7</td>
- <td><b>41.9</b></td>
  </tr>
  <tr>
  <td>BBH (3-shot)</td>
  <td>31.2</td>
- <td><b>41.1</b></td>
  <td>34.2</td>
  <td>36.0</td>
  </tr>
@@ -165,13 +224,13 @@ We report in the following table our internal pipeline benchmarks.
  <td>PIQA (0-shot)</td>
  <td>74.5</td>
  <td>76.0</td>
- <td><b>77.5</b></td>
  <td>74.5</td>
  </tr>
  <tr>
  <td>SciQ (0-shot)</td>
  <td>88.5</td>
- <td><b>93.1</b></td>
  <td>90.8</td>
  <td>91.1</td>
  </tr>
@@ -179,35 +238,20 @@ We report in the following table our internal pipeline benchmarks.
  <td>Winogrande (0-shot)</td>
  <td>60.4</td>
  <td>63.0</td>
- <td><b>66.1</b></td>
  <td>61.2</td>
  </tr>
  <tr>
  <td>OpenbookQA (0-shot)</td>
  <td>37.4</td>
  <td>40.4</td>
- <td><b>44.0</b></td>
  <td>41.0</td>
  </tr>
  </tbody>
  </table>
 
- ## Useful links
- - View our [release blogpost](https://huggingface.co/blog/falcon3).
- - Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or to interact with our researchers and developers.
 
- ## Technical Report
- Coming soon....
 
- ## Citation
- If the Falcon3 family of models were helpful to your work, feel free to give us a cite.
- 
- ```
- @misc{Falcon3,
-   title = {The Falcon 3 Family of Open Models},
-   url = {https://huggingface.co/blog/falcon3},
-   author = {Falcon-LLM Team},
-   month = {December},
-   year = {2024}
- }
- ```
 
  ---
  language:
  - en
  - es
  - pt
  tags:
  - falcon3
+ license: other
+ license_name: falcon-llm-license
  license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
  ---
 
+ # Table of Contents
 
+ 0. [TL;DR](#tldr)
+ 1. [Model Details](#model-details)
+ 2. [Usage](#usage)
+ 3. [Training Details](#training-details)
+ 4. [Evaluation](#evaluation)
+ 5. [Citation](#citation)
 
+ # TL;DR
 
+ # Model Details
 
+ ⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.**
+ 
+ ## Model Description
+ 
+ - **Developed by:** [Technology Innovation Institute](https://www.tii.ae)
+ - **Model type:** Causal decoder-only
+ - **Architecture:** Transformer-based
+ - **Language(s) (NLP):** Mainly English
+ - **License:** TII Falcon-LLM License 2.0
+ 
+ <br>
+ 
+ # Usage
+ 
+ Below are some example scripts showing how to use the model with the `transformers` library (make sure you have the latest version of `transformers`, or one built from source):
+ 
+ ## Using the PyTorch model with 🤗 transformers
+ 
+ ### Running the model on a CPU
+ 
+ <details>
+ <summary> Click to expand </summary>
+ 
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ 
+ tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-7B-Base")
+ model = AutoModelForCausalLM.from_pretrained("tiiuae/Falcon3-7B-Base")
+ 
+ input_text = "Question: How many hours in one day? Answer: "
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids
+ 
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+ 
+ </details>
+ 
+ ### Running the model on a GPU
+ 
+ <details>
+ <summary> Click to expand </summary>
+ 
+ ```python
+ # pip install accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ 
+ tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-7B-Base")
+ model = AutoModelForCausalLM.from_pretrained("tiiuae/Falcon3-7B-Base", device_map="auto")
+ 
+ input_text = "Question: How many hours in one day? Answer: "
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+ 
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+ 
+ </details>
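As a rough aid to choosing between the CPU and GPU paths above, here is a back-of-envelope memory estimate for loading the weights in `bfloat16`. It assumes a round 7B parameter count, which is only an approximation of the actual model size:

```python
# Rough memory needed to hold the weights of a ~7B-parameter model in
# bfloat16 (2 bytes per parameter). The parameter count is an assumption;
# the real model differs slightly. Activations and KV cache come on top.
PARAMS = 7_000_000_000
BYTES_PER_PARAM = 2  # bfloat16

weight_bytes = PARAMS * BYTES_PER_PARAM
print(f"~{weight_bytes / 1e9:.0f} GB for the weights alone")  # ~14 GB
```

If that exceeds a single device's memory, `device_map="auto"` as in the example above can shard the weights across the available devices.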
+ 
+ ### Running the model on a GPU using `torch.compile`
 
  <details>
  <summary> Click to expand </summary>
 
  ```python
  import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ 
+ tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-7B-Base")
+ model = AutoModelForCausalLM.from_pretrained("tiiuae/Falcon3-7B-Base", torch_dtype=torch.bfloat16).to(0)
+ 
+ model = torch.compile(model)
+ 
+ input_text = "Question: How many hours in one day? Answer: "
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+ 
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
  ```
 
  </details>
 
+ # Training Details
+ 
+ ## Training Data
+ 
+ Falcon3-7B is trained on 15 Gigatokens of datasets comprising web, code, STEM, high-quality, and multilingual data.
+ 
+ ## Training Procedure
+ 
+ Falcon3-7B is trained on 256 H100 nodes (world size 2048).
+ 
+ ### Training Hyperparameters
+ 
+ | **Hyperparameter** | **Value** | **Comment** |
+ |--------------------|------------|----------------------------------------------------|
+ | Precision | `bfloat16` | |
+ | Optimizer | AdamW | |
+ | Max learning rate | 6e-4 | Following a WSD (warmup-stable-decay) LR scheduler |
+ | Weight decay | 1e-1 | |
+ | z-loss | 1e-4 | |
+ | Batch size | Variable | Gradually increased during training |
+ 
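The WSD (warmup-stable-decay) scheduler named in the table can be sketched as a plain function of the training step. The phase lengths and the linear decay shape below are illustrative assumptions, not the actual training configuration:

```python
# Sketch of a WSD (warmup-stable-decay) learning-rate schedule: linear warmup
# to the max LR, a long stable plateau, then a decay to zero. Phase lengths
# and the linear decay are illustrative, not the real training setup.
MAX_LR = 6e-4  # max learning rate from the table above

def wsd_lr(step: int, total: int, warmup: int, decay: int) -> float:
    """Learning rate at `step` of `total` steps, with warmup/decay phases."""
    if step < warmup:              # warmup: ramp 0 -> MAX_LR
        return MAX_LR * step / warmup
    if step < total - decay:       # stable: hold MAX_LR
        return MAX_LR
    return MAX_LR * (total - step) / decay  # decay: MAX_LR -> 0

schedule = [wsd_lr(s, total=1000, warmup=100, decay=200) for s in range(1000)]
```

The long stable plateau is what distinguishes WSD from cosine schedules; the variable batch size in the table would be handled analogously by ramping the data loader's batch size over the same step counter.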
+ # Evaluation
  <table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
  <colgroup>
  <col style="width: 10%;">
 
  <col style="width: 7%;">
  <col style="width: 7%;">
  <col style="width: 7%;">
+ <col style="width: 7%;">
  <col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
  </colgroup>
  <thead>
 
  <td rowspan="3">General</td>
  <td>MMLU (5-shot)</td>
  <td>31.1</td>
+ <td>61.0</td>
  <td>50.1</td>
  <td>42.5</td>
  </tr>
  <tr>
  <td>MMLU-PRO (5-shot)</td>
  <td>11.7</td>
+ <td>28.4</td>
  <td>21.3</td>
  <td>16.1</td>
  </tr>
  <tr>
  <td>IFEval</td>
  <td>14.8</td>
+ <td>26.0</td>
  <td>24.2</td>
  <td>25.2</td>
  </tr>
 
  <td rowspan="2">Math</td>
  <td>GSM8K (5-shot)</td>
  <td>6.6</td>
+ <td>62.2</td>
  <td>31.0</td>
  <td>34.3</td>
  </tr>
  <tr>
  <td>MATH Lvl-5 (4-shot)</td>
  <td>0.2</td>
+ <td>6.7</td>
  <td>1.4</td>
  <td>2.2</td>
  </tr>
 
  <td rowspan="4">Reasoning</td>
  <td>Arc Challenge (25-shot)</td>
  <td>40.2</td>
+ <td>54.8</td>
  <td>54.1</td>
  <td>48.1</td>
  </tr>
 
  <td>GPQA (0-shot)</td>
  <td>24.2</td>
  <td>28.1</td>
+ <td>28.9</td>
  <td>28.1</td>
  </tr>
  <tr>
 
  <td>34.5</td>
  <td>35.5</td>
  <td>34.7</td>
+ <td>41.9</td>
  </tr>
  <tr>
  <td>BBH (3-shot)</td>
  <td>31.2</td>
+ <td>41.1</td>
  <td>34.2</td>
  <td>36.0</td>
  </tr>
 
  <td>PIQA (0-shot)</td>
  <td>74.5</td>
  <td>76.0</td>
+ <td>77.5</td>
  <td>74.5</td>
  </tr>
  <tr>
  <td>SciQ (0-shot)</td>
  <td>88.5</td>
+ <td>93.1</td>
  <td>90.8</td>
  <td>91.1</td>
  </tr>
 
  <td>Winogrande (0-shot)</td>
  <td>60.4</td>
  <td>63.0</td>
+ <td>66.1</td>
  <td>61.2</td>
  </tr>
  <tr>
  <td>OpenbookQA (0-shot)</td>
  <td>37.4</td>
  <td>40.4</td>
+ <td>44.0</td>
  <td>41.0</td>
  </tr>
  </tbody>
  </table>
253
 
 
 
 
254
 
 
 
255
 
256
+
257
+ # Citation