jncraton committed
Commit 42588c6
1 Parent(s): 56dd58d

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,248 @@
---
language:
- en
- fr
- es
- pt
tags:
- falcon3
base_model: tiiuae/Falcon3-7B-Base
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
library_name: transformers
---

<div align="center">
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
</div>

# Falcon3-7B-Instruct

The **Falcon3** family of open foundation models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

This repository contains **Falcon3-7B-Instruct**. It achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks.
Falcon3-7B-Instruct supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K.

## Model Details
- Architecture
  - Transformer-based causal decoder-only architecture
  - 28 decoder blocks
  - Grouped-query attention (GQA) for faster inference: 12 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE value to support long context understanding: 1000042
  - Uses SwiGLU and RMSNorm
  - 32K context length
  - 131K vocab size
- Pretrained on 14 teratokens of data comprising web, code, STEM, high-quality and multilingual data, using 1,024 H100 GPU chips
- Post-trained on 1.2 million samples of STEM, conversation, code, safety and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024

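These architecture details can be cross-checked against the configuration that ships with the checkpoint. The snippet below is a minimal sketch, not part of the original card; it assumes the checkpoint exposes the standard Hugging Face `transformers` causal-LM config fields.

```python
from transformers import AutoConfig

# Load only the configuration shipped with the checkpoint (no weights are downloaded).
config = AutoConfig.from_pretrained("tiiuae/Falcon3-7B-Instruct")

# Attribute names assume the standard transformers causal-LM config fields;
# the "expected" values come from the Model Details list above.
print("decoder blocks: ", config.num_hidden_layers)              # expected: 28
print("query heads:    ", config.num_attention_heads)            # expected: 12
print("key-value heads:", config.num_key_value_heads)            # expected: 4 (GQA)
print("head dimension: ", getattr(config, "head_dim", None))     # expected: 256
print("RoPE theta:     ", config.rope_theta)                     # expected: 1000042
print("context length: ", config.max_position_embeddings)        # expected: 32K
print("vocab size:     ", config.vocab_size)                     # expected: ~131K
```
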
## Getting started

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/Falcon3-7B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)

# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

</details>
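
For quick experiments, the same conversation can also be run through the high-level `pipeline` API. This is a short sketch that is not part of the original card; it assumes a recent `transformers` release in which text-generation pipelines accept chat messages directly.

```python
from transformers import pipeline

# Minimal sketch: chat-aware text-generation pipeline (assumes a recent transformers version).
pipe = pipeline(
    "text-generation",
    model="tiiuae/Falcon3-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII."},
    {"role": "user", "content": "How many hours in one day?"},
]

out = pipe(messages, max_new_tokens=64)
# The pipeline returns the full conversation; the last message is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```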

<br>

## Benchmarks
We report in the following table our internal pipeline benchmarks.
- We use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
- We report **raw scores** obtained by applying the chat template **without fewshot_as_multiturn** (unlike Llama3.1).
- We use the same batch size across all models.

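For illustration only, a single benchmark run under this setup could look roughly like the sketch below. It is not from the original card: the task choice, dtype, and batch size are assumptions, and the keyword arguments reflect recent lm-evaluation-harness releases, so they should be checked against the installed version.

```python
import lm_eval

# Hypothetical reproduction of one table row: chat template applied,
# fewshot_as_multiturn disabled, fixed batch size (all values illustrative).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-7B-Instruct,dtype=bfloat16",
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size=16,
    apply_chat_template=True,
    fewshot_as_multiturn=False,
)
print(results["results"])
```
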
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
  <colgroup>
    <col style="width: 10%;">
    <col style="width: 10%;">
    <col style="width: 7%;">
    <col style="width: 7%;">
    <col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
  </colgroup>
  <thead>
    <tr>
      <th>Category</th>
      <th>Benchmark</th>
      <th>Llama-3.1-8B-Instruct</th>
      <th>Qwen2.5-7B-Instruct</th>
      <th>Falcon3-7B-Instruct</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="3">General</td>
      <td>MMLU (5-shot)</td>
      <td>55.9</td>
      <td><b>72.4</b></td>
      <td>68</td>
    </tr>
    <tr>
      <td>MMLU-PRO (5-shot)</td>
      <td>21.8</td>
      <td>35.8</td>
      <td><b>40.7</b></td>
    </tr>
    <tr>
      <td>IFEval</td>
      <td><b>78.8</b></td>
      <td>74.7</td>
      <td>76.5</td>
    </tr>
    <tr>
      <td rowspan="3">Math</td>
      <td>GSM8K (5-shot)</td>
      <td>78.1</td>
      <td>77.5</td>
      <td><b>79.1</b></td>
    </tr>
    <tr>
      <td>GSM8K (8-shot, COT)</td>
      <td>79.8</td>
      <td>72.7</td>
      <td><b>80.9</b></td>
    </tr>
    <tr>
      <td>MATH Lvl-5 (4-shot)</td>
      <td>10.4</td>
      <td>26</td>
      <td><b>29.4</b></td>
    </tr>
    <tr>
      <td rowspan="5">Reasoning</td>
      <td>Arc Challenge (25-shot)</td>
      <td>46.6</td>
      <td>55.7</td>
      <td><b>65.9</b></td>
    </tr>
    <tr>
      <td>GPQA (0-shot)</td>
      <td><b>33.6</b></td>
      <td>31.9</td>
      <td>32</td>
    </tr>
    <tr>
      <td>GPQA (0-shot, COT)</td>
      <td>9.6</td>
      <td>13.8</td>
      <td><b>22.3</b></td>
    </tr>
    <tr>
      <td>MUSR (0-shot)</td>
      <td>38.6</td>
      <td>40.7</td>
      <td><b>46.4</b></td>
    </tr>
    <tr>
      <td>BBH (3-shot)</td>
      <td>43.7</td>
      <td><b>53.9</b></td>
      <td>52.4</td>
    </tr>
    <tr>
      <td rowspan="4">CommonSense Understanding</td>
      <td>PIQA (0-shot)</td>
      <td><b>78.9</b></td>
      <td>73.7</td>
      <td>78.8</td>
    </tr>
    <tr>
      <td>SciQ (0-shot)</td>
      <td>80.2</td>
      <td>50.9</td>
      <td><b>94.7</b></td>
    </tr>
    <tr>
      <td>Winogrande (0-shot)</td>
      <td>-</td>
      <td>-</td>
      <td>70.4</td>
    </tr>
    <tr>
      <td>OpenbookQA (0-shot)</td>
      <td><b>46.2</b></td>
      <td>42.4</td>
      <td>45.8</td>
    </tr>
    <tr>
      <td rowspan="2">Instruction following</td>
      <td>MT-Bench (avg)</td>
      <td>7.9</td>
      <td><b>8.5</b></td>
      <td>8.4</td>
    </tr>
    <tr>
      <td>Alpaca (WC)</td>
      <td>26.6</td>
      <td><b>31.5</b></td>
      <td>26.1</td>
    </tr>
    <tr>
      <td>Tool use</td>
      <td>BFCL AST (avg)</td>
      <td>90.6</td>
      <td><b>91.4</b></td>
      <td>72.3</td>
    </tr>
  </tbody>
</table>


## Technical Report
Coming soon.

## Citation
If the Falcon3 family of models was helpful to your work, feel free to cite it.

```
@misc{Falcon3,
    title = {The Falcon 3 family of Open Models},
    author = {TII Team},
    month = {December},
    year = {2024}
}
```
config.json ADDED
@@ -0,0 +1,10 @@
{
  "bos_token": null,
  "eos_token": "<|endoftext|>",
  "layer_norm_epsilon": 1e-06,
  "multi_query_attention": true,
  "quantization_bits": null,
  "quantization_group_size": null,
  "quantization_type": 0,
  "unk_token": ""
}
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 11,
  "eos_token_id": 11,
  "transformers_version": "4.46.1"
}
model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76301f14c56169c6ee7bc96f49c2270ddc925af4dd2804ba156668852617dd3d
size 7463223239
special_tokens_map.json ADDED
@@ -0,0 +1,41 @@
{
  "additional_special_tokens": [
    ">>TITLE<<",
    ">>ABSTRACT<<",
    ">>INTRODUCTION<<",
    ">>SUMMARY<<",
    ">>COMMENT<<",
    ">>ANSWER<<",
    ">>QUESTION<<",
    ">>DOMAIN<<",
    ">>EMAIL_ADDRESS<<",
    ">>IP_ADDRESS<<",
    "<|startoftext|>",
    ">>IP_ADDRESS_0<<",
    ">>IP_ADDRESS_1<<",
    ">>IP_ADDRESS_2<<",
    ">>IP_ADDRESS_3<<",
    ">>IP_ADDRESS_4<<",
    ">>IP_ADDRESS_5<<",
    ">>IP_ADDRESS_6<<",
    ">>IP_ADDRESS_7<<",
    ">>IP_ADDRESS_8<<",
    ">>IP_ADDRESS_9<<",
    ">>PASSWORD<<",
    ">>KEY<<"
  ],
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
vocabulary.json ADDED
The diff for this file is too large to render. See raw diff