Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


falcon-11B - GGUF
- Model creator: https://huggingface.co/tiiuae/
- Original model: https://huggingface.co/tiiuae/falcon-11B/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [falcon-11B.Q2_K.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q2_K.gguf) | Q2_K | 3.96GB |
| [falcon-11B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.IQ3_XS.gguf) | IQ3_XS | 4.47GB |
| [falcon-11B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.IQ3_S.gguf) | IQ3_S | 4.6GB |
| [falcon-11B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q3_K_S.gguf) | Q3_K_S | 4.6GB |
| [falcon-11B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.IQ3_M.gguf) | IQ3_M | 4.85GB |
| [falcon-11B.Q3_K.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q3_K.gguf) | Q3_K | 5.06GB |
| [falcon-11B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q3_K_M.gguf) | Q3_K_M | 5.06GB |
| [falcon-11B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q3_K_L.gguf) | Q3_K_L | 5.41GB |
| [falcon-11B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.IQ4_XS.gguf) | IQ4_XS | 5.7GB |
| [falcon-11B.Q4_0.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q4_0.gguf) | Q4_0 | 5.94GB |
| [falcon-11B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.IQ4_NL.gguf) | IQ4_NL | 6.0GB |
| [falcon-11B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q4_K_S.gguf) | Q4_K_S | 5.94GB |
| [falcon-11B.Q4_K.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q4_K.gguf) | Q4_K | 6.38GB |
| [falcon-11B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q4_K_M.gguf) | Q4_K_M | 6.38GB |
| [falcon-11B.Q4_1.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q4_1.gguf) | Q4_1 | 6.57GB |
| [falcon-11B.Q5_0.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q5_0.gguf) | Q5_0 | 7.21GB |
| [falcon-11B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q5_K_S.gguf) | Q5_K_S | 7.21GB |
| [falcon-11B.Q5_K.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q5_K.gguf) | Q5_K | 7.64GB |
| [falcon-11B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q5_K_M.gguf) | Q5_K_M | 7.64GB |
| [falcon-11B.Q5_1.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q5_1.gguf) | Q5_1 | 7.84GB |
| [falcon-11B.Q6_K.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q6_K.gguf) | Q6_K | 8.55GB |
| [falcon-11B.Q8_0.gguf](https://huggingface.co/RichardErkhov/tiiuae_-_falcon-11B-gguf/blob/main/falcon-11B.Q8_0.gguf) | Q8_0 | 10.99GB |

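The GGUF files above can be run with any llama.cpp-based runtime. Below is a minimal sketch using the `huggingface_hub` and `llama-cpp-python` packages; the chosen quant file (`falcon-11B.Q4_K_M.gguf`), the context size, and the sampling settings are illustrative assumptions, not recommendations from this repository.

```python
# Minimal sketch: download one quant from this repo and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; file choice and settings
# below are illustrative assumptions only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/tiiuae_-_falcon-11B-gguf",
    filename="falcon-11B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context window is an assumed value
output = llm("Can you explain the concepts of Quantum Computing?", max_tokens=200)
print(output["choices"][0]["text"])
```
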
Original model description:
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
- de
- es
- fr
- it
- nl
- pl
- pt
- ro
- cs
inference: false
license: unknown
---

# 🚀 Falcon2-11B

**Falcon2-11B is an 11B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. The model is made available under the [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), a permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI.**

*Paper coming soon 😊.*

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!

⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-11B"

# Build a text-generation pipeline with the Falcon tokenizer and bfloat16 weights.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)
# Sample a single completion of up to 200 tokens.
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).

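As a rough illustration of the TGI route, the sketch below queries a Text Generation Inference server that is assumed to be already running and serving this model; the local URL and the generation parameters are assumptions made for the example, not part of the original card.

```python
# Sketch: query an already-running Text Generation Inference server assumed to be
# serving tiiuae/falcon-11B. The URL below is a placeholder for your deployment.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # assumed TGI endpoint
response = client.text_generation(
    "Can you explain the concepts of Quantum Computing?",
    max_new_tokens=200,
)
print(response)
```
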
# Model Card for Falcon2-11B

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Language(s) (NLP):** English, German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish
- **License:** [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html)

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots, etc.)

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon2-11B is trained mostly on English, but also on German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon2-11B consider finetuning it for their specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-11B"

# Load the tokenizer and build a text-generation pipeline; device_map="auto"
# places the bfloat16 weights across the available GPU(s).
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon2-11B was trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. It followed a four-stage training strategy. The first three stages were focused on increasing the context length, from 2048 to 4096 and finally to 8192 tokens. The last stage aimed to further enhance performance using only high-quality data.

Overall, the data sources included RefinedWeb-English, RefinedWeb-Europe (cs, de, es, fr, it, nl, pl, pt, ro, sv), high-quality technical data, code data, and conversational data extracted from public sources.

The training stages were as follows:

| **Stage** | **Context length** | **Tokens** |
|-----------|--------------------|------------|
| Stage 1   | 2048               | 4500 B     |
| Stage 2   | 4096               | 250 B      |
| Stage 3   | 8192               | 250 B      |
| Stage 4   | 8192               | 500 B      |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer.

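As a quick, illustrative sanity check (this snippet is not part of the original card), the shared tokenizer can be loaded and inspected directly:

```python
# Illustrative check of the Falcon tokenizer shared by the 7B and 11B models.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-11B")
print(tokenizer.vocab_size)  # expected to match the 65024-token vocabulary in the architecture table
print(tokenizer("Falcon2-11B was trained on RefinedWeb.").input_ids)
```
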
### Training Procedure

Falcon2-11B was trained on 1024 A100 40GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=8, PP=1, DP=128) combined with ZeRO and Flash-Attention 2.

#### Training Hyperparameters

| **Hyperparameter** | **Value**  | **Comment**                                                                    |
|--------------------|------------|--------------------------------------------------------------------------------|
| Precision          | `bfloat16` |                                                                                |
| Optimizer          | AdamW      |                                                                                |
| Max learning rate  | 3.7e-4     | Following a linear warm-up, then cosine decay to 1.89e-5 across 4500 B tokens. |
| Weight decay       | 1e-1       |                                                                                |
| Z-loss             | 1e-4       |                                                                                |
| Batch size         | Variable   | Batch size was gradually increased during the training                         |

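To make the learning-rate schedule in the table concrete, here is a small sketch of a linear warm-up followed by cosine decay from 3.7e-4 to 1.89e-5 over 4500 B tokens; the warm-up length is an assumption made purely for illustration, since the card does not specify it.

```python
# Illustrative sketch of the schedule described above: linear warm-up to 3.7e-4,
# then cosine decay to 1.89e-5 over the 4500 B-token first stage.
# The warm-up length (10 B tokens here) is an assumption, not a documented value.
import math

MAX_LR, MIN_LR = 3.7e-4, 1.89e-5
WARMUP_TOKENS, DECAY_TOKENS = 10e9, 4500e9

def learning_rate(tokens_seen: float) -> float:
    if tokens_seen < WARMUP_TOKENS:
        return MAX_LR * tokens_seen / WARMUP_TOKENS
    progress = min((tokens_seen - WARMUP_TOKENS) / (DECAY_TOKENS - WARMUP_TOKENS), 1.0)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1.0 + math.cos(math.pi * progress))

for t in (5e9, 100e9, 2250e9, 4500e9):
    print(f"{t / 1e9:6.0f} B tokens -> lr = {learning_rate(t):.2e}")
```
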
#### Speeds, Sizes, Times

The model training took roughly two months.

## Evaluation

| English Benchmark     | **Value** |
|-----------------------|-----------|
| ARC-Challenge-25shots | 59.73     |
| HellaSwag-10shots     | 82.91     |
| MMLU-5shots           | 58.37     |
| Winogrande-5shots     | 78.30     |
| TruthfulQA-0shot      | 52.56     |
| GSM8k-5shots          | 53.83     |
| ARC-Challenge-0shot   | 50.17     |
| ARC-Easy-0shot        | 77.78     |
| Hellaswag-0shot       | 82.07     |

We thank the leaderboard team from HuggingFace for providing an official evaluation of our model on the leaderboard tasks.

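As a rough, unofficial sketch of how one of these benchmarks could be re-run locally with EleutherAI's `lm-evaluation-harness`: the `simple_evaluate` call and its arguments below are assumptions about that package's API, and the defaults may not match the leaderboard's exact settings, so scores can differ from the table above.

```python
# Unofficial sketch: re-run one benchmark with lm-evaluation-harness.
# The call and arguments are assumptions about that library's API; leaderboard
# prompting and normalization settings may differ from the defaults used here.
from lm_eval.evaluator import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/falcon-11B,dtype=bfloat16",
    tasks=["hellaswag"],
    num_fewshot=10,
)
print(results["results"]["hellaswag"])
```
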
## Technical Specifications

### Model Architecture and Objective

Falcon2-11B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention-2 ([Dao, 2023](https://arxiv.org/abs/2307.08691));
* **Decoder-block:** parallel attention/MLP.

| **Hyperparameter** | **Value** | **Comment**           |
|--------------------|-----------|-----------------------|
| Layers             | 60        |                       |
| `d_model`          | 4096      |                       |
| `head_dim`         | 128       |                       |
| Vocabulary         | 65024     |                       |
| Sequence length    | 8192      | During stages 3 and 4 |

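To make the "parallel attention/MLP" decoder block concrete, here is a stripped-down PyTorch sketch. It is an illustration only, not the actual Falcon implementation: rotary embeddings, multiquery attention, and FlashAttention-2 are omitted, the MLP width is an assumption, and the head count (32) is simply inferred from `d_model` / `head_dim` in the table above.

```python
# Illustrative parallel attention/MLP decoder block: attention and MLP both read
# the same layer-normed input, and their outputs are summed with the residual
# instead of being applied sequentially. Not the actual Falcon code; rotary
# positions, multiquery attention, and FlashAttention-2 are deliberately omitted.
import torch
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    def __init__(self, d_model: int = 4096, n_heads: int = 32, mlp_mult: int = 4):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, mlp_mult * d_model),
            nn.GELU(),
            nn.Linear(mlp_mult * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        return x + attn_out + self.mlp(h)  # residual plus the two parallel branches

block = ParallelDecoderBlock()
print(block(torch.randn(1, 8, 4096)).shape)  # torch.Size([1, 8, 4096])
```
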
### Compute Infrastructure

#### Hardware

Falcon2-11B was trained on AWS SageMaker, using on average 1024 A100 40GB GPUs in 128 p4d instances.

#### Software

Falcon2-11B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels, and FlashAttention-2. More details about the distributed training strategy can be found in [Almazrouei et al.](https://arxiv.org/abs/2311.16867).

## Citation

*Paper coming soon* 😊.

## License

Falcon2-11B is licensed under the [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), a permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI.

## Contact
falconllm@tii.ae