Commit 77359f8
Parent(s): c09dfd4

Upload tiiuae/falcon-40b-instruct ctranslate fp16 weights

Files changed:
- README.md +270 -0
- config.json +6 -0
- generation_config.json +6 -0
- model.bin +3 -0
- special_tokens_map.json +16 -0
- tokenizer.json +0 -0
- tokenizer_config.json +8 -0
- vocabulary.json +0 -0
README.md
ADDED
---
tags:
- ctranslate2
- int8
- float16

datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.

Quantized version of [tiiuae/falcon-40b-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct)
```bash
pip install "hf-hub-ctranslate2>=2.0.8" "ctranslate2>=3.16.0"
```
Converted on 2023-06-15 using
```bash
ct2-transformers-converter --model tiiuae/falcon-40b-instruct --output_dir /home/michael/tmp-ct2fast-falcon-40b-instruct --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
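
The converted folder can also be loaded with the `ctranslate2` API directly, skipping the `hf-hub-ctranslate2` helper. A minimal sketch, assuming the converter output above (or a local download of this repo) sits at `./ct2fast-falcon-40b-instruct`:
```python
import ctranslate2
import transformers

# load the converted weights; the local path is an assumption for this sketch
generator = ctranslate2.Generator(
    "./ct2fast-falcon-40b-instruct", device="cuda", compute_type="int8_float16"
)
# tokenize with the original model's tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")

prompt = "User: How are you doing? Bot:"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch(
    [tokens], max_length=64, include_prompt_in_result=False
)
print(tokenizer.decode(results[0].sequences_ids[0]))
```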

Checkpoint compatible with [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.0.8](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-falcon-40b-instruct"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
    # tokenizer=AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
)
outputs = model.generate(
    text=["def fibonacci(", "User: How are you doing? Bot:"],
    max_length=64,
    include_prompt_in_result=False
)
print(outputs)
```
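
For machines without a GPU, the compatibility notes above suggest the same loader with `device="cpu"` and `compute_type="int8"`; a minimal sketch under that assumption:
```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

# same helper as above, but int8 on CPU per the compatibility notes;
# expect this to need roughly 40+ GB of RAM for the int8 weights
model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-falcon-40b-instruct",
    device="cpu",
    compute_type="int8",
)
outputs = model.generate(
    text=["User: How are you doing? Bot:"],
    max_length=64,
    include_prompt_in_result=False,
)
print(outputs)
```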

# License and other remarks
This is just a quantized version. License conditions are intended to be identical to those of the original Hugging Face repo.

# Original description

# ✨ Falcon-40B-Instruct

**Falcon-40B-Instruct is a 40B-parameter causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of [Baize](https://github.com/project-baize/baize-chatbot). It is made available under the Apache 2.0 license.**

*Paper coming soon 😊.*

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!

## Why use Falcon-40B-Instruct?

* **You are looking for a ready-to-use chat/instruct model based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).**
* **Falcon-40B is the best open-source model available.** It outperforms [LLaMA](https://github.com/facebookresearch/llama), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).

💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).

💸 **Looking for a smaller, less expensive model?** [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) is Falcon-40B-Instruct's little brother!

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).

You will need **at least 85-100GB of memory** to swiftly run inference with Falcon-40B.
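
As a rough sanity check on that figure (our arithmetic, not a number from the original card): the weights alone at 2 bytes per parameter in bfloat16 come to about 80 GB, before activations and the KV cache; the int8 quantization in this repo roughly halves that, consistent with the ~41 GB `model.bin` below.
```python
# back-of-the-envelope weight-memory estimate (illustrative assumption:
# memory ≈ parameter count × bytes per parameter, ignoring activations/KV cache)
params = 40e9                                # ~40B parameters
print(f"bf16: ~{params * 2 / 1e9:.0f} GB")   # ~80 GB
print(f"int8: ~{params * 1 / 1e9:.0f} GB")   # ~40 GB, close to this repo's 41 GB model.bin
```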

# Model Card for Falcon-40B-Instruct

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Falcon-40B-Instruct has been finetuned on a chat dataset.

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-40B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon-40B-Instruct develop guardrails and take appropriate precautions for any production use.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon-40B-Instruct was finetuned on 150M tokens from [Baize](https://github.com/project-baize/baize-chatbot) mixed with 5% of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) data.

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.

## Evaluation

*Paper coming soon.*

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

## Technical Specifications

For more information about pretraining, see [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).

### Model Architecture and Objective

Falcon-40B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.

For multiquery, we are using an internal variant which uses independent keys and values per tensor parallel degree.

| **Hyperparameter** | **Value** | **Comment**                             |
|--------------------|-----------|-----------------------------------------|
| Layers             | 60        |                                         |
| `d_model`          | 8192      |                                         |
| `head_dim`         | 64        | Reduced to optimise for FlashAttention  |
| Vocabulary         | 65024     |                                         |
| Sequence length    | 2048      |                                         |
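
A small worked example of what the table implies (our arithmetic, not figures stated in the card): with `d_model` = 8192 and `head_dim` = 64 there are 8192 / 64 = 128 query heads, and multiquery's single shared key/value head keeps the per-token KV cache far smaller than a classic multihead layout would:
```python
# implied attention geometry and KV-cache size (illustrative; the card's
# internal multiquery variant shards KV per tensor-parallel degree, so
# real deployments may differ)
d_model, head_dim, layers = 8192, 64, 60
n_heads = d_model // head_dim            # 128 query heads
bytes_fp16 = 2

kv_multiquery = 2 * head_dim * 1 * bytes_fp16 * layers        # K and V, 1 shared KV head
kv_multihead = 2 * head_dim * n_heads * bytes_fp16 * layers   # classic layout, for contrast
print(kv_multiquery, kv_multihead)  # 15360 vs 1966080 bytes per token
```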

### Compute Infrastructure

#### Hardware

Falcon-40B-Instruct was trained on AWS SageMaker, on 64 A100 40GB GPUs in P4d instances.

#### Software

Falcon-40B-Instruct was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

## Citation

*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).

```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```

To cite the [Baize](https://github.com/project-baize/baize-chatbot) instruction dataset used for this model:
```
@article{xu2023baize,
  title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
  author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
  journal={arXiv preprint arXiv:2304.01196},
  year={2023}
}
```

## License

Falcon-40B-Instruct is made available under the Apache 2.0 license.

## Contact
falconllm@tii.ae
config.json
ADDED
{
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "layer_norm_epsilon": null,
  "unk_token": "<|endoftext|>"
}
generation_config.json
ADDED
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.26.0"
}
model.bin
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:202af975b547f9baa0adac97797111b4a55c85cdcc385850da33da25f785602a
size 41319597494
special_tokens_map.json
ADDED
{
  "additional_special_tokens": [
    ">>TITLE<<",
    ">>ABSTRACT<<",
    ">>INTRODUCTION<<",
    ">>SUMMARY<<",
    ">>COMMENT<<",
    ">>ANSWER<<",
    ">>QUESTION<<",
    ">>DOMAIN<<",
    ">>PREFIX<<",
    ">>SUFFIX<<",
    ">>MIDDLE<<"
  ],
  "eos_token": "<|endoftext|>"
}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
{
  "add_prefix_space": false,
  "eos_token": "<|endoftext|>",
  "model_max_length": 2048,
  "name_or_path": "tiiuae/falcon_tokenizer",
  "special_tokens_map_file": null,
  "tokenizer_class": "PreTrainedTokenizerFast"
}
vocabulary.json
ADDED
The diff for this file is too large to render.
See raw diff