Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


falcon-7b-instruct-sharded - bnb 4bits
- Model creator: https://huggingface.co/vilsonrodrigues/
- Original model: https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded/

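A minimal sketch of loading these weights in 4-bit with `bitsandbytes` follows; the quantization settings shown (nf4, bfloat16 compute) are assumptions, not necessarily those used to produce this upload.

```python
# Minimal sketch: 4-bit loading via bitsandbytes (pip install bitsandbytes accelerate).
# The settings below are assumptions, not the canonical recipe for this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "vilsonrodrigues/falcon-7b-instruct-sharded"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumed quantization type
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```
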
Original model description:
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: true
widget:
- text: "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?"
  example_title: "Abu Dhabi Trip"
- text: "What's the Everett interpretation of quantum mechanics?"
  example_title: "Q/A: Quantum & Answers"
- text: "Give me a list of the top 10 dive sites you would recommend around the world."
  example_title: "Diving Top 10"
- text: "Can you tell me more about deep-water soloing?"
  example_title: "Extreme sports"
- text: "Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?"
  example_title: "Twitter Helper"
- text: "What are the responsibilities of a Chief Llama Officer?"
  example_title: "Trendy Jobs"
license: apache-2.0
---

# Resharded

Resharded version of https://huggingface.co/tiiuae/falcon-7b-instruct in safetensors format, for low-RAM environments (e.g. Colab, Kaggle).

Tutorial: https://medium.com/@vilsonrodrigues/run-your-private-llm-falcon-7b-instruct-with-less-than-6gb-of-gpu-using-4-bit-quantization-ff1d4ffbabcc

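As a quick illustration of the low-RAM use case, here is a hedged sketch of loading the resharded checkpoint: smaller shards lower the peak CPU memory needed while weights are read and dispatched.

```python
# Minimal sketch: loading the resharded checkpoint on a modest RAM budget.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "vilsonrodrigues/falcon-7b-instruct-sharded",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread layers across available GPU/CPU
    # trust_remote_code=True,  # may be required on older transformers releases
)
```
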
---


# ✨ Falcon-7B-Instruct

**Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**

*Paper coming soon 😊.*

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!

## Why use Falcon-7B-Instruct?

* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery attention ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).

⚠️ Falcon is now available as a core model in the `transformers` library! To use the in-library version, please install the latest version of `transformers` with `pip install git+https://github.com/huggingface/transformers.git`, then simply remove the `trust_remote_code=True` argument from `from_pretrained()`, as sketched below.

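A minimal sketch of the in-library path (assuming a sufficiently recent `transformers` release):

```python
# Minimal sketch: with Falcon integrated into transformers, no custom code is needed.
from transformers import AutoModelForCausalLM

# On older transformers releases you would also pass trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct")
```
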
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).

You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct: the 7B parameters alone take roughly 14GB in bfloat16 (2 bytes per parameter), before activations and overhead.


# Model Card for Falcon-7B-Instruct

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon-7B-Instruct develop guardrails and take appropriate precautions for any production use.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon-7B-Instruct was finetuned on a 250M-token mixture of instruct/chat datasets.

| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |


The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.


## Evaluation

*Paper coming soon.*

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

Note that this model variant is not optimized for NLP benchmarks.


## Technical Specifications

For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

### Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.

| **Hyperparameter** | **Value** | **Comment**                            |
|--------------------|-----------|----------------------------------------|
| Layers             | 32        |                                        |
| `d_model`          | 4544      | Increased to compensate for multiquery |
| `head_dim`         | 64        | Reduced to optimise for FlashAttention |
| Vocabulary         | 65024     |                                        |
| Sequence length    | 2048      |                                        |

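To make the multiquery design concrete, here is a minimal sketch (an illustration, not Falcon's actual implementation; rotary embeddings and the parallel attention/MLP block are omitted) of attention where all 71 query heads (`d_model` 4544 / `head_dim` 64) share a single key/value head:

```python
# Minimal sketch of multiquery attention: many query heads, one shared K/V head.
# Illustration only, not Falcon's actual code; rotary embeddings omitted.
import torch
from torch import nn

class MultiQueryAttention(nn.Module):
    def __init__(self, d_model: int = 4544, head_dim: int = 64):
        super().__init__()
        self.n_heads = d_model // head_dim  # 4544 // 64 = 71 query heads
        self.head_dim = head_dim
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.kv_proj = nn.Linear(d_model, 2 * head_dim, bias=False)  # one K, one V
        self.out_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv_proj(x).split(self.head_dim, dim=-1)  # each (b, t, head_dim)
        k = k.unsqueeze(1)  # (b, 1, t, head_dim): shared by every query head
        v = v.unsqueeze(1)
        att = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5  # broadcasts to (b, 71, t, t)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), diagonal=1)
        att = att.masked_fill(causal, float("-inf")).softmax(dim=-1)
        out = (att @ v).transpose(1, 2).reshape(b, t, -1)  # back to (b, t, d_model)
        return self.out_proj(out)
```

The payoff is at inference time: the KV cache holds one head instead of 71, shrinking its memory footprint by that factor.
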
### Compute Infrastructure

#### Hardware

Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.

#### Software

Falcon-7B-Instruct was trained using a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).


## Citation

*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).

```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```


## License

Falcon-7B-Instruct is made available under the Apache 2.0 license.

## Contact
falconllm@tii.ae