Triangle104 committed
Commit 8e37ae3 • 1 Parent(s): 71c0b32

Update README.md

Files changed (1):
  1. README.md +178 -0
README.md CHANGED
@@ -30,6 +30,184 @@ tags:
This model was converted to GGUF format from [`tiiuae/falcon-7b-instruct`](https://huggingface.co/tiiuae/falcon-7b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tiiuae/falcon-7b-instruct) for more details on the model.

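If you would rather drive the converted GGUF file from Python than from the llama.cpp CLI shown at the bottom of this card, the llama-cpp-python bindings can load it straight from the Hub. This is only a minimal sketch: the repo id and the quantization filename below are assumptions, so adjust them to the files actually published in this repo.

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python huggingface_hub).
# NOTE: repo_id and the filename glob are assumptions for illustration; replace them
# with this repo's actual id and the .gguf quantization you want to download.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/falcon-7b-instruct-GGUF",  # hypothetical repo id
    filename="*.gguf",                              # pick a specific quant, e.g. "*q4_k_m.gguf"
    n_ctx=2048,                                     # Falcon-7B was trained with a 2048-token context
)

out = llm("Write a two-line poem about giraffes.\n", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```

The same `Llama` object also exposes `create_chat_completion` if you prefer a chat-style API.
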
---
## Model details

Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII, based on Falcon-7B and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.

Paper coming soon 😊.

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blogpost from HF!

### Why use Falcon-7B-Instruct?

- You are looking for a ready-to-use chat/instruct model based on Falcon-7B.
- Falcon-7B is a strong base model, outperforming comparable open-source models (e.g., MPT-7B, StableLM, RedPajama, etc.), thanks to being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. See the OpenLLM Leaderboard.
- It features an architecture optimized for inference, with FlashAttention (Dao et al., 2022) and multiquery attention (Shazeer et al., 2019).

💬 This is an instruct model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Falcon-7B.

🔥 Looking for an even more powerful model? Falcon-40B-Instruct is Falcon-7B-Instruct's big brother!

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

# Build a text-generation pipeline with bfloat16 weights spread across available devices
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# Sample one completion for the prompt
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 Falcon LLMs require PyTorch 2.0 for use with transformers!

For fast inference with Falcon, check out Text Generation Inference! Read more in this blogpost.

You will need at least 16GB of memory to swiftly run inference with Falcon-7B-Instruct (the bfloat16 weights alone are roughly 7B parameters × 2 bytes ≈ 14 GB).

## Model Card for Falcon-7B-Instruct

### Model Details

#### Model Description

- Developed by: https://www.tii.ae
- Model type: causal decoder-only
- Language(s) (NLP): English and French
- License: Apache 2.0
- Finetuned from model: Falcon-7B

#### Model Source

- Paper: coming soon.

### Uses

#### Direct Use

Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.

#### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

### Bias, Risks, and Limitations

Falcon-7B-Instruct is mostly trained on English data and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

#### Recommendations

We recommend that users of Falcon-7B-Instruct develop guardrails and take appropriate precautions for any production use.

### How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

### Training Details

#### Training Data

Falcon-7B-Instruct was finetuned on a 250M-token mixture of instruct/chat datasets.

| Data source        | Fraction | Tokens | Description       |
|--------------------|----------|--------|-------------------|
| Bai ze             | 65%      | 164M   | chat              |
| GPT4All            | 25%      | 62M    | instruct          |
| GPTeacher          | 5%       | 11M    | instruct          |
| RefinedWeb-English | 5%       | 13M    | massive web crawl |

The data was tokenized with the Falcon-7B/40B tokenizer.

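As a quick, illustrative look at what that tokenizer does, here is a minimal sketch using the public `tiiuae/falcon-7b` tokenizer from the Hub:

```python
from transformers import AutoTokenizer

# Load the Falcon tokenizer (shared between Falcon-7B and Falcon-40B)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
ids = tokenizer("Girafatron is obsessed with giraffes.")["input_ids"]
print(len(ids), ids)  # number of tokens and their ids
```
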
### Evaluation

Paper coming soon.

See the OpenLLM Leaderboard for early results.

Note that this model variant is not optimized for NLP benchmarks.

### Technical Specifications

For more information about pretraining, see Falcon-7B.

#### Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predicting the next token).

The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences (a rough sketch of the decoder block follows the list):

- Positional embeddings: rotary (Su et al., 2021);
- Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
- Decoder block: parallel attention/MLP with a single layer norm.

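To make the last point concrete, here is a deliberately simplified PyTorch sketch of a "parallel" decoder block with a single layer norm. It is not the actual Falcon implementation: it uses standard multi-head attention in place of Falcon's multiquery attention, rotary embeddings, and FlashAttention kernels, and all module names are invented for illustration.

```python
import torch
import torch.nn as nn


class ParallelDecoderBlock(nn.Module):
    """Illustrative block: one shared LayerNorm feeds attention and MLP in
    parallel, and both outputs are added to the residual stream together."""

    def __init__(self, d_model: int = 4544, n_heads: int = 71):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)            # the single layer norm
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(                  # standard GPT-style MLP
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln(x)                             # normalize once, reuse for both branches
        seq_len = x.size(1)
        # boolean causal mask: True = position may NOT be attended to
        causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        attn_out, _ = self.attn(h, h, h, attn_mask=causal, need_weights=False)
        mlp_out = self.mlp(h)
        return x + attn_out + mlp_out              # parallel residual update


# Shape check on a tiny configuration
block = ParallelDecoderBlock(d_model=64, n_heads=4)
print(block(torch.randn(2, 8, 64)).shape)  # torch.Size([2, 8, 64])
```

The point of the parallel formulation is that attention and MLP read the same normalized input and their outputs are summed into one residual update, which removes one layer norm and one sequential dependency per block.
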
| Hyperparameter  | Value | Comment                                |
|-----------------|-------|----------------------------------------|
| Layers          | 32    |                                        |
| d_model         | 4544  | Increased to compensate for multiquery |
| head_dim        | 64    | Reduced to optimise for FlashAttention |
| Vocabulary      | 65024 |                                        |
| Sequence length | 2048  |                                        |

#### Compute Infrastructure

##### Hardware

Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.

##### Software

Falcon-7B-Instruct was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

### Citation

Paper coming soon 😊. In the meantime, you can use the following information to cite:

```bibtex
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

To learn more about the pretraining dataset, see the 📓 RefinedWeb paper.

```bibtex
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```

### License

Falcon-7B-Instruct is made available under the Apache 2.0 license.

### Contact

falconllm@tii.ae

---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)