---
license: apache-2.0
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

## I am still building the structure of these descriptions.

Over time, these descriptions will gain more and more content to help you find the best model for your purpose.

# falcon-7b-4k-alibi - GGUF
- Model creator: [openaccess-ai-collective](https://huggingface.co/openaccess-ai-collective)
- Original model: [falcon-7b-4k-alibi](https://huggingface.co/openaccess-ai-collective/falcon-7b-4k-alibi)

This `alibi` version extends Falcon-7B to a 4k context using the RedPajama Sample dataset.

# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore run this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
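
To try one of the GGUF files from this repository quickly, here is a minimal sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings to llama.cpp. The file name `falcon-7b-4k-alibi.Q4_0.gguf` is only a placeholder - substitute the quantization variant you actually downloaded.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="falcon-7b-4k-alibi.Q4_0.gguf",  # placeholder name: use your downloaded file
    n_ctx=4096,                                 # this variant was extended to 4k context
)

output = llm(
    "The RedPajama dataset is",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```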

# Quantization variants

A number of quantized files are available. Here is how to choose the one that best fits your needs:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8_0 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that prevent certain models from being compatible with the modern K-quants.
Falcon 7B models, for example, cannot be quantized to K-quants.

# K-quants

K-quants are based on the idea that quantization affects quality differently depending on which parts of the model are quantized. Quantizing some parts more aggressively and others less gives you either a more capable model at the same file size, or a smaller file and lower memory load at comparable quality.
So, if possible, use K-quants.
With a Q6_K quantization it should be really hard to notice any quality difference from the original model - ask your model the same question twice and you may well see bigger differences between the two answers than between Q6_K and the unquantized model.
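
To get a feel for what the variants mean in terms of disk and memory footprint, here is a small back-of-the-envelope sketch estimating the approximate file size of a roughly 7B-parameter model for the legacy quantization types. The bits-per-weight figures are rough averages for the ggml block formats (they include the per-block scale factors), so the real GGUF files will differ somewhat.

```python
# Rough size estimate for a ~7B-parameter model under different ggml quantizations.
# Bits-per-weight values are approximate block-format averages, not exact figures
# for the files in this repository.
N_PARAMS = 7e9

BITS_PER_WEIGHT = {
    "F16":  16.0,
    "Q8_0":  8.5,
    "Q5_1":  6.0,
    "Q5_0":  5.5,
    "Q4_1":  5.0,
    "Q4_0":  4.5,
}

for name, bpw in BITS_PER_WEIGHT.items():
    size_gb = N_PARAMS * bpw / 8 / 1e9
    print(f"{name:5s} ~ {size_gb:4.1f} GB")
```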

# Original Model Card:

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)

This is a version of Falcon extended to 4k context using the RedPajama Sample dataset. Please include attributions to this model when releasing finetuned models based on this.


# 🚀 Falcon-7B

**Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**

*Paper coming soon* 😊.

## Why use Falcon-7B?

* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery attention ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.

⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).

🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

# Model Card for Falcon-7B

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0.

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.).

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon-7B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).

| **Data source**    | **Fraction** | **Tokens** | **Sources**                       |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
| Books              | 7%           | 110B       |                                   |
| Conversations      | 6%           | 85B        | Reddit, StackOverflow, HackerNews |
| Code               | 3%           | 45B        |                                   |
| RefinedWeb-French  | 3%           | 45B        | massive web crawl                 |
| Technical          | 2%           | 30B        | arXiv, PubMed, USPTO, etc.        |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
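
As a small illustration (not part of the original card), this shared tokenizer can be loaded directly with `transformers`; its vocabulary size matches the 65,024 entries listed in the architecture table further below.

```python
from transformers import AutoTokenizer

# Load the tokenizer shared by Falcon-7B and Falcon-40B.
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

print(tokenizer.vocab_size)   # 65024, as listed in the model card
print(tokenizer.tokenize("Falcon-7B was trained on 1,500B tokens of RefinedWeb."))
```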

### Training Procedure

Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.

#### Training Hyperparameters

| **Hyperparameter** | **Value**  | **Comment**                               |
|--------------------|------------|-------------------------------------------|
| Precision          | `bfloat16` |                                           |
| Optimizer          | AdamW      |                                           |
| Learning rate      | 6e-4       | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay       | 1e-1       |                                           |
| Z-loss             | 1e-4       |                                           |
| Batch size         | 2304       | 30B tokens ramp-up                        |
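
As a rough illustration of the learning-rate schedule in the table above, here is a hedged sketch of a linear warm-up over the first 4B tokens followed by a cosine decay from 6e-4 down to 1.2e-5. The decay horizon of 1,500B tokens is an assumption borrowed from the training-data section; the card does not state the exact decay length.

```python
import math

def falcon_lr(tokens_seen: float,
              peak_lr: float = 6e-4,
              final_lr: float = 1.2e-5,
              warmup_tokens: float = 4e9,      # 4B-token warm-up (from the table)
              total_tokens: float = 1.5e12):   # assumed horizon: 1,500B training tokens
    """Sketch of the warm-up + cosine-decay schedule described in the table."""
    if tokens_seen < warmup_tokens:
        return peak_lr * tokens_seen / warmup_tokens
    progress = min((tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens), 1.0)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))

print(falcon_lr(2e9))    # mid warm-up
print(falcon_lr(750e9))  # roughly halfway through the decay
```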

#### Speeds, Sizes, Times

Training happened in early March 2023 and took about two weeks.

## Evaluation

*Paper coming soon.*

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

## Technical Specifications

### Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.

| **Hyperparameter** | **Value** | **Comment**                            |
|--------------------|-----------|----------------------------------------|
| Layers             | 32        |                                        |
| `d_model`          | 4544      | Increased to compensate for multiquery |
| `head_dim`         | 64        | Reduced to optimise for FlashAttention |
| Vocabulary         | 65024     |                                        |
| Sequence length    | 2048      |                                        |
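
To make the "parallel attention/MLP with a single layer norm" design concrete, here is a simplified PyTorch sketch, not the actual Falcon implementation: it uses standard multi-head attention in place of multiquery attention and omits rotary embeddings and the causal mask, but it shows how attention and MLP both read the same normalized input and are summed onto the residual stream. The default head count of 71 is implied by `d_model / head_dim = 4544 / 64`.

```python
import torch
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    """Simplified sketch of a Falcon-style decoder block: attention and the MLP
    both consume the output of a single layer norm, and their results are added
    to the residual stream in parallel rather than sequentially."""

    def __init__(self, d_model: int = 4544, n_heads: int = 71):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)  # the single layer norm of the block
        # Standard multi-head attention stands in for Falcon's multiquery attention.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        return x + attn_out + self.mlp(h)  # parallel residual update

# Tiny smoke test with reduced dimensions (batch=1, sequence=8, d_model=256).
block = ParallelDecoderBlock(d_model=256, n_heads=4)
print(block(torch.randn(1, 8, 256)).shape)  # torch.Size([1, 8, 256])
```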

### Compute Infrastructure

#### Hardware

Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.

#### Software

Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

## Citation

*Paper coming soon* 😊.

## License

Falcon-7B is made available under the Apache 2.0 license.

## Contact
falconllm@tii.ae

<center>
<a href="https://maddes8cht.github.com"><img src="/assets/buttons/maddes8cht-github-io.jpg" alt="GitHub" /></a>
<a href="https://stackexchange.com/users/26485911"><img src="https://stackexchange.com/users/flair/26485911.png" width="208" height="58" alt="profile for maddes8cht on Stack Exchange, a network of free, community-driven Q&amp;A sites" title="profile for maddes8cht on Stack Exchange, a network of free, community-driven Q&amp;A sites"></a>
<a href="https://github.com/maddes8cht"><img src="/assets/buttons/github-button.jpg" alt="GitHub" /></a>
<a href="https://huggingface.co/maddes8cht"><img src="/assets/buttons/huggingface-button.jpg" alt="HuggingFace" /></a>
<a href="https://twitter.com/maddes1966"><img src="/assets/buttons/twitter-button.jpg" alt="Twitter" /></a>
</center>