---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
---

# Falcon-7B-Instruct GPTQ

This repo contains an experimental GPTQ 4bit model for [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).

It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

## EXPERIMENTAL

Please note this is an experimental first model. Support for it is currently quite limited.

To use it you will require:

1. AutoGPTQ, from the latest `main` branch and compiled with `pip install .`
2. `pip install einops`

You can then use it immediately from Python code - see the example code below.

## text-generation-webui

There is also provisional AutoGPTQ support in text-generation-webui.

However, at the time of writing, text-generation-webui needs a small change before it can load this model.

I have [opened a PR here](https://github.com/oobabooga/text-generation-webui/pull/2374); once this is merged, text-generation-webui will support this GPTQ model.

To get it working before the PR is merged, you will need to:

1. Edit `text-generation-webui/modules/AutoGPTQ_loader.py`.
2. Find the line that says:
   ```
   'use_safetensors': use_safetensors,
   ```
   and after it, add:
   ```
   'trust_remote_code': shared.args.trust_remote_code,
   ```
   [Once you are done, the file should look like this](https://github.com/oobabooga/text-generation-webui/blob/473a57e35219c063d2fc230cfc7b5a118b448b38/modules/AutoGPTQ_loader.py#L33-L39); a sketch of the resulting dict follows this list.
3. Save and close the file, then launch text-generation-webui as described below.
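
For reference, after the edit the relevant dict should look roughly like this (a sketch only - see the linked file for the exact name and surrounding keys):

```python
params = {
    # ... other loader arguments, unchanged ...
    'use_safetensors': use_safetensors,
    'trust_remote_code': shared.args.trust_remote_code,
}
```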

## How to download and use this model in text-generation-webui

1. Launch text-generation-webui with the following command-line arguments: `--autogptq --trust_remote_code`
2. Click the **Model tab**.
3. Under **Download custom model or LoRA**, enter `TheBloke/falcon-7B-instruct-GPTQ`.
4. Click **Download**.
5. Wait until it says it's finished downloading.
6. Click the **Refresh** icon next to **Model** in the top left.
7. In the **Model** drop-down, choose the model you just downloaded: `falcon-7B-instruct-GPTQ`.
8. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!

## About `trust_remote_code`

Please be aware that this command-line argument causes Python code provided by Falcon to be executed on your machine.

This code is required at the moment because Falcon is too new to be supported by Hugging Face transformers. At some point in the future transformers will support the model natively, and then `trust_remote_code` will no longer be needed.

In this repo you can see two `.py` files - these are the files that get executed. They are copied from the base repo at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
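
Concretely, any `transformers`-based load of this architecture needs the flag while remote code is required; a minimal sketch (using the unquantised base model purely for illustration):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    trust_remote_code=True,  # executes the custom modelling .py files from the repo
)
```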

## Simple Python example code

To run this code you need to install AutoGPTQ from source:
```
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip install .  # This step requires the CUDA toolkit to be installed
```
And install einops:
```
pip install einops
```

You can then run this example code:
```python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Download the model from HF and store it locally, then reference its location here:
quantized_model_dir = "/path/to/falcon7b-instruct-gptq"

tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=False)

model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0", use_triton=False, use_safetensors=True, torch_dtype=torch.float32, trust_remote_code=True)

prompt = "Write a story about llamas"
prompt_template = f"### Instruction: {prompt}\n### Response:"

# Tokenise the prompt and generate up to 100 new tokens
tokens = tokenizer(prompt_template, return_tensors="pt").to("cuda:0").input_ids
output = model.generate(input_ids=tokens, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0]))
```

## Provided files

**Falcon-7B-Instruct-GPTQ-4bit-128g.safetensors**

This will work with AutoGPTQ as of commit `3cb1bf5` (`3cb1bf5a6d43a06dc34c6442287965d1838303d3`).

It was created with groupsize 64 to give higher inference quality, and without `desc_act` (act-order) to increase inference speed.

* `Falcon-7B-Instruct-GPTQ-4bit-128g.safetensors`
  * Works only with the latest AutoGPTQ CUDA, compiled from source as of commit `3cb1bf5`
  * At this time it does not work with AutoGPTQ Triton, but support will hopefully be added in time.
  * Works with text-generation-webui using `--autogptq --trust_remote_code`
    * At this time it does NOT work with one-click-installers
  * Does not work with any version of GPTQ-for-LLaMa
  * Parameters: Groupsize = 64. No act-order.
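
For context, these parameters correspond roughly to the following AutoGPTQ quantisation config - a sketch, not the exact script used to produce this file:

```python
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4bit quantisation
    group_size=64,   # groupsize 64, as listed above
    desc_act=False,  # no act-order, for faster inference
)
```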

# ✨ Original model card: Falcon-7B-Instruct

**Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/LICENSE.txt).**

*Paper coming soon 😊.*

## Why use Falcon-7B-Instruct?

* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).

💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

# Model Card for Falcon-7B-Instruct

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/LICENSE.txt);
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon-7B-Instruct develop guardrails and take appropriate precautions for any production use.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon-7B-Instruct was finetuned on a 250M-token mixture of instruct/chat datasets.

| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
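
For example, loading that shared tokenizer - a minimal sketch using the standard transformers API:

```python
from transformers import AutoTokenizer

# Falcon-7B and Falcon-40B use the same tokenizer
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
print(tokenizer.vocab_size)  # 65024, matching the hyperparameter table below
```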

## Evaluation

*Paper coming soon.*

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

Note that this model variant is not optimized for NLP benchmarks.

## Technical Specifications

For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

### Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.

| **Hyperparameter** | **Value** | **Comment**                            |
|--------------------|-----------|----------------------------------------|
| Layers             | 32        |                                        |
| `d_model`          | 4544      | Increased to compensate for multiquery |
| `head_dim`         | 64        | Reduced to optimise for FlashAttention |
| Vocabulary         | 65024     |                                        |
| Sequence length    | 2048      |                                        |
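
To make the multiquery layout concrete, here is a shape-level sketch - illustrative only, not the repo's modelling code; `n_heads = d_model / head_dim = 71` is inferred from the table above:

```python
import torch

d_model, head_dim = 4544, 64
n_heads = d_model // head_dim  # 71 query heads
batch, seq_len = 1, 16         # the real model uses sequences up to 2048

# Multiquery attention: many query heads, but a single shared key/value head.
q = torch.randn(batch, n_heads, seq_len, head_dim)
k = torch.randn(batch, 1, seq_len, head_dim)  # shared across all query heads
v = torch.randn(batch, 1, seq_len, head_dim)

# The shared K/V head broadcasts against all 71 query heads:
scores = (q @ k.transpose(-2, -1)) / head_dim**0.5  # (batch, n_heads, seq, seq)
attn = torch.softmax(scores, dim=-1) @ v            # (batch, n_heads, seq, head_dim)
```

This is why multiquery reduces the K/V cache at inference time: only one key/value head per layer is stored, instead of one per query head.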

### Compute Infrastructure

#### Hardware

Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.

#### Software

Falcon-7B-Instruct was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

## Citation

*Paper coming soon 😊.*

## License

Falcon-7B-Instruct is made available under the [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/LICENSE.txt). Broadly speaking,
* You can freely use our models for research and/or personal purposes;
* You are allowed to share and build derivatives of these models, but you are required to give attribution and to share-alike with the same license;
* For commercial use, you are exempt from royalty payments if attributable revenues are below $1M/year; otherwise you should enter into a commercial agreement with TII.

## Contact
falconllm@tii.ae