TeeZee committed
Commit: c37cf0b
Parent: 40e9668

Update README.md

Files changed (1): README.md (+1, -402)
README.md CHANGED
quantized_by: TheBloke
base_model: tiiuae/falcon-180B-chat
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Falcon 180B Chat - GPTQ
- Model creator: [Technology Innovation Institute](https://huggingface.co/tiiuae)
- Original model: [Falcon 180B Chat](https://huggingface.co/tiiuae/falcon-180B-chat)

## Description

This repo contains GPTQ model files for [Technology Innovation Institute's Falcon 180B Chat](https://huggingface.co/tiiuae/falcon-180B-chat), with the correct chat template inside `tokenizer_config.json`.

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

## Requirements

Transformers version 4.33.0 is required.

Due to the huge size of the model, the GPTQ files have been sharded. This breaks compatibility with AutoGPTQ, and therefore with any clients or libraries that use AutoGPTQ directly.

They do, however, work great loaded directly through Transformers, and can be served using Text Generation Inference!
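To confirm your environment meets this requirement, a quick version check can help. This is our own sketch, not part of the original README; `packaging` ships as a Transformers dependency:

```python
import transformers
from packaging import version

# Sharded GPTQ checkpoints for this model need Transformers 4.33.0 or newer.
installed = version.parse(transformers.__version__)
assert installed >= version.parse("4.33.0"), (
    f"Transformers {installed} is too old; run: pip3 install 'transformers>=4.33.0'"
)
```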

## Compatibility

Currently these GPTQs are known to work with:
- Transformers 4.33.0
- [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.0.4
- Docker container: `ghcr.io/huggingface/text-generation-inference:latest`

<!-- description end -->

<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/Falcon-180B-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Falcon-180B-Chat-GGUF)
* [Technology Innovation Institute's original unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/tiiuae/falcon-180B-chat)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Falcon

```
{system_message}
User: {prompt}
Assistant:
```

Example:

```
Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.
User: Hello, Girafatron!
Girafatron:
```
<!-- prompt-template end -->
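For illustration, a minimal helper that fills this template from Python. The function name `build_falcon_prompt` is ours, not part of the repo; note the example above simply swaps "Assistant" for the character's name:

```python
def build_falcon_prompt(prompt: str, system_message: str = "") -> str:
    """Fill the Falcon prompt template shown above."""
    return f"{system_message}\nUser: {prompt}\nAssistant:"

print(build_falcon_prompt("Tell me about AI"))
```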

<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.

<details>
<summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is the default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16K+), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| main | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 94.25 GB | No | 4-bit, with Act Order and group size 128g. Higher quality than group_size=None, but also higher VRAM usage. |
| gptq-4bit--1g-actorder_True | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 92.74 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| gptq-3bit-128g-actorder_True | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 73.81 GB | No | 3-bit, so much lower VRAM requirements but worse quality than 4-bit. With group size 128g and act-order. Higher quality than 3bit-128g-False. |
| gptq-3bit--1g-actorder_True | 3 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 70.54 GB | No | 3-bit, so much lower VRAM requirements but worse quality than 4-bit. With no group size for lowest possible VRAM requirements. Lower quality than 3-bit 128g. |

<!-- README_GPTQ.md-provided-files end -->
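As a back-of-envelope sanity check on the table's sizes (our arithmetic, not from the repo), the `main` branch file works out to roughly 4.5 bits per weight, i.e. 4-bit weights plus group-scale overhead:

```python
params = 180e9                    # Falcon-180B parameter count
size_bits = 94.25 * 1024**3 * 8   # "main" branch size from the table, in bits
print(f"{size_bits / params:.2f} bits per parameter")  # ~4.50
```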

<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches

- In text-generation-webui, you can add `:branch` to the end of the download name, e.g. `TheBloke/Falcon-180B-Chat-GPTQ:gptq-3bit--1g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch gptq-3bit--1g-actorder_True https://huggingface.co/TheBloke/Falcon-180B-Chat-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
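A branch can also be fetched programmatically with `huggingface_hub` (our sketch, not from the original README; `snapshot_download` accepts the branch name via `revision`):

```python
from huggingface_hub import snapshot_download

# Fetch the 3-bit, no-group-size quant into the local Hugging Face cache.
local_dir = snapshot_download(
    repo_id="TheBloke/Falcon-180B-Chat-GPTQ",
    revision="gptq-3bit--1g-actorder_True",
)
print(local_dir)
```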

<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

**NOTE**: I have not tested this model with Text Generation Webui. It *should* work through the Transformers loader. It will *not* work through the AutoGPTQ loader, due to the files being sharded.

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Falcon-180B-Chat-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/Falcon-180B-Chat-GPTQ:gptq-3bit-128g-actorder_True`
    - See Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. Choose Loader: Transformers.
6. In the top left, click the refresh icon next to **Model**.
7. In the **Model** dropdown, choose the model you just downloaded: `Falcon-180B-Chat-GPTQ`.
8. The model will automatically load, and is now ready for use!
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    * Note that you no longer need to (and should not) set manual GPTQ parameters. These are set automatically from the file `quantize_config.json`.
10. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ.

```shell
pip3 install "transformers>=4.33.0" "optimum>=1.12.0"
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

### Transformers sample code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Falcon-180B-Chat-GPTQ"

# To use a different branch, change revision
# For example: revision="gptq-3bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",
    revision="main",
)

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template = f'''User: {prompt}
Assistant: '''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
    top_p=0.95,
    repetition_penalty=1.15,
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The provided files have been tested with Transformers 4.33.0 and TGI 1.0.4.

Because they are sharded, they will not yet load via AutoGPTQ. It is hoped support will be added soon.

Note: lack of AutoGPTQ support does not affect your ability to load these models from Python code. It only affects third-party clients that might use AutoGPTQ.

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is confirmed working as of version 1.0.4.
<!-- README_GPTQ.md-compatibility end -->
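For serving, here is a minimal sketch of querying a running TGI instance with the `text-generation` Python client (`pip3 install text-generation`). This is our addition, not from the original README, and the endpoint URL is an assumption for illustration:

```python
from text_generation import Client

# Assumes TGI 1.0.4 is already serving this model at this address.
client = Client("http://127.0.0.1:8080")

prompt = "User: Tell me about AI\nAssistant:"
response = client.generate(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(response.generated_text)
```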

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Technology Innovation Institute's Falcon 180B Chat

# 🚀 Falcon-180B-Chat

**Falcon-180B-Chat is a 180B-parameter causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B) and finetuned on a mixture of [Ultrachat](https://huggingface.co/datasets/stingning/ultrachat), [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) and [Airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1). It is made available under the [Falcon-180B TII License](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/LICENSE.txt) and [Acceptable Use Policy](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/ACCEPTABLE_USE_POLICY.txt).**

*Paper coming soon* 😊

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://hf.co/blog/falcon-180b) or this [one](https://huggingface.co/blog/falcon) from the release of the 40B!
Note that since the 180B is larger than what can easily be handled with `transformers`+`accelerate`, we recommend using [Text Generation Inference](https://github.com/huggingface/text-generation-inference).

You will need **at least 400GB of memory** to swiftly run inference with Falcon-180B.
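As a rough check on that figure (our arithmetic, not TII's): the bfloat16 weights alone account for most of it, before activations and KV cache:

```python
params = 180e9                 # 180B parameters
weight_gb = params * 2 / 1e9   # bfloat16 = 2 bytes per parameter
print(f"~{weight_gb:.0f} GB of weights alone")  # ~360 GB, before runtime overhead
```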

## Why use Falcon-180B-chat?

* ✨ **You are looking for a ready-to-use chat/instruct model based on [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B).**
* **It is the best open-access model currently available, and one of the best models overall.** Falcon-180B outperforms [LLaMA-2](https://huggingface.co/meta-llama/Llama-2-70b-hf), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with multiquery attention ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive license allowing for commercial use.**

💬 **This is a Chat model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-180B](https://huggingface.co/tiiuae/falcon-180b).

💸 **Looking for a smaller, less expensive model?** [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) and [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) are Falcon-180B-Chat's little brothers!

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

# Model Card for Falcon-180B-Chat

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **License:** [Falcon-180B TII License](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/LICENSE.txt) and [Acceptable Use Policy](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/ACCEPTABLE_USE_POLICY.txt).

### Model Source

- **Paper:** *coming soon*.

## Uses

See the [acceptable use policy](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/ACCEPTABLE_USE_POLICY.txt).

### Direct Use

Falcon-180B-Chat has been finetuned on a chat dataset.

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-180B-Chat is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon-180B-Chat develop guardrails and take appropriate precautions for any production use.

## How to Get Started with the Model

To run inference with the model in full `bfloat16` precision you need approximately 8xA100 80GB or equivalent.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-180b-chat"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

**Falcon-180B-Chat is based on [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B).**

### Training Data

Falcon-180B-Chat is finetuned on a mixture of [Ultrachat](https://huggingface.co/datasets/stingning/ultrachat), [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) and [Airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1).

The data was tokenized with the Falcon tokenizer.

## Evaluation

*Paper coming soon.*

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

## Technical Specifications

### Model Architecture and Objective

Falcon-180B-Chat is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with two layer norms.

For multiquery, we are using an internal variant which uses independent keys and values per tensor parallel degree.

| **Hyperparameter** | **Value** | **Comment**                            |
|--------------------|-----------|----------------------------------------|
| Layers             | 80        |                                        |
| `d_model`          | 14848     |                                        |
| `head_dim`         | 64        | Reduced to optimise for FlashAttention |
| Vocabulary         | 65024     |                                        |
| Sequence length    | 2048      |                                        |
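To make the inference benefit of multiquery concrete, here is a back-of-envelope comparison of per-token KV-cache size using the table's numbers. This is our own arithmetic, not from the card, and it ignores the per-tensor-parallel-degree replication mentioned above:

```python
layers, d_model, head_dim = 80, 14848, 64
bytes_per_val = 2  # bfloat16

# K and V stored for every layer, per generated token:
multihead_kv = 2 * layers * d_model * bytes_per_val    # every head keeps its own K/V
multiquery_kv = 2 * layers * head_dim * bytes_per_val  # one shared K/V head

print(f"multi-head:  {multihead_kv / 1024:.0f} KiB/token")   # ~4640 KiB
print(f"multiquery:  {multiquery_kv / 1024:.0f} KiB/token")  # ~20 KiB
```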

### Compute Infrastructure

#### Hardware

Falcon-180B-Chat was trained on AWS SageMaker, on up to 4,096 A100 40GB GPUs in P4d instances.

#### Software

Falcon-180B-Chat was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

## Citation

*Paper coming soon* 😊. In the meantime, you can use the following information to cite:

```
@article{falcon,
  title={The Falcon Series of Language Models: Towards Open Frontier Models},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).

```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```

  ## Contact
  falconllm@tii.ae
 