TheBloke committed on
Commit a6ee390
1 Parent(s): 10f7109

Upload README.md

Files changed (1):
  1. README.md +23 -16
README.md CHANGED
@@ -1,12 +1,13 @@
 ---
+base_model: https://huggingface.co/bhenrym14/airoboros-l2-13b-2.1-YaRN-64k
 datasets:
 - jondurbin/airoboros-2.1
 inference: false
 license: llama2
 model_creator: bhenrym14
-model_link: https://huggingface.co/bhenrym14/airoboros-l2-13b-2.1-YaRN-64k
 model_name: Airoboros L2 13B 2.1 YaRN 64K
 model_type: llama
+prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
 quantized_by: TheBloke
 ---
 
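The `prompt_template` added in this hunk is the Airoboros 2.1 chat format with a single `{prompt}` placeholder. A minimal sketch of rendering it in Python (illustrative only; `build_prompt` is a hypothetical helper, not something the commit defines):

```python
# Sketch: render the prompt_template from the front matter above.
# The template is a plain str.format() string with one {prompt} placeholder.
prompt_template = "A chat.\nUSER: {prompt}\nASSISTANT: \n"

def build_prompt(user_message: str) -> str:
    # Hypothetical helper: substitute the user's message into the template.
    return prompt_template.format(prompt=user_message)

print(build_prompt("Summarise YaRN in one sentence."))
```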
@@ -42,8 +43,9 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 <!-- repositories-available start -->
 ## Repositories available
 
-* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ)
-* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GGUF)
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-AWQ)
+* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GGUF)
 * [bhenrym14's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bhenrym14/airoboros-l2-13b-2.1-YaRN-64k)
 <!-- repositories-available end -->
 
@@ -59,6 +61,7 @@ ASSISTANT:
 
 <!-- prompt-template end -->
 
+
 <!-- README_GPTQ.md-provided-files start -->
 ## Provided files and GPTQ parameters
 
@@ -83,20 +86,20 @@ All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches
 
 | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
-| [main](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4) | 32768 | 7.26 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
-| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4) | 32768 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
-| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4) | 32768 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
-| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4) | 32768 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+| [main](https://huggingface.co/TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4) | 32768 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4) | 32768 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
+| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4) | 32768 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
+| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4) | 32768 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
 
 <!-- README_GPTQ.md-provided-files end -->
 
 <!-- README_GPTQ.md-download-from-branches start -->
 ## How to download from branches
 
-- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ:gptq-4bit-32g-actorder_True`
+- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ:main`
 - With Git, you can clone a branch with:
 ```
-git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ
+git clone --single-branch --branch main https://huggingface.co/TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ
 ```
 - In Python Transformers code, the branch is the `revision` parameter; see below.
 <!-- README_GPTQ.md-download-from-branches end -->
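Branch selection can also be scripted. A minimal sketch using `huggingface_hub`'s `snapshot_download` (a standard call, but this route is an addition here; the commit itself documents only the text-generation-webui, Git, and Transformers methods):

```python
# Sketch: fetch one GPTQ branch programmatically with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ",
    revision="main",  # any branch from the Provided Files table works here
)
print("Downloaded to:", local_dir)
```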
@@ -108,13 +111,13 @@ Please make sure you're using the latest version of [text-generation-webui](http
 It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
 
 1. Click the **Model tab**.
-2. Under **Download custom model or LoRA**, enter `TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ`.
-  - To download from a specific branch, enter for example `TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ:gptq-4bit-32g-actorder_True`
+2. Under **Download custom model or LoRA**, enter `TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ`.
+  - To download from a specific branch, enter for example `TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ:main`
   - see Provided Files above for the list of branches for each option.
 3. Click **Download**.
 4. The model will start downloading. Once it's finished it will say "Done".
 5. In the top left, click the refresh icon next to **Model**.
-6. In the **Model** dropdown, choose the model you just downloaded: `Airoboros-L2-13B-2.1-YaRN-64K-GPTQ`
+6. In the **Model** dropdown, choose the model you just downloaded: `Airoboros-L2-13B-2_1-YaRN-64K-GPTQ`
 7. The model will automatically load, and is now ready for use!
 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
   * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
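The note above says GPTQ parameters are read from `quantize_config.json` in each branch. A hedged sketch of inspecting that file before loading (`bits`, `group_size`, `desc_act` and `damp_percent` are standard AutoGPTQ keys, but the file itself is authoritative):

```python
# Sketch: read the GPTQ parameters a given branch was quantised with.
import json

from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ",
    filename="quantize_config.json",
    revision="main",  # swap for any branch in the Provided Files table
)
with open(config_path) as f:
    quantize_config = json.load(f)

# Typical AutoGPTQ keys: bits, group_size, desc_act, damp_percent.
print(quantize_config)
```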
@@ -155,9 +158,9 @@ pip3 install git+https://github.com/huggingface/transformers.git
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
 
-model_name_or_path = "TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ"
+model_name_or_path = "TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ"
 # To use a different branch, change revision
-# For example: revision="gptq-4bit-32g-actorder_True"
+# For example: revision="main"
 model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                              device_map="auto",
                                              trust_remote_code=True,
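The README's code continues past the lines shown in this hunk. For orientation, a self-contained sketch of the complete load-and-generate flow it implies (hedged: the tokenizer step, generation settings, and example prompt are illustrative, not the README's exact continuation):

```python
# Sketch: load a GPTQ branch via transformers and run one generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ"
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",
    trust_remote_code=True,
    revision="main",  # branch name from the Provided Files table
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)

# Prompt format taken from the front matter's prompt_template.
prompt = "A chat.\nUSER: {prompt}\nASSISTANT: \n".format(prompt="Tell me about AI")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```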
@@ -215,10 +218,12 @@ For further support, and discussions on these models and AI in general, join us
 
 [TheBloke AI's Discord server](https://discord.gg/theblokeai)
 
-## Thanks, and how to contribute.
+## Thanks, and how to contribute
 
 Thanks to the [chirper.ai](https://chirper.ai) team!
 
+Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
+
 I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
 
 If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
@@ -230,7 +235,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
 **Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
+**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
 
 
 Thank you to all my generous patrons and donaters!
@@ -244,6 +249,8 @@ And thank you again to a16z for their generous grant.
 
 # Extended Context (via YaRN) Finetune of Llama-2-13b with airoboros-2.1 (fp16)
 
+[TheBloke](https://huggingface.co/TheBloke) has kindly quantized this model to [GGUF](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GGUF) and [GPTQ](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-YaRN-64K-GPTQ).
+
 
 ## Overview
 